Author manuscript; available in PMC: 2023 Jan 1.
Published in final edited form as: Account Res. 2021 Feb 8;29(1):55–62. doi: 10.1080/08989621.2021.1880902

Reimagining IRB Review to Incorporate a Clear and Convincing Standard of Evidence

Elise Smith 1, Emily E Anderson 2
PMCID: PMC8349366  NIHMSID: NIHMS1667281  PMID: 33480289

Abstract

This commentary is a critical response to the article written by David Resnik regarding the use of a standard of evidence for Institutional Review Board (IRB) decision making. Resnik suggests that IRBs should not only base decisions on evidence, but that this evidence should be sufficient to ensure a “clear and convincing” standard similar to that used by juries for legal proceedings. We agree that the increased use of evidence to meet this standard would be ideal since this provides clear guidance and could allow for a more transparent IRB review. However, to effectively meet this standard, significant modifications would be required for researchers as well as for IRBs’ processes. First, researchers would be required to identify, understand and include appropriate scientific and ethics evidence in support of their protocol. IRB members and IRB professionals would need to discuss the importance, value and significance of evidence in order to come to a collective decision regarding each protocol. Such responsibilities are justifiable and could bring much needed rigor and transparency to the system but they would require time, training, research and education. While Resnik’s suggestion seems to incorporate a small change with respect to a standard, in application it would actually require a novel system.

Keywords: IRB, evidence, standard of evidence, research ethics on human subjects


In his article, “Standards of Evidence for Institutional Review Board Decision-Making,” Resnik argues that Institutional Review Boards (IRBs) should rely on evidence, and empirical evidence in particular, in their deliberations. He also argues that this evidence should be sufficient to ensure confidence in decisions regarding whether a study meets federal approval criteria (Resnik 2020). Although the use of evidence is standard in medical and legal decision making, IRBs have often relied on intuition and “gut feeling” (Pritchard 2011; Klitzman 2013).

We agree with Resnik that the development of clearer standards of evidence would improve rigor, consistency, and transparency in IRB decision-making. Indeed, this may also promote other important goals such as participant safety, autonomy, and fair subject selection. However, operationalizing these standards will require important systemic changes. Not only would there need to be an increase in empirical research relevant to IRB decision making, but researchers would also need to be familiar with, understand, and include evidence in their protocols and IRB submissions. In addition, adopting a specific standard of evidence would shift the way IRB members make decisions and create new roles and responsibilities for IRB staff members. All of these changes will require resources and education. Here, we briefly reimagine an IRB review process adapted to evidence-based practices.

The IRB Evidence “Gap”

Before we discuss how to integrate evidence throughout the IRB decision-making process, it is important to acknowledge that the success of the reimagined system hinges on the availability and quality of this evidence. A good example of the use of empirical research to facilitate ethical problem solving exists in trauma research. IRBs have often expressed concern about the potential for research on past traumas to re-traumatize research subjects. In response, trauma researchers gathered data to determine the likelihood and magnitude of participants becoming upset and to inform best practices for minimizing emotional harm from research participation (Newman et al. 2006).

Empirical research has also provided much insight into the informed consent (IC) process. IC documents are often quite lengthy and detailed, to ensure that the participant has the appropriate information to make an informed decision regarding participation. However, empirical research suggests that lengthy IC forms are rarely read, and that shorter IC documents with less information actually result in better understanding of the research study (Perrault and Keating 2018). Other research has focused on the ability to obtain informed consent in particularly challenging contexts (e.g., disaster zones, epidemics) (Gobat et al. 2015), from specific populations (e.g., individuals with neurological disorders) (Vaishnav and Chiong 2018), and using different modalities, including multi-media (Anderson et al. 2017).

Along with Resnik, we and others have called for increased research to generate evidence relevant to ethical dilemmas in research and IRB decision making (Anderson and Sieber 2009, Sieber 2009, Anderson and DuBois 2012). There are numerous examples of researchers generating empirical evidence relevant to human research ethics; more is needed, which we acknowledge is greatly contingent on both training and funding.

The Role of Researchers in an Evidence-Based IRB System

In the IRB decision-making process, whether or not there is a clear standard of evidence to be used, the burden of proof lies with the researcher. Indeed, IRBs often request clarification or justification regarding recruitment methods, informed consent, medication dosing and administration, safety monitoring, use of placebo, randomization schemes, and selection of subjects; such requests are among the main reasons for IRB deferrals (Clapp et al. 2017). It could be argued that a “clear and convincing standard” as defined by Resnik places an increased burden on researchers. We believe, rather, that such a standard of evidence could help clarify for researchers what and how much information should be included in the protocol – a key step toward transparency about IRB decision making. Researchers should be knowledgeable about the ethical implications, best practices for informed consent and minimizing risk, and special protections relevant to the populations and methods with which they work in order to provide sufficient evidence to facilitate IRB review and the decision as to whether regulatory requirements are met.

This will of course require a culture shift regarding how researchers are trained, which may face resistance. Ethics training will need to be better integrated with methods training so that researchers have the skills to provide, in their IRB submissions, evidence that demonstrates not only the scientific justifiability but also the empirically based ethical justifiability of their proposed research. Researchers need to be trained to read, critique, and apply evidence from studies relevant to recruitment, informed consent, risk assessment, and related topics. This training will need to be discipline-, method-, and population-specific, as the issues are unique. For example, researchers who want to include pregnant women as research participants need to be familiar with evidence relating to the common use of medication during pregnancy, the phases of pregnancy in which teratogenic effects are most problematic to assess in terms of risk of harm to the fetus, the benefits of including pregnant women in studies, and the harms of excluding them (Anderson and DuBois 2012, Lyerly et al. 2015).

The Role of IRB Members in an Evidence-Based System

Resnik ultimately endorses the legal criterion of “clear and convincing evidence” based on the presumed ability of both scientists and non-scientists to understand this legal standard. However, although IRBs and juries may have certain commonalities, they also have important differences. In the criminal legal system, the prosecutor or legal team generally puts together as much evidence as possible to pursue a conviction. The lawyer can then present the evidence of the case to the jury and explain the importance of the evidence presented. In the research environment, the evidence is presented by the researcher in the written protocol. IRB professionals can ensure that the protocol is complete, but their role is not to advocate for the researcher in the same way that a lawyer would advocate for their client. IRB professionals remain somewhat neutral in the IRB review process, which will be discussed later in this commentary.

Lastly, and probably most importantly, a jury is composed of members of the lay public and has a significant amount of time because it concentrates on a single case. Jurors can discuss how evidence presented to them by experts may justify their decision, and whether the totality of the evidence reaches the standard of evidence. In the IRB context, the majority of members are researchers and clinicians, and only a few are lay community members. Although IRBs can ask the researcher for clarification in real time during deliberation (Spellecy et al. 2018), in reality this remains rare. The IRB has a limited amount of time, as there are many projects to review within a fairly short meeting timeframe. This barrier could be addressed by better identifying and triaging those submissions in need of significant deliberation and by appropriately compensating IRB member time and effort.1

It is implicitly understood that IRB members use evidence that they have acquired as scientists, lawyers, ethicists, doctors, and members of the community to make normative judgments. However, IRB members seldom explicitly discuss with other members the source of that evidence – whether it is from published research, personal clinical experience, or intuition. It is often simply assumed that the evidence is well founded, corroborated, and reliable. If a standard of evidence is to be established as part of the IRB review process, this would entail greater scrutiny and transparency as to the provenance, source, and importance of the evidence applied by members. IRB communications to investigators would need to differentiate between decisions supported by strong evidence and those based on intuition, personal opinion, or hearsay. All IRB members would need to agree on a “clear and convincing standard” and on how it applies to IRB members as individuals as well as to the board as a collective. New procedures would have to be developed, and IRB members trained, to identify and agree upon a new standard of evidence and then to clearly communicate and explain the evidence relevant to each decision.

The IRB’s decision as to whether there is “clear and convincing evidence” is not an altogether objective exercise. The interpretation, weight, and importance of empirical knowledge are subjective. Therefore, IRBs should provide transparency in sharing both empirical and value-based judgments. Foregrounding empirically based decision-making may give the false impression that the review process is perfectly objective. However, although empirically based normative judgments may be more informed and consistent, they are still open to interpretation. For example, as previously noted in the example of research including pregnant women, a researcher may introduce evidence demonstrating how pregnant women are best included in clinical trials and how risks may be reduced. However, IRB members may hesitate to accept that evidence and choose to err on the side of caution. Even if pregnant women likely face greater risk from taking certain medications than from participating in a research study, protectionism (or paternalism) remains ingrained in our socio-cultural norms regarding research with pregnant women. Overcoming taboos and changing the status quo remains a challenge not simply because of a lack of reasonable evidence but also because of fear. If the IRB does choose to reject evidence, the minimal requirement should be that it explain and justify this rejection. Justification may include equally convincing counter-evidence or feasibility concerns. Although evidence will not exclude all deep-seated ideologies or moral beliefs, requiring the use of more evidence may help in challenging outdated cultural norms.

To distinguish evidence from interpretation and normative judgment in its deliberations, IRBs as currently constituted would need to evolve considerably. Presently, members rely on each other to arrive at decisions on matters about which they are unsure or that they do not understand. For example, a member may ask a physician-researcher whether the risks associated with a lumbar puncture are acceptable in the case of a healthy participant. The physician-researcher may summarily confirm the procedure as acceptable. In this instance, IRB members trust that the physician-researcher’s experience is grounded in enough empirical evidence to make that type of decision, and they agree. Indeed, there is research that does warrant the use of lumbar punctures in certain research settings (Page-Wilson et al. 2016). However, should that same physician-researcher point out that “there is a risk of paralysis” – which is extremely uncommon but also true – IRB members may well arrive at a different decision based on their interpretation of the risk and their individual risk tolerance.

The Role of HRPP Professionals

IRB members are busy faculty members and non-faculty volunteers (e.g., non-scientist, non-affiliated members) with limited time to devote to the important work of reviewing research protocols. Application of a clear and convincing standard of evidence will certainly increase this burden. IRBs are staffed by professionals who ensure that the information submitted is sufficient. In an evidence-based system, like researchers and IRB members, HRPP professionals will need to be familiar with the relevant evidence literature as well as with the application and interpretation of a clear and convincing standard. It might be argued that this is outside the scope of their work, but there is no reason to limit the work of IRB professionals to administrative tasks. The field has seen increasing professionalization over the past decades as well as transformation of the role of HRPP professionals (e.g., the PRIM&R CIP exam), suggesting that more substantive engagement in the ethical work of IRBs might be welcomed by many IRB professionals.

Transparency in IRB Decision-Making

Resnik reignites discussions about evidence in IRB decision-making, and we concur that a clearer standard of evidence would make decisions more informed, more consistent, and more aligned with regulatory standards. IRB decisions should also be more transparent, with communications to investigators clearly outlining the reasons for any requested modifications or deferrals. Since the standards of evidence as well as the evidence used in decision-making would be openly shared, researchers may have increased confidence in the IRB review process. All too often, researchers suspect that the system is unfair and biased; ironically, this in turn leads researchers to be less concerned about ethics issues after their protocol has passed IRB review (Keith-Spiegel and Koocher 2005). If researchers are transparent about the evidence that justifies the ethics of a research proposal, then IRBs also have an obligation to demonstrate how and why this evidence reaches (or does not reach) the standard of evidence. The system gains in transparency of evidence, process, and communication.

Acknowledgments

Funding: Elise Smith is supported in part by the Clinical and Translational Science Award (UL1TR001439) from the National Center for Advancing Translational Sciences, National Institutes of Health.

This commentary discusses the work in an article recently written by David Resnik. Although this commentary was written in a neutral manner, a collegial relationship may unduly influence, or be perceived to influence, this review. We therefore choose to disclose that the authors of this commentary (Elise Smith and Emily E. Anderson) have had ongoing conversations with David Resnik regarding his research. In addition, Elise Smith was previously a postdoctoral fellow under David Resnik (2016–2019).

Dr. Anderson is supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant UL1TR002003.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Footnotes

Conflict of Interest: We have no financial conflicts of interest related to this research.

1. There is a difference in time allocation between different types of IRBs. For example, independent IRBs, in which members are paid, may have more time to review each protocol. IRBs in academic centers include many volunteers who may not have much time to allocate to the process. However, even the time allotment for independent IRBs would still be limited compared to that of a jury, which can take days to arrive at a verdict.

Contributor Information

Elise Smith, University of Texas Medical Branch, Institute for Bioethics & Health Humanities, Institute for Translational Sciences, Department of Preventive Medicine and Population Health. 700 Harborside Drive, Maurice Ewing Hall, Office 3.102P, Texas, US.

Emily E. Anderson, Loyola University Chicago, Neiswanger Institute for Bioethics and Healthcare Leadership, 2160 S. First Avenue, Maywood, IL 60153.

References

  1. Anderson EE and DuBois JM, 2012. IRB Decision-Making with Imperfect Knowledge: A Framework for Evidence-Based Research Ethics Review. The Journal of Law, Medicine & Ethics, 40 (4), 951–969.
  2. Anderson EE, Newman SB, and Matthews AK, 2017. Improving informed consent: Stakeholder views. AJOB Empirical Bioethics, 8 (3), 178–188.
  3. Anderson EE and Sieber JE, 2009. The need for evidence-based research ethics. The American Journal of Bioethics, 9 (11), 60–62.
  4. Clapp JT, Gleason KA, and Joffe S, 2017. Justification and authority in institutional review board decision letters. Social Science & Medicine, 194, 25–33.
  5. Gobat NH, Gal M, Francis NA, Hood K, Watkins A, Turner J, Moore R, Webb SAR, Butler CC, and Nichol A, 2015. Key stakeholder perceptions about consent to participate in acute illness research: a rapid, systematic review to inform epi/pandemic research preparedness. Trials, 16 (1), 591.
  6. Keith-Spiegel P and Koocher GP, 2005. The IRB Paradox: Could the Protectors Also Encourage Deceit? Ethics & Behavior, 15 (4), 339–349.
  7. Klitzman R, 2013. How IRBs view and make decisions about coercion and undue influence. Journal of Medical Ethics, 39 (4), 224–229.
  8. Lyerly AD, Little MO, and Faden R, 2015. The second wave: Toward responsible inclusion of pregnant women in research. IJFAB: International Journal of Feminist Approaches to Bioethics, 1 (2), 5–22.
  9. Newman E, Risch E, and Kassam-Adams N, 2006. Ethical issues in trauma-related research: A review. Journal of Empirical Research on Human Research Ethics, 1 (3), 29–46.
  10. Page-Wilson G, Wardlaw SL, Nguyen KT, and Smiley RM, 2016. Evaluation of pain and stress in healthy volunteers undergoing research lumbar punctures. Neurology, 87 (4), 438–439.
  11. Perrault EK and Keating DM, 2018. Seeking Ways to Inform the Uninformed: Improving the Informed Consent Process in Online Social Science Research. Journal of Empirical Research on Human Research Ethics, 13 (1), 50–60.
  12. Pritchard IA, 2011. How Do IRB Members Make Decisions? A Review and Research Agenda. Journal of Empirical Research on Human Research Ethics, 6 (2), 31–46.
  13. Resnik DB, 2020. Standards of Evidence for Institutional Review Board Decision-Making. Accountability in Research, advance online publication.
  14. Sieber JE, 2009. Evidence-Based Ethical Problem Solving (EBEPS). Perspectives on Psychological Science, 4 (1), 26–27.
  15. Spellecy R, Eve AM, Connors ER, Shaker R, and Clark DC, 2018. The real-time IRB: A collaborative innovation to decrease IRB review time. Journal of Empirical Research on Human Research Ethics, 13 (4), 432–437.
  16. Vaishnav NH and Chiong W, 2018. Informed Consent for the Human Research Subject with a Neurologic Disorder. Seminars in Neurology, 38 (5), 539–547.
