Author manuscript; available in PMC: 2018 Jul 1.
Published in final edited form as: Am J Bioeth. 2017 Jul;17(7):41–43. doi: 10.1080/15265161.2017.1328537

A Measure of Effectiveness Is Key to the Success of sIRB Policy

Holly A Taylor 1, Ann Margret Ervin 2
PMCID: PMC5568650  NIHMSID: NIHMS889834  PMID: 28661750

In her article “The Final Rule: When the Rubber Meets the Road,” Dr. O’Rourke summarizes the mandate that U.S. institutions rely on a single institutional review board (sIRB) for the review of multicenter studies (cooperative research), and cites specific challenges associated with implementation of the policy (O’Rourke 2017). The expectation is that the single IRB model will be more efficient than multiple IRB reviews of the same protocol (Federal Policy 2017; National Institutes of Health [NIH] 2016; Klitzman, Pivovarova, and Lidz 2017). Efficiency seems a reasonable expectation, but we do not have sufficient evidence to assess whether the sIRB system is more efficient than the current decentralized system (Ervin, Taylor, and Ehrhardt 2016; Klitzman, Pivovarova, and Lidz 2017). As noted in the preamble to the revised Common Rule, it is not known whether local IRB review alone is responsible for the inefficiencies in the current system; other regulatory and institutional reviews, which will continue despite the sIRB policy, may also delay commencement of research (Federal Policy 2017, 7209). We also do not know whether efficiency (i.e., the amount of resources required to obtain a certain result) is the most informative measure for the review and oversight of human subjects research. While the time a proposal is under review by an IRB, or the time it takes the Human Research Protection Program (HRPP) to approve the protocol, is the variable most often measured to assess the efficiency of the IRB process (Abbott and Grady 2011), it is not clear that it is the only variable of interest or a good proxy for the effectiveness of IRB review. The substantive goal of IRB review is to ensure that the proposed research minimizes risks to participants and maximizes the benefits to be gained by its completion (National Commission 1979; Taylor 2007).
The benefits of research may accrue to participants, but research ought, at a minimum, to advance knowledge (i.e., benefit society). Opponents of the sIRB process suggest that review by multiple IRBs maximizes the likelihood that all potential risks will be identified and minimized, as no single IRB can be expected to have the “coverage and breadth of knowledge as [with] a collective body of IRBs” (NIH 2016). While there may be a number of reviews beyond which the value of additional IRB review diminishes, that number is likely greater than one. Some opponents of the sIRB policy also believe that only local IRB review can adequately address local considerations that may affect the welfare of study participants (Secretary’s Advisory Committee on Human Research Protections [SACHRP] 2011; NIH 2016). While local context information can and will be solicited by the sIRB, there is currently no guidance regarding the type of information to collect or how local context should be incorporated into the sIRB review process. Nor is there a standard measure by which the effectiveness of the IRB/sIRB review process can be assessed (Ervin, Taylor, Meinert, and Ehrhardt 2015). Such a measure would be useful in judging whether implementation of the sIRB policy is a success.

In the absence of standard measures to assess the effectiveness of the IRB/sIRB process, or to assess and compare the quality of HRPPs interested in taking on the role of sIRB, proxy measures such as HRPP accreditation have been promoted. Accreditation by the Association for the Accreditation of Human Research Protection Programs (AAHRPP) has become the standard in the field. The Office for Human Research Protections (OHRP) also has a quality improvement program for IRBs to conduct self-assessments of performance (OHRP 2017). The Master Common Reciprocal Institutional Review Board Authorization Agreement (i.e., the SMART IRB Agreement), developed with funding from the National Center for Advancing Translational Sciences (NCATS), has been promoted by the NIH as the template for agreements brokered between reviewing (sIRB) and reliant IRBs (Klitzman, Pivovarova, and Lidz 2017). Only HRPPs that have “undergone an assessment of quality” can participate in the agreement, and “the assessment may be accomplished by accreditation through an external organization, or through OHRP’s Quality Assessment Program, or other equivalent approach” (SMART 2017, 2). While these performance assessments incorporate issues relevant to an assessment of the ethical quality of the review process (assessing, e.g., whether the IRB explicitly considers how to minimize risks to participants during its deliberations), they are primarily attempts to operationalize regulatory compliance (Institute of Medicine [IOM] 2003). Accreditation and quality self-assessment efforts are not misguided, but significant investment in the development and systematic application of robust measures to assess the effectiveness of the IRB review process is still needed.

Taylor (2007) previously proposed a set of seven domains to assess IRB effectiveness. We endorse these domains and believe that all ought to be incorporated into a robust assessment of IRB/sIRB effectiveness. Here we focus on a single domain—“the identification and assessment of risk to subjects/society”—and propose two approaches to assessing this domain in the context of multicenter studies reviewed by an sIRB.

  1. Key questions asked at the end of the initial review: Wichman, Kalyan, and Abbott (2006) proposed that an evaluation of IRB deliberations be conducted in real time by an independent observer and developed a checklist to guide this activity. We endorse the idea of a real-time assessment but believe that the evaluation should be qualitative and conducted by the sIRB itself. Before the conclusion of the initial review of a multicenter study, the sIRB members would consider the following open-ended questions.

    1. What physical, psychological, and economic risks will the individuals enrolled in this study experience?

    2. How has each of the risks been minimized?

      If the sIRB is unable to address all risks posed by the conduct of the research, the relevant components of the protocol are re-reviewed and/or follow-up questions are directed to the study team. The initial and final answers to these questions would be included in the review materials.

  2. Key questions asked during annual review: A variation on the questions posed prior to the conclusion of the initial review would again be considered by the sIRB during the annual review of the protocol.

    1. What physical, psychological, and economic harms have accrued to the study population to date?

    2. How did the anticipated magnitude of the risks compare to the actual harms experienced by the study population?

    3. What unanticipated harms accrued to the study population?

      1. Does the magnitude of any of these harms warrant a revision to the consent process?

    4. What adverse events were reported to the sIRB and what were the considerations or concerns posed by the Data and Safety Monitoring Committee (DSMC)?

The sIRB would collect the information just described from all participating sites in advance of the annual review. It may also review reports from the DSMC.

The evidence to support the effectiveness of sIRB review is limited by the lack of standardized outcome measures. In addition to supporting the implementation of the sIRB policy at U.S. institutions, stakeholders should also invest in evaluating the utility of sIRB review. Evaluation of sIRB outcome data would support or refute the value of this systematic approach to streamlining multicenter ethical review and would also inform any modifications to the policy going forward. Reliable measures of quality are a key first step, and the development and pilot testing of related outcome measures are essential prior to widespread implementation of the sIRB policy.

Contributor Information

Holly A. Taylor, Johns Hopkins Bloomberg School of Public Health and Johns Hopkins Berman Institute of Bioethics

Ann Margret Ervin, Johns Hopkins Bloomberg School of Public Health.

References

  1. Abbott L, Grady C. A systematic review of the empirical literature evaluating IRBs: What we know and what we still need to learn. Journal of Empirical Research on Human Research Ethics. 2011;6:3–19. doi: 10.1525/jer.2011.6.1.3.
  2. Ervin AM, Taylor HA, Ehrhardt S. NIH policy on single-IRB review—A new era in multicenter studies. New England Journal of Medicine. 2016;375(24):2315–17. doi: 10.1056/NEJMp1608766.
  3. Ervin AM, Taylor HA, Meinert CL, Ehrhardt S. Evidence gaps and ethical review of multicenter trials. Science. 2015;350(6261):632–33. doi: 10.1126/science.aac4872.
  4. Federal policy for the protection of human subjects. Federal Register. 2017;82(12):7149–74. Available at: https://www.gpo.gov/fdsys/pkg/FR-2017-01-19/pdf/2017-01058.pdf.
  5. Institute of Medicine. Preserving public trust: Accreditation and human research protection programs. Washington, DC: National Academies Press; 2003.
  6. Klitzman R, Pivovarova E, Lidz CW. Single IRBs in multisite trials—Questions posed by the new NIH policy. Journal of the American Medical Association. 2017;317(20):2061–2. doi: 10.1001/jama.2017.4624.
  7. Master Common Reciprocal Institutional Review Board Authorization Agreement (SMART IRB Agreement). Version 1.5. 2015 (accessed April 29, 2017).
  8. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. 1979. Available at: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report (accessed May 30, 2017).
  9. National Institutes of Health. Public comments on the draft NIH policy on the use of a single institutional review board for multi-site research, December 3, 2014–January 29, 2015. 2016. Available at: http://osp.od.nih.gov/sites/default/files/resources/sIRB%2007-21-2015.pdf (accessed May 1, 2017).
  10. National Institutes of Health. Final NIH policy on the use of a single institutional review board for multi-site research. 2016. Available at: https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-094.html.
  11. Office for Human Research Protections. Quality assurance assessment tool. 2017. Available at: https://www.hhs.gov/ohrp/education-and-outreach/human-research-protection-program-fundamentals/ohrp-self-assessment-tool/index.html (accessed April 29, 2017).
  12. O’Rourke PP. The final rule: When the rubber meets the road. American Journal of Bioethics. 2017;17(7):27–33. doi: 10.1080/15265161.2017.1329484.
  13. Secretary’s Advisory Committee on Human Research Protections. Letter to HHS Secretary Sebelius regarding the advance notice of proposed rulemaking published in the Federal Register, October 13, 2011. Available at: https://www.hhs.gov/ohrp/sachrp-committee/recommendations/2011-october-13-letter-final/index.html (accessed May 30, 2017).
  14. Taylor HA. Moving beyond compliance: Measuring ethical quality to enhance the oversight of human subject research. IRB: Ethics and Human Research. 2007;29(5):9–14.
  15. Wichman A, Kalyan DN, Abbott LJ, et al. Protecting human subjects in the NIH’s intramural research program: A draft instrument to evaluate convened meetings of its IRBs. IRB: Ethics and Human Research. 2006;28(3):7–10.