Abstract
Retrospective review could improve the efficiency—and perhaps the effectiveness—of human subjects research oversight.
The Advance Notice of Proposed Rule-making (ANPRM) released in 2011 by the U.S. Department of Health and Human Services (HHS) (1) recommends many important changes to federal regulations on protection of human research subjects. Perhaps most important, through the 74 questions it poses, it offers the opportunity to rethink approaches to research oversight. The current regulatory model of prospective review, based on what researchers say they plan to do, focuses the attention of Institutional Review Boards (IRBs, which must approve proposed research) and researchers on perfecting protocols and consent forms rather than interacting with subjects. Such a regulatory model may discourage innovation in human subjects protection. In contrast, we describe how a system based on retrospective, auditlike review of a subset of projects could stimulate assessment of the effectiveness of current approaches and the development of creative alternatives, with efficiencies for all concerned.
Prospective versus Retrospective Review
Oversight of human subjects research in the United States grew out of scandals of the 1960s and early 1970s, culminating in the current U.S. regulations in 1991 (“the Common Rule”). Other countries have developed similar rules (2). Characteristics typical of a prospective approach, however, have contributed to widespread dissatisfaction with human subjects protection (3, 4). Because prospective review can only focus on what researchers say they will do, IRBs inevitably concentrate most of their attention on the minutiae of protocols and consent forms rather than on monitoring actual performance (5). Counterproductively, researchers often fine-tune their IRB submissions rather than improve their interactions with research subjects. Paperwork burdens for researchers have grown, even as studies reveal deficits in subjects’ grasp of the projects in which they have enrolled (6, 7).
Other systems of oversight, however, have evolved quite differently and rely on retrospective review. Examples in the United States include audits of government-funded health-care programs (Medicare and Medicaid), accreditation of hospitals, Data Safety Monitoring Boards’ assessment of patient safety and treatment efficacy data during clinical trials, federal and state tax collection audits, and the tort system. None of these is exactly comparable to the task of human subjects protection. But all focus reviews on a small proportion of cases flagged as potentially problematic or selected at random, which reduces burdens for the system and those subject to its oversight (8–11).
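To make the shared logic of these audit-based systems concrete, the sketch below reduces case selection to a simple rule: review everything flagged as potentially problematic plus a small random sample of the rest. It is purely illustrative; the field names and the 2% sampling rate are hypothetical assumptions, not drawn from any of the programs cited.

```python
import random

# Illustrative sketch only: a generic audit-selection rule of the kind the
# oversight systems above rely on (flagged cases plus a random sample).
# The "flagged" field and the 2% rate are hypothetical assumptions.

def select_for_audit(cases, sample_rate=0.02, rng=None):
    """Return the cases to review: every flagged case plus a random
    fraction (sample_rate) of the unflagged ones."""
    rng = rng or random.Random()
    flagged = [c for c in cases if c.get("flagged")]
    unflagged = [c for c in cases if not c.get("flagged")]
    sampled = [c for c in unflagged if rng.random() < sample_rate]
    return flagged + sampled

# Example: 3 of 1,000 hypothetical cases are flagged; roughly 2% of the rest
# are drawn at random, so only a small proportion is ever reviewed.
cases = [{"id": i, "flagged": i % 333 == 0} for i in range(1, 1001)]
print(len(select_for_audit(cases, rng=random.Random(0))))
```

The random component matters: as the tax-compliance literature cited above suggests (11), the mere possibility of being selected encourages compliance even among those never audited.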
It is understandable why, in the 1970s, the U.S. Department of Health, Education, and Welfare (parts of which were later reorganized as HHS) embraced a prospective regulatory approach. Public reaction to the revelations of abuses by researchers at Tuskegee and elsewhere called for immediate action. The possibility that additional harms might accrue to human subjects threatened the viability of medical research as a whole. Prospective review—essentially requiring investigators to obtain a permit to perform a study—appeared to be the best means of bringing the system rapidly under control. But over the long run, prospective review may be a suboptimal strategy for the oversight of much human subjects research.
Ill-Suited for Research
Prospective review is most easily applied to relatively clear-cut determinations with predictable outcomes. However, little about human research oversight is particularly straightforward. IRBs often face difficulties defining, interpreting, and applying critical concepts embodied in the regulations and central to human subjects protection (e.g., “justice” and “autonomy”). Regulations prohibit “undue inducement” (i.e., disproportionate payments to encourage participation), but IRBs and ethicists vary widely in their views of how much money is “too much” (12, 13). Perceptions of “risk” are highly variable, even for common procedures (14).
Moreover, IRBs have different thresholds for approving studies and struggle with whether protections for subjects are “good enough” (14). IRBs often disagree in reviews of the same study at multiple sites (15–18). The ANPRM asks whether new regulations should standardize the topics to be included in consent forms. But consent forms involve subjective judgments, concerning not just what information to include but how to describe a study. Many IRBs feel they play a vital role in rewording consent forms, because researchers may downplay risks and overemphasize potential benefits. But when different IRBs “wordsmith” the same content in different ways, the results are inconsistent (19).
IRBs generally argue that differences among them reflect local community values and thus are justified, but these variations often appear instead to reflect personality or institutional factors (20). Variations occur even within single institutions and IRBs over time (20). These complex, dynamic factors, not only formal regulations, affect how IRBs interpret and apply guidelines.
Although many of these issues might arise in any system of oversight, whether prospective or retrospective, the inherently speculative nature of prospective review exacerbates variability and subjectivity across IRBs. When no one can know whether subjects will understand phrases in a consent form, IRB members lack firm grounding for many judgments. In contrast, retrospective review encourages greater focus on events that have transpired, rather than those merely imagined and feared. At present, once a study is approved by an IRB, an investigator is generally not required to monitor or improve the effectiveness of the consent process or subjects’ reactions to participation. But the possibility of being audited on the basis of how well subjects understood the study or whether they were distressed by the research procedures—based on objective, validated questionnaires—would provide different incentives. Investigators would be encouraged to try new approaches to improve the quality of interactions with subjects. Social and behavioral research, now often hostage to IRB debate over whether participants will be upset by a particular item on a questionnaire (21), should find retrospective review particularly facilitative. As data accumulate on more successful strategies for obtaining consent and avoiding risk, they can be shared with the research community.
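As a concrete, purely hypothetical illustration of that incentive shift, a retrospective audit might score subjects’ responses on a validated comprehension questionnaire and flag a study whose enrolled subjects fall below a preset threshold. In the sketch below, the 80% threshold and the flagging rule are illustrative assumptions, not features of any existing instrument or regulation.

```python
# Hypothetical sketch of questionnaire-based audit criteria; the 80% threshold
# and the flagging rule are illustrative assumptions, not an existing standard.

def mean_comprehension(responses):
    """responses: per-subject scores, each the fraction of items answered
    correctly on a validated comprehension questionnaire."""
    return sum(responses) / len(responses)

def flag_for_followup(responses, threshold=0.80):
    """Flag a study for closer review when average comprehension falls
    below the threshold, giving investigators a measurable target."""
    return mean_comprehension(responses) < threshold

# Example: subjects in this hypothetical study averaged 72% correct,
# so the study would be flagged for follow-up under this rule.
print(flag_for_followup([0.9, 0.6, 0.7, 0.7, 0.7]))  # True
```

A rule of this kind would shift attention from the wording of the consent form to whether subjects actually understood the study.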
A Path Forward
As the U.S. government reconsiders human subjects regulation for the first time in more than 20 years, thoughtful consideration of possible approaches is critical. We cannot simply turn the clock back to 1966, when prospective review was first introduced, and cast it aside. Indeed, for studies with substantial risks to subjects and uncertain likelihood of benefit, prospective review may be desirable (22, 23). But there are clearly ways to begin to integrate retrospective review into the current oversight process.
The ANPRM proposes an important step in this direction, involving a shift to retrospective assessment for certain minimal risk research, which would be excused from prospective IRB review. Investigators would decide for themselves whether their research meets the appropriate criteria, in which case they would merely register their studies with IRBs, complete a simple (approximately one-page) form, and then conduct the research. IRBs could retrospectively audit some protocols to ensure that the investigators’ discretion to define their projects as minimal risk is not abused. Given that even IRBs disagree about definitions of minimal risk (16), some divergence of opinion between IRBs and researchers is to be expected, without necessarily indicating errors on either side. Although the ANPRM does not specify the consequences of negative results of an audit, we envision imposition of closer monitoring and perhaps prospective review of that researcher’s future studies. Careful assessment of the consequences of the changes we describe here is essential.
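The following minimal sketch shows how this registration-and-audit pathway might operate. All data fields are hypothetical, and the escalation rule (closer monitoring after one negative finding, prospective review of future studies after repeated ones) is our assumed reading of the consequences envisioned above, which the ANPRM itself does not specify.

```python
from dataclasses import dataclass

# Minimal sketch of the ANPRM-style pathway described above. Field names are
# hypothetical, and the escalation rule is an assumption; the ANPRM does not
# specify the consequences of a negative audit.

@dataclass
class Registration:
    study_id: str
    self_assessed_minimal_risk: bool   # the investigator's own determination
    one_page_form_filed: bool          # the brief registration filed with the IRB

@dataclass
class Investigator:
    name: str
    negative_audits: int = 0

def retrospective_audit(reg: Registration, irb_agrees_minimal_risk: bool,
                        pi: Investigator) -> str:
    """Audit one registered study after the fact. Occasional divergence between
    the investigator's and the IRB's risk judgments is expected and is not by
    itself treated as misconduct; repeated negative findings escalate oversight
    of that investigator's future studies."""
    if not reg.self_assessed_minimal_risk or not reg.one_page_form_filed:
        return "not eligible for this pathway: route to prospective review"
    if irb_agrees_minimal_risk:
        return "no action"
    pi.negative_audits += 1
    if pi.negative_audits == 1:
        return "closer monitoring of future studies"
    return "prospective review required for future studies"
```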
Of note, an increasing number of institutions have already created mechanisms for selectively auditing studies and thus have some experience with the model. If audits are already being introduced under the current prospective review framework, why is still more fundamental reform needed? Grafting some degree of retrospective review onto the current process would not address the system’s inefficiencies, including the work and delay inherent in universal prospective review, the undue weight given to written descriptions of procedures rather than actual researcher behavior, and the emphasis on speculative outcomes. Further levels of review can always be added, but each one carries costs in terms of funding, effort, and delay. Shifting toward a retrospective review model will allow finite resources to be used in a more efficient, and perhaps more effective, oversight process.
Retrospective review of minimal risk research also offers an opportunity to create an appellate IRB process, a possibility raised by the ANPRM. Currently, IRBs bear no costs if they unnecessarily nitpick a protocol or interpret regulations idiosyncratically. Each IRB acts as its own appellate court; researchers’ only recourse is usually to the very body that challenged their approach. Although an appeals process could be constructed in a prospective review system, a retrospective system would allow determinations based on evidence of what actually occurred, rather than fears of what might happen. That difference may increase researcher willingness to pursue an appeals process. Regional appellate IRBs could be established by the Office of Human Research Protections, with their number depending on the volume of appeals. Due process is so fundamental a right in modern democracies that there seems little reason to deny it to researchers.
If initial moves away from a strictly prospective model are successful, it may be possible to expand them progressively to studies that pose higher levels of risk. To be sure, as researchers assume these responsibilities, they may require additional education in subject protection and research ethics. We could imagine further exemptions from prospective review for studies that certify compliance with standards, such as currently proposed for federal regulations on privacy of health information. In this way, research oversight would increasingly become a retrospective review process, sparing investigators and IRBs alike the burden of reviewing all aspects of every study in advance.
A Willingness to Rethink
The ANPRM proposes many potentially valuable alterations to the current structure of human subjects oversight, indicating a willingness by HHS to rethink what has been accepted for decades. But the most innovative aspect of the proposed changes is the signal that HHS may not be firmly wedded to prospective review. We have raised more questions here than we can answer in this relatively short space, but we hope that this discussion can spur analysis and debate. In these possibilities lies important hope for substantial improvement in protecting human subjects and facilitating research.
Acknowledgments
This work was supported in part by the NIH (R01-NG04214) and the National Library of Medicine (5-G13-LM009996-02).
References and Notes
- 1. HHS. Fed Regist. 2011;76:44512.
- 2. HHS. International Compilation of Human Research Standards. www.hhs.gov/ohrp/international/intlcompilation/intlcompilation.html.
- 3. Emanuel EJ, et al. Ann Intern Med. 2004;141:282. doi: 10.7326/0003-4819-141-4-200408170-00008.
- 4. Kim S, Ubel P, De Vries R. Nature. 2009;457:534. doi: 10.1038/457534a.
- 5. U.S. General Accounting Office. Scientific Research: Continued Vigilance Critical to Protecting Human Subjects. GAO; Washington, DC: 1996.
- 6. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Lancet. 2001;358:1772. doi: 10.1016/S0140-6736(01)06805-2.
- 7. Appelbaum PS, Lidz CW, Grisso T. IRB Ethics Hum Res. 2004;26(2):1.
- 8. For example, in fiscal year (FY) 2011, the U.S. Internal Revenue Service (IRS) reviewed 1.1% of over 140 million individual tax returns filed, recouping more than $55 billion (9). The cost of FY 2011 enforcement activities was approximately $5.5 billion (10). In addition, audits encourage compliance by nonaudited taxpayers (11).
- 9. IRS. FY 2011 Enforcement and Service Results. www.irs.gov/newsroom/article/0,,id=251923,00.html.
- 10. IRS. FY 2012 Budget Proposal Summary. www.irs.gov/newsroom/article/0,,id=235959,00.html.
- 11. Plumley A. The Determinants of Individual Income Tax Compliance: Estimating the Impacts of Tax Policy, Enforcement, and IRS Responsiveness. IRS; Washington, DC: 1996. www.irs.gov/pub/irs-soi/pub1916b.pdf.
- 12. Klitzman R, Albala I, Siragusa J, Nelson KN, Appelbaum PS. J Empir Res Hum Res Ethics. 2007;2:61. doi: 10.1525/jer.2007.2.4.61.
- 13. Emanuel EJ. Am J Bioeth. 2005;5:9. doi: 10.1080/15265160500244959.
- 14. Shah S, Whittle A, Wilfond B, Gensler G, Wendler D. JAMA. 2004;291:476. doi: 10.1001/jama.291.4.476.
- 15. Klitzman R. BMC Med Ethics. 2011;12:13. doi: 10.1186/1472-6939-12-13.
- 16. Dyrbye LN, et al. Acad Med. 2007;82:654. doi: 10.1097/ACM.0b013e318065be1e.
- 17. Greene SM, Geiger AM. J Clin Epidemiol. 2006;59:784. doi: 10.1016/j.jclinepi.2005.11.018.
- 18. Vick CC, Finan KR, Kiefe C, Neumayer L, Hawn MT. Am J Surg. 2005;190:805. doi: 10.1016/j.amjsurg.2005.07.024.
- 19. Stair TO, Reed CR, Radeos MS, Koski G, Camargo CA. Acad Emerg Med. 2001;8:636. doi: 10.1111/j.1553-2712.2001.tb00177.x.
- 20. Klitzman R. AJOB Prim Res. 2011;2:24. doi: 10.1080/21507716.2011.601284.
- 21. Schrag Z. Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009. Johns Hopkins Univ. Press; Baltimore, MD: 2010.
- 22. For a systematic analysis of degrees of risk, see (23).
- 23. Rid A, Emanuel EJ, Wendler D. JAMA. 2010;304:1472. doi: 10.1001/jama.2010.1414.