Author manuscript; available in PMC 2013 Apr 1. Published in final edited form as: Science. 2012 Aug 3;337(6094):527–528. doi: 10.1126/science.1218323

Aligning Regulations and Ethics in Human Research

Rebecca Dresser JD 1
PMCID: PMC3612933  NIHMSID: NIHMS447686  PMID: 22859472

Government officials are in the midst of revising the 1991 Common Rule, a collection of regulations that governs most human research in the United States. In July 2011, the Department of Health and Human Services issued an Advance Notice of Proposed Rulemaking (ANPRM) inviting public comments on the potential revisions it described (1). The comment period has closed, but the public will have another chance to comment when officials publish specific proposals for change.

The initiative's overall objective is to remove unwarranted regulatory impediments to research while also strengthening essential human subject protections. Besides asking for feedback in several specific areas, the ANPRM sought proposals for big-picture reforms responding to inadequacies in the existing Common Rule. In the spirit of this invitation, I offer three ideas for meaningful additions to the research oversight system.

Back to Ethical Basics

Each change I propose is tied to one of the 1979 Belmont Report's (2) three ethical principles governing human subject research. Although this report was prepared by a U.S. advisory group, it expresses values embodied in historic documents like the Nuremberg Code, international statements like the Helsinki Declaration, and domestic oversight systems in countries around the world.

The Belmont Report emphasizes three ethical concepts: respect for persons, beneficence, and justice. Respect for persons requires researchers to obtain subjects’ informed consent to study participation. Beneficence requires that risks to human subjects be justified by the value of the knowledge the study is expected to generate. Justice requires an equitable distribution of research burdens and benefits. Although these concepts underlie many Common Rule provisions, the current revision process is an opportunity to enhance the Common Rule's ethical legitimacy.

Teaching about Research Participation

I begin with the simplest change. One of the Common Rule's major objectives is to promote informed decisions about research participation. To this end, the Common Rule requires researchers to disclose certain facts about a study, such as its purpose and the risks and discomforts it could impose.

The Common Rule lists eight basic elements of informed consent and six additional elements to be disclosed when they are relevant to specific studies. Institutions often add even more material to consent forms, a practice critics say is motivated by liability fears rather than a desire to inform subjects.

These combined government and institutional demands produce long and detailed consent forms that are hard for prospective subjects to understand. Indeed, empirical evidence indicates that many subjects are unaware of essential facts about the studies they join. Moreover, as the ANPRM complained, “Instead of presenting the information in a way that is most helpful to prospective subjects—such as explaining why someone might want to choose not to enroll—the forms often function as sales documents.”

Responding to the widespread dissatisfaction with research consent forms, the ANPRM requests comments on potential changes to the current regulatory requirements. Several of the modifications reflect the popular view that a brief, plain language description of essential facts about a study, together with optional access to more detailed material, would be superior to the current approach.

Regulatory requirements for simplified consent forms could promote more informed choices about research participation (3, 4). But achieving this objective will require another measure described in the ANPRM: an assessment of “how well potential research subjects comprehend the information provided to them.”

As every teacher knows, asking questions is the only way to determine whether a student has understood a lesson. Well-crafted lectures and readings do not necessarily lead to learning, nor do careful study discussions and well-written consent forms. To discover whether the message has gotten through, research team members must evaluate whether potential participants have absorbed what they have heard and read.

The idea of evaluating subject understanding is not new. Experts have developed assessment tools (5), and my own institution has a model assessment form researchers can adapt for use in individual studies (6). During my many years as an Institutional Review Board (IRB) member, however, I have rarely seen the form used. Few studies appear to incorporate evaluation procedures, though there are exceptions. For example, a high-profile randomized trial comparing arthroscopic knee surgery to placebo required participants to write in their charts: “On entering this study, I realize that I may receive only placebo surgery. I further realize that this means that I will not have surgery on my knee joint. This placebo surgery will not benefit my knee arthritis” (7).

Why haven't procedures like this become standard in research decision-making? Part of the reason is surely the extra effort, time, and cost involved in developing assessment measures. But a reluctance to discourage participation might also account for this situation. A collaborator in the knee surgery study reported “a significant refusal rate” among subjects, which he described as “the price you may have to pay if you increase potential subjects’ understanding” (8).

It's possible that genuinely informed individuals will be more likely to decline research participation (9). At the same time, efforts to ensure understanding might promote the research effort, for informed subjects may be less likely to drop out once they decide to enroll. But the main reason to add an evaluation requirement is to promote autonomous choice. If we really believe that research participation is a matter of individual choice, we should be willing to live with the consequences of genuinely informed decision-making.

When Research Harms Subjects

Because risks are unavoidable in research, some subjects will inevitably be harmed as a result of their participation. In a just research system, the burdens and benefits of research are equitably distributed. Some subjects personally benefit from study participation, but subjects are not the primary beneficiaries of research. Instead, research is done for the benefit of the wider community. Because subjects accept research burdens so that others may benefit, it seems only fair that the community offer assistance when subjects end up worse off than they would have been if they had refused to enroll (10).

The U.S. regulatory system fails to incorporate this straightforward moral judgment. The existing Common Rule lets research sponsors and institutions decide whether to cover the costs of research-related injuries. Even in studies presenting relatively high risk, investigators meet their Common Rule duties simply by warning prospective subjects that they could end up bearing the costs of any injury they suffer as a result of research participation.

Although the ANPRM does not address injury compensation, several national advisory groups have recommended adoption of no-fault compensation programs (11). A 2011 report by the Presidential Commission for the Study of Bioethical Issues expressed strong support for a compensation system (12). Compensation programs are common in other countries, and some U.S. research institutions have them as well. Despite decades of ethical support, and evidence that compensation programs are practically feasible, U.S. officials have been unwilling to mandate such programs.

Cost concerns underlie the government's failure to adopt a compensation requirement. Research institutions and sponsors predict high expenses if compensation programs become mandatory. Opponents also argue that moral obligations to subjects are met by the risk disclosure element of informed consent, which enables people worried about research injuries to refuse participation. Opponents contend as well that the tort system gives injured subjects adequate opportunities to pursue compensation.

But the ethical considerations supporting compensation outweigh the objections. Although soldiers and police officers accept the risks inherent in their work, we nevertheless believe they are owed assistance when they are injured on the job. The same judgment applies to people serving society as research subjects.

The barriers to tort recovery are so high that many injured subjects will never obtain compensation through this legal mechanism. And the success of existing injury compensation programs, which include a process for determining whether injuries resulted from research participation, should allay concerns about costs and feasibility. Compensation programs make research participation a more attractive option to prospective subjects and allow institutions and injured subjects to avoid litigation costs. More study and planning will be necessary to determine the specifics, but U.S. officials should act to ensure that research subjects receive help when they are injured on our behalf.

Screening Studies for Quality

The third way officials could make research more ethical would be to require that all human studies undergo rigorous merit review. Applying the beneficence principle, the Common Rule limits research risks to those that are “reasonable in relation to ... the importance of the knowledge that may reasonably be expected to result.” Studies that are poorly designed or conducted cannot produce important knowledge, nor can studies with low social value (13).

Although the Common Rule directs IRBs to consider study value in the review process, this directive is unrealistic. Given their limited membership and time constraints, IRBs cannot rigorously assess the value of the studies they review. The Common Rule authorizes IRBs to enlist scientific experts to assist with specific protocol reviews, but IRBs typically lack the resources to conduct a thorough assessment of research quality.

Peer review is imperfect, but it is the best available way to evaluate research merit. Many human studies undergo rigorous peer review as part of the National Institutes of Health (NIH) funding process, but many others do not. Nonprofit funding organizations don't always have stringent merit review mechanisms in place, nor do industry research sponsors or institutions dispensing internal funds for research. Studies submitted to the NIH Recombinant DNA Advisory Committee undergo rigorous peer review, but this committee reviews just a narrow category of human research.

Poorly designed and conducted trials do not just waste resources; they also expose subjects to unjustified risk and threaten public trust in the research endeavor (14). Independent merit review could be performed by institutional, professional, or government bodies. Developing a comprehensive and rigorous approach to merit review will be challenging, of course. But until such a system is in place, human subjects will be exposed to harm in studies that make no contribution to scientific and medical progress.

Conclusion

Underlying the research oversight system is a fundamental moral judgment: human subjects have interests that should not be subordinated to the interests of the patients, researchers, industry stakeholders, and others who gain health and monetary benefits from the research enterprise. Allegiance to this moral judgment demands more robust federal rules aimed at educating prospective research subjects, helping subjects who are harmed in research, and evaluating the quality of human research proposals.

Acknowledgments

Supported by NIH Grant UL1 RR024992.

References
