Published in final edited form as: Health Care Anal. 2015 Mar;23(1):19–31. doi: 10.1007/s10728-012-0233-0

Paternalism and Utilitarianism in Research with Human Participants

David B Resnik 1

Abstract

In this article I defend a rule utilitarian approach to paternalistic policies in research with human participants. Some rules that restrict individual autonomy can be justified on the grounds that they help to maximize the overall balance of benefits over risks in research. The consequences that should be considered when formulating policy include not only likely impacts on research participants, but also impacts on investigators, institutions, sponsors, and the scientific community. The public reaction to adverse events in research (such as significant injury to participants or death) is a crucial concern that must be taken into account when assessing the consequences of different policy options, because public backlash can lead to outcomes that have a negative impact on science, such as cuts in funding, overly restrictive regulation and oversight, and reduced willingness of individuals to participate in research. I argue that concern about the public reaction to adverse events justifies some restrictions on the risks that competent, adult volunteers can face in research that offers them no significant benefits. The paternalism defended here is not pure, because it involves restrictions on the rights of investigators in order to protect participants. It also has a mixed rationale, because individual autonomy may be restricted not only to protect participants from harm but also to protect other stakeholders. Utility is not the sole justification for paternalistic research policies, since other considerations, such as justice and respect for individual rights/autonomy, must also be taken into account.

Keywords: informed consent, justice, paternalism, research participation, risks, utilitarianism

Introduction

The patient’s rights movement that began in the 1960s eroded paternalistic medical practices and ushered in a new era of autonomy and self-determination in medicine. Key court cases, new laws, and changes in medical practice gave patients the right to make their own decisions and to be fully informed about their medical care (Jonsen, 1988). Though paternalism has largely faded from medicine, it continues to play a significant role in biomedical research involving human participants. In a seminal essay on the topic, Miller and Wertheimer (2007) argue that many ethical guidelines, policies, and regulations pertaining to research with human participants are paternalistic. The authors think it is important to face up to paternalism in research ethics in order to determine whether it is justified. Examples of paternalism discussed by Miller and Wertheimer and others include limits on payments to research participants and prospective review of research by institutional review boards (IRBs), which are considered below.

What these regulations and guidelines have in common is that they restrict the autonomy of research participants for their own good. For example, societies usually do not impose limits on the amount of money that an adult may receive as compensation for time, effort, or labor, but ethical guidelines for research with human participants require that payments not be so excessive that individuals are improperly induced to participate (Grady et al., 2005). Review of research by an IRB can also be considered paternalistic, because the IRB will not approve research unless it determines that the risks to the individual are reasonable in relation to the benefits to the individual and society (Edwards et al., 2004). Normally, competent adults are allowed to take many types of risks without approval from an external authority.

A plausible explanation of paternalism’s influence on human research ethics is that many regulations and guidelines have been developed in reaction to historical abuses of human participants in research, such as Nazi research on concentration camp prisoners, the Tuskegee syphilis study, and the Willowbrook hepatitis experiments (Miller and Wertheimer, 2007). Concerned citizens, policymakers, and scholars have urged governments and institutions to adopt rules that protect human research participants from harm and exploitation by avoiding the mistakes of the past (Shamoo and Resnik, 2009). For example, Congress passed the National Research Act (NRA) in 1974 after holding hearings on the Tuskegee syphilis study and other ethical concerns with biomedical and behavioral research. The NRA authorized federal agencies to develop research regulations and established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which published the Belmont Report in 1979. The Belmont Report set the tone for a revision of the federal regulations by emphasizing the importance of protecting vulnerable populations from harm and exploitation (Shamoo and Resnik, 2009).

As new ethical controversies have emerged, new paternalistic rules have been adopted. For example, in 2006 the U.S. Environmental Protection Agency (EPA) adopted regulations that prohibit the funding of intentional exposure research involving children or pregnant or nursing women. The regulations are more restrictive than the Department of Health and Human Services (DHHS) rules, and prohibit even minimal risk intentional exposure research. The EPA adopted the regulations in response to a Congressional mandate requiring the agency to refrain from funding or relying on pesticide experiments involving children or pregnant or nursing women. The Congressional mandate came about as a result of public concerns about pesticide experiments on human participants conducted by private companies in the 1990s, and an observational study of pesticide exposures in the home conducted by the EPA, known as the Children’s Environmental Exposure Research Study, or CHEERS (Resnik, 2007a, 2007b).

Granted that some regulations and ethical guidelines pertaining to research with human subjects are paternalistic, the question naturally arises as to whether they are justified. Is there a sound ethical basis for some type of paternalism in research involving human participants? To answer these questions, I will first define paternalism and then consider the views of Miller and Wertheimer as well as others who have written on the topic. My main thesis is that some types of paternalistic policies can be justified on utilitarian grounds, not only to protect human participants from harm but also to protect investigators, institutions, funding organizations, and the scientific community. Utilitarian concerns are not the sole reason for paternalism, but they can augment other rationales. Since the justification for paternalism is complex and may vary under different circumstances, I will focus on paternalistic policies related to more than minimal risk research that offers no medical benefits to participants with sound decision-making abilities.

What is Paternalism?

Paternalism has been defined as interfering with someone’s liberty for their own good (Dworkin, 1972, 2012). This is different from interfering with liberty in order to prevent a person from harming someone else. For example, laws against murder, rape, and theft are not paternalistic because they restrict freedom in order to prevent harm to others. Seat belt laws are paternalistic because they restrict a person’s freedom in order to prevent harm to that person.

There are different types of paternalism. Soft paternalism involves interfering with the liberty of someone who has compromised decision-making abilities, due to lack of information, immaturity, mental disability, or other factors (Dworkin, 1972, 2012). For example, preventing someone from unknowingly walking onto a dangerous bridge would be soft paternalism. Laws that prevent children from entering into contracts or purchasing tobacco or alcohol are also examples of soft paternalism. Hard paternalism involves interfering with the liberty of someone who does not have compromised decision-making abilities (i.e., someone who is competent). Seat belt laws applied to adults are hard paternalism, as are laws requiring adult motorcyclists to wear helmets (Dworkin, 1972). Hard paternalism is usually more difficult to justify than soft paternalism because it undermines human freedom. Following in the tradition of the nineteenth-century British philosopher John Stuart Mill (1869), an ardent defender of liberty, many writers argue that competent adults should be allowed to make their own decisions, including choices that many would regard as unwise, as long as they do not endanger others (Feinberg, 1986).

Paternalism may involve restricting the liberty of individuals or groups (Miller and Wertheimer, 2007). When a doctor withholds information from a patient in order to prevent him (or her) from making a choice the doctor deems inadvisable, this would be individual paternalism. In group paternalism, entire groups of people have their liberty restricted. Group paternalism involves the imposition of laws, policies, or other rules that restrict liberty. For example, motorcycle helmet and seatbelt laws are group paternalism.

Very often it is necessary to restrict the liberty of others in order to promote the good of an individual. For example, laws pertaining to the manufacturing, marketing, and sale of alcohol restrict the liberty of alcoholic beverage companies in order to protect consumers from harm. In this case, the class of people whose liberty is restricted and the class whose good is promoted are not the same. Examples like these have been dubbed impure paternalism. In pure paternalism, the class of people whose liberty is restricted and the class whose good is promoted are the same (Dworkin, 2012).

Sometimes paternalistic regulations and laws have mixed rationales in that they are designed to promote the good of the individual and achieve other goals. For example, laws mandating vaccinations against an infectious disease promote the good of individuals who are vaccinated, but they also promote the good of society by reducing the risk that other people will contract the disease. Vaccination laws promote the health of individuals as well as public health. Because an individual’s actions usually impact other people, paternalism often has mixed rationales. For example, one might think that laws against gambling have an unmixed rationale, because they prevent people from gambling for their own good. However, gambling often has negative effects on other people, such as the gambler’s spouse or dependents, so laws that prohibit gambling are mixed paternalism (Kleinig, 1983).

Justifying Paternalism in Research: Miller and Wertheimer

Miller and Wertheimer (2007) argue for a view that they call group soft paternalism. They build their position on the liberal presumption that individuals should be free to make their own decisions and act on them, as long as they don’t endanger others. The application of the liberal presumption to research with human participants is the doctrine of informed consent, which is the cornerstone of research ethics, according to Miller and Wertheimer. Consent is what justifies exposing individuals to risks in order to obtain scientific knowledge. However, if consent is defective because an individual has compromised decision-making abilities, then restrictions may be imposed to protect that individual from harm. Rules could be applied to an entire class of people, such as children (Miller and Wertheimer, 2007). Regulations and policies which limit the types of risks children can be exposed to in research would also constitute a type of impure paternalism, because they restrict the liberty of investigators by preventing them from conducting certain types of research involving children, in order to protect children from harm.

Few people would dispute the idea that there need to be additional protections for individuals with compromised decision-making abilities who participate in research. Soft paternalism is not very controversial. What about hard paternalism in research? Is this ever justified? Miller and Wertheimer question the legitimacy of regulations, guidelines, or policies that restrict the liberty of competent adults to protect them from harm. Much of their discussion of hard paternalism focuses on the issue of setting upper bounds on the risks that adult participants can be exposed to in research. Cancer clinical trials often impose significant risks on participants, such as the possibility of death, but these studies also offer significant benefits, such as the potential for successful treatment or prolongation of life. These are not the kind of studies that Miller and Wertheimer have in mind. They are concerned with studies in which the participants face significant risks but are not expected to receive significant benefits. Research regulations do not impose an upper limit on the risks that competent adult volunteers may be exposed to in research, although the Nuremberg Code (1949) prohibits experiments in which there is an a priori reason to expect that death or disabling injury will occur, except when the investigators also serve as research subjects (Resnik, 2012).

Miller and Wertheimer (2007) consider the example of U.S. Army physician Walter Reed’s famous yellow fever experiments. At the beginning of the 20th century, Reed conducted experiments to determine the cause of yellow fever, which was a major public health problem in tropical regions. Eighteen Americans, including several investigators, and fifteen Spanish immigrants, were exposed to mosquitoes carrying yellow fever or received an injection of infected blood. Six people died, including one investigator (Lederer, 2008). Miller and Wertheimer suggest that there is no good reason to disallow research like the yellow fever experiments if the subjects are competent and provide valid informed consent. Some people may be willing to accept grave risks in order to make an important contribution to science and public health.

Miller and Wertheimer (2007) consider, but do not endorse, a potential argument for placing restrictions on risks in research like the yellow fever experiments. The argument is that such restrictions are necessary to protect the research enterprise from a loss of public trust. If participants become gravely ill or die in an experiment, the resulting public backlash could undermine future studies, lead to burdensome regulation and oversight, and reduce the willingness of individuals to enroll in research (Yarborough and Sharp, 2009; Resnik, 2012). Miller and Wertheimer (2007) consider the public trust argument for paternalism to be coherent and plausible, but they criticize it on the grounds that banning some types of high-risk research might itself lead to negative consequences for the research enterprise, such as lost opportunities.

I don’t think the public trust argument should be dismissed so quickly. When research participants who are not already very sick become gravely ill or die in research, the public reaction can be significant and the costs to investigators, institutions and the scientific community can be great. Some incidents that have led to a public outcry and investigations by authorities include:

  • Eighteen-year-old Jesse Gelsinger volunteered for a Phase I gene therapy trial at the University of Pennsylvania in 1999. Though Gelsinger was not completely healthy, his condition, ornithine transcarbamylase deficiency (a liver enzyme deficiency), was well controlled. The experiment involved infusing an adenovirus vector into Gelsinger’s liver in order to transfer a gene that codes for the ornithine transcarbamylase enzyme into his liver cells. Though there was a remote chance that Gelsinger could benefit from the research, he was not expected to; the benefits would accrue to society. Gelsinger died from a massive immune response to the adenovirus just a few days after receiving the infusion. His death resulted in negative publicity that had an adverse impact on the university and the field of gene therapy. It also led to a lawsuit as well as investigations by federal agencies, including the Food and Drug Administration and the Office for Human Research Protections. Because the investigator and institution had conflicts of interest that were not fully disclosed, several organizations, including the National Institutes of Health and the Association of American Medical Colleges, revised their conflict of interest guidelines in response to this incident (Yarborough and Sharp, 2009).

  • In 2006, eight healthy volunteers enrolled in a Phase I trial of TGN1412, a humanized monoclonal antibody that is an agonist for the CD28 receptor on T cells, conducted by Parexel, a contract research organization. The six volunteers who received the antibody developed a massive immune response and multiple organ dysfunction and were hospitalized; the two volunteers who received a placebo were unharmed. The study was sponsored by TeGenero Immuno Therapeutics and took place at Northwick Park and St. Mark’s Hospital, London. The incident led to a public outcry in the U.K. and an investigation by the Medicines and Healthcare Products Regulatory Agency (Goodyear, 2006).

  • In 2001, Ellen Roche died from respiratory distress after inhaling hexamethonium, a drug used to block nerves that protect airways, as part of an asthma study conducted at Johns Hopkins University. Although Roche had asthma, she was otherwise healthy. The incident led to investigations by federal authorities (Steinbrook, 2002).

Incidents involving healthy (or nearly healthy) volunteers can cause considerable public outrage because they are unexpected and may seem unfair (Steinbrook, 2002; Yarborough and Sharp, 2009). Though deaths in cancer clinical trials are never welcome news, they are, in some sense, expected, because the participants are often seriously ill and have a poor prognosis. They may die as a result of their disease even if they do not die from an experimental intervention. Also, when a patient with advanced cancer dies in a clinical trial, people will not usually view this occurrence as unfair, because the patient had a chance of benefitting from study participation. One could argue that since serious adverse events involving healthy volunteers can lead to very negative public reactions, rules limiting the risks that participants can be exposed to in these studies are justified (Resnik, 2012). Hard paternalism may therefore be appropriate in some situations in research. Though Miller and Wertheimer do not reject hard paternalism outright, their failure to provide a clear justification for some types of hard paternalism is a weakness of their view.

Justifying Paternalism: Jansen and Wall

Jansen and Wall (2009) develop a different argument for paternalism in research. The authors frame their argument not in terms of liberty and informed consent but in terms of the just distribution of the benefits and burdens of research. Jansen and Wall base their view on Arneson’s (1989) critique of libertarian philosophy. Arneson argues that if there are no restrictions on liberty in society, then unequal distributions of welfare will occur, because people differ in their decision-making abilities. Many factors affect decision-making ability, including maturity, mental illness, poverty, and education. Those who are poor decision-makers will tend to choose to participate in activities in which the personal risks outweigh the benefits, while good decision-makers will tend to avoid these activities. Over time, this process will likely result in unequal distributions of welfare in society because poor decision-makers will face greater risks with fewer benefits than good decision-makers. Arneson (1989) argues that some paternalistic laws and policies are necessary to address welfare inequalities resulting from differences in decision-making abilities. People can be protected from harm not just for their own good but also to promote distributive justice.

Jansen and Wall apply Arneson’s view to research with human participants. If there were no ethical requirements for research participation beyond informed consent, according to the authors, then poor decision-makers would tend to incur more risks and fewer benefits than good decision-makers, because they are less adept at reasoning about benefits and risks. Differences in welfare due to the decisions people make concerning research participation would therefore be likely to arise. Whether these differences are fair can be evaluated from the point of view of distributive justice. Society should take steps, according to Jansen and Wall, to deal with unfair differences in welfare resulting from research participation. The authors do not claim that there should be no welfare inequalities as a result of research participation; they only maintain that these inequalities should be addressed. To minimize welfare inequalities resulting from research participation, some rules are necessary to protect poor decision-makers from harm, according to Jansen and Wall (2009).

Most people would agree that policies that limit individual choices are necessary to protect certain classes of poor decision-makers, such as children, mentally disabled people, and prisoners, from harm related to research participation. This view is precisely the group soft paternalism defended by Miller and Wertheimer and embodied in various regulations and guidelines. But Jansen and Wall (2009) go beyond soft paternalism and argue that their view also applies to competent adults who participate in research. Competent adults also differ in their decision-making abilities: some are well-educated, while others are not; some are highly susceptible to the influence of money, while others are not as susceptible; and some make foolish choices, while others do not. Differences in the decision-making abilities of competent adults who decide to participate in research will also probably lead to differences in welfare. Jansen and Wall (2009) argue that laws and policies that limit the choices of competent adults in research can be justified in order to address unfair distributions of welfare resulting from research participation decisions. Though they pitch their argument in very general terms and do not defend any particular regulation or guideline, their view has clear policy implications. Jansen and Wall suggest that their argument might justify prohibitions against more than minimal risk research that does not offer significant benefits to competent participants.

Edwards and Wilson (2012) criticize Jansen and Wall’s view on the grounds that the concept of a poor decision-maker is not well-defined. This is an important critique of Jansen and Wall’s view that I will not pursue further here. Instead, I would like to focus on the implications of Jansen and Wall’s view for scientific research. At the conclusion of their article, Jansen and Wall (2009) consider the objection that their view, if adopted, could deny society important benefits by prohibiting some types of research. For example, their view might prohibit studies like Reed’s yellow fever experiments or even Phase I drug studies in which participants are not likely to benefit but face considerable risks, such as toxicity (Edwards and Wilson, 2012). Jansen and Wall (2009) respond to this critique by claiming that these adverse consequences probably will not arise, but that a further defense of their argument requires showing that considerations of utility do not outweigh distributive fairness. The point I would like to press here is that I do not think Jansen and Wall give enough credit to utilitarian objections to their view. There may be sound reasons for avoiding the type of hard paternalism they defend in research if the policies implied by their view would prohibit studies that can yield important social benefits.

Paternalism and Utility

A common theme in my critiques of the views defended by Miller and Wertheimer and Jansen and Wall is that considerations of utility should be taken into account when formulating policies that protect research participants from harm. Policymakers should be mindful of the overall consequences (both good and bad) of the policies they propose. The consequences may include potential harm to research participants, investigators, institutions, sponsors, the scientific community, and society; as well as potential benefits to these same stakeholders. The type of utilitarianism that I would advocate for assessing research policies would be rule-utilitarianism, which evaluates rules in relation to utility (Brandt, 1998). In deciding whether to adopt a rule (i.e. a regulation or guideline), one should consider the consequences for society of implementing the rule.

To apply rule utilitarianism to research with human participants, let’s consider the case that we have focused on in this essay: research that imposes significant risks on competent participants but offers them no significant benefits. As noted earlier, Miller and Wertheimer’s view implies that this research should not be prohibited, provided that valid informed consent is obtained, whereas Jansen and Wall’s view implies that it should be prohibited if it is likely to produce unfair differences in welfare. I have argued that both of these approaches are mistaken because they do not adequately consider the consequences of research policies. The utilitarian view offers a more nuanced perspective. According to the utilitarian view, we should develop policies that maximize the good consequences and minimize the bad. Utilitarians could argue that this type of research should be prohibited above a particular level of risk but allowed below that level. When research is above that level of risk, the bad consequences outweigh the good ones, whereas when it is below that level, the good consequences outweigh the bad. Establishing a particular level of risk would depend on a careful assessment of the facts, such as the risks to participants and others, the potential benefits for science and society, and so on (Resnik, 2012).

For example, consider a hypothetical study on the effects of diesel exhaust on pulmonary function. Healthy adult volunteers, age 18–55, will be exposed to air containing diesel exhaust in a sealed chamber. The percentage of diesel exhaust in the air will not be greater than what one would normally encounter while walking on a city street. The participants will breathe the air for two hours and ride a stationary bike for two twenty-minute intervals. Samples of blood, urine, and sputum will be collected, and blood pressure, pulse, respiration rate, blood oxygenation, and other physical measurements will be recorded. After they have finished their time in the chamber, the volunteers will undergo a transbronchial biopsy, a procedure in which a tube is passed through the mouth and down the trachea to collect a small tissue sample from the lung. The procedure, which requires sedation, has a risk of death of 60 out of 100,000 cases (0.06 percent). There are other less serious risks as well, such as bleeding or bruising, temporary breathing difficulties, and infection. Less serious risks occur in about 15% of cases. The subjects will not benefit from this study, but society may benefit as a result of the knowledge gained about how diesel exhaust impacts the human lung.

In thinking about the balance of benefits and risks in this study from a utilitarian perspective, one must determine whether the benefits of the study justify the risks to the subjects, the institution, and the research enterprise. The most significant risk, the risk of death, is only 0.06 percent. Most people would probably consider this an acceptable risk, given the benefits of the study. However, if the risk of death were higher than 1%, many would consider the study too risky to perform on healthy volunteers. If someone dies in an experiment in which the risk of death is known in advance to be greater than 1%, there would not only be severe consequences for the volunteer (i.e., premature death) but also, most likely, a significant public backlash that could affect the institution and the field. Regulators might investigate the institution, and society might enact new laws designed to protect volunteers from research risks. These are the kinds of utilitarian considerations one should take into account when deciding whether there should be limits on the risks that healthy volunteers face in research.
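To make the threshold reasoning concrete, the arithmetic behind it can be sketched as follows. This is a minimal illustration only, not part of the original argument: the cohort size, the 1% cutoff, and the function names are hypothetical assumptions used to show how a per-procedure risk translates into expected adverse events across a study.

```python
# Minimal sketch of the risk-threshold arithmetic discussed above.
# The cohort size and the 1% cutoff are hypothetical assumptions for illustration.

def expected_events(per_person_risk: float, n_participants: int) -> float:
    """Expected number of adverse events if each participant independently
    faces the same per-person risk."""
    return per_person_risk * n_participants

def exceeds_threshold(per_person_risk: float, threshold: float = 0.01) -> bool:
    """True if a study's most serious risk exceeds the (hypothetical) policy cutoff."""
    return per_person_risk > threshold

# Figures from the hypothetical diesel exhaust study described in the text.
risk_of_death = 60 / 100_000   # 0.06 percent per transbronchial biopsy
risk_minor = 0.15              # less serious complications, about 15% of cases
cohort = 100                   # hypothetical enrollment

print(expected_events(risk_of_death, cohort))  # 0.06 expected deaths per 100 participants
print(expected_events(risk_minor, cohort))     # ~15 expected minor complications
print(exceeds_threshold(risk_of_death))        # False: 0.06% is below the 1% cutoff
print(exceeds_threshold(0.02))                 # True: a 2% risk of death would be prohibited
```

On this sketch, the rule-utilitarian question is simply whether the expected harms at a given risk level, together with the likely public reaction if they materialize, are outweighed by the study's expected social benefits.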

Objections and Replies

One of the standard objections to utilitarianism is that it does not provide adequate protection for the rights and welfare of individuals. Utilitarians are willing to sacrifice the good of the individual for the good of society (Smart and Williams, 1973). For example, one might argue that a utilitarian would recommend killing an innocent, healthy person in order to use their organs to save five people who need organ transplants. Rule utilitarians can avoid this disturbing implication because they focus on the consequences of rules, not the consequences of particular actions. Rule utilitarianism would not endorse a rule like "kill an innocent, healthy person to use their organs to save five other people" because the negative consequences of adopting the rule, such as diminished respect for human life, corruption of the practice of medicine, and public outrage, would far outweigh the good ones (Brandt, 1998).

This critique of utilitarianism manifests itself in debates about research ethics. Some writers have objected to the utilitarian approach to research with human participants on the grounds that it leads investigators to unethically sacrifice the rights and welfare of individuals for the sake of science (Jonas, 1969). According to the line of reasoning advanced here, utilitarianism need not have these undesirable implications because the benefits of policies that allow researchers to compromise individual rights and welfare may not outweigh the harms to the participant and other stakeholders. As noted earlier, failure to protect participants from risks often leads to a public outcry, which will have dire consequences for investigators, institutions, sponsors, and the scientific community. Earlier discussions of utilitarian approaches to research ethics compared the potential harms to the individual and the social good of the knowledge produced. If one thinks of the justification of research policies in this manner, then individuals may not be adequately protected from risk. However, if one includes the public reaction to adverse events in the analysis, the balance shifts toward providing additional protections for individuals.

Another standard objection to utilitarianism is that it does not provide an adequate account of distributive justice, because utilitarians hold that distributions of welfare should achieve the greatest balance of benefits over harms for society. Utilitarianism might recommend radically unequal welfare distributions (as occurs in institutionalized slavery) if these maximize utility. One might argue that such distributions would be unfair even if they promote the good of society (Smart and Williams, 1973). Rule utilitarians can respond to this critique by arguing that radically unequal distributions of welfare will not promote the good of society, because they will lead to class envy, crime, disease, and other social problems. Thus, rule utilitarians can recognize the importance of addressing socioeconomic inequalities (Brandt, 1998).

A third objection to utilitarianism is that the policies it recommends are uncertain, because we often lack evidence concerning the likely consequences of different options (Smart and Williams, 1973). For example, we do not know whether restrictions on some types of risky research that offers participants no significant benefits will help prevent a public backlash. If something goes wrong, a public backlash might occur even if restrictions are in place. One might argue that ethics should not be uncertain; it should be based on epistemically substantiated claims.

While these epistemic problems with utilitarianism are important, they do not defeat the theory, since most ethical theories must deal with these issues. Any ethical theory used to draw policy implications from an assessment of the facts will face epistemological difficulties, because the “facts” are based on scientific evidence, which is subject to revision. At one point it was an accepted fact that the Earth is the center of the solar system, but this changed as a result of discoveries by Copernicus, Galileo and other astronomers. Because the facts are subject to revision, some degree of uncertainty is inevitable when drawing policy implications from ethical theories. We need not allow uncertainty to stifle policy formation, however. We can forge ahead with the best evidence we have at hand, knowing that the policies we adopt may need to be changed in light of new information.

Although I think these objections to utilitarianism do not undermine its usefulness as a tool for developing research policies, I recognize that they raise important concerns about the theory that are not easily dismissed. A full defense of rule utilitarianism is beyond the scope of this article. However, I do not need to defend utilitarianism against all critiques in order to maintain the more modest thesis that it should play a key role in evaluating research policies. An assessment of the consequences of different rules should inform policy development, but other considerations should also be taken into account, such as individual rights/welfare and distributive justice. The ideal policy framework will strike an appropriate balance among utility, protection of individual rights/welfare and distributive justice. Indeed, this is similar to the view adopted by the authors of the Belmont Report, who argued that research ethics should strike a reasonable balance between three fundamental principles: respect for persons (understood as protecting individual rights/welfare), beneficence (understood as maximizing benefits and minimizing risks), and justice (understood as promoting a fair distribution of the benefits and burdens of research) (National Commission, 1979). Thus, utilitarianism can supplement, but not supplant, other approaches to justifying paternalism in research.

A final objection to the view advanced here is that it assumes that the public will react negatively to adverse outcomes in research with human participants. The public acts as a kind of moral check on the behavior of investigators, institutions, and sponsors. However, the public may not react negatively to adverse outcomes if it is ill-informed or simply does not care about the risks that human participants face in research. For example, from the 1940s to the 1990s, the U.S. government sponsored secret experiments that exposed individuals to ionizing radiation. Most of the participants did not provide informed consent for these experiments, and many were harmed. The public did not object to these experiments because it knew nothing about them until the Clinton Administration declassified them in 1994. From 1932 to 1972, the U.S. government sponsored the Tuskegee syphilis study (mentioned earlier), an observational study in which 400 African American males with syphilis received no medical treatment for their disease even after an effective medication became available in the 1940s. The participants also did not consent to participating in research. Though many people were aware of the study, which was publicly funded, most did not care much about the research until the media covered the story and Congress held hearings on it in 1972 (Shamoo and Resnik, 2009).

I admit that lack of public reaction to the adverse impacts of research is a potential problem with my view, but I think it can be overcome as long as the public is adequately informed about research and understands its moral aspects. To help ensure that the public is well-informed, it is important to promote transparency and openness in research involving human participants. The public should have access to information about studies that are being conducted, and government agencies should have the ability to oversee research. The media should report on stories of interest to the public. A well-informed and interested public is essential to the ethical conduct of research with human participants.

Conclusion

In this article I have defended a rule utilitarian approach to paternalistic policies in research with human participants. Some rules that restrict individual autonomy can be justified on the grounds that they help to maximize the overall balance of benefits over risks in research. The consequences that should be considered when formulating policy include not only likely impacts on research participants, but also impacts on investigators, institutions, sponsors, and the scientific community. The public reaction to adverse events in research (such as significant injury to participants or death) is a crucial concern that must be taken into account when assessing the consequences of different policy options, because public backlash can lead to outcomes that have a negative impact on science, such as cuts in funding, overly restrictive regulation and oversight, and reduced willingness of individuals to participate in research. I have argued that concern about the public reaction to adverse events justifies some restrictions on the risks that competent, adult volunteers can face in research that offers them no significant benefits. The paternalism defended here is not pure, because it involves restrictions on the rights of investigators in order to protect participants. It also has a mixed rationale, because individual autonomy may be restricted not only to protect participants from harm but also to protect other stakeholders. Finally, utility is not the sole justification for paternalistic research policies, since other considerations, such as justice and respect for individual rights/autonomy, must also be taken into account. While this article has focused on restrictions on research involving competent adults that poses significant risks but offers no significant benefits to participants, the view defended here has implications for other types of paternalistic measures, such as IRB review, limits on financial incentives for participation, and requirements for informed consent. Other authors may wish to comment on the justification of these policies.

Acknowledgments

This article is the work product of an employee or group of employees of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). However, the statements, opinions or conclusions contained therein do not necessarily represent the statements, opinions or conclusions of NIEHS, NIH or the U.S. government.

References

  1. Arneson R. Paternalism, Utility and Fairness. Revue Internationale de Philosophie. 1989;170:409–23.
  2. Brandt R. A Theory of the Good and the Right, revised edition. New York: Prometheus Books; 1998.
  3. Council for International Organizations of Medical Sciences. International Ethical Guidelines for Biomedical Research Involving Human Subjects, 2002 revision. 2002. Available at: http://www.cioms.ch/publications/layout_guide2002.pdf. Accessed: 28 June 2012.
  4. Department of Health and Human Services. Protection of Human Subjects, 45 CFR 46. 2009.
  5. Dworkin G. Paternalism. The Monist. 1972;56:64–84.
  6. Dworkin G. Paternalism. Stanford Encyclopedia of Philosophy. 2012. Available at: http://plato.stanford.edu/entries/paternalism/. Accessed: 29 June 2012.
  7. Edwards SJ, Wilson J. Hard Paternalism, Fairness and Clinical Research: Why Not? Bioethics. 2012;26:68–75. doi: 10.1111/j.1467-8519.2010.01816.x.
  8. Edwards SJ, Kirchin S, Huxtable R. Research Ethics Committees and Paternalism. Journal of Medical Ethics. 2004;30:88–91. doi: 10.1136/jme.2002.000166.
  9. Feinberg J. Harm to Self. New York: Oxford University Press; 1986.
  10. Goodyear M. Learning from the TGN1412 Trial. British Medical Journal. 2006;332:677–8. doi: 10.1136/bmj.38797.635012.47.
  11. Grady C, Dickert N, Jawetz T, Gensler G, Emanuel E. An Analysis of U.S. Practices of Paying Research Participants. Contemporary Clinical Trials. 2005;26:365–75. doi: 10.1016/j.cct.2005.02.003.
  12. Jansen LA, Wall S. Paternalism and Fairness in Clinical Research. Bioethics. 2009;23:172–82. doi: 10.1111/j.1467-8519.2008.00651.x.
  13. Jonas H. Philosophical Reflections on Experimenting with Human Subjects. Daedalus. 1969;98:219–247.
  14. Jonsen A. The Birth of Bioethics. New York: Oxford University Press; 1988.
  15. Kleinig J. Paternalism. Manchester: Manchester University Press; 1983.
  16. Lederer S. Walter Reed and the Yellow Fever Experiments. In: Emanuel E, Grady C, Crouch R, Lie R, Miller F, Wendler D, editors. The Oxford Textbook of Clinical Research Ethics. New York: Oxford University Press; 2008. pp. 9–17.
  17. Mill JS. On Liberty. Indianapolis, IN: Hackett Publishing Company; 1869 [1978].
  18. Miller FG, Wertheimer A. Facing up to Paternalism in Research Ethics. Hastings Center Report. 2007;37(3):24–34. doi: 10.1353/hcr.2007.0044.
  19. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: Department of Health, Education, and Welfare; 1979.
  20. Nuremberg Code. Directives for Human Experimentation. 1949. Available at: http://ohsr.od.nih.gov/guidelines/nuremberg.html. Accessed: 2 July 2012.
  21. Resnik DB. The New EPA Regulations for Protecting Human Subjects: Haste Makes Waste. Hastings Center Report. 2007a;37(1):17–21. doi: 10.1353/hcr.2007.0013.
  22. Resnik DB. Are the New EPA Regulations Concerning Intentional Exposure Studies with Children Overprotective? IRB. 2007b;29(5):5–7.
  23. Resnik DB. Limits on Risks for Healthy Volunteers in Biomedical Research. Theoretical Medicine and Bioethics. 2012;33:137–149. doi: 10.1007/s11017-011-9201-1.
  24. Shamoo AS, Resnik DB. Responsible Conduct of Research. 2nd ed. New York: Oxford University Press; 2009.
  25. Smart JJC, Williams B. Utilitarianism: For and Against. Cambridge: Cambridge University Press; 1973.
  26. Steinbrook R. Protecting Research Subjects—the Crisis at Johns Hopkins. New England Journal of Medicine. 2002;346:716–20. doi: 10.1056/NEJM200202283460924.
  27. Wertheimer A. Rethinking the Ethics of Clinical Research: Widening the Lens. New York: Oxford University Press; 2008.
  28. World Medical Association. Declaration of Helsinki, 2008 revision. 2008. Available at: http://www.wma.net/en/30publications/10policies/b3/index.html. Accessed: 28 June 2012.
  29. Yarborough M, Sharp R. Public Trust and Research a Decade Later: What Have We Learned since Jesse Gelsinger’s Death? Molecular Genetics and Metabolism. 2009;97:4–5. doi: 10.1016/j.ymgme.2009.02.002.
