Abstract
Healthy volunteers in biomedical research often face significant risks in studies that offer them no medical benefits. The U.S. federal research regulations and laws adopted by other countries place no limits on the risks that these participants face. In this essay, I argue that there should be some limits on the risks for biomedical research involving healthy volunteers. Limits on risk are necessary to protect human participants, institutions, and the scientific community from harm. With the exception of self-experimentation, limits on research risks faced by healthy volunteers constitute a type of soft, impure paternalism, because participants usually do not fully understand the risks they are taking. I consider some approaches to limiting research risks and propose that healthy volunteers in biomedical research should not be exposed to greater than a 1% chance of serious harm, such as death, permanent disability, or severe illness or injury. While this guideline would restrict research risks, the limits would not be so low that they would prevent investigators from conducting valuable research. They would, however, set a clear upper boundary for investigators and signal to the scientific community and the public that there are limits on the risks that healthy participants may face in research. This standard provides guidance for decisions made by oversight bodies, but it is not an absolute rule. Investigators can enroll healthy volunteers in studies involving a greater than 1% chance of serious harm if they show that the research addresses a compelling public health or social problem and the risk of serious harm is only slightly more than 1%. The committee reviewing the research should use outside experts to assess these risks.
Keywords: human participant research, risk, ethics, regulations, paternalism, healthy volunteers
Introduction
Healthy participants in biomedical research often face significant risks in studies that offer them no medical benefits [1]. Although there are no systematic data on the risks that healthy volunteers typically face, anecdotal evidence suggests these can be significant [2]. For example, on March 13, 2006, six healthy participants in a Phase I trial at Parexel's clinical pharmacology research unit at Northwick Park Hospital in London, U.K., developed a dangerous immune reaction to a monoclonal antibody known as TGN1412 and had to be hospitalized with multiple organ dysfunction [3]. On June 2, 2001, twenty-four-year-old Ellen Roche died after developing respiratory distress from inhaling hexamethonium, a drug used to block the nerves that protect the airways, as part of an asthma study conducted at Johns Hopkins University [4]. On March 31, 1996, Hoiyan Wan, a healthy nineteen-year-old nursing student, died after receiving a fatal dose of lidocaine during a bronchoscopy performed at the University of Rochester as part of an air pollution study [5]. Although none of these studies was considered excessively risky when it was initiated, serious harms nonetheless occurred.
Walter Reed's famous yellow fever experiments on healthy volunteers, by contrast, were considered very risky from the outset. Yellow fever was a major public health and economic concern in tropical regions of the world at the beginning of the 20th century, with a mortality rate of 10%–60% [6]. During the Spanish-American War, 400 U.S. soldiers died from yellow fever and 2,000 contracted the disease. Though the signs and symptoms of the disease were well known, the mechanism of transmission was not, and there was no cure. Reed and his scientific collaborators believed that mosquitoes transmitted the disease, but they needed proof. In one experiment, healthy volunteers were exposed to mosquitoes while a control group was not. In another, participants in the experimental group were injected with blood from yellow fever patients. Eighteen Americans, including several researchers, and fifteen Spanish immigrants participated in the studies. They signed consent documents, which were translated into Spanish, informing them that they could contract yellow fever, a life-threatening disease. Six participants developed yellow fever after receiving mosquito bites, and one developed the disease after an injection. Jesse Lazear, one of Reed's collaborators, died from the disease. After Reed proved that Aedes mosquitoes were the vector of the disease, the U.S. Army began a mosquito eradication program, which helped to reduce the threat. The yellow fever researchers and participants were hailed as heroes [6].
U.S. law sets no definite limits on the level of risk that healthy volunteers may face in research. Federal research regulations require only that risks be minimized and be reasonable in relation to the benefits to participants and the expected gain in knowledge (a social benefit) [7,8]. Determining whether risks are reasonable involves careful balancing of risks and benefits: the greater the risks, the greater the benefits must be to justify those risks [9]. Other countries have adopted similar standards concerning risks. For example, Australia [10], Canada [11], Hungary [12], India [13], Kuwait [14], the Netherlands [15], Nigeria [16], South Africa [17], and the U.K. [18] do not set absolute limits on research risks but require that risks be justified in terms of benefits.
Among international ethics guidelines, only the Nuremberg Code sets absolute limits on research risks; the Helsinki Declaration [19] and the Council for International Organizations of Medical Sciences (CIOMS) guidelines [20] hold only that risks should be justified with respect to benefits to the participant and the value of the knowledge gained. The Nuremberg Code states that "No experiment should be conducted where there is an a priori reason to believe that death or disabling injury will occur; except, perhaps, in those experiments where the experimental physicians also serve as subjects" [21]. However, the Code is not a useful guide for thinking about limits on research risks, because it uses the vague phrase "a priori reason" and does not specify the probability of death or disabling injury above which an experiment would be prohibited. The Code also does not appear to allow potentially life-saving research in oncology in which participants face a risk of death or disabling injury [22]. Nor does the Code explain why a risk of death or severe disabling injury is acceptable if the investigators also participate in the study, though some have speculated that this clause was included to provide a post hoc justification for Reed's yellow fever experiments [23].
With the notable exception of essays by Miller and Joffe [22], London [24], and Rid and Wendler [25], the bioethics literature has little in-depth discussion of acceptable risk limits for research on healthy volunteers. The goal of the present inquiry is to develop an ethical framework for setting limits on the risks that may be imposed on healthy volunteers in research. The framework can be used to guide decisions made by institutional review boards (IRBs) or other committees that oversee research involving human participants.
Preliminary Remarks
To set the stage for my arguments, it is important to have a clear understanding of the kind of research I have in mind. First, I will focus only on risks in research involving healthy volunteers. I will not examine risks in research in which participants may receive medical benefits, since this involves a very different consideration of risks and benefits than research on healthy volunteers [24]. Most people would agree that it would be acceptable for a terminally ill patient to participate in a Phase II clinical trial in which he has a 10% chance of dying from the experimental treatment, if participation in this trial offers him the best chance of long-term survival. Significant risks may be taken in research when the potential medical benefits for the participant are also significant. However, the situation is very different when the participant is a healthy volunteer not expected to derive any medical benefits from the research. Most people would have concerns about allowing a healthy volunteer to participate in an experiment in which there is a 5% chance of death. Some commentators doubt that an IRB would approve Reed’s yellow fever experiments if they took place today [22].
Second, I will also not concern myself with risks in studies involving vulnerable populations, such as children, pregnant women, prisoners, or cognitively impaired individuals, since these studies raise issues about risks that are very different from the issues that arise when healthy, non-pregnant adults participate in research. A competent (or rational) adult can make an autonomous decision to accept or avoid risks, whereas a child cannot. There is a moral obligation to limit the risks that children face in research, because they cannot protect themselves [26]. Federal regulations place specific limits on the risks that pregnant women, prisoners, and children may be exposed to in research [7]. Though the federal regulations do not specifically address the risks that cognitively impaired individuals may face in research, the Helsinki Declaration [19] and the CIOMS guidelines [20] do. However, as we shall see below, the rationale for limiting the risks that healthy, adult volunteers face in research has much in common with the rationale for limiting the risks that vulnerable participants face.
Arguments for Limiting Risks to Healthy Volunteers
There are two main arguments for setting some limits on the risks faced by healthy volunteers in biomedical research. The primary argument is to protect research participants from harm. One could argue that limitations on risks are necessary to protect individuals from participating in research in which they face a significant chance of serious harm, which can be defined as harm that (1) is permanent, such as death, disability, or chronic illness, or (2) involves injury, illness, or trauma that requires hospitalization or extensive medical or psychological treatment [27]. Limitations on the risks that healthy volunteers can take in biomedical research would be similar to laws concerning food and drug safety. These laws allow people to take risks within a legal framework that provides protection from harm [28].
A secondary argument is to protect the research institution and the scientific community from harm [25]. The death of a healthy volunteer in biomedical research can be a traumatic event, often leading to investigations and sanctions from oversight authorities as well as lawsuits [4]. Additionally, negative publicity from the incident can harm the institution and the scientific community by eroding public trust in research. The death of eighteen-year-old Jesse Gelsinger in a Phase I gene therapy experiment at the University of Pennsylvania on September 17, 1999 led to investigations by the Food and Drug Administration (FDA) and the Office for Human Research Protections (OHRP) and a lawsuit brought by his parents. Negative publicity from the case had an adverse impact on the university and on the field of gene therapy research [29].
Unjustified Paternalism?
Though these two arguments have considerable merit, they must overcome the potential objection that restrictions on the risks that competent adults choose to take in biomedical research would be an unjustified, paternalistic interference with human freedom. Several ethical traditions oppose paternalism. Kantians object to paternalism because it violates human dignity and autonomy by treating individuals as mere instruments for social good [30,31]. Libertarians argue that paternalistic laws and regulations are unjust because the purpose of government is to protect our fundamental rights, not to promote our good [32]. Even some utilitarians, such as John Stuart Mill, argue that paternalism is usually wrong because it produces more harm than benefit in most cases, since people are the best judges of their own good and will resist choices imposed on them by individuals or governments [33].
To respond to the charge of paternalism, it will be useful to say a bit more about this topic, and explain why paternalism may sometimes be justified in biomedical research. As others have observed, many of the regulations governing the conduct of research with human participants are paternalistic [34]. For example, informed consent requirements are paternalistic in that they set terms and conditions on what can be construed as a contract between consenting adults. Rules against excessive monetary incentives in research are paternalistic because they restrict the choices that competent adults can make concerning risks and financial rewards [34].
Paternalism is the doctrine that it is ethical to interfere with a person's freedom in order to promote his or her own good, which includes preventing self-inflicted harm [30]. There are different types of paternalism. Soft paternalism involves restricting a person's freedom because the person lacks sufficient cognitive abilities, information, or understanding to make a sound decision [30]. Even Mill, one of the most ardent defenders of liberty, acknowledged that it is ethical to limit the freedom of children and mentally ill people to protect them from harm, and to stop a competent adult from walking, unknowingly, onto a dangerous bridge, on the assumption that the person does not understand the risks [33]. Kantians might also admit that soft paternalism is sometimes justifiable, since a person who lacks sufficient cognitive abilities, information, or understanding cannot make a fully autonomous choice. Age restrictions on driving, purchasing alcohol or tobacco, and military service are forms of soft paternalism. Laws requiring a doctor's prescription to purchase some types of drugs are also a form of soft paternalism, because most people do not have enough knowledge of medicine and pharmacology to decide how to use these chemicals properly.
Pure paternalism occurs when the class of individuals whose freedom is restricted and the class whose good is promoted are the same. Impure paternalism occurs when these two classes are different [30]. For example, food safety regulations are impure paternalism because they restrict the freedom of food manufacturers in order to promote the health of consumers. Since most of our actions have significant impacts on other people, cases of pure paternalism are rare. Even situations that seem like pure paternalism may actually be impure, because the good of people other than those whose liberty is restricted may be implicated. For example, laws requiring motorcyclists to wear helmets protect motorcyclists from harm, but they also save society the health care costs incurred when people are injured or disabled in motorcycle accidents.
Most restrictions on the risks that participants are exposed to in biomedical research are soft paternalism. Limitations on the risks faced by children or cognitively impaired adults, mentioned above, would be soft paternalism, because these participants may have compromised decision-making abilities. Limitations on the risks that competent, adult volunteers face in research can also be viewed as soft paternalism, because these participants often do not fully understand the risks they are taking, owing to their lack of knowledge and expertise. Though consent documents and discussions are intended to provide participants with some information about risks, this information is often incomplete, and people rarely understand it in depth [35]. Most laypeople do not understand what can happen to their bodies if they ingest an experimental drug, undergo a bronchoscopy, or receive an injection of a monoclonal antibody. They are not doctors or scientists. One could argue that soft paternalism strikes an appropriate balance between protecting people from harm and respecting autonomy in research.
Hard paternalism is more difficult to defend than soft paternalism because it involves restricting a competent adult's freedom even when the person has sufficient understanding and information to make a decision. Stopping a person from knowingly walking onto a dangerous bridge would be hard paternalism [30]. Requiring motorcycle riders to wear a helmet is a form of hard paternalism, because most motorcyclists understand the risks of riding without one. Requiring a doctor to have a prescription written by someone else to use a drug would also be hard paternalism, assuming the doctor knows how to use the drug properly.
Hard paternalism would be implicated in restrictions on the risks that investigators take when they experiment upon themselves [36]. Reed's experiments, mentioned above, were not a paradigm case of self-experimentation, even though investigators served as human subjects, because the experiments also included subjects who were not investigators. For a paradigmatic case of self-experimentation, consider Barry Marshall's research on peptic ulcers. While working as an internal medicine fellow at Royal Perth Hospital in Australia, Marshall drank a solution containing H. pylori to prove that these bacteria can cause peptic ulcers. He experimented upon himself, in part, because he had had difficulty infecting laboratory animals. Marshall developed an ulcer within five days and responded well to antibiotic treatment. His pioneering work showed that many peptic ulcers can be successfully treated with antibiotics. Marshall won the Nobel Prize in Physiology or Medicine in 2005 for his discovery [37].
What would be a possible rationale for prohibiting experiments like Marshall's? Since Marshall was a competent adult who understood the risks of the experiment and was under no coercion, shouldn't he be allowed to place his own well-being at risk for the sake of advancing science? One might argue that risky types of self-experimentation could be prohibited not necessarily to protect investigators from harm but to protect institutions and the scientific community. If Marshall's experiment had turned out badly, Royal Perth Hospital would have been investigated and possibly sanctioned by regulatory authorities, and could have suffered negative publicity, which could have had adverse effects even on researchers not working at the institution. Although hard paternalism is ethically suspect in many cases, one could argue that it can be justified in the case of self-experimentation to protect institutions and the research community from harm. Limits on the risks of self-experimentation would not be justified when an investigator is working on his own time in his own laboratory and does not place his institution or the scientific community at significant risk.
It is also important to note that most of the restrictions on research risks would be impure paternalism, because the people whose freedom is restricted and those whose good is promoted may not be the same. Investigators' freedom would be restricted in order to protect participants, the institution, and the scientific community from harm. Participants' freedom would be restricted not only to protect them from harm but also to protect the institution and the scientific community.
Some Guidelines for Limiting Risks
Though I have argued that there should be some limits on the risks that healthy volunteers face in biomedical research, I have not said what those limits should be. In this section, I will consider some guidelines for limiting risks. In an incisive article, London argues that we can use an accepted social activity that is comparable to research with human participants to establish benchmarks for acceptable risks. A comparable social activity would be one in which competent adults take risks while engaging in an occupation or endeavor that makes an important contribution to society, and in which effective oversight mechanisms are in place to minimize or control the risks. London suggests that firefighting is comparable to participating in research as a healthy volunteer [24]. Thus, on London's approach, biomedical research involving healthy volunteers should be no riskier than firefighting.
What are the risks of firefighting, and how do they compare to those of other occupations? Firefighters face significant risks of injury or death from burning, smoke inhalation, falling debris, toxic chemicals, and traffic accidents. Focusing on mortality data, an average of 115 U.S. firefighters died per year in the line of duty from 1977 to 2009. From 2000 to 2009, there were 3.4 deaths per 100,000 fire incidents. These statistics exclude outlier data from the destruction of the World Trade Center in New York City on September 11, 2001, in which 450 firefighters were killed [38]. In 2007, 7 firefighters died on the job per 100,000 full-time equivalent workers (FTEWs), nearly double the average U.S. occupational mortality rate of 4 deaths per 100,000 FTEWs. Fishermen and fishing workers had the highest occupational mortality rate, at 109.5 deaths per 100,000 FTEWs, followed by loggers (89.1), pilots and flight engineers (70.6), and steel and iron workers (47.8). The lowest occupational mortality rates occurred among educators and librarians (0.3), financial workers (0.5), administrative support staff (0.8), and health care workers (0.9) [39].
Miller and Joffe consider live donor kidney transplantation as a possible comparator for research participation [22]. Although the long-term risks of living with one kidney are not very significant for healthy donors, the risks of nephrectomy (the surgical procedure to remove the kidney) are. The mortality rate for donor nephrectomy has been estimated at 0.03%–0.04%, i.e., 30–40 deaths per 100,000 donors, making the procedure roughly ten times riskier than firefighting, at 3.4 deaths per 100,000 fire incidents. Additionally, nephrectomy carries risks of serious surgical complications, including infection, bleeding, hernia, pneumothorax, pneumonia, and deep vein thrombosis; about 3% of nephrectomy patients have major complications [40].
How do the risks of firefighting and nephrectomy compare to the risks of research participation by healthy volunteers? We do not have a definite answer to this question, because there are no published studies systematically assessing the risks that healthy volunteers face in biomedical research. Lacking such data, it is difficult to decide whether London or Miller and Joffe set the bar too high or too low.
Although we lack good evidence on the risks that healthy volunteers face in biomedical research, we can estimate the risks of some types of studies from data on the risks of their component procedures and methods. The net risks of a study are the sum of the risks of its research procedures and methods [41]. For example, if a study includes three research procedures, each with a 1/10,000 chance of death, then the net risk of death from the study would be approximately 3/10,000, if we assume that the risks are independent, i.e., that they do not affect one another.
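To make the arithmetic behind this independence assumption explicit, here is a minimal worked version of the example above (my illustration, not part of the cited framework): the exact probability of at least one death is one minus the product of the per-procedure survival probabilities, which for small risks is approximately the simple sum.

```latex
% Illustrative only: three independent procedures, each with a 1/10,000 risk of death.
% The exact combination is very close to the additive approximation used in the text.
\[
  P(\text{death}) = 1 - \prod_{i=1}^{3}(1 - p_i)
                  = 1 - (1 - 0.0001)^{3}
                  \approx 0.00029997
                  \approx 3/10{,}000 .
\]
```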
If we focus only on mortality and exclude other risks, many studies involving healthy volunteers have virtually no risk of death, because many research procedures carry a negligible risk of death for healthy individuals. These include collection of blood and other biological samples, physical examinations, surveys or interviews, electrocardiography, and magnetic resonance imaging [42]. Thus, there would be almost no risk of death in a study involving collection of blood and urine, a physical exam, and an interview.
Other studies may carry some mortality risk, however. The risk of death from allergy skin testing has been estimated at 1 per 2.5 million procedures; since these data include a high percentage of asthmatics, the risk for healthy individuals may be lower [43]. Pharmacokinetic studies, which examine the absorption, distribution, metabolism, and excretion of drugs in human beings, have a risk of death of about 1 per 100,000 participants [42]. About 20 people per 100,000 die from cardiac stress testing; since these data include individuals with heart disease or other significant health problems, the mortality rate for healthy individuals may be much lower [44]. The mortality risk of diagnostic colonoscopy is about 19 deaths per 100,000 procedures, and the risk of diagnostic upper endoscopy is 8 deaths per 100,000 procedures [45]. The risk of death from a transbronchial biopsy, in which a piece of tissue is collected during a bronchoscopy, is about 60 deaths per 100,000 procedures [46]. The mortality risk of cardiac catheterization is about 110 deaths per 100,000 procedures; however, since this figure includes patients with heart disease, the risk for healthy volunteers may be lower [47] (see Table 1).
Table 1. Mortality risks of selected research procedures.

Procedure | Mortality Risk
---|---
Blood donation | Negligible
Physical examination | Negligible
Survey or interview | Negligible
Magnetic resonance imaging | Negligible
Electrocardiogram | Negligible
Allergy skin testing | 1/2.5 million*
Pharmacokinetic studies | 1/100,000
Diagnostic upper endoscopy | 8/100,000
Diagnostic colonoscopy | 19/100,000
Cardiac stress testing | 20/100,000*
Transbronchial biopsy | 60/100,000
Cardiac catheterization | 110/100,000*

*Risks may be lower in healthy individuals.
We can use the data on these riskier procedures and methods to estimate the risk of death for a study. For example, if a study included cardiac stress testing, an electrocardiogram, collection of blood and urine, an interview, a physical exam, and a transbronchial biopsy, then the net risk of death would be approximately 80/100,000, assuming these risks are independent. This limited survey of the risks associated with some research procedures indicates that mortality risks in biomedical research with healthy volunteers probably range from negligible to over 100 deaths per 100,000 volunteers.
What about estimating the risks of serious harm? If serious harm includes outcomes other than death, such as permanent disability or illness or injury requiring hospitalization or extensive medical treatment, then it is reasonable to assume that the risk of serious harm is much greater than the risk of death. For example, if the risk of death from a study is 60/100,000, then the risk of serious harm (including death) could be as high as 180/100,000 or greater, depending on the nature of the research. Thus, depending on the study in question, the risks could be much lower or much higher than those of firefighting or live kidney donation.
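As a rough illustration of how these two steps combine (using the Table 1 values and the threefold-or-greater multiplier assumed above; this is my back-of-the-envelope sketch, not a figure from the cited sources), the composite study described above, which includes a cardiac stress test and a transbronchial biopsy along with procedures of negligible mortality risk, would have an estimated risk of serious harm of roughly 0.24% or more:

```latex
% Illustrative estimate for the example study (cardiac stress test + transbronchial biopsy
% plus low-risk procedures), assuming independent risks and that serious harm is roughly
% three times the mortality risk, as in the text.
\[
  P(\text{death}) \approx \frac{20 + 60}{100{,}000} = 0.08\%,
  \qquad
  P(\text{serious harm}) \gtrsim 3 \times 0.08\% \approx 0.24\% .
\]
```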
So which comparator should we use: firefighting, kidney donation, or some other social activity? One could argue that using an accepted social activity to set limits on biomedical research risks for healthy volunteers is unwise, because there is considerable variation in the risks of research, and establishing an arbitrary upper boundary could deprive society of important benefits [22]. If we set the bar at the risks of live kidney donation, then transbronchial biopsies and cardiac catheterizations in biomedical research on healthy volunteers would not be ethically permissible, since the risks of these procedures are much greater than the risks of live kidney donation. If we set the bar at the risks of firefighting, then transbronchial biopsy, cardiac catheterization, cardiac stress testing, diagnostic colonoscopy, upper endoscopy, and probably many other procedures would not be allowed in research on healthy participants.
Another problem with using socially accepted activities to set limits on risk is that this begs the very question at issue [22]. Socially acceptable is not the same as ethical. At one time slavery was socially acceptable, but that does not mean that it was ethical. To establish ethical limits on risks to healthy volunteers, one must appeal to ethical considerations, not social conventions, which could be mistaken.
Thus, there are significant difficulties in using comparisons with socially accepted activities to establish upper bounds for risks in biomedical research with healthy volunteers. But this does not imply we should abandon the idea of trying to establish limitations on acceptable risks. A better strategy would be to develop a normative standard that carefully balances and weighs the different values at stake to establish upper boundaries on risks [25]. Upper limits on biomedical research risks should give fair consideration to the social benefits of biomedical research, the rights of participants and investigators, and the need to protect human participants, institutions, and the research community from harm. One could argue that a fair consideration of these different values would allow some risky research to take place but would not permit studies in which there is a significant chance of serious harm. What is a significant chance? People may disagree about how to interpret this idea, but I would argue that a chance that is greater than 1/100 is significant, because when risks reach this level, investigators have good reasons to expect that death, permanent disability, or severe injury or illness may occur during the study. One could argue that IRBs (or other oversight bodies) should not approve studies involving a greater than 1% chance of serious harm for healthy volunteers.
While this proposed guideline would restrict research risks, the limits would not be so low that they would prevent investigators from conducting valuable research. For instance, a study that involved a cardiac catheterization and a cardiac stress test could have a risk of serious harm as high as 0.5%, based on the assumption that the risk of serious harm is much greater than the risk of mortality. The 1% limit would allow these types of studies but prohibit studies that are more than twice as risky. The proposed limit would also not be so high as to be meaningless. Prohibiting biomedical research with healthy volunteers that poses a greater than 1% chance of serious harm would set a clear upper boundary for investigators and signal to the scientific community and the public that some studies are too risky.
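One way the 0.5% figure in this example can be reached (an illustrative check using the Table 1 values and a serious-harm multiplier of roughly three to four, which is my assumption rather than a figure from the cited sources):

```latex
% Illustrative check: cardiac catheterization (110/100,000) plus cardiac stress testing
% (20/100,000), with serious harm assumed to be three to four times the mortality risk.
\[
  P(\text{death}) \approx \frac{110 + 20}{100{,}000} = 0.13\%,
  \qquad
  P(\text{serious harm}) \approx (3\text{--}4) \times 0.13\% \approx 0.4\%\text{--}0.5\% < 1\% .
\]
```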
Objections and Replies
A possible objection to the 1% proposal is that it is arbitrary. Why not choose 0.1%, or 5%? There seems to be no principled reason for setting the upper boundary on the chance of serious harm in research involving healthy volunteers at exactly 1%.
While the 1% standard may seem arbitrary, one could argue that it represents a fair compromise between overprotectiveness and under-protectiveness. A 0.1% standard would probably prohibit a great deal of important biomedical research involving healthy volunteers. For example, research in which healthy volunteers receive a transbronchial biopsy would probably be prohibited under a 0.1% standard, which would significantly impede research on the effects of air pollution. Each year, environmental health researchers conduct numerous IRB-approved studies of the effects of air pollution on respiration that involve transbronchial biopsies; the biopsies are necessary to collect tissue samples for analysis [48]. Many of these studies would be prohibited if a 0.1% standard were used. Conversely, a standard much higher than 1% would allow excessively risky research to go forward. For example, Reed's experiments would be approvable under a 5% standard but not under a 1% standard, because the risk of serious harm was greater than 1% but less than 5% in this research.
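A rough check of the transbronchial biopsy claim, using the Table 1 figure and the same hedged serious-harm multiplier (again my illustration, not a calculation from the cited sources), suggests that such a study would exceed a 0.1% limit but fall well below the 1% limit:

```latex
% Illustrative check: transbronchial biopsy mortality risk of 60/100,000 = 0.06%.
% With serious harm assumed to be roughly three times the mortality risk:
\[
  P(\text{serious harm}) \approx 3 \times 0.06\% = 0.18\%,
  \qquad
  0.1\% < 0.18\% < 1\% .
\]
```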
Another possible objection to the 1% standard is that placing any limits on risks that healthy volunteers face in biomedical research is unwise, because this could deprive society of important scientific discoveries and innovations [22]. Situations might arise in which experiments slightly riskier than some defined limit would be justified, given the importance of the knowledge that could be gained. It is more prudent to require only that risks be reasonable in relation to benefits, not that there be any limit on risks.
I agree that it is important not to prohibit socially valuable research and that there should therefore be some flexibility in the 1% proposal. It should be viewed as a guideline, not as an absolute rule. However, the burden of proof should fall on the investigator to show why an exception to the 1% rule should be made. To enroll healthy volunteers in studies involving a greater than 1% chance of serious harm, the investigator must show that the research addresses a compelling public health or social problem and that the risk of serious harm is only slightly more than 1%. To provide additional protection for participants, the IRB should also enlist the aid of outside experts to assess the risks of a study expected to exceed a 1% chance of serious harm.
Conclusion
In this essay, I have argued that there should be some limits on the risks that healthy volunteers face in biomedical research. While these restrictions may be viewed as paternalistic, they are necessary to protect human participants, institutions, and the scientific community from harm. With the exception of self-experimentation, limits on research risks faced by healthy volunteers constitute a type of soft, impure paternalism, because participants usually do not fully understand the risks they are taking. I have considered some possible approaches to limiting research risks and proposed a 1% standard: healthy volunteers in biomedical research should not be exposed to a greater than 1% chance of serious harm, such as death, permanent disability, or severe injury or illness. While this standard provides guidance for decisions made by IRBs and other oversight bodies, it is not an absolute rule. Investigators can enroll healthy volunteers in studies involving a greater than 1% chance of serious harm if they show that the research addresses a compelling public health or social problem and the risk of serious harm is only slightly more than 1%. The IRB should also enlist the aid of outside experts to assess these risks.
Acknowledgments
This article is the work product of an employee or group of employees of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). However, the statements, opinions or conclusions contained therein do not necessarily represent the statements, opinions or conclusions of NIEHS, NIH or the United States government. I am grateful to Bill Schrader and Frank Miller for helpful comments.
References
1. Shamoo Adil, Resnik David. Strategies to minimize risks and exploitation in Phase One trials on healthy subjects. American Journal of Bioethics. 2006;6(3):W1–13. doi: 10.1080/15265160600686281.
2. Resnik David, Koski Greg. A national registry for healthy volunteers in phase 1 clinical trials. Journal of the American Medical Association. 2011;305:1236–67. doi: 10.1001/jama.2011.354.
3. Goodyear Michael. Learning from the TGN1412 trial. British Medical Journal. 2006;332:677–678. doi: 10.1136/bmj.38797.635012.47.
4. Steinbrook Robert. Protecting research subjects—the crisis at Johns Hopkins. New England Journal of Medicine. 2002;346:716–20. doi: 10.1056/NEJM200202283460924.
5. Steinbrook Robert. Improving protection for research subjects. New England Journal of Medicine. 2002;346:1425–30. doi: 10.1056/NEJM200205023461828.
6. Lederer Susan. Walter Reed and the yellow fever experiments. In: Emanuel Ezekiel, Grady Christine, Crouch Robert, Lie Reidar, Miller Frank, Wendler David, editors. The Oxford Textbook of Clinical Research Ethics. New York: Oxford University Press; 2008. pp. 9–17.
7. Department of Health and Human Services. Protection of Human Subjects. 45 CFR 46. 2009.
8. Food and Drug Administration. Institutional Review Boards. 21 CFR 56. 2010.
9. King Nancy, Churchill Larry. In: Emanuel Ezekiel, Grady Christine, Crouch Robert, Lie Reidar, Miller Frank, Wendler David, editors. The Oxford Textbook of Clinical Research Ethics. New York: Oxford University Press; 2008. pp. 514–26.
10. Australian Government, National Health and Medical Research Council. National Statement on Ethical Conduct in Human Research. 2007. http://www.nhmrc.gov.au/publications/ethics/2007_humans/contents.htm. Accessed 9 June 2011.
11. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (2nd edition). 2010. http://www.pre.ethics.gc.ca/eng/resources-ressources/news-nouvelles/nr-cp/2010-12-07/. Accessed 9 June 2011.
12. European Commission. National Regulations on Ethics in Research in Hungary. 2011. http://ec.europa.eu/research/science-society/pdf/hu_eng_lr.pdf. Accessed 9 June 2011.
13. Indian Council of Medical Research. Ethical Guidelines for Biomedical Research on Human Participants. 2011. http://icmr.nic.in/ethical_guidelines.pdf. Accessed 9 June 2011.
14. Kuwaiti Institute for Medical Specialization. Ethical Guidelines for Biomedical Research. 2011. http://www.kims.org.kw/Ethical%202.doc. Accessed 9 June 2011.
15. Netherlands Central Committee on Research Involving Human Subjects. About Reviews. 2011. http://www.ccmo-online.nl/main.asp?pid=10&sid=11. Accessed 9 June 2011.
16. National Health Research Ethics Committee of Nigeria. National Code of Health Research Ethics. 2011. http://www.nhrec.net/nhrec/NCHRE_10.pdf. Accessed 9 June 2011.
17. South Africa Department of Health. Ethics in Health Research: Principles, Practices, and Processes. 2011. http://www.doh.gov.za/nhrec/norms/ethics.pdf. Accessed 9 June 2011.
18. United Kingdom Department of Health. Governance Arrangements for Research Ethics Committees: A Harmonised Edition. 2011. http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_126614.pdf. Accessed 9 June 2011.
19. World Medical Association. Declaration of Helsinki, 2008 revision. 2008. http://www.wma.net/en/30publications/10policies/b3/index.html. Accessed 9 June 2011.
20. Council for International Organizations of Medical Sciences. International Ethical Guidelines for Biomedical Research Involving Human Subjects, 2002 version. 2002. http://www.cioms.ch/publications/layout_guide2002.pdf. Accessed 9 June 2011.
21. Nuremberg Code. Directives for Human Experimentation. 1947. http://ohsr.od.nih.gov/guidelines/nuremberg.html. Accessed 7 June 2011.
22. Miller Frank, Joffe Steven. Limits to research risks. Journal of Medical Ethics. 2009;35(7):445–9. doi: 10.1136/jme.2008.026062.
23. Annas George. Mengele's birthmark: the Nuremberg Code in United States courts. Journal of Contemporary Health Law and Policy. 1991;7:17–45.
24. London Alex. Reasonable risks in clinical research: a critique and a proposal for the integrative approach. Statistics in Medicine. 2006;25:2869–85. doi: 10.1002/sim.2634.
25. Rid Annette, Wendler David. A framework for risk-benefit evaluations in biomedical research. Kennedy Institute of Ethics Journal. 2011;21(2):141–79. doi: 10.1353/ken.2011.0007.
26. Ross Lainie Friedman. Children in Medical Research: Access Versus Protection. New York: Oxford University Press; 2008.
27. Miller Frank, Grady Christine. The ethical challenge of infection-inducing challenge experiments. Clinical Infectious Diseases. 2001;33:1028–33. doi: 10.1086/322664.
28. Gostin Larry. General justifications for public health regulation. Public Health. 2007;121:829–34. doi: 10.1016/j.puhe.2007.07.013.
29. Yarborough Mark, Sharp Richard. Public trust and research a decade later: what have we learned since Jesse Gelsinger's death? Molecular Genetics and Metabolism. 2009;97(1):4–5. doi: 10.1016/j.ymgme.2009.02.002.
30. Dworkin Gerald. Paternalism. The Stanford Encyclopedia of Philosophy. 2011. http://stanford.library.usyd.edu.au/entries/paternalism/. Accessed 13 June 2011.
31. Kant Immanuel. Groundwork of the Metaphysics of Morals [1785]. Paton Herbert, translator. New York: Harper and Row; 1964.
32. Nozick Robert. Anarchy, State, and Utopia. New York: Basic Books; 1974.
33. Mill John Stuart. Utilitarianism and On Liberty [1869]. New York: Wiley-Blackwell; 2003.
34. Miller Frank, Wertheimer Alan. Facing up to paternalism in research ethics. Hastings Center Report. 2007;37(3):24–34. doi: 10.1353/hcr.2007.0044.
35. Menikoff Jerry. What the Doctor Didn't Say. New York: Oxford University Press; 2006.
36. Davis John. Self-experimentation. Accountability in Research. 2003;10:175–87. doi: 10.1080/714906095.
37. Nobelprize.org. The Nobel Prize in Physiology or Medicine 2005. 2005. http://nobelprize.org/nobel_prizes/medicine/laureates/2005/marshall-autobio.html. Accessed 13 June 2011.
38. U.S. Fire Administration. On-Duty Firefighter Fatalities 1977–2009. 2011. http://www.usfa.dhs.gov/fireservice/fatalities/statistics/history.shtm. Accessed 18 June 2011.
39. U.S. Department of Labor. 2007 Fatal Injury Rates. 2011. http://stats.bls.gov/iif/oshwc/cfoi/cfoi_rates_2007h.pdf. Accessed 20 June 2011.
40. Taliercio J, Nurko S, Poggio E. Living donor kidney transplantation: an update on evaluation and medical implications of donation. Minerva Urologica e Nefrologica. 2011;63(1):73–87.
41. Wendler David, Miller Frank. Assessing research risks systematically: the net risks test. Journal of Medical Ethics. 2007;33(8):481–486. doi: 10.1136/jme.2005.014043.
42. Shah Seema, Whittle Amy, Wilfond Benjamin, Gensler Gary, Wendler David. How do institutional review boards apply the federal risk and benefit standards for pediatric research? Journal of the American Medical Association. 2004;291(4):476–82. doi: 10.1001/jama.291.4.476.
43. Bernstein David, Wanner Mark, Borish Larry, Liss Gary; Immunotherapy Committee, American Academy of Allergy, Asthma and Immunology. Twelve-year survey of fatal reactions to allergen injections and skin testing: 1990–2001. Journal of Allergy and Clinical Immunology. 2004;113(6):1129–36. doi: 10.1016/j.jaci.2004.02.006.
44. Akinpelu David. Treadmill stress testing. Medscape Reference. 2010. http://emedicine.medscape.com/article/1827089-overview. Accessed 27 June 2011.
45. Bandolier. Harm from endoscopy or colonoscopy. 2011. http://www.medicine.ox.ac.uk/bandolier/booth/gi/endoharm.html. Accessed 27 June 2011.
46. Mater Health Services. Bronchoscopy. 2011. http://www.mater.org.au/Home/Services/Adult-Respiratory-Medicine/Bronchoscopy. Accessed 27 June 2011.
47. Surgery.com. Cardiac catheterization: mortality and morbidity. 2011. http://www.surgery.com/procedure/cardiac-catheterization/morbidity-mortality. Accessed 28 June 2011.
48. Center for Environmental Medicine, Asthma, and Lung Biology. About the center. http://www.med.unc.edu/cemalb/about-the-center. Accessed 18 October 2011.