Abstract
The use of deception is typically prohibited in studies that pose greater than minimal risk overall. This approach prevents researchers from using deception to conceal significant risks or to deceive participants about the purpose, potential benefits, or other aspects of a study that are relevant to deciding whether to accept such risks. Yet this approach also mistakenly blocks appropriate research. In particular, it keeps researchers from using deception in studies that pose greater than minimal risk, even when participants are informed accurately about the risks and other aspects of the study that are relevant to deciding whether to participate. Rather than prohibiting deception when the overall study poses greater than minimal risk, policies should prohibit deception when the aspect of the study about which participants are deceived poses greater than minimal risk.
Keywords: human subjects research, deception in research, informed consent, minimal risk, greater than minimal risk
To avoid undermining a study’s scientific validity or social value, researchers sometimes deceive research participants.1 For example, accurately informing potential participants that the goal of a study is to assess whether participants cheat is likely to alter their behavior in the study and thereby undermine the validity of the findings. Instead, some researchers who study cheating tell potential participants that their goal is to assess how quickly participants solve math problems.2 The fact that deceiving participants can help researchers collect valuable data has led to ongoing debate over when the use of deception is ethically acceptable.
It is widely agreed that researchers should not use deception to conceal significant risks. They also should not deceive individuals about aspects of a study that are relevant to deciding whether to accept significant risks. For instance, researchers should not deceive individuals about the purpose of a study that poses greater than minimal risk. And researchers should not disclose the significant risks but then falsely claim that the risks are justified by the potential benefits of enrolling in the study.
To protect research participants from these types of deceptive practices, many institutions and countries prohibit the use of deception in studies that pose greater than minimal risk overall. For instance, the research ethics guidelines in Canada, the Tri-Council Policy Statement, state that deception is permissible only when the study “involves no more than minimal risk to the participants.”3 Similarly, the Indian Council of Medical Research’s national guidelines stipulate that “[r]esearch involving any kind of deception should pose no more than minimal risk.”4 Although regulations in the United States governing research with human subjects (the Common Rule) do not explicitly address when deception is permitted, the requirements for informed consent are commonly interpreted as prohibiting deception when the overall study poses greater than minimal risk.
While this approach protects participants from accepting significant research risks as the result of being deceived, it also unnecessarily blocks valuable and potentially beneficial research. The current approach prevents investigators from using deception in studies that pose greater than minimal risk overall, even when potential participants are accurately informed about all the material aspects of the study, including the risks. When such studies offer participants a chance for important clinical benefit, the current approach also has the potential to undermine their interests. In this article, I describe an alternative approach that avoids these concerns. Rather than prohibiting deception when the overall study poses greater than minimal risk, policies should be revised to prohibit the use of deception when the aspect of the study about which participants are deceived poses greater than minimal risk.
THE CURRENT APPROACH
The use of deception in research is widely assumed to be incompatible with obtaining informed consent. Either researchers can deceive potential participants about a study, or they can obtain their informed consent, but not both. As the guidelines of the Council for International Organizations of Medical Sciences state, informed consent requires individuals to make a decision whether to enroll in research “without having been subjected to coercion, undue influence, or deception.”5 This assumption implies that researchers may use deception only when the research qualifies for a waiver or alteration of the requirement to obtain informed consent. This is important because the U.S. Common Rule allows institutional review boards (IRBs) to waive or alter the requirements for informed consent only when five conditions are satisfied, the first of which mandates that “[t]he research involves no more than minimal risk.”6
The claim that this requirement permits the use of deception only when the overall study poses no greater than minimal risk is reflected in the guidance of many institutions. For example, Massachusetts General Hospital’s guidance states that deception in research is permitted “only in studies posing no greater than minimal risk.”7 Similarly, Stanford University’s guidance prohibits deception when the “research or clinical investigation in its entirety involves greater than minimal risk.”8 This approach makes sense when researchers deceive individuals about the purpose of the research, the potential benefits, the alternatives, or other information that is relevant to deciding whether to enroll. However, current practice regarding deception in research has the potential to block valuable and appropriate research. Consider two examples that illustrate this point: a sham lumbar puncture study and a study involving a bogus taste test.
Sham lumbar puncture.
In a study that investigates the impact of expectation on pain, researchers have participants undergo two lumbar punctures while manipulating their expectations. Imagine that, in an attempt to increase enrollment, the researchers tell potential participants that the goal of the research is to develop treatments for Alzheimer’s disease. This use of deception is problematic because individuals who support this goal might agree to enroll even when they would have declined to accept the risks if they had known the study’s true purpose. A similar concern arises if, in a second version of the study, the researchers describe the purpose of the research accurately but then falsely tell potential participants that the lumbar punctures could offer important clinical benefit.
The current approach to deception in research protects individuals from being deceived in these ways. However, it also prevents researchers from using deception in studies that pose greater than minimal risk overall, even when individuals are informed accurately about all aspects of the study that are relevant to deciding whether to accept the risks.
To see this problem, imagine a third version of the study in which the researchers accurately describe the purpose of the study, the risks of lumbar puncture, the absence of potential benefits, and the fact that participation is voluntary. The deception is limited to telling potential participants that they will undergo two lumbar punctures when, in fact, they will undergo one actual lumbar puncture and one sham puncture, which poses lower risks than an actual lumbar puncture. If the risks of the actual lumbar puncture are categorized as greater than minimal, the current approach to deception prohibits this version of the study, even though the researchers accurately disclose the risks that are greater than minimal (the risks of the lumbar puncture) and also accurately describe the aspects of the study that are relevant to deciding whether to accept those risks, including the purpose, the absence of potential benefits, and the alternatives.
Bogus taste test.
Binge eating represents a significant health problem. To develop effective treatments, researchers administer experimental interventions to research participants and then attempt to assess the impact on their level of eating. Binge eaters who are informed that their level of eating will be assessed are able to reduce the extent to which they eat,9 making it difficult to determine whether any reductions in binge eating among participants are a result of the experimental treatment or of their being informed that their level of eating will be assessed. To avoid this problem, researchers rely on the bogus taste test, which is regarded as the state-of-the-art method for testing experimental treatments for binge eating.10
The bogus taste test is designed to evaluate participants’ level of eating without informing them when this assessment is taking place. It involves randomizing participants to the experimental treatment or a placebo and, after they have received the experimental treatment or placebo, presenting them with a tray of food. Participants are instructed to examine and smell each of the food items to determine which ones they find most appealing, at which point the investigator leaves the room. Unbeknownst to the participants, the tray is weighed before and after each session. The difference provides a measure of how much food each participant consumes. Comparing the results between the experimental treatment group and the placebo group provides a measure of the impact of the experimental intervention on binge eating.
The bogus taste test itself poses minimal risk. It essentially involves having participants look at and smell common food items. However, it is typically used in studies that assess the safety and efficacy of experimental treatments, and such treatments, especially those for which there are insufficient safety data, pose greater than minimal risk. As a result, studies that use the bogus taste test to assess experimental treatments pose greater than minimal risk overall. Such studies cannot be approved under current guidelines, even though the aspect of the study about which participants are deceived, the bogus taste test, poses no greater than minimal risk. In addition, the bogus taste test is sometimes used to assess experimental treatments that offer participants the potential for clinical benefit. In such cases, the current approach to deception could potentially undermine participants’ clinical interests.
This raises the question of whether it is possible to protect participants from accepting significant risks as the result of being deceived without unnecessarily obstructing valuable and potentially beneficial research. Under the proposed alternative approach to the use of deception, studies that use the bogus taste test to assess experimental treatments for binge eating would be permitted. This provides one reason to prefer the proposed alternative approach. Are there nonetheless reasons to retain current guidelines?
ETHICAL ASSESSMENT OF DECEPTION IN RESEARCH
The use of deception in research typically is not intended to benefit the deceived participants. It is intended to protect the validity of the study and to promote the researchers’ goal of collecting data that might benefit future patients. On these grounds, one might argue that deception is inconsistent with appropriate respect for research participants. This view does not seem unreasonable. Yet the claim undermines the current approach as much as it does the approach I propose. If deception in research is inconsistent with appropriate respect, it should not be permitted, independent of the level of risks a study poses. In contrast, if we grant that deception for the sake of research can be consistent with appropriate respect in minimal risk studies, it seems it should be equally acceptable in studies that pose greater risks, as long as the deception is limited to an aspect of the study that poses no greater than minimal risk. This suggests that respect for research participants does not provide a reason to prefer the current approach over the proposed alternative.
Some commentators offer a different reason to support the current approach: O’Neil and Miller argue that research participants “who are deceived about the purpose of the study are not in a good position to decide for themselves whether running the risks is worthwhile.”11 This is an important point. Researchers should not be permitted to use deception to conceal significant research risks. They also should not be permitted to deceive individuals about aspects of a study, such as its purpose, that are relevant to deciding whether to accept greater than minimal research risks. To preclude this possibility, O’Neil and Miller endorse the current approach of prohibiting deception in all studies that pose greater than minimal risk.
This approach would be necessary to protect participants from misuses of deception if all deception involved aspects of the study that are relevant to deciding whether to accept the risks. However, the two previous examples reveal that not all deceptive research is like this. Consider the bogus taste test again. Studies that use this test to evaluate the efficacy of experimental treatments pose greater than minimal risk overall. Yet, in a proper informed consent process, individuals are accurately informed about the risks that are greater than minimal, namely, those posed by the experimental treatment. They are also accurately informed about the aspects of the study relevant to deciding whether to accept these risks, including the purpose of the study, the chances, if any, that the experimental treatment might benefit them, the duration of their participation, the alternatives to participation, and the fact that enrollment is voluntary. The deception is limited to describing the purpose of the bogus taste test in terms of determining which foods appeal to participants rather than in terms of assessing to what extent participants binge eat.
This example reveals that the current approach cannot be justified over the proposed alternative on the grounds that participants who are deceived necessarily do not know the purpose of the research. Nor can it be justified on the grounds that it better protects participants from accepting significant risks as the result of being deceived. The current approach to deception inadvertently precludes participants from having access to some potentially beneficial interventions, suggesting that the proposed alternative better promotes their interests. The argument to this point suggests that scientific and ethical considerations support the proposed alternative over the current approach to deception.
The question of whether to revise research practice, guidelines, and regulations often involves a dilemma between protecting participants and allowing important research to proceed. But not here. The proposed alternative approach maintains the same level of participant protection while reducing the extent to which valuable research is prohibited. Regulations and guidelines that mandate the current approach to deception in research should thus be revised to incorporate the alternative. In the U.S., adopting the alternative approach does not require revision of the Common Rule. Instead, it requires IRBs to interpret the first condition on waivers and alterations of informed consent as applying to the affected aspects of the study rather than to the entire study. Proponents of the current approach might object that researchers and review committees are not free to adopt their own interpretations of the regulations, even when those interpretations seem preferable on scientific and ethical grounds. This raises the question of whether, with respect to the Common Rule in particular, there are regulatory reasons to maintain the current approach.
ASSESSMENT OF THE COMMON RULE
The question we now face is whether the term “the research” in the Common Rule’s first condition concerning the waiver or alteration of the requirements for informed consent should be interpreted as applying to the overall study or to the aspect of the study about which participants do not give informed consent. Although regulations that agencies promulgate are not the same as statutes passed by legislative bodies, the tenets of statutory interpretation can be helpful in interpreting regulatory provisions. There is significant disagreement over the appropriate steps in statutory interpretation, and further disagreement regarding the relative importance of the various steps.12 Moreover, statutory interpretation is more art than science, and it has been criticized for allowing commentators to pick and choose whichever approach supports the interpretation they antecedently prefer.
These concerns reveal that statutory interpretation is unlikely to yield a definitive conclusion about the best interpretation of the Common Rule. This is itself an important conclusion. It suggests that statutory interpretation is unlikely to yield definitive support for the current approach to deception that might outweigh the scientific and ethical reasons to prefer the proposed alternative approach. Granting this, consideration of the commonly endorsed steps in statutory interpretation13 offers a way to assess whether there are regulatory reasons to maintain the current approach despite its scientific and ethical shortcomings relative to the proposed alternative.
The obvious first step to identifying the best interpretation of a statute is to look to the plain meaning of the terms, as well as to any relevant dictionary definitions. The word “the” is, of course, commonly used before a noun denoting something already under consideration, or, as Merriam-Webster puts it, a noun that “has been previously specified by context or by circumstance.”14 Unfortunately, the challenge in the present case is determining whether the reference to “the research” in the first condition on waivers and alterations refers to the overall study or the aspect of the study about which participants do not give consent. Hence, the plain meaning of the term and dictionary definitions do not seem to offer a way to answer the present question.
A common second step is to consider whether the term in question is defined elsewhere in the Common Rule. However, while the Common Rule defines a number of terms, it does not define whether “the research” in the first condition refers to the overall study or to the aspect of the study for which consent is waived or altered.
A third step is to look to the context in which the term is used, in the present case, determining when IRBs may waive or alter one or more of the requirements for informed consent. These requirements cover waiving or altering consent for an entire study, which supports the focus on the entire study in the current approach to deception. But these requirements also apply to the waiver or alteration of elements of consent with respect to individual aspects of a study, such as the specific procedures to be followed. The context in which the term is used thus does not clearly support either approach.
Fourth, we can consider whether there are any extant agency interpretations of the term in question.15 The U.S. Food and Drug Administration (FDA) has offered relevant guidance, specifying that the consent requirements may be waived or altered only when “the clinical investigation involves no more than minimal risk.”16 While this statement might be used to support either interpretation, the phrase “the clinical investigation” seems more likely to refer to the overall study than to a single component or aspect of the study. For example, the phrase “the clinical investigation completed last week” seems to refer to the overall study rather than to a specific aspect or component of it. FDA guidance thus appears to support the current approach to deception.
The Secretary’s Advisory Committee on Human Research Protections (SACHRP), which advises the secretary of the U.S. Department of Health and Human Services on issues about research with human subjects, has proposed revising the statement that “the research” must involve no greater than minimal risk to the following: “The research, or the component of the research related to the proposed waiver or alteration of consent, involves no more than minimal risk.”17 SACHRP argues that this recommendation is not intended as a substantive change to existing regulations. Instead, it is intended to call attention to the availability of options that are “already permitted in the existing regulations, but nuances in the language have deterred IRBs from exercising the flexibility that the regulations were intended to provide.”18 While this recommendation does not constitute agency policy, it clearly supports the proposed alternative approach to deception over the current approach.
A fifth step to resolving regulatory ambiguity is to examine the intent of those who developed the regulations.19 As noted by the Office for Human Research Protections (OHRP), the Common Rule’s provisions are based on The Belmont Report’s ethical principles for human subjects research—respect for persons, beneficence, and justice.20 When describing the implications of these principles for informed consent, The Belmont Report explicitly points to the ethical challenge that arises when “informing subjects of some pertinent aspect of the research is likely to impair the validity of the research,” and states that deception is permissible when there are “no undisclosed risks to subjects that are more than minimal.”21 This statement makes clear that the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research—which developed The Belmont Report—was aware of the possibility of deception in studies that pose greater than minimal risk. Moreover, the National Commission did not conclude that these studies should be prohibited. Instead, The Belmont Report explicitly notes that these studies can be acceptable, provided participants are informed about any risks that are greater than minimal. This view is consistent with the proposed alternative approach to deception.
A sixth and final step in statutory interpretation asks which option makes the most sense given the goal of the regulations in question. The goal of the Common Rule is to ensure the respect and protection of research participants while allowing socially valuable research to proceed. It is widely agreed that this goal is consistent with permitting deception in the context of minimal risk studies. This suggests that the Common Rule’s goal is also consistent with deceiving participants in the context of riskier studies, provided the deception is limited to an aspect of the study that poses minimal risk. The proposed alternative approach to deception does not permit participants to be exposed to any greater nonconsensual risks than does the current approach and, at the same time, permits a broader range of valuable research to go forward.
In sum, the common steps in statutory interpretation provide some support for the current approach to deception and some support for the proposed alternative. Given the importance of The Belmont Report to understanding the current regulations, the proposed alternative arguably is better supported. At a minimum, there does not appear to be any strong regulatory reason to prefer the current approach, and it has scientific and ethical shortcomings.
CONCLUSION
The current approach to deception in research discourages or prohibits its use in studies that pose greater than minimal risk overall. In the U.S., this approach is based on an interpretation of the Common Rule according to which the first condition for waiving or altering the requirements for informed consent (“the research” involves no greater than minimal risk) mandates that the overall study poses no greater than minimal risk.
One might defend the current approach on the grounds that it is critical to ensuring that research participants are not exposed to greater than minimal risk as a result of being deceived. Yet an alternative approach—prohibit deception when the aspect of the study about which participants are deceived poses greater than minimal risk—provides the same protection in this regard. Moreover, an important scientific reason to prefer the proposed approach over the current one is that the proposed approach permits valuable studies that the current approach blocks. The proposed approach also meets the ethical standard of providing the same level of respect and protection from research risks as does the current approach.
Finally, with respect to the Common Rule, applying the common steps in statutory interpretation to help interpret the research regulations does not clearly support either the current or alternative approach to deception. However, analysis of the recommendations of the National Commission’s Belmont Report, which form the basis for the Common Rule, suggests that the proposed alternative is consistent with those recommendations. Thus, IRBs in the U.S. could adopt the proposed alternative approach to permit the deception of research participants in studies that pose greater than minimal risk overall, provided the deception is limited to minor aspects of the study that pose no greater than minimal risk.
ACKNOWLEDGMENT AND DISCLAIMER
This work was funded by the Intramural Research Program at the NIH Clinical Center. However, the opinions expressed are the author’s own. They do not represent the position or policy of the National Institutes of Health, the U.S. Public Health Service, or the U.S. Department of Health and Human Services.
REFERENCES
- 1. McCambridge J, et al., “The Use of Deception in Public Health Behavioral Intervention Trials: A Case Study of Three Online Alcohol Trials,” American Journal of Bioethics 13, no. 11 (2013): 39–47; Wendler D and Miller FG, “Deception in Clinical Research,” in The Oxford Textbook of Clinical Research Ethics, ed. Emanuel EJ et al. (New York: Oxford University Press, 2008), 315–24, at 316.
- 2. Mazar N, Amir O, and Ariely D, “The Dishonesty of Honest People: A Theory of Self-Concept Maintenance,” Journal of Marketing Research 45, no. 6 (2008): 633–44.
- 3. “Research Involving Partial Disclosure or Deception,” article 3.7A in chap. 3, “The Consent Process,” of TCPS 2, Government of Canada, Panel on Research Ethics, 2018, https://ethics.gc.ca/eng/tcps2-eptc2_2018_chapter3-chapitre3.html#b.
- 4. Indian Council of Medical Research, National Ethical Guidelines for Biomedical and Health Research Involving Human Participants (New Delhi, India: Director-General, Indian Council of Medical Research, 2017), https://www.icmr.nic.in/sites/default/files/guidelines/ICMR_Ethical_Guidelines_2017.pdf, section 9.2.9.
- 5. Council for International Organizations of Medical Sciences (CIOMS) in collaboration with World Health Organization, “Commentary on Guideline 9,” in International Ethical Guidelines for Health-Related Research Involving Humans (Geneva, Switzerland: CIOMS, 2016), https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf, pp. 33–36, at p. 33.
- 6. U.S. Department of Health and Human Services, Protection of Human Subjects, 45 C.F.R. 46.116.
- 7. “Deception and Incomplete Disclosure in Research Guidance for Investigators,” Partners Healthcare, December 1, 2008, https://www.partners.org/Assets/Documents/Medical-Research/Clinical-Research/Deception-and-Incomplete-Disclosure-in-Research.pdf.
- 8. “Findings for Waiver or Alteration of Consent Requirements and Waiver of Documentation (Waiver of Signature) of Consent,” Stanford University HRPP guidance, Stanford University, revised October 2017, http://web.stanford.edu/dept/DoR/compliance/hs/research/documents/Regulation-sWaiverAlterationConsent.pdf.
- 9. Herman CP, Polivy J, and Silver R, “Effects of an Observer on Eating Behavior: The Induction of ‘Sensible’ Eating,” Journal of Personality 47, no. 1 (1979): 85–99.
- 10. Werthmann J, et al., “Can(not) Take My Eyes Off It: Attention Bias for Food in Overweight Participants,” Health Psychology 30, no. 5 (2011): 561–69.
- 11. O’Neil CC and Miller FG, “When Scientists Deceive: Applying the Federal Regulations,” Journal of Law, Medicine & Ethics 37 (2009): 344–50, at 346.
- 12. Cross FB, The Theory and Practice of Statutory Interpretation (Stanford, CA: Stanford University Press, 2009).
- 13. Eskridge W, Frickey P, and Garrett E, Legislation and Statutory Interpretation, 2nd ed. (New York: Foundation Press, 2006).
- 14. Merriam-Webster, s.v. “the,” accessed January 3, 2019, https://www.merriam-webster.com/dictionary/the.
- 15. Gardebring v. Jenkins, 485 U.S. 415, 430 (1988); Moore v. Hannon Food Serv., Inc., 317 F.3d 489, 494–96 (5th Cir. 2003).
- 16. “IRB Waiver or Alteration of Informed Consent for Clinical Investigations Involving No More Than Minimal Risk to Human Subjects: Guidance for Sponsors, Investigators, and Institutional Review Boards,” U.S. Food and Drug Administration, July 2017, https://www.fda.gov/regulatory-information/search-fda-guidance-documents/irb-waiver-or-alteration-informed-consent-clinical-investigations-involving-no-more-minimal-risk.
- 17. “Attachment D: Informed Consent and Waiver of Consent,” U.S. Department of Health and Human Services, Office for Human Research Protections, 2013, https://www.hhs.gov/ohrp/sachrp-committee/recommendations/2013-january-10-letter-attachment-d/index.html.
- 18. Ibid.
- 19. Gonzales v. Oregon, 546 U.S. 243 (2006).
- 20. “45 CFR 46 FAQs”: “What is the historical basis for the current human research regulations, 45 CFR part 46?,” U.S. Department of Health and Human Services, Office for Human Research Protections, https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/45-cfr-46/index.html.
- 21. U.S. Department of Health and Human Services, National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (April 18, 1979), https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html#xrespect.
