Abstract
It is widely assumed that the use of deception in research is always inconsistent with obtaining valid consent. In addition, guidelines and regulations permit research without valid consent only when it poses no greater than minimal risk. Current practice thus prohibits studies that use deception and pose greater than minimal risk, including studies that rely on deceptive methods to evaluate experimental treatments. To assess whether these prohibitions are justified, the present paper evaluates five arguments that might be thought to support the assumption that deception is always inconsistent with valid consent. Analysis of these arguments reveals that deception is frequently, but not always, inconsistent with obtaining valid consent for research. This conclusion suggests that, in order to avoid unnecessarily blocking valuable research, current policies and practice should be revised to recognize the conditions under which the use of deception can be consistent with obtaining research participants’ valid consent.
Keywords: deception, valid consent, rights
I. INTRODUCTION
Researchers frequently deceive research participants (Adair et al., 1985; Korn, 1997; McCambridge et al., 2013). They deceive research participants when providing accurate information would undermine a study’s scientific validity or its social value. Informing participants that the purpose of a study is to assess their level of concentration is likely to influence how much they concentrate and thereby undermine the validity of the results. Informing participants that they are receiving a placebo, not an active medication, might influence the extent of the placebo effect and thereby decrease the social value of the results (Vase et al., 2003).
Whatever its scientific value, deception raises important ethical concerns (Ortmann and Hertwig, 1997, 1998). First, deception involves concealing or misdescribing aspects of the research. It is thus in tension with obtaining participants’ valid consent (Bok, 1995). Second, deception involves manipulating participants to believe things that are false about the study in question. To this extent, it seems to violate participants’ right to decide for themselves whether to enroll. Third, public awareness that researchers sometimes deceive research participants makes it difficult for the potential participants of non-deceptive studies to be confident that they have been provided with accurate information. In this way, the approval of some deceptive studies has the potential to undermine the public’s trust in all researchers and in all research.
Some commentators conclude that deception in research is impermissible and should be prohibited in all cases (Cupples and Gochnauer, 1985). Most notably, the first principle of the Nuremberg Code maintains that clinical research should be permitted only when participants make a voluntary decision whether to enroll, absent “any element of deceit.” While blanket prohibition offers protection for research participants, it also precludes valuable studies that rely on deception. Recognizing this tension, current practice and policies attempt to address the ethical concerns raised by deception, while still permitting its use in limited cases (Sieber et al., 1995). This approach has generated a number of analyses and guidelines on the conditions under which the deception of research participants can be acceptable. The present manuscript focuses on a specific question within this broader discussion: What is the relationship between deceiving research participants and obtaining their valid consent?
Deception involves researchers providing participants with misleading or inaccurate information about the study in question. Obtaining participants’ valid consent, in contrast, requires researchers to describe the study accurately and requires participants to understand it correctly. It is thus commonly assumed that the use of deception is always inconsistent with obtaining valid informed consent. Researchers can deceive research participants or they can obtain their valid consent, but not both. This common assumption has important consequences. Most guidelines and regulations permit researchers to conduct research without participants’ valid consent only when it poses no greater than minimal risk. Hence, the assumption that deception is always inconsistent with valid consent implies that researchers may not deceive participants in the context of studies that pose greater than minimal risk.
Some regulations state this condition explicitly. The CIOMS guidelines maintain that “Deception is not permissible in cases in which the study exposes participants to more than minimal risk” (CIOMS, 2016). Other regulations, including U.S. regulations, do not specify when deceptive studies may be approved. Instead, the common assumption that deception and valid consent are incompatible has led to the practice of permitting deceptive studies only when they qualify for a waiver of the requirement to obtain valid consent. Since most guidelines permit such waivers only when the study poses no greater than minimal risk, the restrictions on deception end up being the same as they are under regulations that explicitly address deception, namely, deceptive studies are permitted only when they pose no greater than minimal risk. Unfortunately, this approach precludes a range of valuable studies, including, most importantly, clinical trials that rely on deceptive methods to assess experimental treatments.
To evaluate whether these prohibitions are warranted, the present paper assesses whether in fact deception is always inconsistent with valid consent. At least five arguments might be thought to show that it is: (1) deception conceals one or more aspects of the study that need to be disclosed and/or understood for valid consent; (2) the use of deception poses greater than minimal risk; (3) deception itself needs to be disclosed and/or understood for valid consent; (4) deception involves investigators intentionally misleading participants; and (5) deception violates participants’ rights. Analysis of these arguments reveals that deception is frequently inconsistent with valid consent. However, researchers can sometimes deceive participants and still obtain their valid consent. To see this possibility, it is important to distinguish deception involving aspects of the study that need to be disclosed and/or understood for valid consent (call these “essential” aspects of the study) and deception involving aspects that do not need to be disclosed or understood for valid consent (“nonessential” aspects).
Deception involving essential aspects is inconsistent with valid consent, as current practice and policies maintain. In contrast, researchers can deceive participants about nonessential aspects and still obtain their valid consent. This analysis reveals that current practice and policies unnecessarily prohibit valuable studies that deceive participants about nonessential aspects. To remedy this, current practice and policies should be revised to recognize that researchers sometimes deceive participants in ways that do not undermine the validity of their consent. In these cases, it can be acceptable to use deception in studies that pose greater than minimal risk overall, provided participants are not deceived about any aspects of the study that are essential to valid consent.
II. THE NATURE OF VALID CONSENT AND DECEPTION
Valid consent involves an agreement that is morally transformative in the sense that it makes permissible what, absent the agreement, would be problematic or impermissible. With respect to enrollment in research, valid informed consent involves individuals agreeing to be enrolled in a study in such a way and under such conditions that it is ethically permissible for the researchers to perform study-related procedures on them. Obtaining valid consent transforms the act of sticking a needle in an individual’s arm from battery into permissible research. To achieve this moral transformation, standard accounts maintain that four conditions must be satisfied: (1) a competent individual who is (2) sufficiently informed must (3) voluntarily decide to enroll in the study and (4) communicate this decision to the research team. The question for the present paper, then, is whether deception is always incompatible with at least one of these four conditions. If it is, the prevailing view is correct. If it is not, the prevailing view is mistaken.
There is no consensus definition of deception. To consider just one point of contention, deceivers often intend for those they deceive to develop false beliefs. However, it is unclear whether an intention to produce false beliefs is a necessary condition on deception. Imagine that, as a result of my tenuous grasp of German, I unintentionally begin my presentation with a misleading statement that leads all of the German speakers in the audience to have mistaken beliefs about what I am going to say. Have I deceived them? Although this is an important question, I bracket it. The goal of the present work is not to develop a comprehensive conceptual analysis of deception. The goal is to evaluate whether, on any plausible understanding, the use of deception is always inconsistent with valid consent.
To try to answer this question, I understand deception in terms of instances of communication that reasonably can be expected to result in the recipient(s) developing false beliefs (Wendler and Miller, 2008, 316). Imagine, for example, that researchers are interested in assessing which factors influence individuals’ concentration. They propose to ask research participants to answer several essay questions while introducing various objects and noises into the room. The extent to which the objects and noises are distracting is determined by the extent to which their introduction results in participants looking up from the test.
Informing potential participants that the purpose of this study is to assess whether they are distracted, and that this is determined by how often they look up, would almost certainly influence the frequency with which participants look up. Recognizing this threat to scientific validity, the researchers propose to tell potential participants that the purpose of the study is to evaluate the creativity of their answers. This statement likely qualifies as deceptive on any plausible definition, including definitions that maintain that deception must be intentional. This statement certainly qualifies as an instance of communication expected to lead reasonable individuals to believe falsely that the study concerns creativity, not concentration.
The most obvious way to get reasonable individuals to believe something false is to tell them something false. In the present example, the researchers get participants to believe that the purpose of the study is not to assess their concentration by telling them that the purpose is to assess the creativity of their answers. However, telling individuals something that is false is not the only way to get them to believe things that are false. One can also get reasonable individuals to believe things that are false by failing to disclose information. More surprisingly, it is possible to get reasonable individuals to believe things that are false by telling them things that are true. How is this possible?
Briefly, researchers can exploit the norms governing communication, and the expectations those norms produce, to deceive participants by failing to disclose information and even by telling them things that are true. In some contexts, there is an expectation that, if something is true, it will be disclosed. For example, a common way to deceive research participants is to fail to disclose that the mirror on the wall is a one-way mirror that allows them to be observed. This is an example of deception by lack of disclosure. It works because individuals assume that, if participants are going to be observed, researchers will disclose that fact. Failure to disclose the presence of the one-way mirror thus leads reasonable individuals to believe that they will not be observed.
In many contexts, there is also an expectation that everything disclosed is true. Imagine that the attention study does not offer any payment to participants and explains this in the consent form as follows: “participants will receive either $100 or no money.” Given that participants will receive no money, this disjunctive statement is true. However, reasonable people will think: the consent form would not mention the possibility of receiving $100 unless there is a chance I will receive it. Hence, there must be some chance I will receive $100. Of course, this might, as the investigators intend, increase the chances that individuals enroll in the study.
These examples illustrate the important lesson that researchers, review committees, and others cannot determine whether a consent process is deceptive simply by assessing whether it includes any false statements. They also need to assess whether, given the context, the inclusion of some true statements, or the failure to include them, is likely to lead reasonable individuals to have false beliefs regarding the study.
III. CURRENT PRACTICE
Some commentators and guidelines assume that valid consent is ethically necessary for all research studies. The Nuremberg Code, for example, maintains that valid consent is “absolutely essential” to ethical research. Combining this view with the assumption that deception is always inconsistent with valid consent would have dramatic implications. It would imply that deception is inappropriate in all cases. More recent guidelines and regulations permit the use of deception in limited circumstances.
The Belmont Report states that deception can be justified when (1) incomplete disclosure is truly necessary to accomplish the goals of the research, (2) there are no undisclosed risks to subjects that are more than minimal, and (3) there is an adequate plan for debriefing subjects (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979). The American Psychological Association allows psychologists to deceive potential participants when (a) the use of deception is justified by the study’s value and nondeceptive procedures are not feasible; (b) the research is not expected to cause physical pain or severe emotional distress; and (c) the deception is explained as early as is feasible and participants who object are permitted to withdraw their data (APA, 2017). As noted, U.S. federal research regulations do not specify when deception is permissible. Instead, the common assumption that deception is always inconsistent with valid consent has led to a practice of permitting deceptive studies only when they satisfy the conditions on conducting research without informed consent (DHHS, 2018).
U.S. regulations permit researchers to conduct research without obtaining valid consent only when it satisfies five conditions. Most importantly, research without valid consent is permitted only when it “poses no more than minimal risk” (DHHS, 2018, 46.116). Deception in the context of survey and behavioral studies typically satisfies this, as well as the other conditions on waiver. Current practice thus permits these studies. Unfortunately, other important studies that rely on deception pose greater than minimal risk overall. The common assumption that deception is always inconsistent with valid consent, together with the regulatory stipulation that research may be conducted without valid consent only when it poses no greater than minimal risk, precludes these studies. This possibility is especially troubling in the case of studies that rely on deceptive methods to assess experimental treatments. Consider two examples.
Bogus Taste Test
Binge eating represents a significant health problem, and researchers are trying to identify effective treatments for it. To do so, they administer experimental treatments to research participants and assess their safety and efficacy with respect to binge eating. Given the potential for toxicity, these trials, like essentially all studies that administer experimental treatments whose safety has not been established, pose greater than minimal risk overall. These trials also face a significant scientific challenge: informing binge eaters that their level of eating will be assessed reduces the extent to which they binge eat (Peter et al., 1979). This impact makes it difficult for researchers to determine whether any reductions in binge eating are a result of the experimental treatment or a result of informing participants that their eating will be assessed.
To avoid this potential confound, researchers randomize participants to the experimental treatment or a placebo, and then assess their level of binge eating using the Bogus Taste Test (Werthmann et al., 2011). The Bogus Taste Test is designed to evaluate participants’ level of binge eating without informing them that this assessment is taking place. It does so by presenting research participants with a tray containing a range of foods and telling them that the goal of the session is to determine their preferences regarding different food smells. The tray is set in front of the participants so that they can smell each of the food items while the investigator leaves the room. Unbeknownst to the participants, the tray is weighed before and after each session. The difference provides a measure of how much food each participant consumes. And comparing the results between the experimental treatment group and the placebo group provides a measure of the impact of the experimental intervention on binge eating.
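The measurement logic just described reduces to simple arithmetic: per-session consumption is the difference in tray weight, and the treatment effect is estimated by comparing group means. The sketch below is purely illustrative; the tray weights, group sizes, and variable names are invented here and are not drawn from the cited studies.

```python
# Hypothetical sketch of the Bogus Taste Test measurement logic.
# The tray is weighed before and after each session; the difference
# estimates how much the participant ate without any announced
# assessment. All numbers below are invented for illustration.

def grams_consumed(weight_before_g, weight_after_g):
    """Food consumed during one session, in grams."""
    return weight_before_g - weight_after_g

def mean(values):
    return sum(values) / len(values)

# Invented data: (before, after) tray weights per participant.
treatment_sessions = [(500, 430), (500, 410), (500, 450)]
placebo_sessions = [(500, 300), (500, 260), (500, 340)]

treatment_mean = mean([grams_consumed(b, a) for b, a in treatment_sessions])
placebo_mean = mean([grams_consumed(b, a) for b, a in placebo_sessions])

# A positive difference suggests the experimental treatment reduced intake.
effect_estimate = placebo_mean - treatment_mean
```

In this invented example the treatment group averages 70 g per session against 200 g for placebo, yielding an estimated reduction of 130 g; the actual trials would, of course, use appropriate statistical tests rather than a raw difference in means.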
Conditioning and Extinction
Conditioning participants to certain stimuli and then extinguishing the effect is central to a range of research studies. In one paradigm, participants are presented with a picture of a blue square followed by the administration of a mild shock. Participants who undergo a series of these trials come to expect a shock when they see a blue square. The extinction phase involves participants being repeatedly presented with a blue square without the shock. The strength of the association between the blue square and the shock is estimated by how long it takes before the participants no longer expect presentation of a blue square to be followed by a shock.
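The outcome measure described above (how long extinction takes) can likewise be sketched in a few lines. This is a hypothetical illustration only: the expectancy-rating scale, the cutoff, and the ratings themselves are invented, not taken from the conditioning literature cited below.

```python
# Hypothetical sketch of the extinction measure: per-trial shock-expectancy
# ratings (here an invented 0-10 scale) are recorded while the blue square
# is repeatedly presented without shocks, and association strength is
# indexed by how many trials pass before expectancy drops below a cutoff.

def trials_to_extinction(expectancy_ratings, cutoff=2):
    """Number of trials until shock expectancy first falls below the
    cutoff; returns None if expectancy never extinguishes."""
    for trial, rating in enumerate(expectancy_ratings, start=1):
        if rating < cutoff:
            return trial
    return None

# Invented ratings: expectancy fades over repeated unshocked presentations.
ratings = [9, 8, 7, 5, 4, 3, 1, 0]
print(trials_to_extinction(ratings))  # → 7
```

A stronger conditioned association would show up as a longer run of high ratings and hence a larger trial count; real studies also use physiological indices such as skin conductance rather than self-report alone.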
Telling participants the truth (that they will not receive any shocks during the extinction phase of the study) would lead to a rapid, research-induced extinction of the expectation. To avoid this confound, participants are told that they “may or may not receive shocks” during the extinction phase (Milad et al., 2007). This statement is true, but, like the earlier compensation example, it is also misleading. It is intended to get participants to believe, falsely, that there is a chance they will receive shocks during the extinction phase. Why, after all, would the consent form mention the possibility of receiving shocks? On this basis, reasonable individuals typically believe that they may receive some shocks during the extinction phase. This statement thus qualifies as deceptive.
Many studies involve the administration of experimental treatments that have not been shown to be safe in the study population. These studies pose greater than minimal risk overall. In addition, some of these studies rely on the Bogus Taste Test, the Conditioning and Extinction paradigm, or other deceptive methods to collect valid data. On the assumption that the use of deception is always inconsistent with valid consent, these studies cannot be approved under U.S. regulations (and other regulations and guidelines that permit research that poses greater than minimal risk only with valid consent). This potential cost provides a compelling reason to re-assess the prevailing assumption: is the use of deception always inconsistent with obtaining valid informed consent for research?
IV. IS DECEPTION ALWAYS INCONSISTENT WITH VALID CONSENT?
The Communication, Competence and Voluntariness Conditions on Valid Consent
As noted, standard accounts maintain that four conditions must be satisfied in order for researchers to obtain participants’ valid consent: (1) a competent individual who is (2) sufficiently informed must (3) voluntarily decide to enroll in the study and (4) communicate this decision to the research team. One way to assess the claim that deception is always inconsistent with valid consent is to evaluate whether its use in research is always inconsistent with satisfying one or more of these conditions.
Consider how these conditions apply to the aforementioned attention study. Deceiving research participants to believe that the purpose of the attention study is to assess the creativity of their answers presumably will not undermine their capacity to communicate a decision to the research team (fourth condition on valid consent). Moreover, while accounts of what is required for research participants to be competent vary, deception is unlikely to undermine participants’ competence (first condition on valid consent). For example, a classic account maintains that competence requires participants to have a set of values that is at least “minimally consistent, stable, and affirmed as his or her own.” In addition, participants must have the capacity to reason in light of their values and make a decision on that basis (Brock and Buchanan, 1990, 24–5). Deceiving potential participants to believe that the purpose of the attention study is to assess the creativity of their answers will not undermine the consistency or stability of their values. It also seems unlikely to undermine their ability to reason and make a decision on the basis of their values.
Telling potential participants that the purpose of the attention study is to assess the creativity of their answers might be thought to threaten the voluntariness of their decision to enroll (third condition on valid consent). To support this view, one could argue that threats to voluntariness are any actions that have the potential to influence an individual’s decision. While this view would implicate at least some instances of deception, it is clearly too broad. An investigator’s accurate statement that a study poses a risk of death, or that it offers the potential for clinical benefit, has the potential to influence a potential participant’s decision whether to enroll. However, that involves appropriate disclosure, not a threat to voluntariness. To accommodate these cases, one might argue more narrowly that threats to voluntariness involve actions that have the potential to influence individuals’ decisions inappropriately. While that is better, it still seems too broad. Imagine that an investigator conducting a randomized trial promises to violate the protocol and ensure that a potential participant receives the active medication, not the placebo. That would be inappropriate, and it may well influence the potential participant’s decision to enroll, but it does not threaten the voluntariness of that decision.
A leading analysis maintains that threats to voluntariness involve a specific kind of inappropriate influence on individuals’ decisions, namely, inappropriate influences that involve pressuring, threatening, or forcing individuals to enroll (Appelbaum et al., 2009). A doctor who explains a study and then threatens to abandon the patient unless the patient agrees to enroll undermines the voluntariness of the patient’s decision and, hence, the validity of their consent. While the use of deception does not involve pressuring, threatening, or forcing individuals to enroll, it can involve more subtle attempts to manipulate them. For example, in order to increase enrollment, an investigator might describe a study that includes a painful procedure as “involving little discomfort”. This use of deception involves manipulating potential participants’ beliefs with the goal of increasing the chances that they enroll. One could argue, on that basis, that it undermines the voluntariness of potential participants’ decision to enroll (Wilkinson, 2013).
Even if one endorses this broader view, many instances of deception are not intended to influence potential participants’ decision whether to enroll. Instead, they are intended to influence participants’ behavior once they enroll. The statement that the goal of the attention study is to assess participants’ creativity is not intended to influence whether they enroll. There is no reason to think that participants are more (or less) likely to enroll in a study that assesses their creativity than in one that assesses their concentration. Instead, this statement is intended to avoid the possibility that disclosing the true purpose of the study influences the level of participants’ concentration during the study.
This analysis suggests that the use of deception has the potential to undermine the voluntariness of participants’ decision to enroll in, at most, some cases. This conclusion is consistent with the views of a leading commentator: “If A engages in deception or withholds important information, B’s consent may not be valid, but A’s action does not compromise the voluntariness of B’s consent” (Wertheimer, 2012, 227). By getting participants to believe things that are false, deception does involve manipulation of their beliefs. On some accounts, such manipulation can threaten the voluntariness of participants’ decision to enroll. Even on those accounts, this possibility arises only when the deception influences individuals’ enrollment decisions. Assuming this is right, the common assumption that deception is always inconsistent with valid consent cannot be supported by the fact that it always undermines the voluntariness of participants’ decision whether to enroll in research. It follows that whether this common assumption is accurate depends on whether the use of deception is inconsistent with the second condition on valid consent in all cases, or at least in the many cases where deception does not undermine the voluntariness of potential participants’ decision to enroll.
Is Deception Inconsistent With the Second Condition on Valid Consent?
The second condition on valid consent requires that participants are sufficiently informed regarding the study in question. Satisfaction of this condition requires two steps. It requires researchers to disclose information about the study, and it requires participants to understand information about the study. Commentators disagree regarding the relationship between the information that needs to be disclosed and the information that needs to be understood. Some argue that participants need to understand all the information researchers need to disclose; others argue that potential participants need to understand only a subset of this information. To capture both views, I understand the “essential aspects” of a study as referring to both the information that needs to be disclosed and the information that needs to be understood.
What counts as an essential aspect (that needs to be disclosed and/or understood) varies, depending on one’s theory. The Belmont Report tentatively proposes that researchers should disclose the information that a “reasonable volunteer” would want to know (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979). Faden and Beauchamp argue that researchers should disclose the aspects of the study that potential participants regard as worthy of consideration in the process of deliberation (Faden and Beauchamp, 1986). Imagine that a study procedure will leave a major scar. It seems plausible to assume that potential participants would regard this as worthy of consideration in deciding whether to enroll. Hence, according to Faden and Beauchamp, this aspect qualifies as (what I am calling) an essential aspect of the study (what they refer to as “germane” for the purposes of authorizing the procedure). Others argue that, in order to obtain valid consent, researchers need to disclose all and only the information that might influence whether potential participants decide to enroll (Bromwich and Millum, 2015), what Feinberg refers to as the participants’ “inducement set” (Feinberg, 1989). On this view, the researchers can obtain valid consent without disclosing the potential for a major scar if it is known that the potential participants will consent either way. That is, it is known that the possibility of a major scar will not influence whether potential participants decide to enroll.
Granting these disagreements, most theories, and almost all regulations, mandate that researchers need to disclose and participants need to understand at least: the purpose of the research, the major procedures, the significant risks and potential benefits, the alternatives, and the fact that participation is voluntary. Thus, to assess whether the use of deception is inconsistent with satisfying the second condition on valid consent, we can assess the extent to which it is inconsistent with accurate disclosure or correct understanding of these things. Does deception always involve concealing or misdescribing one or more of these aspects? Or does it involve concealing or misdescribing one or more of these aspects in the cases where deception does not undermine the voluntariness of potential participants’ decision to enroll?
Does Deception Always Conceal or Misdescribe One or More Essential Aspects?
Many instances of deception either conceal or misdescribe one or more essential aspects of a study. For example, the attention study, like many deceptive studies, misdescribes its purpose. Consistent with the prevailing assumption, these instances of deception are incompatible with obtaining participants’ valid consent. Yet, many studies that use deception accurately disclose all of their essential aspects. Consider again studies that use the Bogus Taste Test to assess experimental treatments for binge eating.
Participants are accurately informed that the purpose of the study is to assess whether the experimental treatment helps to reduce binge eating and that this will be assessed by evaluating how much they eat. They are also informed accurately regarding the major procedures, including the use of a placebo, and the significant risks and potential benefits, including the fact that the treatment has not been shown to be safe or effective for binge eating. They are informed about the duration of the study, the alternatives to participation and the fact that participation is voluntary. The deception is limited to the fact that the purpose of the Bogus Taste Test is misdescribed and participants are not informed that the tray is weighed before and after the session. Arguably, this deception does not involve information that needs to be disclosed or that individuals need to know to give valid consent.
Similarly, studies that use the Conditioning and Extinction paradigm to assess experimental treatments accurately disclose all of the essential aspects, including the purpose, major procedures, significant risks, potential benefits, and the fact that the study involves research and is voluntary. The deception is limited to telling participants that they might receive a few more shocks than they actually receive. In this case too, the use of deception does not seem to withhold or misdescribe any information that needs to be disclosed or that potential participants need to know to give valid consent. It is difficult to see how knowing that the maximum number of shocks is really 20, not 23, might lead some participants to decline to enroll. If we think of valid consent in terms of waiving one’s relevant rights, it seems similarly implausible to argue that participants who agree to being shocked a maximum of 23 times do not waive their right against being shocked a maximum of 20 times.
This analysis suggests that the use of deception in research is sometimes consistent with satisfying the second condition on valid consent. This conclusion, together with the previous conclusions that the same study can be consistent with the other three conditions on valid consent, reveals that the use of deception in research is sometimes consistent with satisfying all the conditions on valid consent. Specifically, researchers can deceive participants who are competent, sufficiently informed regarding the essential aspects of the study, voluntarily decide to enroll, and communicate this decision. Are there other reasons to think that these studies might nonetheless be inconsistent with obtaining valid consent?
Does the Use of Deception Itself Pose Greater Than Minimal Risk?
Researchers who use deception typically do not reveal its use until after participants complete the study. This is important for present purposes because participants who are informed after the study that they were deceived may be upset by that fact. If they are, the use of deception itself may pose greater than minimal risk, in which case failure to disclose the use of deception prospectively would involve a failure to disclose an essential element of the study, namely, the use of a procedure (deception) that poses greater than minimal risk.
Current practice approves many deceptive studies as posing no greater than minimal risk. This approach makes sense only if the use of deception itself does not pose greater than minimal risk. In this way, the present attempt to support the common view that the use of deception is always inconsistent with valid consent contradicts current practice. To assess the present argument, then, we need to evaluate current practice: Does the use of deception pose minimal or greater than minimal risk?
Many regulations define minimal research risks based on the risks individuals ordinarily encounter in daily life. U.S. regulations define minimal risks as risks that do not exceed those “ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests” (45 CFR 46.102(j)). This definition offers two ways to assess whether the use of deception in research poses minimal or greater than minimal risk. Using the first disjunct, individuals encounter deception in many aspects of daily life, including advertising, political campaigns, and surprise parties. We can thus assess whether deception in research poses greater than minimal risk by assessing whether it poses greater risks than deception in these other contexts. Using the second disjunct, we can assess whether the risks of deception in research exceed the risk of routine examinations or tests, such as MRI scans or blood draws.
A number of empirical studies have assessed the impact of deception on research participants (Christensen, 1988). These studies find that deception in behavioral and psychological studies does not upset most college undergraduates (Boynton et al., 2013). For example, Soliday and Stanton found that undergraduates were not bothered by the use of mild deception in six experimental scenarios (Soliday and Stanton, 1995). Smith and Richardson report that participants deceived in psychology experiments rated their overall experience as more positive than those who were not deceived (Smith and Richardson, 1983). These data suggest, on both of the aforementioned tests, that the use of deception in research poses no greater than minimal risk.
Using the first test, individuals typically do not regard much of the deception in daily life, such as the deception in advertising and political campaigns, as enjoyable. The finding that many participants find deception in research to be enjoyable thus suggests it is no more, and potentially less, risky than deception in daily life. Granting this conclusion, some common activities in daily life, including playing football and driving in a snowstorm, pose significant risks. Evaluating research procedures by comparing their risks to the risks of these daily activities would be problematic, raising the potential to classify procedures that pose significant risks as minimal risk. While this is a concern in some cases, it does not seem to apply in the present case. The activities to which the impact of deception is being compared (e.g., advertising) do not seem significantly risky. This conclusion is supported by the second test. Individuals typically do not find routine procedures, such as blood draws, to be enjoyable. These data thus support the claim that deception itself poses no greater than minimal risk. Furthermore, that conclusion undermines the possibility of supporting the common assumption that the use of deception is always inconsistent with valid consent by arguing that it poses greater than minimal risk.
Granting this, there is some reason to question the generalizability of the data on which this conclusion is based. Assessments of the impact of deception tend to involve researchers deceiving participants at enrollment, debriefing them at the end of the study, and then asking whether the participants are upset by the fact that they were deceived. Individuals who are upset at being deceived may be reluctant to admit it, especially to the same researchers who deceived them (Baumrind, 1985). Furthermore, the fact that most or even the vast majority of participants are not bothered by the use of deception is consistent with the possibility that a minority might find it problematic (Smith, 1981). Most of the data were collected in the context of deceptive studies involving students, which may not generalize to deception in other settings (Fisher and Fyrberg, 1994). Many students who take undergraduate psychology courses may be aware that they often involve deceptive experiments. Those who do not like being deceived may choose not to enroll in these courses, and those who do enroll may feel some control over whether they are deceived. Hence, these studies may effectively screen out those who might be upset by the use of deception.
Finally, undergraduates’ reliance on researchers tends to be limited to a specific experiment or the grade in an individual course. The deception of patients who participate in clinical trials may raise greater concern, given the extent to which they rely on the trustworthiness of clinician-researchers for their health and well-being. This difference may help to explain the findings, in the few studies that assessed deception in the clinical context, that some research participants are upset by deception, and some of them are very troubled by it (Ortman and Hertwig, 2002). For example, one study found that a majority of the individuals who were upset at being deceived indicated that the use of deception would “lower their trust in the medical profession” (Fleming et al., 1989).
These reactions seem similar to the impact of deception in many aspects of daily life. In particular, deception in politics appears to have lowered the public’s trust in politicians (Levi and Stoker, 2000). These findings are consistent with the possibility that deception in research poses risks no greater than those ordinarily encountered in daily life. This argument, though, compares the risks of research to the risks of daily-life activities that are themselves problematic. This concern is supported by the fact that these negative reactions to being deceived exceed the concerns associated with undergoing routine examinations. While many people do not enjoy undergoing blood draws and MRI scans, it is extremely rare for adults to be very troubled by them. Recognizing that the existing data on the impact of deception in the research context are limited, these considerations suggest that the risks of deception in clinical contexts may exceed the risks of routine examinations. It follows, on existing definitions, that the use of deception, at least in clinical contexts, has the potential to pose greater than minimal risk to some participants.
To address this possibility, review committees might be tempted to waive the requirement for debriefing, in which case most participants likely will never learn that they were deceived, thus reducing the psychological risks of deceptive studies. For several reasons, this approach is problematic. First, it is inconsistent with the requirement, included in some regulations and guidelines, to debrief participants after participation. Second, by failing to disclose and justify the use of deception, and failing to correct participants’ false beliefs about their research participation, this approach permits the deception to continue indefinitely, effectively expanding the extent to which the researchers manipulate participants’ beliefs (Sommers and Miller, 2013). For example, participants who are not debriefed will continue to have false beliefs regarding the purpose of studies in which they participated. Third, this approach fails to allow participants who object to being deceived to withdraw their data. Instead, it involves participants contributing to the project under false pretenses. This approach undermines appropriate respect for participants, which includes allowing them to decide whether they contribute to a given study. Additionally, this approach may undermine participants’ non-welfare interests if it leads to their contributing to projects inconsistent with their values.
These considerations provide strong reason to debrief participants, despite the increased potential for psychological harm. This conclusion points to the need for an alternative approach for assessing the impact of deception on research participants. One option would be for researchers to debrief participants and then assess the impact of deception on them. Specifically, participants could be debriefed and then asked whether they are upset by the fact that they were deceived. The possibility that participants may not be willing to reveal their concerns to the researchers could be addressed by having an independent person conduct the interviews or having participants complete an anonymous survey. This approach provides a means for assessing, in real time, whether the use of deception is problematic for the participants of the study in question. Review committees could use the results to evaluate whether deceptive studies approved as minimal risk in fact pose minimal risk.
Still another approach would be to mandate that researchers who deceive participants use authorized deception. Authorized deception involves researchers informing potential participants at the time of enrollment that some aspects of the study have not been disclosed, or have been mis-described, without explaining the exact nature of the deception (Wendler and Miller, 2004). For example, the consent form for one study that used authorized deception stated:
You should be aware that the researchers have intentionally mis-described certain aspects of the study. This use of deception is necessary to obtain valid results. However, an independent research committee has determined that this consent form describes all the major risks and benefits of the study. The investigator will explain the mis-described aspects of the study to you at the end of your participation. (Peciña et al., 2018)
By explaining that some aspects of the study have been mis-described, authorized deception allows those who object to being deceived to decline to participate, minimizing the risks of deceptive research, and making it more likely that deceptive studies pose minimal risk (Herrera, 2001).
In sum, this analysis suggests that the use of deception poses no greater than minimal risk to many research participants. In cases where its use raises greater concern, especially clinical contexts, the risks can be minimized by assessing the impact of the deception at the time of debriefing or implementing authorized deception. In all these cases, appeal to the risks of deception does not support the common assumption that deception and valid consent are incompatible.
Is Deception Itself an Essential Aspect?
One might argue that, even when deception itself does not pose greater than minimal risk, individuals want to be informed prospectively of its use, in which case deception itself might be regarded as an essential aspect of the study that needs to be disclosed. This seems especially important with respect to uses of deception which, if disclosed, might lead participants to decline to enroll (Wilson, 2015). While this line of reasoning supports the common assumption that deception is always inconsistent with valid consent, it contradicts current practice, which frequently waives the requirement to obtain informed consent for deceptive studies that pose no greater than minimal risk. If failure to disclose the use of deception is acceptable in the context of studies that pose minimal risk, it seems odd to argue that it is problematic in studies that pose greater than minimal risk, provided participants are informed accurately regarding the risks of the study. The current practice of approving deceptive studies without disclosure of the use of deception thus suggests that deception itself is not typically regarded as an essential aspect of deceptive studies. This approach is supported by the existing data, which suggest that many participants, especially college students, are willing to participate in deceptive studies without being prospectively informed of the use of deception. Finally, in contexts in which the IRB determines that the use of deception is an essential aspect (perhaps more often in clinical contexts), it can require that the researchers use authorized deception. This suggests that the use of deception is often not an essential aspect of the study and, when it is, researchers can disclose its use via authorized deception.
The present section considered three arguments that might be thought to show that the use of deception is always inconsistent with valid consent in virtue of failing to disclose one or more essential aspects of the study in question. This assessment reveals that the use of deception frequently does conceal or mis-describe one or more essential aspects. In addition, the use of deception itself may pose greater than minimal risk or otherwise qualify as an essential aspect for some studies. In these cases, the use of deception, at least when it is not combined with authorized deception, is inconsistent with valid consent. However, in other cases, the use of deception does not conceal any essential aspects of the study, does not pose greater than minimal risk, and is not itself an essential aspect. These cases are consistent with sufficient disclosure and understanding, and do not undermine the voluntariness of participants’ consent. It follows that deception in these cases is consistent with the conditions on obtaining valid consent. In the last two sections, I consider whether deception is nonetheless inconsistent with the spirit of valid consent.
Deception Involves Providing Inaccurate Information
Deceptive studies involve participants not knowing about certain aspects of a study. Participants in studies that use the Bogus Taste Test do not know the tray is being weighed. Participants in studies that use the Extinction Paradigm have false beliefs regarding when they might receive shocks. Of course, research participants never know everything about a given study. Research is too complex for that. Even having false beliefs regarding a study can be consistent with valid consent, at least when the beliefs involve aspects of the study that are not essential to valid consent. In standard cases, false beliefs regarding the manufacturer of the MRI machine, the molecular structure of the experimental medication, or how long it takes to undergo a history and physical examination are consistent with providing valid consent.
One might respond that participants in deceptive studies do not merely have false beliefs; they have false beliefs as the result of the researchers intentionally providing them with misleading or inaccurate information. Deception therefore seems inconsistent with at least the spirit, if not the specific requirements, of valid consent:
The most telling argument against deception is that it is disrespectful of persons. It serves the acquisition of knowledge while demeaning human dignity and is therefore incompatible with the underlying premise of informed consent. (Berg et al., 2001, 292-5)
While this claim seems plausible, it is undermined by the fact that a significant amount of deception in daily life is regarded as acceptable. Examples include deception that occurs in practical jokes, deception with respect to surprise parties, and deception about our personal opinions. It seems strained at best to argue that deceiving my in-laws regarding what I think about their newest knick-knack, or deceiving my friend about why I did not attend her party, is disrespectful of them as persons. The most plausible explanation is that deception about relatively minor issues does not necessarily fail to respect individuals as persons. If the use of deception in research similarly involves a minor aspect of the study, it follows that it can be consistent with the letter, as well as the spirit, of valid informed consent. To counter this conclusion, and provide support for the common view that deception is always inconsistent with (the spirit of) valid consent, proponents would need to show that the obligation to respect research participants is different from, or stronger than, the obligation to respect individuals in these other contexts. This seems implausible, though, given that these other contexts involve deception of close friends and relatives, individuals to whom we presumably have strong obligations of respect.
Does Deception Violate Participants’ Rights?
Can one support the common assumption that deception is always inconsistent with valid consent on the grounds that deception violates the rights of research participants? Some commentators do argue that deception that conceals the fact that participants are involved in research violates their rights. In contrast, they argue that deception about the purpose of a study is consistent with participants’ rights, since they at least know they are involved in research (Zuraw, 2013). If we think of rights in terms of protections of fundamental interests, this view seems to assume that individuals have a fundamental interest in deciding whether they participate in research, but not a fundamental interest in deciding the specific studies to which they contribute. Against this, some, perhaps many, individuals have no objections to participating in research per se, but they are opposed to specific types of research. For example, some individuals are fundamentally opposed to research on cloning human beings or developing improved methods of abortion. Enrolling individuals in these studies based on a misdescription of their purpose arguably undermines the individuals’ fundamental interests and is, therefore, inconsistent with their right to guide the course of their lives.
Others argue that whether deception violates participants’ rights depends on whether the individuals’ rights are infringed for a sufficiently good reason (O’Neil and Miller, 2009). On this approach, whether deception in research is consistent with respect for participants’ rights depends on three factors: the value of the study, the degree of infringement, and the alternatives to infringement. Applied to the attention study, the researchers would have to show that the data are valuable and deception about the purpose of the study is not a significant infringement on participants’ rights. These claims do not seem implausible. Hence, this approach provides a possible counter to the claim that deception per se violates participants’ rights.
The problem with this response is that allowing the value of a study to justify deceiving participants seems to allow, at least in principle, the possibility of deceiving potential participants about even objectionable aspects of a study, provided the study has sufficient value. For example, on this approach, researchers might be able to justify enrolling individuals who are known to fundamentally object to the goals of the research on the grounds that enrolling them would yield especially valuable data. Appealing to the value of a research study to justify infringing on participants’ rights also seems potentially problematic, given the speculative nature of research. Even if the potential value of a given study is great, it typically is unknown at the outset whether this value will be realized, and, indeed, many studies with significant potential value turn out to have little or no social value. That is an unfortunate implication of the uncertainty of research. Even if one thinks that it can be acceptable to infringe individuals’ rights for significant benefits that cannot otherwise be realized, it seems implausible to permit such infringements for a speculative chance of realizing significant benefits. If our rights protect our interests, they should at least protect our interests against such speculation. Put differently, any plausible theory should hold that individuals’ rights can permissibly be infringed only when at least two conditions are satisfied: (1) the infringement is necessary to realize an outcome of significant importance and (2) the chances that the important outcome will be realized are sufficiently high. Many research studies are unlikely to satisfy the second condition.
Are there other reasons to think that deception in some circumstances is consistent with participants’ rights? To try to answer this question, imagine that the attention study occurs during class time, takes only ten minutes to complete, and involves relatively easy essay questions. Participation in such a study has essentially no impact on the course of the participants’ lives. How the participants, and others, evaluate the course of their lives will not be affected by whether they participate in this study. Hence, if the right to make our own decisions is based on the interest we have in shaping the fundamental course of our lives, deception about such a minor aspect of participants’ lives is not relevant to this right.
This argument suggests deception regarding the purpose of the attention study does not violate participants’ rights. This conclusion seems plausible, even though deception about essential aspects of a study prevents the participants from giving valid consent. Put differently, because involvement in the study represents a relatively unimportant part of the participants’ lives, the right to decide the course of one’s life is not implicated by the decision whether to enroll, even though participants cannot provide valid consent for the study unless they know its purpose.
A good deal of non-deceptive research is permitted without participants’ informed consent. This includes observational research and research on stored biological samples. The present argument provides an explanation for why conducting this research without consent does not violate participants’ rights. Because the research has such a minor impact on the course of participants’ lives, they do not have a right against participating in it (this conclusion also suggests that, when it is obtained, consent for these types of studies does not involve waiving one’s right against participation). Importantly, a similar argument applies to some, perhaps many deceptive studies. Participating in them does not have a significant impact on the participants’ lives. Hence, enrolling them in a deceptive study, without disclosing the use of deception, does not violate participants’ rights.
V. CONCLUSION
It is widely assumed that deception in research is inconsistent with obtaining valid consent. This assumption has led to the view, explicit in some regulations, and common in practice, that the use of deception is permissible only in studies that pose no greater than minimal risk. Current policies and practice thus preclude a range of studies that use deception, including studies that rely on deceptive methods to assess experimental treatments.
Deception is inconsistent with valid consent in many cases. However, in other cases, researchers can deceive participants and still satisfy the conditions on obtaining valid consent. Specifically, researchers can deceive participants who are competent, sufficiently informed, voluntarily decide to enroll, and communicate this decision to the research team. To avoid unnecessarily blocking valuable research, current practice and policies should be revised to recognize this possibility.
First, current practice and policies should be revised to recognize that whether the use of deception is inconsistent with valid consent depends on whether the misdescribed or concealed information involves essential or nonessential aspects of the study. In the latter case, the use of deception can be compatible with obtaining valid consent. When the use of deception itself qualifies as an essential aspect of a study, review committees can mandate that it be disclosed via authorized deception. Second, practice and policies should recognize that when the deception is limited to nonessential aspects of a study, its use can be appropriate in the context of studies that overall pose greater than minimal risk. If there is concern that the use of deception itself poses greater than minimal risk, review committees might mandate ongoing assessment of its impact on participants or mandate the use of authorized deception.
ACKNOWLEDGEMENTS
This work was funded by the Intramural Research Program at the NIH Clinical Center. The opinions expressed are the author’s own. They do not represent the position or policy of the National Institutes of Health, the US Department of Health and Human Services, or the US government.
Footnotes
The only mention of deception in the U.S. regulations concerns determining when research qualifies as exempt (DHHS, 2018, 46.104).
REFERENCES
- Adair, J. G., Dushenko, T. W., and Lindsay, R. C. L. 1985. Ethical regulations and their impact on research practice. American Psychologist 40(1):59–72.
- APA Ethical Guidelines for Research. 2017. Section 8: Research and Publication. APA [Online]. Available: http://www.sandplay.org/pdf/APA_Ethical_Guidelines_for_Research.pdf (accessed February 10, 2022).
- Appelbaum, P. S., Lidz, C. W., and Klitzman, R. 2009. Voluntariness of consent to research: A conceptual model. Hastings Center Report 39(1):30–9.
- Baumrind, D. 1985. Research using intentional deception: Ethical issues revisited. American Psychologist 40(2):165–74.
- Berg, J. W., Appelbaum, P. S., Lidz, C. W., and Parker, L. S. 2001. Informed Consent: Legal Theory and Clinical Practice. 2nd ed. United Kingdom: Oxford University Press.
- Bok, S. 1995. Shading the truth in seeking informed consent for research purposes. Kennedy Institute of Ethics Journal 5(1):1–17.
- Boynton, M. H., Portnoy, D. B., and Johnson, B. T. 2013. Exploring the ethics and psychological impact of deception in psychological research. IRB: Ethics & Human Research 35(2):7–13.
- Brock, D. W., and Buchanan, A. E. 1990. Deciding for Others. United Kingdom: Cambridge University Press.
- Bromwich, D., and Millum, J. 2015. Disclosure and consent to medical research participation. Journal of Moral Philosophy 12(2):195–219.
- Christensen, L. 1988. Deception in psychological research: When is its use justified? Personality and Social Psychology Bulletin 14(4):664–75.
- Council for International Organizations of Medical Sciences. 2016. International ethical guidelines for health-related research involving humans. CIOMS [Online]. Available: https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf (accessed February 10, 2022).
- Cupples, B., and Gochnauer, M. 1985. The investigator’s duty not to deceive. IRB: Ethics & Human Research 7(5):1–6.
- Department of Health and Human Services. 2018. 45 CFR 46. U.S. HHS [Online]. Available: https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html (accessed February 10, 2022).
- Faden, R. R., and Beauchamp, T. L. 1986. A History and Theory of Informed Consent. Oxford, United Kingdom: Oxford University Press.
- Feinberg, J. 1989. Harm to self: Moral limits of the criminal law. United Kingdom: Oxford University Press.
- Fisher, C. B., and Fyrberg, D. 1994. Participant partners: College students weigh the costs and benefits of deceptive research. American Psychologist 49(5):417–27.
- Fleming, M., Bruno, M., Barry, K., and Fost, N. 1989. Informed consent, deception, and the use of disguised alcohol questionnaires. The American Journal of Drug and Alcohol Abuse 15(3):309–19.
- Herman, C. P., Polivy, J., and Silver, R. 1979. Effects of an observer on eating behavior: The induction of “sensible” eating. Journal of Personality 47(1):85–99.
- Herrera, C. D. 2001. Ethics, deception, and “those Milgram experiments.” Journal of Applied Philosophy 18(3):245–56.
- Korn, J. H. 1997. Illusions of reality: A history of deception in social psychology. Albany: State University of New York Press.
- Levi, M., and Stoker, L. 2000. Political trust and trustworthiness. Annual Review of Political Science 3(1):475–507.
- McCambridge, J., Kypri, K., Bendtsen, P., and Porter, J. 2013. The use of deception in public health behavioral intervention trials: A case study of three online alcohol trials. American Journal of Bioethics 13(11):39–47.
- Milad, M. R., Wright, C. I., Orr, S. P., Pitman, R. K., Quirk, G. J., and Rauch, S. L. 2007. Recall of fear extinction in humans activates the ventromedial prefrontal cortex and hippocampus in concert. Biological Psychiatry 62(5):446–54.
- National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. U.S. HHS [Online]. Available: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html#xrespect.
- O’Neil, C. C., and Miller, F. G. 2009. When scientists deceive: Applying the federal regulations. Journal of Law, Medicine and Ethics 37(2):344–50.
- Ortman, A., and Hertwig, R. 1997. Is deception acceptable? American Psychologist 52(7):746–7.
- ———. 1998. The question remains: Is deception acceptable? American Psychologist 53(7):806–7.
- ———. 2002. The costs of deception: Evidence from psychology. Experimental Economics 5(2):111–31.
- Peciña, M., Heffernan, J., Wilson, J., Zubieta, J. K., and Dombrovski, A. Y. 2018. Prefrontal expectancy and reinforcement-driven antidepressant placebo effects. Translational Psychiatry 8(1):222.
- Sieber, J. E., Iannuzzo, R., and Rodriguez, B. 1995. Deception methods in psychology: Have they changed in 23 years? Ethics and Behavior 5(1):67–85.
- Smith, C. P. 1981. How (un)acceptable is research involving deception? IRB: Ethics & Human Research 3(8):1–4.
- Smith, S. S., and Richardson, D. 1983. Amelioration of deception and harm in psychological research: The important role of debriefing. Journal of Personality and Social Psychology 44(5):1075–82.
- Soliday, E., and Stanton, A. L. 1995. Deceived versus nondeceived participants’ perceptions of scientific and applied psychology. Ethics and Behavior 5(1):87–104.
- Sommers, R., and Miller, F. G. 2013. Forgoing debriefing in deceptive research: Is it ever ethical? Ethics and Behavior 23(2):98–116.
- Vase, L., Robinson, M. E., Verne, G. N., and Price, D. D. 2003. The contributions of suggestion, desire, and expectation to placebo effects in irritable bowel syndrome patients: An empirical investigation. Pain 105(1–2):17–25.
- Wendler, D., and Miller, F. G. 2004. Deception in the pursuit of science. Archives of Internal Medicine 164(4):597–600.
- ———. 2008. Deception in clinical research. In The Oxford Textbook of Clinical Research Ethics, eds. Emanuel, E. J., Grady, C., Crouch, R. A., Lie, R. K., Miller, F. G., and Wendler, D., 315–24. New York: Oxford University Press.
- Wertheimer, A. 2012. Voluntary consent: Why a value-neutral concept won’t work. Journal of Medicine and Philosophy 37(3):226–54.
- Werthmann, J., Roefs, A., Nederkoorn, C., Mogg, K., Bradley, B. P., and Jansen, A. 2011. Can(not) take my eyes off it: Attention bias for food in overweight participants. Health Psychology 30(5):561–9.
- Wilkinson, T. M. 2013. Nudging and manipulation. Political Studies 61(2):341–55.
- Wilson, A. 2015. Counterfactual consent and the use of deception in research. Bioethics 29(7):470–7.
- Zuraw, R. 2013. Consenting in the dark: Choose your own deception. American Journal of Bioethics 13(11):57–9.
