Abstract
It is widely accepted that informed consent is a requirement of ethical biomedical research. It is less clear why this is so. This article argues that the consent requirement cannot be defended by appeal to any simple principle, such as not treating people merely as a means, bodily integrity, or autonomy. As an argumentative strategy, the article asks whether it would be legitimate for the state to require people to participate in research. I argue that while it would be legitimate and potentially justifiable to coerce people to participate in research as a matter of first-order moral principles, there are good reasons to adopt a general prohibition on coercive participation as a matter of second-order morality.
Keywords: informed consent, research ethics, autonomy, bodily integrity, coercion
The voluntary consent of the human subject is absolutely essential.
—Nuremberg Code1
Except as provided elsewhere in this policy, no investigator may involve a human being as a subject in research covered by this policy unless the investigator has obtained the legally effective informed consent of the subject or the subject's legally authorized representative.
—The Common Rule2
After ensuring that the potential subject has understood the information, the physician or another appropriately qualified individual must then seek the potential subject's freely-given informed consent.
—Declaration of Helsinki3
Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them.
—Belmont Report4
THE CONSENT REQUIREMENT
What I shall call the consent requirement (CR) maintains that a subject's informed consent is a requirement of ethical biomedical research. CR lies at the epicenter of research ethics. As Robert Veatch puts it, ever since Nuremberg, ‘consent has dominated ethics of experimentation’.5 I suspect that it is commonly supposed that CR is so clearly correct as not to require an extended defense. As Dan Brock puts it, ‘The rule that, with a few exceptions, research with humans should not take place without participants’ informed consent is a settled ethical and legal principle’.6
The purpose of this article is to ask whether this ‘settled’ principle is correct. As an argumentative strategy, I propose to start with what might appear to be an implausible proposal, namely, that it is legitimate to coerce people into participating in biomedical research. I will argue that, contrary to what is commonly supposed, neither a prohibition on coercive participation nor CR can easily be justified by appeal to the sorts of principles that are often cited in its defense, such as respect for autonomy, respect for persons, a right not to participate in research without consent, respect for bodily integrity, not treating subjects merely as a means, or the like. Rather, I will argue that the best justification for CR is less direct, less simple, less elegant, less definitive, more pluralistic, and more political. In effect, I will argue that we can't get to CR through a straightforward moral argument from basic principles; we can get to something like CR—subject to important exceptions—through the back door.
Now it is entirely uncontroversial that informed consent is not sufficient for ethical research. It is generally assumed that research is ethical only if the research satisfies several additional ethical criteria, for example, that the research has social value, that the design of the research will yield scientifically valid data and, perhaps most importantly, that the risks to subjects are reasonable in relation to the anticipated benefits to subjects or to others.7 There is some dispute as to whether the ‘reasonable risk’ criterion places any upper limit to the risks to which subjects can be asked to consent, but there is no dispute that institutional review boards (IRBs) must determine that the risks to subjects are reasonable before people are offered the opportunity to consent to participate.
More importantly for present purposes, although some formulations of CR are quite categorical, it is also commonly accepted that informed consent is not strictly necessary for ethical research. We may think that research without any sort of consent is justifiable when it is exclusively observational, as when psychologists sought to determine whether wealthy drivers behaved more unethically than less wealthy drivers by observing whether drivers of expensive cars were more likely to cut off other vehicles at an intersection.8 Interventional research without informed consent may be justifiable when subjects must be deceived if research is to produce scientifically valid data.9
Because federal regulations explicitly allow for a considerable amount of research without informed consent, Alex Capron wonders whether the exceptions are so vast as to ‘swallow the rule’.10 Indeed, the regulations do not regard a considerable range of research as involving human subjects at all, including purely observational research and research with deidentified medical records or tissue specimens. In these cases, the regulations avoid the issue of informed consent by definitional fiat rather than stating that these are forms of research with human subjects that do not require informed consent.
In addition, federal regulations explicitly allow for waivers of informed consent under conditions that apply to much social and behavioral research and some biomedical research.
(d) An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:
(1) the research involves no more than minimal risk to the subjects;
(2) the waiver or alteration will not adversely affect the rights and welfare of the subjects;
(3) the research could not practicably be carried out without the waiver or alteration; and
(4) whenever appropriate, the subjects will be provided with additional pertinent information after participation.11
Note that provision (2) presupposes that there is no general right not to be included in research without informed consent. Otherwise, any research without informed consent would necessarily violate that proviso.
As contrasted with social and behavioral research, the regulations are more likely to require informed consent in biomedical research, but there are exceptions there as well. Even if we set aside cases in which surrogates consent for the subject (for example, children), there are special circumstances, such as emergency research in which research may be justified even though no sort of consent is possible (assuming that surrogates cannot be located). Research without any kind of consent may also be allowed when it involves public health surveillance, collection of data from health records, quality improvement studies, and cluster randomized trials where it is impractical or impossible to seek everyone's consent. For example, a hospital may want to study the advantages of two regularly prescribed treatments by treating everyone in Ward A with treatment X and everyone in Ward B with treatment Y, but the patients in those wards will not be asked for their consent. It may also be argued that specific consent to participate in research may not be necessary in comparative effectiveness trials where all subjects receive standard treatments for their conditions and where there is little or no incremental risk in receiving one of these treatments as opposed to the other.
These exceptions to CR deserve much more attention than they have received. It is possible, of course, that we should require explicit informed consent (without deception) in all research even though doing so would bring much valuable research to a screeching halt. But assuming that something like the Common Rule's criteria for waiving informed consent reflects a sensible moral position, it can't be the case that people have a strong general right not to be used as a research subject without their valid consent.
Despite these considerable and important exceptions to CR, there is a wide spectrum of cases—particularly in clinical or interventional biomedical research—in which CR remains completely uncontroversial. In such cases, the question is not so much whether valid consent is required, but what valid consent requires, i.e., when should we regard a participant's token of consent as valid or morally transformative?
To elaborate on the previous point, there are cases in which we may worry as to whether consent is sufficiently voluntary to be valid. For example, it may be thought that members of certain ‘vulnerable’ groups such as prisoners cannot give valid consent because they are in a coercive environment. Some think that one cannot give voluntary consent if one has no reasonable alternative but to participate, say because one otherwise lacks access to medical care. Some think that offers of payment render consent involuntary or that such offers often constitute undue influence.
Other worries focus on informational or cognitive deficiencies. For example, some argue that those in the grips of the ‘therapeutic misconception’ are not giving valid consent.12 We can also ask whether excessive optimism about benefitting from participation invalidates one's consent.13 Although there is general consensus that informed consent requires that investigators inform subjects about certain matters, there is controversy as to whether subjects must actually understand that information or what information they must understand. Dan Brock argues that subjects ‘do not need to understand the entire underlying scientific and medical basis of the research; rather, they need to know how their lives are likely to be affected, both positively and negatively, by participation in the research’.14
All that said, and with considerable room for disagreement at the margins, it is generally assumed that it is wrong to conduct interventional biomedical research without a subject's informed consent. In the words of the Belmont Report, this principle is ‘unquestioned’.15 As an empirical claim, this is probably true. But we can still ask: should it be unquestioned? Is it true? And if it is true, why is it true?
The ‘unquestioned’ character of CR is particularly puzzling when viewed in a larger context. It is relatively easy to grasp the force of CR if we view research as a private interaction between investigators and subjects. As private persons, we are not entitled to use another's body or property without their consent, and we are certainly not entitled to coerce them to do what we would like them to do or, for that matter, to do what they have an obligation to do. We don't need a special principle of research ethics to make that claim. A ban on intentional interpersonal harm is sufficient.
But things look different if we view (some) research as an issue of political philosophy, as an interaction between citizens and the state, or researchers whose activities have been authorized by the state. After all, we are inclined to think that it is legitimate for the state to do things to people without consent and there is a wide range of cases where we think that it is legitimate for the state to coerce people to perform acts that are contrary to their interests, for example, to pay taxes, to serve on juries, appear as witnesses at trials, get vaccinations, or purchase car insurance or medical insurance. So we can at least ask whether it would be legitimate for the state to require people to participate in biomedical research.
Of course, it may not be legitimate for the state to require participation in research. But if that is so, why is that so? The purpose of this article is to ask that question.
My plan is this. I will first describe the sort of coercive participation I have in mind. I then ask whether the use of coercion is legitimate as contrasted with justifiable. In the major section of this paper, I consider several candidate principles for regarding coercion as illegitimate, per se, and argue that none of them are sufficient. Having established a prima facie case for the legitimacy of coercive participation, I then ask whether the use of coercion is justifiable in cases of interventional biomedical research. I conclude that it probably is not. Finally, I argue that even if the use of coercion is legitimate and sometimes justifiable as a matter of ‘first-order’ ethics, there are good moral reasons to adopt a rule or policy that bars coercive participation. In the contexts in which consent should be required, it should be required because it is best to regard consent as required.
COERCIVE PARTICIPATION
Why even consider coercing people to participate in research? Is there a problem to which coercion might be an answer? The difficulty of recruiting participants is a serious barrier to successful and timely clinical research. Some studies are never completed. Scott Ramsey and John Scoggins noted that more than one trial in five sponsored by the National Cancer Institute failed to enroll a single subject, and only half reached the minimum needed for a meaningful result. Eighty per cent of trials are delayed at least a month because of unfulfilled enrollment, and an unknown number of studies are not started or developed because it is anticipated that recruitment will be difficult. Given all this, it seems reasonable to assume that an increase in the accrual rate of subjects would lead to more studies being undertaken, more completed studies, and fewer delays in completion. And it also seems reasonable to assume that this would contribute to at least some reduction in morbidity and mortality, and some improvement in people's quality of life.
The difficulty of undertaking and completing clinical trials has led some to lament the entire enterprise of the regulation and oversight of research. Whitney and Schneider argue that the regulation of research results in avoidable deaths because it delays the introduction of life-saving interventions into ordinary medical care, not to mention that the regulatory process screens out some potentially beneficial research from being undertaken and deters other potentially beneficial research from ever being proposed.
But even if the sum total of the costs and benefits of the regulatory enterprise—including a commitment to gaining consent of research subjects—were negative, it does not follow that we should reject requiring consent to interventional biomedical research. Whitney and Schneider assume without argument that regulation that ‘does more harm than good is itself unethical’.16 Just as we may have a moral reason to weigh the interest of criminal defendants more than our social interest in convicting the guilty (‘better that ten guilty persons go free than that one innocent person be punished’), we may have a moral reason to weigh the interests and autonomy of research subjects more heavily than the interests of those who would benefit from a less demanding regulatory system that produced more and faster high-quality biomedical research.
There is, of course, a long-standing debate as to how to weigh the interests of subjects and the interests of society, or, more accurately, the interests of those numerous individuals who stand to benefit from biomedical research. Hans Jonas famously argued that avoidable illness and death are regrettable but not of overarching moral significance because ‘progress is an optional goal’.17 In his view, there is no ethical necessity ‘about seeking new knowledge or finding “new miracle cures”’.18 He writes that a ‘permanent death rate from heart failure or cancer does not threaten society’. It is a ‘human misfortune’, but not a ‘social misfortune’. By contrast, society would be ‘threatened by the erosion of those moral values . . . caused by too ruthless a pursuit of scientific progress . . .’.19
I find it a mystery as to why one would want to minimize the importance of this ‘human misfortune’, but the choice of ends is not accurately described. For the morally relevant choice is not between the interests of individual subjects as opposed to something as abstract as ‘scientific progress’ or even ‘society’. The choice is between the interests of those individuals whose interests are set back by participation in research and the interests of those individuals who stand to benefit from biomedical research. And even if there is a good reason to weigh the interests of subjects and prospective subjects more heavily than the interests of the individuals who would benefit from research, it is individuals all the way down.20
In thinking about the benefits of medical research, we do well to remember three features of contemporary medical practice. First, yesterday's ‘miracle cure’ that was the product of ‘optional’ medical research is today's ordinary medical treatment. Second, much of what was or is standard medical practice, such as tonsillectomies, annual physicals, routine EKGs, and PSA tests, is harmful or without any demonstrated benefit. We need research to determine what works and what does not. Third, although we have made considerable progress in treating some diseases, there are virtually no treatment or prevention modalities for some devastating diseases such as Alzheimer's and inadequate treatment for many other diseases or conditions. I find it difficult to accept the view that progress here is optional.
There are numerous impediments to more and faster medical research. Funding is limited. Treating physicians may think it wrong to refer their own patients to clinical trials and participation in research may be burdensome for physicians even when it would be beneficial to their patients. In addition, it is often very difficult to recruit prospective subjects even when it would be rational for people to participate given their own interests, values, and aims.
That said, some decisions not to participate in research are perfectly rational from a self-interested perspective. Consider the contrast between pediatric and adult oncology research. Between 60% and 80% of children diagnosed with cancer participate in clinical trials, in part because pediatric oncology has historically integrated research and treatment such that children enrolled in randomized controlled trials typically do better than those who are not.21 By contrast, among adults diagnosed with cancer, fewer than 5% participate in trials, which is, perhaps, not surprising given that they do not generally have improved outcomes as compared with those who are not enrolled. As a general rule, participants in adult oncology trials may help to generate knowledge that is beneficial to others, but they cannot expect to be much better off themselves.
Although it is rarely discussed in these terms, participation in research sometimes constitutes a classic collective action problem. It is in the ex ante interest of most people that research be conducted and that they are part of a health care system that learns from its performance. At the same time, participation in actual research can be contrary to the interests of each individual. For even when participation poses minimal or no long-term medical risk, it may involve burdens of pain, discomfort, time, inconvenience, and loss of (some) privacy. Ex ante, we may all be better off if all of us do our fair share of participation in research. But the knowledge generated by research is a public good, that is, it is a good that is available to all whether or not one contributed to it, and this is so even if not everyone actually benefits from a particular public good.
To the extent that people are self-interested, they will seek to reap the benefits of public goods without paying the costs; they will free ride on the efforts of others. Precisely for these reasons, we often rely on governmental coercion to solve collective action or public good problems. We tax citizens to pay for public goods (including medical research) rather than rely on voluntary contributions. We require that cars come equipped with catalytic converters to control air pollution because air quality is a public good (or bad) and people are unlikely to voluntarily incur a significant expense to reduce their own pollution. Given that we are prepared to coerce people to contribute to many public goods, we can at least ask whether we should similarly require that people participate in research that generates knowledge that is available to all.
Needless to say, we do not think about coerced participation in research in the way we think about coerced tax payments or catalytic converters. I suspect that coercive participation is rarely taken seriously because it conjures up images of Nazi-like experimentation on people's ability to survive in freezing water. But that is, to say the least, not what I have in mind. Rather, I have in mind a scheme under which prospective subjects are required to participate in research on pain of some sanction for refusal. It might be objected that to subject someone to a penalty that they could easily accept rather than participate is not really coercive. I don't think much turns on words here, so I will just stipulate that this is the type of coercion that I have in mind. And we can at least imagine requiring that people complete surveys or interviews or undergo procedures such as blood draws or lumbar punctures on pain of being penalized for not doing so. We can also imagine requiring people to participate in a randomized controlled trial rather than receiving the treatment that the individual or her physician prefers, particularly when there is no evidence favoring such treatment. In fact, the United States already requires that people participate in a form of social and behavioral research—the Census—on pain of being fined for refusal. I want to ask whether it would be legitimate to take a similar approach to biomedical research, and, if not, why not.
To render the idea of coercive participation at least minimally plausible, I will assume for the sake of argument that any research in which coercion is used would meet several criteria and that it would be subject to review by an IRB that would certify that the research met those criteria. First, the net risks of participation would be reasonable in relation to the anticipated benefits to others. This implies that the research can be expected to produce knowledge that has social value, and that the design of the research is scientifically valid. Second, the risks and burdens of participation would not be excessive, although subjects would have to bear the burdens of time, inconvenience, and, perhaps, low-risk procedures necessary for research purposes such as blood draws, blood pressure readings, and interviews about one's health. Third, the identification of subjects—both healthy volunteers and patient/subjects—follows a fair procedure and is based on relevant criteria. Lotteries may be used when appropriate. Fourth, within a coercive context, subjects are treated with concern and respect, and may be offered compensation for participation (as with jurors) and for injuries caused by participation. Fifth, the use of coercion is limited to research conducted, co-sponsored, or authorized by the government.
Readers are invited to add other non-consent criteria to the list. The point of this exercise is to isolate the moral significance of coercion and consent by asking whether it would be legitimate for the state to require people to participate in research on pain of being penalized for refusal when all other criteria of ethical research are satisfied save for valid informed consent.
Now the proposal for coercive participation does not presuppose that all citizens have a pro tanto obligation to participate in research, because positing such an obligation is not necessary to legitimize the use of coercion. Still, the case for the legitimacy of coercive participation is much easier to make if citizens have such an obligation. I have argued elsewhere that people have an obligation to do their fair share of participation in non-beneficial research just as they have an obligation to contribute to other public goods such as defense, clean air, and police protection.22 Along these lines, Ruth Faden and colleagues have recently developed an ethical framework for a ‘health care learning system’. They maintain that patients have an obligation ‘to contribute to the common purpose of improving the quality and value of clinical care and the health care system’.23 The authors note that ‘Securing these common interests is a shared social purpose that we cannot as individuals achieve’ and that those goals may require something like ‘near-universal participation in learning activities through which patients benefit from the past contributions of other patients whose information has helped advance knowledge and improve care’.24 This argument does not claim that current patients have such obligations because they have benefitted from the past contributions of other patients. Rather, it argues that a learning health care system will provide benefits to prospective patients, who will come to benefit from the contributions of other patients.
If people have an obligation to participate in at least some sort of research, the strength and shape of that obligation would remain unsettled. Just as people may have an obligation to make an easy rescue but may not have an obligation to put themselves at a serious risk for the sake of others, people may have an obligation to participate in medical records research but may not have an obligation to participate in interventional clinical research. Or they may have an obligation to accept minimal risks in participation (as in quality improvement studies or comparative effectiveness trials) but not to accept more than minimal risks. In addition, the strength of that obligation may depend upon the extent to which people have or can expect to benefit from the medical care system or the knowledge generated by the participation of others. But subject to a host of complicating factors, it is plausible to maintain that many have some obligation to participate in biomedical research.
Assuming that people have an obligation to perform some act (X) or acts of a certain class, it is another question as to whether the obligation should be regarded as enforceable. As a general proposition, if people have a pro tanto moral obligation to do something, there is a pro tanto case for penalizing non-performance. As Michael Otsuka observes, we need an explanation as to why we should not be required to do that which we have a moral duty to do or, perhaps, why we should be able to shirk our duties with impunity.25
But it is not always legitimate to require people to do that which they have an obligation to do. For example, one might think that people have an obligation to vote or to limit the size of their families or to use less carbon or that scholars have an obligation to do their fair share of manuscript reviewing or to make an easy rescue, but also think that it is wrong to force people to vote or limit the size of their families or use less carbon or review manuscripts or make an easy rescue. If B has borrowed tools from A on numerous occasions, it seems that B has an obligation of reciprocity to loan a similar tool to A upon request. But this does not mean that it is permissible for A to enforce B's obligation by taking B's tool without B's permission. And so on. So even if people have a moral obligation to participate in research, it does not follow that it would be legitimate to require them to do so on pain of penalty or, for that matter, to use them as subjects without their informed consent.
When should a moral obligation be regarded as enforceable? The answer is likely to be highly pluralistic. As Victor Tadros has argued, the enforceability of an obligation depends upon a number of factors such as the moral significance of the duty, the extent to which non-fulfillment of the duty results in harm to others, the extent to which enforcement will actually accomplish its goals, the extent to which there are non-coercive methods for securing such goals, and the extent to which ‘it is important that the person acts on the duty for good reason rather than because she is forced to do so’.26 Still, if people have an obligation to participate in research, it does not seem all that difficult to claim that it is legitimate to require people to participate even if, at the end of the day, it seems unwise or unjustifiable, all things considered, to require them to do so.
I should say that positing an obligation to participate in research is not a necessary condition of legitimate coercion. As Thomas Nagel observes, although there are cases ‘in which a person should do something although it would not be right to force him to do it . . . sometimes it is proper to force people to do something even though it is not true that they should do it without being forced’.27 Nagel suggests that while it is permissible for the state to require people to pay taxes, they may have no obligation to make such payments voluntarily, in part because they may lack assurance that others are doing their fair share and because making voluntary contributions to the state involves ‘excessive demands on the will’. So even if there is no obligation to voluntarily participate in clinical research, that does not settle the question as to whether it is legitimate to require people to do so.
Legitimacy and Justifiability. There are at least two moral questions we can ask about coercive participation. (1) Is it legitimate to coerce people to participate in research? (2) Is it justifiable to do so, all things considered? I begin with (1). In drawing the distinction between legitimacy and justifiability, I follow Joel Feinberg.28 In his four-volume magnum opus on the moral limits of the criminal law, Feinberg aims to identify the principles that render it legitimate for the state to criminalize behavior or limit individual liberty. Along Millian lines, Feinberg argues that ‘harm to others’ (the harm principle) and ‘offense to others’ (the offense principle) are legitimate grounds for criminalization but that it is not legitimate for the state to criminalize behavior on the grounds that it is harmful to a competent adult himself (legal paternalism) or on the grounds that the behavior is wrongful although harmless (legal moralism).
Setting aside these particular principles or issues, Feinberg claims that there is an important distinction between policies that are ‘legitimized by valid moral principles and those that are justified on balance as being legitimate and useful, wise, economical, popular, etc.’.29 For example, even if it is legitimate for the state to prohibit the sale of certain substances, it may be unjustifiable to do so given the costs of enforcement and the unintended consequences of prohibition. If a proposal for the use of state coercion passes a legitimacy test, we can then go on to ask whether it is justifiable all things considered. But if a proposal for the use of state coercion does not pass a test of moral legitimacy, then its justifiability is not on the table.
At first glance it appears that the distinction between legitimacy and justifiability tracks the familiar distinction between deontology and consequentialism. Whereas principles of legitimacy operate as deontological constraints, justifiability appears to take a consequentialist form. But this is somewhat deceiving. First, whereas principles of legitimacy do not involve direct appeal to consequences at the practical level, they may be rooted in consequentialist considerations. Second, a pluralistic view of justifiability may include what might be thought of as deontological values such as autonomy as well as whether a policy is wise, economical, welfare enhancing, or popular. In the final analysis, it may well turn out that the distinction between the legitimacy and justifiability of a policy is not as sharp or as deep as Feinberg supposes. Still, it is a useful place to start because it is generally assumed that it would be beyond the moral pale to coerce people into participating in research.
ARGUMENTS FOR THE ILLEGITIMACY OF COERCIVE PARTICIPATION
I suspect that most bioethicists think that CR is rooted in a simple and basic moral principle that commands widespread support. It turns out, however, that multiple overlapping justifications for CR have been and can be offered. The history of the Nuremberg Code exemplifies the problem. The Nuremberg Code gives pride of place to the principle of informed consent, suggesting that the absence of informed consent was the crucial ethical defect of the Nazi experiments. But as Jay Katz has noted, the first principle of the Nuremberg Code ‘was irrelevant to the case before the tribunal, for the basic problem with the concentration camp experiments was not that the subjects did not agree to participate; it was the brutal and lethal ways in which they were used’.30 The Nuremberg Code's insistence on consent seems designed to prevent the sort of harm and abuse to which victims of the Nazi experiments were exposed. Robert Levine maintains that the requirement of consent ‘is grounded in . . . the universal obligation to treat persons as ends and not merely as means to another's end’.31 Faden and Beauchamp maintain that the Belmont Report reflects the view ‘that the underlying principle and justification of informed consent requirements . . . is a moral principle of respect for autonomy’.32 So because there are multiple arguments for CR, I consider the most plausible candidates below—in no particular order.
Treating People Merely as a Means
As noted above, it has been argued that to enroll people in research without consent is to treat them merely as a means. What van der Graaf and van Delden call the not merely as a means principle (NMMP) has achieved mantra-like status in bioethics.33 To say that a practice treats someone merely as a means is generally viewed as a conversation stopper. Still, we need to ask several questions: when do we treat people merely as a means? Is the principle sound? Does it do moral work not done by other moral principles? And does the best account of NMMP support CR or condemn coercive participation?
As is often pointed out, no sensible moral principle could prohibit using people as a means. In general, we do not treat people wrongly or merely as a means if they consent to the terms of an interaction. The taxi driver uses me as a means to earn an income and I use him as a means to get to my destination. But we do not treat each other merely as a means if we both give valid consent to the terms of the transaction—he does not deceive me about the fare and I do not make a false promise to pay him. It is not clear whether valid consent is always sufficient to satisfy NMMP. For example, it might be thought that a customer treats a prostitute merely as a means even if the interaction is consensual. I think this is doubtful, but, in any case, consent surely goes much of the way towards satisfying NMMP.
Now depending upon what is required to satisfy NMMP, the principle is certainly not inviolable. To use Amartya Sen's example, if A can prevent a heinous rape by taking B's car without B's consent or even coercing B at gunpoint to turn over his keys, A arguably treats B and his property merely as a means, but any principle that would condemn such a ‘use’ should be rejected.34 Samuel Kerstein agrees. He argues that we should reject any prescription ‘never’ to treat people merely as a means. Rather, we should accept a pro tanto or defeasible version of the principle, one that acknowledges that it may be morally permissible—all things considered—to treat someone merely as a means.35
But to say that we should accept a pro tanto version of NMMP is not particularly helpful without knowing something about its weight and what is required to override or outweigh it. It might, after all, be objected that Sen's example shows only that there can be extreme cases that surpass the ‘deontological threshold’ established by NMMP. So we must first determine whether—for a more normal range of cases—a defensible version of NMMP entails that we must seek and receive a person's consent before using her as a means.
There is no single ordinary way in which we think of treating others merely as a means. Derek Parfit has argued that we do not treat B merely as a means just because we use B without B's consent. Consider the scientific use of animals:
One scientist … does her experiments in the ways that are most effective, regardless of the pain she causes her animals. This scientist treats her animals merely as a means. Another scientist does her experiments only in ways that cause her animals no pain, though she knows these methods to be less effective.36
Parfit claims that the second scientist is not treating the animals merely as a means because her ‘use of them is restricted by her concern for their well-being’.
Parfit argues that we treat a being (animal or person) merely as a means when we regard them ‘as a mere instrument or tool: someone whose well-being and moral claims we ignore, and whom we would treat in whatever ways would best achieve our aims’. In this view, NMMP does little work by itself. In the standard view that dominates bioethics, A's doing X to B is wrong because it violates NMMP. In Parfit's view, A's doing X to B violates NMMP only when A's doing X ignores B's moral claims. It is the content of B's moral claims that does the moral work. Richard Arneson has similarly argued that if NMMP is interpreted as the injunction not to use people ‘in ways that are unacceptable according to correct moral principles’, everything turns on the content of those principles.37 A person's moral claims may include respect for her rationality or autonomy, but the specification of those claims would become the relevant task. Indeed, if morality requires that we give equal consideration to everyone's interests, then we do not treat someone merely as a means if we use them to advance the welfare of others as long as we weigh their interests equally along with everyone else's.
There may, however, be a different linkage between NMMP and consent. It is sometimes argued that deception and coercion treat people merely as a means not because they block actual consent, but because one could not possibly consent to a deceptive or coercive transaction. As Christine Korsgaard puts it:
According to Kant, you treat someone as a mere means whenever you treat him in a way to which he could not possibly consent. Kant's criterion most obviously rules out actions, which depend upon force, coercion, or deception for their nature, for it is of the essence of such actions that they make it impossible for their victims to consent. If I am forced I have no chance to consent. If I am deceived I don't know what I am consenting to. If I am coerced my consent itself is forced by means I would reject.38 (Emphasis added)
If this is a plausible account of Kant's view, it is worth noting that Kant surely did not think that NMMP requires that people give actual consent to a particular action. After all, Kant defends a retributive theory of punishment on which the state does not violate NMMP when it punishes a criminal who has been judged to be guilty and who receives his just deserts. It is, of course, implausible to suppose that the criminal gives his actual consent to be punished. In one reconstruction of Kant's view, just punishment does not treat criminals merely as a means because they could give rational consent to the laws they are punished for violating and to the punishment system that is used to punish them.
Along these lines, Parfit proposes that we adopt a principle of possible rational consent as a general principle of morality. Susan Wolf objects to this principle on the grounds that it ‘might allow us to do things to someone even if we had no reason whatsoever to suppose that the person affected by it would consent to it—indeed, it would allow us to do things to a person even if he explicitly refuses to consent to it under conditions of full rationality and information’.
This is too quick. As Parfit argues, there are some contexts in which we could give rational consent to a system or rule that does not require actual consent, but there are other contexts in which we could only give rational consent to a system or rule that requires actual consent. The fact that B could rationally consent to have sexual relations with A does not render it permissible for A to have sexual relations with B without B's actual consent. Whereas we could give rational consent to some types of acts without our actual consent (such as just punishment for violating laws), sexual relations belong to the class of acts to which we could not possibly give rational consent to be acted upon without our actual consent.
Is participation in research like sex (in this respect!)? Is participation in research a context in which we could not possibly consent to a practice in which our actual consent is not necessary? Recall the Common Rule's conditions for waiver of consent. First, if we could not give possible rational consent to allow research without consent, then the Common Rule is wrong to allow such waivers. Second, if we could not give possible rational consent to such waivers and it is nonetheless permissible to allow such waivers, then while such waivers allow for the violation of NMMP, the pro tanto force of NMMP is very weak in such cases. Third, if the Common Rule's conditions reflect a sensible or plausible view, then perhaps we could give possible rational consent to a system that allows for a considerable range of research without valid consent. That is the space in which we can ask whether coercive participation is legitimate.
If we could consent to a system that requires us to pay taxes or fasten our seat belts on pain of penalty for not doing so, could we consent to a system that requires us to participate in interventional biomedical research on pain of penalty for not doing so? Interventional biomedical research might be different because, like sex, it involves invasions of a person's body. So we will need to consider whether that feature of biomedical research justifies requiring consent. If it does, then it is the special wrong of violating bodily integrity that does the moral work in applying NMMP.
PROTECTING INTERESTS
Although bioethicists typically discuss consent as if it serves and is entailed by a deontological-type principle such as NMMP or respect for autonomy, the Nuremberg Code's insistence on consent was primarily designed to protect subjects from the sorts of palpable and egregious harms imposed by the Nazis. The importance of this interest-protecting function of informed consent is reflected in the Common Rule's provision that permits waivers of consent only when the interests of subjects are not (much) at stake and it is not practicable to obtain their consent—and this is so even if the subjects might not want to be included in research.
Buchanan and Brock note that there are several reasons why people have an interest in ‘making significant decisions about their lives for themselves’.39 First, self-determination ‘is instrumentally valuable in promoting a person's well-being’. Because people will typically give valid consent to a transaction if but only if the transaction serves their interests, regarding a person's valid consent as a necessary and (generally) sufficient condition for rendering another's action permissible when it would otherwise not be permissible is a reasonably reliable method for protecting or promoting a person's well-being.
The tie between consent and advancing a person's interests or well-being is strengthened to the extent that a person's interests depend on ‘the particular aims and values of that person’.40 For example, given that prostate surgery may involve a trade-off between some increase in expected survival and a substantial risk of impotence, we cannot say whether surgery will enhance a patient's well-being or interests without knowing the weight that he (reasonably) places on these outcomes. In addition, because people want to make decisions for themselves and enjoy doing so, the simple satisfaction of this desire is also a component of their well-being.
In addition to advancing welfare or well-being, consent also serves to respect, protect, and promote a person's autonomy. I will say more about that below. Here I want to explore the alleged tension between promoting a person's interests and respecting a person's judgement or autonomy. We generally believe that people have a right to make decisions in certain spheres even when the decision does not advance their well-being. Thus, we may think that a Jehovah's Witness has a right to refuse a life-saving blood transfusion even though the refusal does not advance her well-being (even allowing for the value that she attaches to her religious commitments).
The tension between the value of promoting a person's well-being and the value of protecting and promoting autonomy or self-determination is sometimes overstated. Although there are no doubt some cases in which these two values conflict (if Jehovah's Witnesses did not exist, bioethicists would have to invent them), it is arguable that we would not value respecting people's autonomy or their choices if—as a general rule—people made choices that did not advance their interests or aims. It cannot be entirely coincidental that the very conditions that are thought to render an agent's decisions less than fully autonomous—coercion, deception, and incompetence—are also conditions that reduce the likelihood that her decisions advance her well-being.
Interestingly, the tension between considerations of well-being and respect for autonomy is much greater in clinical care than in clinical research. In the former context, there are long-standing debates as to whether and when physicians can justifiably deceive or withhold information from patients when they think that full disclosure would not serve a patient's interests. There are debates as to whether physicians should transfuse a patient who would otherwise die if the patient rejects transfusions on religious grounds. In the research context, however, we are considering whether coercive participation is legitimate even though it is contrary to a person's interest to participate in research and she would not do so voluntarily. Here, the course of action that would promote the person's well-being and the course of action that would respect her autonomy are on the same side of the street.
Still, it matters whether we adopt an autonomy or an interest-based justification for CR. If the principal justification for CR is that it protects and promotes the interests of the consenter, then that justification will have relatively little purchase in those cases where participation involves minimal risks and burdens. Moreover, even if it were pro tanto wrong to impose minimal risks and burdens without consent, we would also have to ask why the subject's interests should dominate the interests of present and future people who would benefit from more and faster medical research. Perhaps the interests of subjects should be weighed more heavily than the interests of the beneficiaries of research just as the interests of innocent defendants should be weighed more heavily than the public interest in a higher rate of conviction of the guilty. But we would need an argument for that view.
RIGHTS
As I have noted above, the Common Rule appears to assume that there is no general right not to be involved in research without one's consent when it permits waivers of consent only when ‘The waiver or alteration will not adversely affect the rights and welfare of the subjects’. For if there were such a right, then any waiver of consent would, of necessity, adversely affect the subject's rights and so the provision would be incoherent. Alex Capron suggests that the protection provided by this provision ‘is rather ephemeral because allowing researchers to omit the usual requirement to obtain informed consent in and of itself deprives subjects of their basic right not to be placed in research without their prior consent’.41 Capron assumes what has to be shown, namely, that there is such a right, but he is correct in observing that the Common Rule points in the other direction.
So we need to determine whether there is a general right not to be used in research without one's consent; if so, what grounds such a right; and, perhaps most importantly, how strong that right is. To consider this issue, it is best to step back from the context of research. Consider the multiple ways in which the actions or decisions of others affect us without our consent. People may offend others by what they say or wear or how they smell. People put others at risk when they drive their cars or run a business. Women put demands on their colleagues when they take maternity leave and those with children place burdens on the childless when they send their children to public school. People adversely affect others when they win a competition, be it for a job, athletic victory, or a spot in a university. And all this is unproblematic. So we have no general right that others seek our consent before they act in ways that affect us adversely. Given this, it is hard to see why we should single out using people for the purpose of research as requiring consent on the grounds that the purpose of such use is to develop generalizable knowledge.
We may have a right not to be intentionally harmed in certain ways without our consent in direct interpersonal interactions, but the reason for such harm would be irrelevant as to whether our rights are violated. And there may be a (defeasible) right not to have one's body used or invaded without one's consent. If so, it is that right that supports a right not to be used for interventional biomedical research without consent, but there would be no right not to be used for research, per se, without one's consent.
Even if there were a right not to be used in research without consent, we would still have to determine the strength or weight of such a right. As Richard Arneson observes, ‘you have a moral right not to be tortured or murdered for fun, but you also have a moral right that your extra shirt button on your least favorite shirt not be taken from you without your consent’.42 If we assume that something like the Common Rule's conditions for waivers of consent are reasonable, then it seems that if there is a general right not to be used in research without one's consent, that right cannot be very strong. So there are two possibilities: (1) there is a weak right not to be used in research without consent or (2) there is no such right.
Now it may be argued that while there is no strong right not to participate in research without one's consent or valid consent, there is, nonetheless, a strong right not to be coerced to participate in research, perhaps because being coerced to participate is morally worse than participating without one's valid consent. Two points in response. First, it is not clear why it is worse to require someone to do something when he knows what he is being required to do than to do something to someone without his knowledge or by using deception. Second, even if that argument could be supported, it would not establish that it is illegitimate to coerce people into participating in research on the grounds that doing so violates a general right not to be used for research without their consent.
RESPECT FOR PERSONS AND RESPECT FOR AUTONOMY
Although respect for autonomy is sometimes said to derive from a more general respect for persons, let us start with the more general category. There is no reason to think that respecting persons, as such, entails giving a high priority to individual freedom or to autonomy or to consent. For example, there is no reason to think that the ‘mandate’ component of the Affordable Care Act that requires people to purchase medical insurance should be rejected or even seriously questioned on the grounds that it fails to respect those persons who would prefer not to purchase medical insurance. If treating a person disrespectfully consists in ‘riding roughshod over his legitimate moral claims’, then settling what constitutes genuinely disrespectful treatment requires an account of a person's legitimate moral claims.43 The nature of those claims will vary from context to context. We do not fail to show respect for people if they are harmed in legitimate competition or if they are taxed or if they are offended by actions that others are entitled to perform. So, if the task is to show that it is not legitimate to require people to participate in research, a general commitment to ‘respect for persons’ is not up to the task.
Respect for autonomy may fare somewhat better on this score. Tom Beauchamp, who was largely responsible for drafting the Belmont Report, maintains ‘that the underlying principle and justification of informed consent requirements, at least for autonomous persons, is a moral principle of respect for autonomy, and no other’.44 Does respect for autonomy entail that coercive participation in research is illegitimate?
In a Kantian view, autonomy refers not to ‘self-determination’ in its ordinary sense, but to conformity with the moral law. As Rawls puts it, ‘acting autonomously is acting from principles that we would consent to as free and equal rational beings …’.45 This conception of autonomy does not preclude coercing people to do that which they have an obligation to do. If people have an obligation to do their fair share of participation in research, free and equal rational beings could consent to a principle that would require them to do their fair share. Or so it seems.
Of course, the conception of autonomy that is regarded as a core principle of bioethics is not concerned with conformity with the moral law, but with the ability to control one's life and self-determination. Consider this passage from the Belmont Report:
To respect autonomy is to give weight to autonomous persons’ considered opinions and choices while refraining from obstructing their actions unless they are clearly detrimental to others. To show lack of respect for an autonomous agent is to repudiate that person's considered judgments, to deny an individual the freedom to act on those considered judgments, or to withhold information necessary to make a considered judgment, when there are no compelling reasons to do so.46
This passage suggests two distinct dimensions of respect for autonomy. First, to respect a person's autonomy is to respect that person's judgement with respect to her interests, aims, and values. Second, to respect autonomy is to allow people to act on those judgements, either through vetoing interventions to which they do not agree or by authorizing transactions or interactions with others.
Now the first—judgement respecting—dimension of autonomy is more relevant to medical care than to research. In treatment, a principal reason to insist on truthfulness and disclosure of relevant information is to block paternalistic deception and manipulation by a physician who believes (even reasonably) that she is better able to judge what is in a patient's interests than the patient herself.
By contrast, such judgement respecting concerns are irrelevant to whether it is legitimate to coerce people into participation in research. Coercive participation does not disrespect a person's judgement about her interests or undermine her capacity as a decision-maker. Rather, it says that the subject's judgement about whether she wishes to participate in the absence of a penalty does not rule the day, just as the state's requirement that I pay taxes does not disrespect my judgement that I would be better off not doing so. Deception, however, does undermine the target's capacity to rationally deliberate as to whether an action serves her ends under the circumstances in which she finds herself.
Belmont also maintains that we fail to respect autonomy when we ‘deny an individual the freedom to act on those considered judgments’ (Emphasis added). This dimension of autonomy is surely compromised by the use of coercion. But how important is the freedom to act on one's considered judgements? That depends. Other things being equal, it is certainly preferable that people not be required to do things that they do not (or might not) want to do. But while this may give us a reason to regard the freedom not to participate as an important desideratum, it does not justify elevating consent into a virtual requirement for ethical research if there are countervailing moral reasons that would justify such coercion.
Interestingly, the Belmont Report can be read as endorsing a similar pluralistic approach with respect to the three core values it espouses. Although it recommends that we ‘give weight’ to ‘autonomous persons’ considered opinions and choices’, it does not claim that respect for autonomy trumps or is even weightier than considerations of beneficence and justice. As Beauchamp and Childress put it, ‘The principle of respect for autonomy does not by itself determine what, on balance, a person ought to be free to know or do or what counts as a valid justification for constraining autonomy’.47 Although this view may run counter to a common view among bioethicists that regards autonomy as the first value among equals, it would hardly be a surprise among political philosophers, many of whom think that the state may legitimately require people to perform a wide range of actions.
In particular, and as I noted above, we generally think that we can justifiably use coercion to solve collective action problems or to generate economies of scale. We require that all cars come equipped with catalytic converters because it would not be in any individual's interest to buy one. Similarly, to the extent that participation in research constitutes a collective action problem, we might legitimately require people to participate in research on some fair basis because we (or at least most) stand to benefit, ex ante, from medical research whether or not we contribute via participation.
It might be thought that coercive interference with people's bodies compromises their autonomy or fails to show respect for them in ways that other uses of coercion do not. In this view, it is one thing to confiscate a person's resources through taxation and quite another to coercively extract a unit of blood or a kidney. I will consider that argument below. Here, I want to stress that interferences with our freedom or autonomy are not all of equal importance. In particular, we need to distinguish between interventions that prevent people from living an autonomous life and interventions that infringe on an individual's freedom to make a particular choice. David Archard suggests that ‘. . . an autonomous decision is valuable insofar as it concerns a matter critical to the leading of a . . . person's life—what projects he can undertake, what he finds worthwhile and rewarding in life, what gives his life purpose and value’.48 Along similar lines, Faden and colleagues suggest, ‘Respecting autonomy is primarily about allowing persons to shape the basic course of their lives in line with their values and independent of the control of others’.49 So while requiring a person to buckle his seat belt or serve on a jury for a day (or two) interferes with his freedom to do what he wants, these are relatively trivial interferences with his ability to shape the basic course of his life. And to the extent that appeals to autonomy or freedom derive their moral power from the importance of a person's ability to lead an autonomous life, the appeal to respect for autonomy does not entail that coercive participation is illegitimate.
TAKING STOCK
In the previous sections, I have argued that several related arguments for CR and for a ban on coercive participation either do not work on their own terms or have unacceptable implications. If one appeals to principles such as respect for autonomy or NMMP or a right not to be a research subject without consent, then we cannot make sense of a large range of research that takes place without any sort of consent by the subject or, as in much social and behavioral research, without a subject's informed or undeceived consent. It might be more difficult to justify the use of coercion, but given that many cases of state coercion barely raise our hackles, we need to explain why the prospect of coercing people into participating in research should be regarded as so abhorrent.
BODILY INTEGRITY
If there is something especially illegitimate about coercing people to participate in interventional biomedical research—other than its inglorious history (and that might be enough)—it may be argued that regardless of the level of risk, interventions that trespass the boundaries of a person's body or personal resources are of much greater moral significance than interventions with a person's ‘external resources’. In this view, what Nir Eyal calls ‘body exceptionalism’, it is bodily integrity, not autonomy that is important.50 Body exceptionalism ranges beyond research. It would hold that it is worse to conscript people's bodily organs than to take their money, and it might help to explain why many people think it worse to inflict corporal punishment than to imprison people even though corporal punishment may impose less total harm and might even have a greater deterrent effect.
Now there is an important distinction between respecting a person's bodily integrity and respecting her autonomous decisions about her body. For example, it is surely wrong to have sexual relations with a person who autonomously refuses to consent to such relations. But it is also wrong to have sexual relations with a person who says either ‘yes’ or ‘no’ while extremely intoxicated, not because we are respecting an autonomous decision, but because it is wrong to penetrate a person's bodily boundaries in this way without her valid consent. And for the same reasons, physicians are not permitted to impose unwanted treatment on patients even if the patients are not making an autonomous decision to refuse treatment.
How important is bodily integrity or a ‘prophylactic membrane’ around the body?51 Following Kasper Lippert-Rasmussen, let us refer to the claim that interventions with a person's body are more ethically problematic than interventions with a person's external resources as the ‘asymmetry thesis’.52 Some libertarians appear to reject the asymmetry thesis. For example, Robert Nozick argues that since the state cannot legitimately take a part of a person's body such as a kidney without her consent, the state cannot legitimately take a person's external resources to use for the benefit of others. As Nozick famously quipped, ‘Taxation of earnings from labor is on a par with forced labor’.53
By contrast, ‘redistributionist liberals’ are inclined to support the asymmetry thesis. They will argue that it is comparatively easy to justify policies under which the state takes a person's external resources through taxation or eminent domain or requires one to use one's external resources in certain ways, but that it is comparatively difficult to justify policies that would allow the state to take or intervene with a person's body or internal resources be it through forced labor, conscripting organs for redistribution, or requiring people to serve as research subjects.
The asymmetry thesis has strong intuitive appeal. As Charles Fethe remarks, whereas ‘Taxation . . . represents a standard procedure for exacting social obligations’, requiring people to participate in research seems to present ‘a claim for a . . . sacrifice [that] is more personal, deeper within that sphere which we normally like to think of as protected from social encroachment’.54 The present question is not whether we think this way. We do. The question is whether these intuitions mark a matter of intrinsic moral significance.
Charles Fried appears to argue that they do:
The human person identifies himself with his body; he knows that he IS his body, that his knowledge of and relation to the whole of the outside world depends on his body and its capacities, and that his ability to formulate and carry out his life plan depends also on his body and its capacities.55
Despite appearances, this passage does not claim that the body, as such, has intrinsic moral significance. Fried implies that the moral weight of one's control over one's body derives from its importance to what genuinely matters to one's life, that is, to one's ability to carry out ‘his life plan’. When I lost the full use of a finger, the injury did not interfere with my ability to carry out my life's plan or any activities that are important to me (mainly because I can still type!). By contrast, a violinist's loss of the use of her finger may well affect the course of her life. In this view, the importance of the body or its parts to one's agency is a factual or contingent matter. It is not of intrinsic importance.
Agency and capacities are not all that matter. The moral significance of one's body—or its parts—also depends on the way in which people respond to bodily contact or invasions of the body. These responses are also contingent or fact-sensitive. First, it matters whether a touching is intentional or incidental. We do not regard being bumped on the subway as a battery, but might take offense at a comparable intentional bumping or an unwanted affectionate touching. Second, touchings of some bodily parts are more worrisome than others. It makes a psychological difference if A gives B (1) a non-consensual kiss on B's cheek as contrasted with (2) a non-consensual kiss on B's lips. Some may regard both (1) and (2) as problematic, but even so, I suspect that they would not regard them as equally problematic. The differences are partly conventional and vary with the cultural or ideological sensibilities of the parties or the relationship between the parties. The principal point is that the psychological and moral seriousness of such touchings is also a contingent matter.
Why is it a serious matter to cut a person's hair without consent? Not simply because one's hair is part of one's body. Rather, it is a serious matter because people want to have some control over their appearance and because the effects are more than momentary. A similar point applies to the moral importance of privacy or information about oneself. Having control over one's identifiable medical records or social security number or credit card or images of one's naked body is important because the information can be used in ways that deeply affect the course of one's life. Once again, the general point is that the moral significance of a resource does not depend on its physical properties or whether it is internal or external to the body. It depends on its fact-sensitive connection to what we care about.
A similar argument applies to external resources. Matt Zwolinski suggests that control over external goods can be deeply connected to a person's projects: ‘Our . . . projects cannot be pursued—especially not over any significant period of time—without the ability to plan and rely on the use of external goods’.56 The importance of these external resources depends on the extent to which reliance on such resources is crucial to our (reasonable) projects and our ability to plan on their use. It is one thing to take someone's fungible money via taxation because we can plan on not having those resources and because few matters of importance are tied to a specific level of money. It is quite another to take someone's house through eminent domain, even if he receives ‘just compensation’ as required by the Fifth Amendment. People can develop deep personal ties to their homes. A musician's instrument may or may not be a fungible external resource. Dylana Jenson, a rising star violinist, was devastated when her patron took back a Guarnerius del Gesu violin: ‘It was an intimate part of my ability to express myself as an artist’.57
But just as the previous point denies that interventions with a person's external resources are necessarily morally unimportant, it cuts against the claim that interventions with one's body are necessarily of great importance. Just as a person's property or external resources have greater moral significance when and because they are crucial to her agency or have psychological significance, the same can be said about the body. As Cecile Fabre puts it, ‘the objection from bodily integrity derives much of its force from the view that in violating people's bodily integrity, one is interfering with their life to an unacceptable extent’.58
To put the previous argument in different terms, we should be careful not to conflate or equate cases that represent interventions on the ‘trivial end of the spectrum and on the serious end and treat them as if they were equally morally important’.59 Consider kidney transplants and finger pricks. Even though kidney transplants can be quite safe when performed under appropriate conditions and the ‘donor’ can generally pursue his life plan without great difficulty, the coercive removal of a kidney would be a serious matter even if it were necessary to save another's life. By contrast, if one's blood had some marvelous factor such that a few drops painlessly extracted from one's finger (in the way in which diabetics test their blood sugar) could save a life, then it might be legitimate to coerce people to provide such blood (if it were necessary to do so).60 Indeed, if the world were such that we knew that drops of blood or the like can have such curative powers, I suspect that some of our moral intuitions about bodily integrity would be quite different.
Three concluding points about the body are as follows. First, the line around the body may be a good heuristic or proxy of moral significance even if it is not of intrinsic importance.
Second, even when a violation of bodily integrity does not interfere with one's life plans, it may give rise to considerable psychological distress. The question then becomes how to understand the moral importance of such distress.
Judith Thomson suggests that we should distinguish between ‘belief-mediated distress’ and ‘non-belief mediated distress’.61 When A pricks B's finger or inserts a needle for a blood draw, A's action causes B to experience simple or non-belief-mediated distress. It can be painful even if B believes that it is legitimate for A to prick B's finger. By contrast, when A causes B to feel embarrassed, afraid, humiliated, insulted, or annoyed, then B experiences belief-mediated distress. If B did not believe that being called a nerd is an insult (say, because B did not understand the word), B would not feel insulted.
Some belief-mediated distress is a function of normative beliefs. There was a time when people did not think they had a right that others not smoke in their presence. The smoke may have caused physical or non-belief-mediated distress, but people did not feel that their rights were violated. By contrast, public smoking now causes both physical and belief-mediated distress if and when people believe that it is wrong for others to smoke in their presence.
Similarly, if one believes that others have no right to touch or intervene in one's body without consent, then such touchings will cause more belief-mediated distress than if one did not have this belief. It is likely that the experience of being ‘pinched’ on the subway causes less distress in some societies than in others. And whereas genital cutting causes considerable non-belief-mediated distress wherever it occurs, the degree of belief-mediated distress will vary in accordance with its perceived acceptability and perhaps religiosity.
Now Thomson believes that we should attribute less moral importance to belief-mediated distress than to non-belief-mediated distress because one bears responsibility for one's belief-mediated distress and because one could often avoid experiencing such distress by changing one's beliefs. There may be something to this point, but not much. In many cases, the issue is not whether the distress is belief-mediated, but whether the belief is independently legitimate or defensible. Consider two cases: (1) some people are offended by the sight of an interracial couple; (2) many African-Americans would take offense at being called ‘colored’. Although people would feel less distress in both (1) and (2) if they had different beliefs, we are inclined to regard the belief-mediated distress in (1) but not (2) as morally irrelevant. So the moral weight of the distress associated with coercive participation in research would be at least somewhat dependent on the extent to which people accepted the moral significance of bodily integrity and whether those beliefs differentiated among the interventions at issue.
Third, and related to the previous point, the identity or role of the non-consensual ‘invader’ of one's body is of moral and psychological significance. It matters whether the invader is an unauthorized private person in pursuit of his own private aims or an authorized government official pursuing important public purposes. It is one thing if a private individual touches one's body without one's consent and quite another if one is subject to a random or special pat down by a TSA official (as when the imaging machine indicates a problem area). Even though these sorts of touchings and interventions can be annoying and even upsetting, and even though we may question the underlying policy, our distress is tempered by the belief that they are undertaken under the color of a legitimate public purpose.
WIDENING THE LENS
The burden of the previous sections has been primarily negative. I have argued that we cannot say that it is illegitimate for the state to coerce people to participate in research by straightforward appeal to several principles that are commonly offered as justifications for CR. In this section, I take a more positive stance in defense of the legitimacy of coercive participation. I do so by considering a range of cases in which many think that it is legitimate for the state to coercively interfere with people's bodies or make decisions that affect their bodies without their consent, or to require people to engage in labor for the benefit of others in ways that are analogous to requiring people to endure the risks and burdens of participation in research.
There are numerous state interventions that coercively interfere with the bodies of citizens. Although some interventions, such as compulsory vaccination, may be justified, in part, on the paternalistic grounds that they are beneficial to the parties themselves, the state may require vaccinations as a matter of public health to create sufficient ‘herd immunity’. We have traditionally required pre-marital testing for disease (although one could avoid the testing by avoiding marriage). The state may obtain blood samples from criminal suspects without violating the Fifth Amendment protection against self-incrimination. It may require a swab of the cheek for DNA identification. The police may stop and frisk people. The state may involuntarily quarantine people with dangerous contagious diseases.
Now it may be argued that such interventions are legitimate because the intervention is designed to prevent harm as contrasted with generating benefits. Charles Fried notes that whereas doctors have been allowed to override the expressed wishes of their patients in order to protect the public or other persons, they are not permitted to compel a person to ‘confer a benefit against his will, for instance by ordering him to donate an organ or blood of a rare type’ (Emphasis added).62 This view echoes Hans Jonas's remark that medical progress is an ‘optional goal’ and that ‘a slower progress in the conquest of disease would not threaten society, grievous as it is to those who have to deplore that their particular disease be not yet conquered’.63
Consider McFall v. Shimp. McFall suffered from a rare disease. His prognosis for survival was very poor unless he received a bone marrow transplant. After considerable searching and testing, it was determined that his cousin, Shimp, was the only plausible donor. When Shimp refused to be tested, McFall asked the court to compel his cousin to submit to further testing and the extraction of bone marrow if the testing indicated that his bone marrow was compatible. The Court was sympathetic to the view that Shimp had a moral duty to give marrow to his cousin, but was not prepared to require Shimp to do so.
For our law to COMPEL the Defendant to submit to an intrusion of his body would change the very concept and principle upon which our society is founded. To do so would defeat the sanctity of the individual and would impose a rule which would know no limits, and one could not imagine where the line would be drawn.64
It is not clear whether the invasiveness of the procedure was crucial to the Court's decision, such that it might have reached a different conclusion if something like a blood draw was sufficient. It does seem that the Court was most concerned that approving the use of coercion in this case would endanger our view of the ‘sanctity of the individual’ and thereby place the law on a slippery slope—‘a rule which would know no limits’—that would raise ‘the spectre of the swastika and the Inquisition . . .’. Its hyperbolic rhetoric aside, if the Court was right not to require Shimp to help McFall because refusing to help is not equivalent to harming, then it is arguable that coercive participation is illegitimate because, as Fried puts it, the subject who ‘refuses to submit to experimentation does not by his refusal constitute a danger to others; he merely refuses to confer a benefit’.65
Although the distinction between harming and not benefitting may be of moral significance, there is a question as to how much moral weight it can bear. To say that medical progress is ‘optional’ reflects an unsupportable bias for the status quo, particularly given that research has shown that many standard therapies are ineffective or harmful. If research shows that tonsillectomies are unnecessary, is it providing a benefit or preventing a harm? And the status quo baseline loses much of its salience if we consider public policies that affect people's bodies (as opposed to direct interventions with people's bodies) without the consent of the affected individuals and where it is difficult to say whether a policy is preventing harm or providing benefits. A decision to place a toxic waste dump in location X rather than location Y may place those in location X at increased risk. A decision to set the standards for air pollution at a given level (as opposed to a feasible lower level) or not to prohibit smoking in casinos puts people's lungs at risk and leads to predictable levels of morbidity and mortality. The designation of speed limits, the number of police on the street, the length of prison sentences for violent criminals, the amount of road salt on winter roads, the prevalence of street lighting, the level of enforcement of food safety, and the level of taxation on alcohol—all these policies affect the frequency with which people are injured or killed or get sick. Of course any speed limit or level of street lighting or level of police patrolling will affect the number of people killed or injured. But the point remains that the state regularly makes policies that affect what happens to our bodies without our consent.
The state also conducts life-affecting research without seeking consent of those involved or affected. Some examples, such as educational research or research on welfare policy or health policy, can affect the quality of people's lives and sometimes whether they live or die. For example, suppose that a state highway department is concerned about the trade-off between the financial and environmental costs of various quantities of road salt and the accident rates on snow covered roads. It might conduct a three-arm trial by using its standard amount on one 10 mile stretch of a highway, half that amount on another 10 mile stretch, and double that amount on a third 10 mile stretch. The highway department is surely conducting life-affecting research without the consent of those affected, unless one implausibly argues that drivers tacitly consent to such research by (as Locke put it) ‘travelling freely on the highway’.66
More generally, most public policy programs create harms to people without their consent. A housing allowance program increases demand for housing, thereby raising the cost to others. A highway program may create more jobs and housing in suburbs, thereby weakening employment opportunities and investment in housing in inner cities. As a general proposition, the government is free to carry out activities that adversely affect individuals or groups of individuals so long as it is pursuing some reasonable conception of the collective good and does not violate certain fundamental rights of individuals.67
Of course, even if these cases rightly illustrate that the state legitimately puts people's lives, bodies, and resources at risk without consent, it does not follow that it is legitimate for the state to require people to participate in biomedical research. Should biomedical research be treated differently? First, it is arguable that there is a distinction between road salt research that puts the bodies of ‘statistical lives’ at risk and research that involves direct intervention such as a blood draw with an identifiable person. Just as we are prepared to do more to save identifiable coal miners trapped in a mine than to prevent similar mining accidents to future statistical miners, we are more willing to conduct road salt research that puts unidentified drivers on slippery roads at risk than to require identifiable drivers to participate in road safety research. Second, we may distinguish between research that evaluates the effect of behavior (such as driving) that people undertake for their own reasons under conditions that we manipulate (varying levels of road salt or different speed limits) and research that intentionally places people in a situation in order to see what happens to them. It's not as if we're requiring people to drive on slippery roads so that we can evaluate the effect of varying levels of road salt. Third, much public policy research occurs in a context in which the state is entitled to make public policy without the specific consent of those affected. By contrast, most biomedical research occurs in a medical context in which the principle of consent is well entrenched.
As an empirical or psychological matter, it is clear that we do in fact make distinctions between the legitimacy of state action that does not involve direct intervention with people's bodies and interventional biomedical research. Although there are ways in which that intuition can be defended, I am not convinced that the arguments just considered are sufficient to sustain a prohibition on coercive participation in interventional biomedical research.
Let us set aside the issues raised by bodily intervention for the moment and focus on the burdens of participation in research, for the burdens of research often constitute a greater ‘cost’ of participation than the risks of the bodily invasion itself. There is not much risk or pain in a blood draw, but getting to a hospital and waiting to be seen might involve a considerable burden in time and inconvenience. Can the state legitimately require people to undergo the inconvenience of participation, to answer surveys (as in the census), and to allow their deidentified medical records or stored tissues to be used to generate knowledge that would benefit others?
Put this way, it is hard to see that there is a serious problem. Consider Mill's defense of the harm principle. Mill first argues that the state can only legitimately interfere with individual freedom to prevent harm to others. He then asks whether the state can legitimately require people to come to the aid of others. Mill says yes because one can harm others by inaction as well as action. It is legitimate for the state to require people to perform ‘certain acts of individual beneficence, such as saving a fellow-creature's life’ because not performing such acts constitutes a harm, says Mill, whenever one has a moral duty to perform the rescue.68
Mill's claim raises knotty questions about causation, morality, and harm. In Mill's view, A's inaction causes a harm to B only if A has a duty to help B. Nurse A causes harm to Patient B by not providing B with medication if she has a duty to give medication to B but Passerby C doesn't cause a harm to B by not providing B with medication because C has no duty to provide it. So one can't say that A has a duty to provide B with medication because not providing it would harm B, because the latter claim is dependent upon the former.
I'm not convinced that Mill is right to describe omissions as harms whenever there is a duty to act. But once Mill asserts that harm to others is the only justification for state coercion, he must also claim that omissions can be harms if he is to claim that it is legitimate for the state to penalize such omissions. Yet if we are freed from the theoretical constraints of the harm principle, we might claim that it can be legitimate for the state to coerce individuals to aid others without claiming that not doing so is a form of harming them. Interestingly, Mill does not invoke the language of harm to justify some coercive policies:
There are also many positive acts for the benefit of others, which he may rightfully be compelled to perform; such as, to give evidence in a court of justice; to bear his fair share in the common defence, or in any other joint work necessary to the interest of the society of which he enjoys the protection. (Emphasis added)69
Mill also understood that a related line of argument has a wide potential application with respect to collective action problems. Consider the sort of working hour legislation that led to the Supreme Court's (in)famous decision in Lochner v. New York, in which the Court invalidated New York's law that limited the number of hours a baker could work per day or per week.70 In his Principles of Political Economy, Mill noted that if we want workers to benefit from a shorter workday, we might have to make it illegal for them to work a longer day. Otherwise, every individual worker could be asked to work longer days for the same pay and might have an individual incentive to do so.71 So if we accept the basic structure of Mill's argument, we can then ask whether generating a sufficient number of research subjects can qualify as a case of ‘joint work necessary to the interest of the society of which he enjoys the protection’.
Let's start with bearing one's fair share ‘in the common defence’. It might be claimed that if it is legitimate for the state to conscript people into military service, then it must be legitimate for the state to conscript people into research where the burdens and risks are lower and the time commitment is comparatively trivial. Although there is something to this line of argument, I prefer not to go that way. First, the legitimacy of military conscription is debatable. Second, to use the (putative) legitimacy of conscription as an analogy would legitimate virtually any sort of state intervention with individual freedom. There are good reasons to treat the possible need for military conscription as a special case of societal survival or threats to humanity, and so it is better to use less fraught comparisons.
So consider some mundane examples of labor or burdens that the state may require people to perform or accept. In my hometown, we once had to sort our recyclables (glass, plastic, paper, metal) and put them in a bin on the street (sorting is no longer required) every week. If we accumulate the required labor over 52 weeks a year times many years, the total required labor is not trivial. But we don't say that it's legitimate for the state to take my fungible money via taxes to pay for the recycling service, but that it's not legitimate for the state to require me to sort my recyclables, keep them until the weekly pick-up, and then move them to the curb (and who knows how much disease or injury is caused by this activity?). We have a collective action problem that requires that we all be coerced to perform such labor in order to achieve a public good even if it imposes a non-trivial burden and even some risk.
The criminal justice system coerces people to perform labor and, sometimes, to put themselves at considerable physical and emotional risk in doing so. We may require a person to testify as a witness to a crime on pain of being held in contempt of court even if the person genuinely and legitimately fears retaliation for doing so. (There is a ‘witness protection program’ for a reason). Victims of crime may be required to testify against their will because the state can decide to prosecute even if the victim does not want to press charges, as may happen in rape cases where the victim fears humiliation in court or in domestic violence cases where the victim fears retaliation by her abuser. We require people to serve on juries. True, this frequently involves minimal labor (one or two days) and little risk. But serving as a research subject also often involves minimal inconvenience and minimal risk. Moreover, jury service sometimes involves the risk of retaliation as well as considerable inconvenience and loss of income (to the worker or the employer). So we have clear examples of legitimate state coercion that involve burdens of time, labor, and inconvenience that are comparable to or exceed the burdens of participation in research.
Consider a mandatory national service program that would require young adults (say 18 or 19 year olds) to serve in a national service program of some kind. David Brooks, a moderately conservative New York Times columnist, suggests that in order to reduce social inequality, we need a program that would force people from various ‘social tribes’ to live and work together ‘to spread out the values, practices and institutions that lead to achievement’.72 If it would be legitimate for the government to require that people spend months or a year or two years in service to their society, it is presumptively legitimate to require people to undergo the burdens and risks of participation in at least some forms of medical research in order to advance medical knowledge.
If some or most of the foregoing examples represent legitimate exercises of state coercion, are there good reasons to regard it as illegitimate to require people to accept the burdens of participation in research? It might be argued that research serves less weighty goals or, perhaps more accurately, goals of the sort that do not justify the use of state coercion. But if we accept the general structure of Mill's principle, we can surely ask whether participation in research constitutes a form of ‘joint work necessary to the interests of society’. Jonas would say that medical progress is optional or not ‘necessary’ or, perhaps, that it does not serve a genuine public or societal purpose. Others might disagree. In any case, that is where the debate should occur.
Finally, consider the sort of good Samaritan legislation that penalizes people for failing to make an easy rescue. It is commonly thought—at least by philosophers if not the general public—that such legislation is legitimate and justifiable. There are questions about the degree of risk that it is reasonable to require people to assume, but the basic principle is widely accepted.73 Setting aside the issue of bodily intervention, if good Samaritan laws are legitimate, it would seem that it is also legitimate to require people to accept the burdens of participation in biomedical research.
At first glance, it might be thought that participation in research is not analogous to rescuing those in need because rescuing provides palpable aid to a specific individual in distress, whereas research generates diffuse benefits for unidentified persons. But Arthur Ripstein's account of the duty to rescue suggests that the analogy is not inapt. In Ripstein's view, a key element of a just society is that it ‘holds certain misfortunes in common’.74 We try to spread the burden of the bad luck that befalls individuals. For that reason, a just society includes ‘equitable schemes of redistributive taxation, so as to pay for such essentials as health and education’. The duty to rescue is not owed to the individuals who are in distress. Rather, it is an obligation to contribute to a social practice in which we all share the burden of mitigating the burdens and risks of individual misfortune. In Ripstein's view, the common law is correct not to regard the failure to rescue as a tort against the person in peril for which the latter could demand compensation in a civil case, for the duty is not owed to that individual. Yet, it would be perfectly legitimate to regard the failure to rescue as a criminal offense against a society-wide practice that is required by considerations of justice.
Similarly, on the plausible assumption that the need for effective and safe medical care is a basic need and illness is a misfortune that we should seek to hold in common, it is arguable that considerations of justice might support a requirement to contribute to a system of medical research by supporting institutions such as NIH and by participation in medical research just as we may be required to contribute to a system of universal access via taxation or required to purchase medical insurance. As Faden and colleagues suggest, contributing one's fair share of financial resources to the system is not enough. We have obligations as patients ‘to contribute to the common purpose of improving the quality and value of clinical care and the health care system . . . Securing these common interests is a shared social purpose that we cannot as individuals achieve’.75 In this view, participating in research is—in principle—an enforceable obligation. It would remain to be settled as to precisely what risks and burdens might be required.
The point of the previous sections is not to present a compelling argument for the legitimacy of coercive participation in biomedical research or for the view that we should abandon CR in interventional biomedical research. Rather, the point is to argue that when we consider participation in research as a problem in political philosophy, the claim that coercive participation is legitimate begins to look more plausible. It is certainly not obvious whether and why we can carve out a significant moral distinction between the activities in which state coercion is regarded as legitimate and participation in research where it is not.
DOES THIS ARGUMENT APPLY TO BOTH PATIENTS AND HEALTHY PERSONS?
If, for the sake of argument, we assume that it is in principle legitimate for the state to coerce people into participating in interventional or clinical research, and if we assume that the risks of participation are not too high and that the selection of subjects is done on some fair basis, then it is relatively easy (I don't say absolutely easy) to legitimize requiring healthy persons to serve as subjects, say in Phase I trials or in vaccination trials or in studies of diagnostic techniques. Consider the following example:
Alzheimer's research. Alzheimer disease constitutes an enormous burden on society and its members. To evaluate potential prevention modalities, researchers first need to identify biological markers for its presence. This involves a lumbar puncture—the insertion of a needle into the backs of subjects—to obtain a small amount of cerebrospinal fluid. Researchers need to have samples from Alzheimer's patients and healthy volunteers who serve as controls.76
Assuming that Alzheimer's patients cannot themselves consent to participate, let us also assume that a sufficient number of surrogates for Alzheimer's patients will consent to the procedure because it is low risk and because they believe that participation is consistent with the values of the patient in his or her pre-Alzheimer's condition. Furthermore, let us also assume that few healthy persons would volunteer to undergo the procedure, although I think this is doubtful if people are given incentives to do so. To generate a sufficient number of healthy persons to serve as controls, we could use a process similar to the lottery mechanisms that are used for jury service and make whatever exemptions were thought necessary if participation were particularly burdensome for some.
Of course, even low risk is greater than zero. If large numbers of persons are required to undergo such procedures across the spectrum of biomedical research, we can expect that a few people would be injured or die as a result, just as a few people die as a result of compulsory vaccination and seat belt laws—even if they prevent many more deaths than they cause. If the injuries and deaths consequent to such laws are not decisive objections to the legitimacy of requiring seat belts and vaccinations, then the infrequent deaths and injuries that result from requiring healthy people to participate in low-risk medical research need not be decisive objections to that practice either.
If a sufficient number of surrogates for Alzheimer's patients did not consent for them to participate, would it be legitimate to require patients to undergo procedures such as a lumbar puncture or a blood draw for the non-therapeutic aim of trying to identify biological markers of the disease? Or could we require patients to participate in minimal risk randomized trials such as comparative effectiveness studies between two treatments, both of which are standardly prescribed?
There are competing moral considerations. On the one hand, those with particular diseases are in a unique position to contribute to research, for it is only on them that interventions and pathogenesis studies can be conducted. On the other hand, it may be thought that the sick are already suffering and, as Jonas put it, that ‘the afflicted should not be called upon to bear additional burden and risk [because] . . . they are society's special trust and the physician's particular trust’.77 Second, because disease can strike people ‘randomly’, it puts people at risks that they cannot predict. In addition, it has been argued that if researchers need subjects with particular conditions, then an enforceable ‘universal duty of research participation would do little to meet their needs’.78
I do not think that these objections to the legitimacy of coerced participation of patient subjects are particularly convincing. First, it is unfortunate that the sick are often in a unique position to contribute to the search for generalizable knowledge. It is similarly unfortunate that victims of crime may be in a unique position to contribute to the pursuit of justice. Nonetheless, we still demand that they appear at trial if needed even if they find it inconvenient or have a reason to fear the experience or its consequences. One might object that research does not involve a comparably important public purpose or that it is not a matter of justice. But that objection is orthogonal to the claim that coercive participation of patients is illegitimate because they are already suffering.
Second, although it is true that the sick are already suffering, they may also be the persons who benefit most from biomedical research. So, considerations of reciprocity tell against requiring less from the sick than from healthy persons.
Third, the unpredictability of obligations is a familiar feature of our moral lives. Although some obligations are predictable because they are a function of one's undertakings, as when one makes a promise or assumes parenthood, other obligations are foisted upon us by the circumstances in which we find ourselves, as when we are witnesses to an accident or crime and must report what we have seen and appear in court if necessary. On this score, being able to contribute because one has a disease is no different.
Fourth, a universal enforceable duty can provide researchers with subjects with particular conditions depending upon the way in which the universal duty is specified. If all citizens have an enforceable duty to make an easy rescue or report that they witnessed a crime should the situation arise, then there is a universal duty that requires action only when such situations arise. That not everyone will be called to action seems irrelevant to whether the duty is universal or should be enforced. Similarly, if patients can be required to participate in research under ‘to be specified’ conditions, then such a requirement can supply researchers with suitable subjects under the specified conditions. In general, we prefer more systematic methods of ensuring that everyone contributes to projects ‘required for the security of all’ rather than impose the burden on the unlucky few. We prefer to socialize or spread the burden of firefighting through our contributions via taxes and hire professional firefighters rather than to ask those near a fire to help out. Most advanced societies socialize the provision of medical care to the poor in one way or another rather than ask physicians to provide care pro bono. But when it is not feasible to socialize the performance of some task, as in some cases of rescue, then we can legitimately call on those who are in a position to contribute.
TAKING STOCK (AGAIN)
So we return to the question: can we defend the view that it is illegitimate for the state to require people to participate in interventional research whereas it is legitimate for the state to require people to perform the wide variety of actions that have been mentioned (and others)? I concede that the intuition that the former is illegitimate while the latter are legitimate is very strong. Research exceptionalism runs deep. Yet it is difficult to justify. If the purpose of research is sufficiently public, and if the research otherwise meets a set of ethical criteria, it is difficult to see why it would be illegitimate to require people to spend the time or undergo the inconvenience and relatively small risks involved in appropriate biomedical research.
FROM LEGITIMACY TO JUSTIFIABILITY (AND BACK AGAIN)
Let us assume, arguendo, that it is in principle legitimate for the state to require that people participate in interventional research. It does not follow that it would be wise, prudent, or morally justifiable to do so. In considering the justifiability of coercive participation or CR, we need to distinguish between ‘first-order’ morality and ‘second-order’ morality. By ‘first-order’ morality I refer to the moral decisions that would be reached by an omniscient moral reasoner who could weigh and aggregate all the relevant moral considerations. By ‘second-order’ morality, I refer to moral decisions that take into account the fact that first-order moral reasoners are not omniscient, nor are they perceived to be so by others.
As a matter of first-order morality, the justifiability of coercing people into participating in research (or, for that matter, doing research without consent) in a particular case turns on at least seven factors (there are no doubt others): (1) the benefit to be gained from research; (2) the risks and burdens of participation; (3) the efficacy of plausible coercive mechanisms; (4) the weight of the ‘deontological’ moral factors that tell in favor of CR; (5) the weight of the indirect or negative externalities that would be generated by the use of coercion; (6) the psychological and social distress that would be caused by a coercive system; (7) the availability of non-coercive means by which to obtain a sufficient number of subjects in a timely manner. Coercive participation would be justified if and only if a sensitive weighing of these factors supports it. I suspect that such a calculation would generally not support the use of coercion, although it is hard to tell. Let us consider the factors I have identified.
a) The benefits of research. The greater the benefits from research, the easier it will be to justify coercive participation. It is difficult to estimate the gain to society from more studies, more complete studies, and more quickly completed studies. One study concludes that new medicines generated 40% of the two-year gain in life expectancy in 52 countries between 1986 and 2000.79
Still, even when research does not lead to significant declines in mortality or morbidity, it can enhance the quality of life. Developments in joint replacement surgery have helped many people gain mobility. And who knows the extent to which drugs for erectile dysfunction have enhanced the quality of the lives of men or their relationships? In addition, comparative effectiveness research can establish which of several common treatments is most effective, and enables us to spend less to achieve comparable medical results. And if recent research on the benefits of mammograms is on the right track, many women will be spared the physical and psychological burdens of the test and society will save hundreds of millions of dollars.80
Estimating the expected benefits of research is very difficult because even if a research protocol fails to generate any significant benefit ex post, a small chance of a large benefit means that much research has a significant benefit ex ante. Something like an NIH ‘scientific review group’ could evaluate the expected benefits from a proposed study, but it is unlikely that we should or would have much confidence in their estimates given that so much valuable research is incremental. Even if the macro-level benefits of the enterprise of research are significant, the link between specific studies and the benefits to other (including future) people is difficult to see. So, even when the expected benefits of research are high, people may not perceive it as such.
b) Risks and burdens. Second, the justifiability of penalizing non-participation in research would surely depend, in part, upon the risks and burdens of participation. If the risks and burdens of participation are offset by compensation, then participation would not constitute a net risk or burden. Just as we coerce people into jury service and then compensate them, we could ‘coerce and compensate’ people into serving as research subjects. If only some people are required to serve as research subjects for the sake of public purposes, we can socialize that burden by using tax revenues to compensate them adequately for their service such that participation is reasonably perceived as a benefit (or not a net cost) by most persons. In any case, the magnitude of the uncompensated risks and burdens would have to be part of any reasonable first-order moral calculus as to whether coercion is justifiable.
c) The coercive mechanism. We know what it is like to coerce people to pay taxes, to recycle, to serve on juries, to wear seat belts, and the like. We have less idea as to how coercive participation would actually operate. It is not clear that we could design a coercive mechanism for participation in research that is both effective in motivating people to participate and politically acceptable. If the penalties for non-participation were small and mostly symbolic, as I have assumed, then they may be insufficient to motivate compliance behavior. If the penalties were severe enough to motivate people to comply (for example, like going to jail for contempt of court for refusing to testify as a witness), they might well be viewed as excessively harsh unless there was a substantial cultural shift with respect to the obligation to participate in research.
Now the relationship between cultural support for coercive participation and the introduction of a coercive mechanism can go in both directions. To the extent that people believe that there is a duty to participate in biomedical research, they will be more likely to believe that a coercive approach is justifiable. At the same time, the adoption of a coercive mechanism might express and generate support for the view that participation is obligatory. For example, a cultural shift against smoking and, in particular, against the dangers of second-hand smoke led to legislation that prohibits smoking in public places; such legislation, in turn, may have reinforced anti-smoking sentiment.
Still, there has to be some cultural support for legislation to get the ball rolling, and it is doubtful that such support is now sufficient with respect to participation in research. It is possible that we could see a shift in public opinion on the obligation to participate in non-interventional medical research, such as participating in a registry, making deidentified medical records available, and being interviewed by health care personnel to better assess outcomes. But it is unlikely that we will soon witness a sharp change in public opinion with respect to the obligation to participate in interventional biomedical research.
d) Deontological values. Any comprehensive justification for coercive participation or for doing research without consent must put considerable moral weight on ‘deontological’ values such as autonomy, liberty, not being treated merely as a means, respect for bodily integrity, and the like. I place ‘deontological’ in scare quotes for two reasons. First, it is possible that these moral reasons are themselves ultimately grounded in consequentialist considerations even if—at the level of practical ethics—we do not apply them by direct appeal to consequences. For example, although Mill says that the harm principle—which has the form of a deontological principle—is entitled to govern ‘absolutely’ the use of social coercion, he also says the principle is justified on grounds of utility, which he regards as the ‘ultimate appeal on all ethical questions’. So what appears as a deontological-type constraint on government action is rooted in consequentialist considerations.
Second, it is not clear how much weight to assign to these ‘deontological’ considerations. I have argued that, by themselves, these deontological considerations do not support the claim that coercive participation is illegitimate. It's not just that these values may be over-ridden or outweighed under extraordinary conditions. For most non-doctrinaire deontologists will grant that. Rather, it is not clear precisely how much weight such considerations bear for the normal range of public policy issues.
As I have already noted, the Belmont Report is explicitly committed to a pluralistic/balancing view of the basic values of research ethics. And while few (if any) have defended coercive participation, our practices already suggest that we do not regard violations of autonomy or other deontological principles as sufficient to render research without consent unethical. It may well be that the deontological values put in jeopardy by interventional biomedical research are weightier than any generic right not to be involved in research without consent. And so they will put greater weight on the scale of justifiability. The magnitude of that weight remains to be settled.
Finally, the strength of the various deontological considerations will turn, in part, on whether we view the interaction as between the state and a prospective subject or between individual investigators and a prospective subject. For whereas the state does not violate our rights when it requires us to perform acts for the public good, other individuals are not authorized to do so. The state can tax us to provide aid to the poor. Robin Hood is still a thief. Whereas the state does not violate our rights when it compels us to be vaccinated for the public good, no individual is authorized to do so. Similarly, whereas the state may not violate a deontological constraint when it coerces people to participate in research, non-state researchers might violate such a constraint if they seek to do so.
e) Negative externalities. The principal justification for doing research is consequentialist—to improve human well-being—even if it is subject to non-consequentialist constraints. In addition, the principal justification for doing research without consent, insofar as it is justified at all, is that it would generate positive utility that could not otherwise be obtained at acceptable cost. If the positive case for research with or without consent is consequentialist, then it follows that coercive participation is surely not justified if the negative consequences outweigh the positive. And that is distinctly possible.
The negative externalities of coercive participation may take several different forms. First, whether or not coercive participation in biomedical research would constitute an independent wrong, I suspect that many people would experience it as a serious violation unless people's attitudes underwent a significant psychological change. People's fears, aversions, and resentments do not always track physical or moral ‘reality’. Many women have a greater fear of breast cancer than heart disease even though their chances of dying from heart disease are much greater.81 Patients would resent being required to participate in randomized controlled trials because they want to make a choice or have their physicians make a choice among treatments even if there were no reliable basis for making that choice.82 People may resent a coercive medical procedure such as a blood draw more than paying a certain level of taxes even if they would be willing to undergo the procedure if they were paid a comparable amount. In addition, because the harms caused by participation in research tend to be directly traceable to the research interventions, they are more likely to be resented than harms caused by government policies such as speed limits or road salt levels that are not directly traceable to such policies.
Or compare the risks of participation in research with the risks of employment. Although my evidence here is entirely anecdotal, I suspect that people tend to regard the risks of participation in biomedical research as weightier than the risks of employment even when participation is consensual and even though the risks of many jobs such as fishing, construction, and logging are much greater. David Wendler has argued that research-related risks are regarded as particularly fraught because procedures such as blood draws and lumbar punctures are directly initiated by another person rather than being the result of employment activities that are organized by others but where the injuries are incidental to those activities.83 In addition, people may project some of their attitudes about the ethics of medical care, whose goal is to promote the interests of patients, onto the ethics of medical research, whose goal is to yield generalizable knowledge. And this is so even if these activities or relationships should be governed by different norms. Even if people are told that the purpose of research is not to benefit them but to benefit others, they may still tend to assume that physicians and the medical profession are seeking and should always seek to benefit those with whom they interact.
If people would have an aversive reaction to the prospect of coercive participation or at being involved in research without consent (should they come to know about it), how much moral weight should we assign to the simple fact of that aversive reaction? I am not sure, but there are several reasons to take it seriously. First, it is possible that these views reflect a principle of some importance even if those who hold this view are unable to articulate what it is. Second, even if these feelings and beliefs are not independently defensible, they may exert their own moral force. As Nir Eyal puts it (in a related context), ‘. . . the culture of respect for autonomy is beneficial and worth preserving . . . from a consequentialist standpoint. Protecting a culture of respect weighs heavily in support of cultivating opposition to coercion in spheres where coercion is likely to retain its public image as an utter violation of the respect’.84 And this is so even when the use of coercion does not (as I have argued) actually constitute a violation of such respect. The history of abusive medical research (and the perceptions of that history) casts a long shadow even if the egregious abuses of the past are unlikely to reoccur under the present regulatory regime.
A policy of coercive participation might also undermine trust in and support for the research enterprise. It might weaken the public's willingness to support the funding of medical research. And it may alter the public's understanding of the physician–patient relationship. To take but one example, much research with patient subjects is facilitated by treating physicians who identify patients as prospective subjects in research protocols. If patients do not trust their physicians to be concerned only or at least primarily with their interests, then people's trust in their physicians may be weakened.
Along related lines, Alex John London has argued that the system of research oversight by ‘committees of diverse representation’ serves a crucial societal function in addition to protection of individual subjects and the prevention of abusive research: ‘It helps to provide a credible social assurance to the American people that social institutions, funded by their tax dollars and empowered to advance their health and well-being work to: [among other things] respect and affirm the moral equality of all community members’.85 London argues that although individual researchers and projects might benefit from the use of coercion or the use of recruitment measures that involve less than robust consent by participants, the use of such measures would undermine the support on which all rely. From this perspective, it does not much matter whether the use of coercion deserves to be viewed so negatively. As long as its use would in fact undermine support for the research enterprise, that is a good reason to avoid it. In sum, we can't count the benefits of using coercion in our moral calculus without also counting its negative effects, and, at the end of the day, the game of facilitating recruitment may not be worth the candle.
f) Non-coercive strategies of recruitment. Although the problems in the way of the timely recruitment of subjects warrant taking coercive participation seriously, the case is weaker if there are non-coercive or consensual means available. As Victor Tadros puts it, there is a ‘comparative dimension’ to the justification of using coercion.86 Although we may be justified in conscripting people into the military when there are no feasible alternatives, we may not be justified in doing so if we can recruit a sufficient number of qualified persons by offering incentives that are compatible with voluntary consent. And we may not have explored the full potential of recruiting research participants by expanding the proportion of trials that offer payment and by increasing the level of payment offered to subjects.
Would increased reliance on incentives generate a sufficient number of subjects when it is otherwise difficult to do so? For the most part, I think the answer must be yes. To the extent that people avoid participation because of the burdens of research—time, inconvenience, and pain—it should be relatively easy and morally unproblematic to overcome such resistance through the offer of payment. We can surely recruit healthy volunteers to serve as controls in the Alzheimer's study by paying them to undergo a lumbar puncture, and patient subjects might also be paid to accept the extra burden of research-related procedures that do not involve great risk when their care is not compromised. This would include interviews or questionnaires about their health status or outcomes, blood draws, blood pressure readings, and the like.
To the extent that people avoid participation because of the perceived risks of participation, it will be somewhat more difficult to overcome such resistance through offers of payment, but it will often be eminently feasible to do so. First, much research does not in fact pose particularly high risks. Consider what might seem to be a counterexample—a Phase I challenge study of an experimental cholera vaccine at the University of Vermont. The study paid $3000 to those who were randomized to receive a new vaccine or a placebo and then be exposed to the cholera pathogen.87 Whether or not they get sick, the participants can expect to spend at least 10 days in the hospital. There are no serious long-term health risks to cholera if the symptoms are controlled. Those who get sick will experience considerable discomfort and dehydration—they will have a very bad case of diarrhea. The biggest danger is dehydration, but this poses few problems when the symptoms occur in a controlled hospital setting where subjects can receive oral solutions or IVs to maintain their fluids and electrolytes. The researchers had few difficulties recruiting subjects when they offered $3000.
Or consider participation in many randomized controlled trials, including comparative effectiveness trials of standard interventions. These studies may involve research-related procedures beyond the standard treatments, but being randomized to one of those treatments poses little ex ante incremental risk if the arms of the trial are roughly in equipoise and if the patients need one form of treatment or another. I see no reason to doubt that some people would be willing to be randomized for an appropriate payment.
Second, there is nothing unusual or untoward about the idea that people will accept risks in exchange for financial gain or reducing financial loss. People regularly and reasonably accept the risks of employment (think lobster fishing, coal mining, tunnel digging, truck driving, structural steel work, logging, firefighting) in exchange for a wage. More generally, people trade off risk and financial benefit in many everyday decisions. People will go by car rather than fly in order to save money, and they may buy less expensive cars rather than more expensive but safer cars.
Third, even if we focus on medical care (as contrasted with research), there is nothing unusual about the trade-off between medical risk and financial benefit. People will take older and cheaper generic drugs rather than more recent and superior drugs still on patent. People will avoid seeking medical care to save money. Peter Ubel has argued that doctors should discuss out-of-pocket costs with patients just as they discuss any side effects—‘the financial burden of paying for medical care can cause more distress in patients’ lives than many medical side effects, and patients can decide whether any of the downsides of treatment are justified by the benefits’.88 Given that people are willing to accept risks to their life, health, and well-being for financial reasons, there is no reason to think that we could not get many people to accept the risks of participation in research if they were paid an adequate amount, especially if they received appropriate compensation for research-related injuries.
If the problems in the way of recruiting research subjects would respond well to the use of incentives, it is an interesting question as to why incentives are not used more expansively and enthusiastically. Although research sponsors may not want to spend the money, it is likely that the greater problem is that IRB members tend to worry about the ethics of paying research subjects.89 Some of these worries are more valid than others.
First, it may be objected that the use of payment may yield a subject class that is biologically unrepresentative of the target population of the intervention being tested and thereby compromise the scientific validity of a study. In addition, the use of payment may compromise scientific validity if it leads prospective subjects to lie about or withhold information that would lead to exclusion from the study. Although payment should not be used if it compromises scientific validity, its use is often quite compatible with scientific validity when proper controls are in place, and so I will assume that is so in what follows.
Second, it might be objected that increasing the prevalence and amount of payment might constitute coercion or undue inducement and thereby jeopardize the validity of consent. Offers of payment do not coerce because they do not constitute a threat of harm for non-participation. Some think that one is coerced or that one's consent is not voluntary if one has no reasonable alternative but to accept an offer of payment in exchange for participation. I disagree. After all, a patient is not coerced to consent to medical treatment just because she has no reasonable alternative.90 And contrary to what many believe, a prospective subject is not unduly induced or influenced to participate simply because an inducement gets them to participate when they would otherwise not do so. After all, there is nothing morally problematic about inducing someone to mow one's lawn by offering them $20 to do so. Rather, offers of payment constitute an undue inducement and thus compromise the validity of consent if and only if they distort the prospective subjects’ ability to weigh the risks and benefits of participation, and there is little evidence that payment leads to such distortion.
Third, it may be objected that even if payment does not compromise the validity of consent, the use of payment as a recruitment strategy will unfairly burden the poor. If we pay people from public funds to participate in medical research and if the subject pool is disproportionately poor, then the affluent are effectively using the tax system to buy their way out of participating in research.
There are several different albeit related worries here. A democratic or egalitarian argument might claim that it is important that all citizens do their part in providing certain services. There may be something to this thought, but the reasons are not that strong. If we are prepared to allow a military force that is based on the use of incentives rather than conscription, despite its demographic unrepresentativeness, I see no reason to think that this argument should disallow a system of using incentives to recruit research participants.
Second, it may be thought that it is unseemly for the affluent to pay people to serve as research participants in their stead. But paying people to do things for us is a characteristic of virtually all work. We pay others to manufacture our cars and clothing, to mine coal, to serve in the military in our stead, to provide public services such as firefighting and police protection and to provide personal services such as landscaping, massage, hair styling, garbage collection, house cleaning, waiting tables, and the like. And it is hard to see why we should regard paying people to participate in research as morally unworthy or unseemly while it is perfectly permissible to pay people to perform these other—often dirty and disagreeable—tasks.
Third, it may be thought that an expanded use of payment will unfairly burden the poor because they would be accepting a disproportionate share of the risks and burdens of participation in research. This argument depends on a dubious conception of ‘burden’. The question is not whether there are disagreeable dimensions (risks and burdens) of participation in research any more than whether there are disagreeable dimensions to working. The question is whether the value of payment to the subject is greater than the disvalue of the risks and burdens of participation. And, if it is, then those who participate are benefitted and not burdened by participation, all things considered.
Fourth, it may be argued that an increased use of payment would reduce the willingness of people to participate in research altruistically. As Richard Titmuss argued with respect to blood donation, people may want to contribute something that cannot be purchased and so the use of payment may deter some people from participating even if it also incentivizes others.91 There are two issues here. First, and with respect to recruitment itself, if the overall effect of payment on recruitment were negative, then we could not justify using payment as a recruitment strategy. But the evidence suggests that Titmuss is wrong; the overall effect of payment for blood is to increase ‘donations’.92 Similarly, it is highly probable that even if the use of payment reduces altruistic participation, its overall effect is to facilitate recruitment. Second, it may be argued that it is of sufficient independent moral importance that people participate in research for altruistic reasons such that we should avoid using incentives even if it facilitates recruitment. I have not encountered a plausible statement of this argument.
There is another way to put the general argument against coercive participation in the face of the option of using incentives. As I have noted above, the Common Rule allows for waivers of informed consent only when there is no ‘practicable’ way to conduct the research with valid consent. Let us assume that the regulation's ‘practicability’ criterion reflects a sensible ethical position. If the use of incentives is compatible with valid consent but the use of coercion is not, and if it is possible to facilitate recruitment through incentives rather than coercion, then it is simply not true that there is no practicable way to conduct research without the use of coercion.
JUSTIFYING COERCION (A SUMMARY)
I have argued that even if it would be legitimate for the state to coerce people to participate in biomedical research under certain conditions, it may still be unjustifiable to do so all things considered. As a matter of first-order moral judgement, the use of coercion may be unjustifiable because a coercive system would be inefficacious or too harsh, because the benefits would not be sufficiently large to override the value of autonomy and control of one's body, because the negative externalities are too great, and because there are incentive-based systems available that could generate an increased and faster rate of recruitment.
THROUGH THE BACK DOOR
I have argued that no simple principle would justify CR or entail that coercive participation in interventional biomedical research is illegitimate. Contrary to what is often supposed, it is simply not true that informed consent is a fundamental ethical requirement of research or biomedical research. Still, I have argued that, all things considered, the balance of moral reasons might well tell against the use of coercion in most cases. Other things being equal, it is certainly desirable to seek informed consent. But even if an accurate first-order moral calculation justified the use of coercion in some cases, there may be good second-order reasons to adopt a general prohibition against its use in interventional biomedical research while, perhaps, still allowing the use of coercion in non-interventional biomedical research or behavioral research such as the US Census. More generally, there may be good second-order reasons to adopt CR for interventional biomedical research while rejecting CR as a general requirement for ethical research, per se.
Here we may make a distinction between (1) research without valid consent and (2) coercive participation. It is entirely possible, nay likely, that we can follow the general approach of the Common Rule and carve out exceptions to CR that allow for research without consent or without valid consent (or with deceived consent) under certain specific conditions without undermining a commitment to the general importance of consent. The best conception of these exceptions to CR may not be identical with the provisions in the Common Rule, but they will be similar. Not only will such exceptions be defensible on first-order moral grounds, but there may be no strong second-order reasons to bar such exceptions. But coercion is different. Even if an omniscient moral calculation might support the use of coercion in certain cases, there may be compelling second-order reasons to bar coercion in all but a few cases given that government officials are not omniscient reasoners and are certainly not perceived as such. Given the value of clear and firm rules, it may be better to adopt a simple inflexible prohibition against coercion under the actual conditions in which we live. Or, to put the point slightly differently, we may sensibly decide to treat or regard the use of coercive participation as illegitimate even if it is not illegitimate at its core.
Here we encounter an issue that is well known in the law, namely the choice among using rules, standards, and principles. Rules are the most constraining and rigid. It is a rule that one must be at least 21 to buy alcohol in Vermont or that one must be at least 35 to be the President of the United States. A rule may have to be interpreted, as in the old legal chestnut as to whether a rule that says ‘no vehicles in the park’ applies to bicycles or toy trucks or an old tank mounted on a platform. But once a rule has been interpreted, the application of the rule to the facts is relatively straightforward. The question is not whether a person is ‘mature enough’ to drink, but whether she is 21.
Standards define a set of mandatory considerations but provide for a greater range of choice and discretion by decision-makers. Consider child custody disputes. A law that states ‘Custody should be awarded to the mother or the primary caregiver if she (or he) wishes to have custody’ would be a rule. Such a law allows little judicial discretion. By contrast, a law that says, ‘In awarding custody, courts should be guided by the “best interests” of the child’ would be a standard. It specifies a mandatory and exclusive guideline, but judges would have considerable discretion as to how to apply it.
By contrast with rules and standards, principles are even less constraining. They identify a consideration that should inform a decision, but they allow that other considerations may be relevant. Consider a law that states, ‘In sentencing a person convicted of a crime, judges should take into account the severity of the crime’. This principle does not exclude other considerations, such as the prior record of the criminal or whether he represents a danger to the community. It merely states that the severity of the crime should be a factor in the decision.
In a world with excellent decision-makers and widespread trust in their capacities, we would not need to rely (so much) on hard and fast rules or standards. With respect to the problems discussed in this article, we would ask the decision-makers to decide whether the balance of justificatory considerations requires that subjects be asked for their informed consent, but we would allow them to treat these moral considerations as principles and thus approve the use of coercion (or allow for research without valid consent) when it is justified, all things considered, and disapprove its use when it is not.
There are at least three related difficulties with opting for discretionary decision processes as opposed to using rules or standards. First, such processes are liable to excessive mistakes. As Frederick Schauer puts it, ‘rule-based decision-making is premised in part on the belief that none of us, ordinary or not, have the mental capacity incessantly to consider all of the things that an “all things considered” decision-making model requires of us’.93 Consider the regulation of traffic at intersections. The societal goal is to move traffic through the intersection quickly and safely. There are some intersections where it is efficient and sufficiently safe to use no signs or yield signs. We ask drivers to go when it's safe and yield to other cars when it's not. But there are many intersections where the advantages of allowing drivers to use discretion are outweighed by the dangers. The aggregate cost of even relatively few accidents (frequency × magnitude of cost) may vastly outweigh the benefit (time, fuel, etc.) of avoiding unnecessary stops. And so we use stop signs even though it would often be perfectly safe for drivers to proceed cautiously through the intersection without stopping.
Second, there are social costs to allowing decision-makers to use discretion when (too) many suspect that the criteria are not fairly or correctly applied and cannot reliably predict the way in which such decisions will be made. Consider the decision as to whether to allow someone to purchase alcoholic beverages. We could ask sellers to evaluate the maturity of the purchaser rather than use an age-based rule. And it is possible that sellers could do a better job of excluding the immature and including the mature than mechanically applying an age-based rule. But, in part, because we do not trust decision-makers to apply that criterion in a reliable or fair way, we prefer to rely on an arbitrary age, full well understanding that this rule allows some to buy alcohol whom we should not allow to buy and excludes many who are mature enough to consume alcohol. Such is life.
Third, discretionary decision processes can place excessive burdens on the decision-maker. In some contexts, those burdens are acceptable. It is plausible to suppose that in child custody cases, the advantages of allowing discretion are sufficient to outweigh the inevitable bad decisions it allows and the costs of litigation and bargaining that it encourages. And the cases are not so frequent as to impose unacceptable decision-making costs on family court judges.94 In other contexts, however, the burdens are excessive both psychologically and economically. We reduce decision costs by using rules rather than standards or principles.
Mill appeals to something like this argument for rules in defending his harm principle. Anticipating the argument that it may sometimes be best to paternalistically interfere with a person's decision, he replies that the ‘strongest of all the arguments against the interference of the public with purely personal conduct, is that when it does interfere, the odds are that it interferes wrongly, and in the wrong place’.95 This line of argument need not deny that interference with ‘purely personal conduct’ is sometimes justified. It actually assumes it. After all, for it to be the case that the odds are that interference with ‘purely personal’ conduct is wrong, it must be the case that interference with such conduct is sometimes right. In effect, Mill is claiming that because interference with ‘purely personal conduct’ is usually wrong and because society cannot be trusted to interfere primarily when it is likely that such interference is right, it is better to adopt a rule that bars paternalistic intervention. Better to treat all such interference as illegitimate rather than allow decision-makers to determine when such interference is justified and when it is not.
There are numerous decision contexts in which we forego attempting to use theoretically optimal principles and make do with rules that are good enough and command widespread social acceptance rather than rely on principles or standards. Consider sexual relations between psychotherapists and patients.96 Such relations might be morally permissible if both parties could give valid consent and if such relations were not harmful to patients. But even if those conditions sometimes obtain, as must be the case, there is a good reason to think that neither psychotherapists nor patients are well positioned to judge when that is so. Given that the ‘odds are’ that a patient's consent is tainted by transference or a function of underlying mental disorders or that the psychotherapist's judgement is tainted by countertransference, and given that such relations are likely to be harmful to the patients or interfere with a beneficial psychotherapeutic relationship, society is well advised to adopt a hard and fast ban on such relations. It is sometimes said that psychotherapy patients can never give valid consent to sexual relations with their psychotherapists. I doubt that this is actually true. Nonetheless, it may be quite sensible to follow a rule that always treats such consent as invalid or as insufficient to render such relations permissible.
So, too, for the use of coercive participation in interventional biomedical research. Given that the use of coercion would only rarely be justified, and given the choice between an unreliable mechanism for determining when coercion should be used and the adoption of a rule that prohibits its use, it might be preferable to draw a bright line around interventional biomedical research and simply bar the use of coercion. We would continue to allow coercion in other contexts, such as requiring people to serve on juries or to recycle, but it might be better to adopt a rule that would ban the use of coercion in all interventional biomedical research than to open the door to allow decision-makers to exercise discretion in determining if and when it should be permitted. As a principle of second-order morality, we adopt the rule that a certain class of research cannot go forward without informed consent.
I say a certain class of research. In the context of social and behavioral research or non-interventional medical research where the risks of participation are low and it is not feasible to garner individual consent, I believe that the Common Rule's criteria for waivers of consent are on the right ethical track in adopting a ‘standards’ rather than ‘rules’ approach. They allow IRBs to exercise discretion and allow for research without consent, although IRBs no doubt sometimes refuse such waivers when they should grant them, and sometimes grant them when they should not. The more discretionary approach to deciding when to require informed consent works reasonably well because most research without consent (as in cluster randomized trials) or without valid informed consent (as in social and behavioral research that uses deception) is of relatively low visibility. It does not seem to generate the sorts of negative externalities that would likely be generated by the use of coercion. By contrast, given that the costs of allowing coercive participation in the biomedical context would be quite visible and provide a field day for Fox News, it is probably better to treat coercion as illegitimate as a matter of course and to require informed consent.
There is also a political dimension to second-order morality. Any justifiable policy must pass the test of democratic legitimacy. It is plausible to assume, for example, that the conditions for altering or waiving informed consent as specified in the Common Rule meet that test—the law that includes these provisions was approved by Congress. But even if I am right in suggesting that participation in research often constitutes a collective action problem of the sort for which we regularly use coercion by the state, it is not generally discussed or seen in those terms. It continues to be viewed, and understandably so, as an interaction between investigators and subjects, where individuals are not permitted to intentionally put others at risk or invade a person's body without consent. Although much non-interventional research proceeds without consent or without informed consent, a generalized commitment to the value of consent (subject to specific exceptions) is well entrenched. So even if people should treat coercive participation as legitimate and potentially justifiable, it is unlikely that they will do so barring a significant change in public opinion. And that makes a change in policy both politically unlikely and morally questionable given a commitment to democratic norms.
To exemplify the previous point, consider the case for barring the ownership of handguns. If we could turn back the clock such that the Constitution did not include the Second Amendment, then we might be well served by a general prohibition on handguns. But there is no turning back, even if a ban on some sorts of firearms (such as assault weapons) is at least a political possibility. In a similar vein, I once argued that there are good reasons to adopt a policy of compulsory voting in the United States, as has been done in several other Western democracies. I also argued that ‘it is a good idea whose time is either past or has not yet come’.97 Also along similar lines, Aaron Spital has argued that while a policy of conscription of cadaveric organs for transplantation would save lives and would pose no harm to the dead, most people oppose such a policy, and so he reluctantly concludes that this is a ‘stimulating’ idea whose time has also not yet come.98
Even if coercive participation were legitimate and more often justifiable than I am inclined to think, much the same may be true in the context of interventional biomedical research. And this is particularly so given the fear—supported by many bioethicists and the subject protection industry—that any weakening of CR would put us on a slippery slope to Nazi-like or at least Tuskegee-like experimentation with human subjects. It is true that people's views can change. Same-sex marriage was not on the radar screen 20 years ago, but is now widely accepted. Still, I do not think that society's view about the importance of consent in interventional biomedical research is likely to witness a significant change in the foreseeable future.
CONCLUSION
The major purpose of this article is to ask why we should require informed consent to biomedical research. As an argumentative strategy to make progress on these (and related) questions, I examined the case for the legitimacy of coercive participation in interventional biomedical research. Many seem to think that it is obvious that coercive participation would be wrong and that it is also obvious why that is so. I have argued that the principle and its justificatory story are more complex and pluralistic. It would be nice if we were able to ground CR or a ban on coercive participation in a simple and uncontroversial ethical principle such as respect for persons or not treating people merely as a means or the sanctity of a line around ‘the body’. But if I am right, that is not to be.
I have argued that if we view participation in interventional biomedical research as an interaction between the state and the individual, then no straightforward argument for regarding coercive participation as illegitimate or for requiring voluntary and informed consent for participation in biomedical research can be made to work. We cannot get to that view through the front door. But we can get there through the back door, by seeing the claim that coercive participation is illegitimate as justified as a second-order principle that is rooted in our lack of confidence that any institution has the capacity to make reliable judgements as to when coercive participation is justified by first-order moral principles. Indeed, the same strategy can justify treating all deviations from CR as illegitimate in interventional biomedical research.
As I have argued throughout, it is the interventional dimension of research that is crucial to the argument for CR, rather than the fact that any such intervention is undertaken as research. We have already accepted—as we should—a regime in which a great deal of non-biomedical research can take place without the informed consent of participants. We simply do not believe that there is a strong presumption that it is unethical to engage in research as such without the subject's valid consent. Indeed, and as I have noted above, we are prepared to conduct much biomedical research without informed consent as in cluster randomized trials and perhaps in comparative effectiveness studies when they do not involve biomedical interventions that would not occur if the subjects were not participating in research. And if the advocates for a learning health care system have it right, these exceptions should grow in the coming years, particularly given the possibilities of using ‘big data’ resources with medical records, tissues, etc. Of course, even if consent is not required in many types of biomedical research, it may still be morally desirable to obtain consent when it is practicable to do so. But the reasons for seeking informed consent will fall far short of the reasons that are often advanced in its defense.
If I am right, the need for consent in interventional biomedical research or, indeed, in any form of research may have little to do with the fact that the interaction between investigators and subjects is undertaken in pursuit of generalizable knowledge. As an interaction between individuals (as opposed to an interaction between the state and individuals), there are some actions we can undertake that affect others that do not require consent and some that do. And we have to determine when and why consent is required. We can do lots of things that have adverse effects on others without their consent, but, as a general rule, we are not entitled to touch or invade another's body without their consent. The targets of such interventions should be informed of the purposes, risks, and benefits of such interventions. Thus, they should be informed that the purpose of the intervention is research—not so much because ‘it's research’ and research is subject to special ethical principles. Rather, targets of such interventions should have the information relevant to an intelligent decision—whatever the purposes of an intervention. In addition, and as a general ethical principle, we are not entitled to ask others to spend time on our projects—whatever they are—without their undeceived consent, whether or not our projects have anything to do with the development of generalizable knowledge.
I am keenly aware that the argument I have given for requiring consent to interventional biomedical research and for regarding coercive participation as illegitimate will prove unattractive to many, as it is decidedly deflationary, inelegant, partially consequentialist, institutional, and, dare I say, political. I understand the attractions of Occam's razor. But if I am right, the truth about CR and the legitimacy of coercive participation is indirect, deflationary, inelegant, partially consequentialist, institutional, and political.
Acknowledgments
The author would like to acknowledge the comments offered by too many colleagues and friends to mention. They know who they are. He would also like to acknowledge the advice of anonymous reviewers for this journal. The views expressed in this article are those of the author. They do not represent the views of the National Institutes of Health or the Department of Health and Human Services. This manuscript was developed while the author was employed as a Research Scholar in the Department of Bioethics, National Institutes of Health.
Footnotes
http://www.hhs.gov/ohrp/archive/nurcode.html (1949) (accessed May 16, 2014).
45 C.F.R. § 46 (2009).
http://www.wma.net/en/30publications/10policies/b3/ (accessed May 16, 2014).
http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html (1979) (accessed May 16, 2014).
Robert M. Veatch, Ethical Principles in Medical Experimentation, in Ethical and Legal Issues of Social Experimentation 21 (Alice Rivlin & Michael Timpane eds., 1975).
Dan W. Brock, Philosophical Justifications of Informed Consent in Research, in The Oxford Textbook of Clinical Research Ethics 606 (Ezekiel Emanuel et al. eds., 2008).
Ezekiel J. Emanuel, David Wendler & Christine Grady, What Makes Clinical Research Ethical?, 283 JAMA 2701 (2000).
Paul Piff, et al., Higher Social Class Predicts Increased Unethical Behavior, 109 Proc Natl Acad Sci USA 4086 (2012). DOI: 10.1073/pnas.1118373109.
A book about people's propensity to lie and cheat relies heavily on deceptive experiments. See Dan Ariely, The Honest Truth about Dishonesty: How We Lie to Everyone—Especially Ourselves (2012).
Alex Capron, Subjects, Participants, and Partners: What Are the Implications for Research as the Role of Informed Consent Evolves?, in Human Subjects Research Regulation: Perspectives on the Future (I. Glenn Cohen & Holly Lynch, eds., forthcoming).
45 C.F.R. § 46.116(c) (2009).
The locus classicus is Paul S. Appelbaum, Loren H. Roth & Charles Lidz, The Therapeutic Misconception: Informed Consent in Psychiatric Research, 5 Int J Law Psychiatry 319 (1982).
Lynn A. Jansen et al., Unrealistic Optimism in Early-Phase Oncology Trials, 33 IRB: Ethics & Hum Res 1 (2011).
Brock, supra note 6, at 609.
Simon N. Whitney & Carl E. Schneider, Viewpoint: A Method to Estimate the Cost of Lives of Ethics Board Review of Biomedical Research, 269 J Intern Med 396 (2011).
Hans Jonas, Philosophical Reflections on Experimenting with Human Subjects, 98 Daedalus 219 (1969).
Id. at 230.
Id. at 245.
See John Harris, Enhancing Evolution: The Ethical Case for Making Better People (2007).
Yoram Unguru, The Successful Integration of Research and Care: How Pediatric Oncology Became the Subspecialty in Which Research Defines the Standard of Care, 56 Pediatr Blood Cancer 1019 (2011).
Owen Schaefer, Ezekiel J. Emanuel & Alan Wertheimer, The Obligation to Participate in Biomedical Research, 302 JAMA 67 (2009).
Ruth R. Faden et al., An Ethics Framework for a Learning Health Care System: A Departure from Traditional Research Ethics and Clinical Ethics, 43 Hastings Center Rep S16 (2013).
Id.
Michael Otsuka, Freedom of Occupational Choice, 21 Ratio 440 (2008).
Victor Tadros, The Ends of Harm (2013).
Thomas Nagel, Libertarianism without Foundations, 85 Yale Law J 136 (1975).
Joel Feinberg, Harm to Others (1984). The other volumes are Offense to Others (1985), Harm to Self (1986), and Harmless Wrongdoing (1990).
Id. at 6 (original emphasis).
Jay Katz, Human Sacrifice and Human Experimentation: Reflections at Nuremberg, 5 Occasional Paper (1996). http://digitalcommons.law.yale.edu/ylsop_papers/5/
Robert J. Levine, Consent Issues in Human Research, in Encyclopedia of Bioethics 1241 (Warren T. Reich ed., 2nd ed. 1995), reprinted in Ethical and Regulatory Aspects of Clinical Research (Ezekiel J. Emanuel et al. eds., 2003).
Ruth R. Faden & Tom L. Beauchamp, A History and Theory of Informed Consent (1986).
Rieke van der Graaf & Johannes J. M. van Delden, On Using People Merely as Means in Clinical Research, 26 Bioethics 76 (2012).
See Amartya Sen, Rights and Agency, 11 Phil & Pub Aff 3 (1982).
Samuel J. Kerstein, How to Treat Persons (2013).
Derek Parfit, On What Matters (2011).
Richard J. Arneson, The Shape of Lockean Rights: Fairness, Pareto, Moderation, and Consent, 22 Soc Phil & Pol'y 255 (2005).
Christine M. Korsgaard, The Reasons We Can Share: An Attack on the Distinction between Agent-Relative and Agent-Neutral Values, 10 Soc Phil & Pol'y 24 (1993).
Allen Buchanan & Dan W. Brock, Deciding for Others, 64 Milbank Q. 17 (1989).
Id. at 47.
Alex Capron, Legal and Regulatory Standards of Informed Consent in Research, in The Oxford Textbook of Clinical Research Ethics, supra note 6, at 620.
Richard J. Arneson, Self-Ownership and World Ownership: Against Left-Libertarianism, 27 Soc Phil & Pol'y 168 (2010).
Allan Gibbard, Reconciling Our Aims: In Search of Bases for Ethics (2008).
Buchanan & Brock, supra note 40, at 216.
John Rawls, A Theory of Justice (1971).
Tom L. Beauchamp & James F. Childress, Principles of Biomedical Ethics (5th ed. 2011).
David Archard, Informed Consent: Autonomy and Self-Ownership, 25 J Applied Phil 19 (2008).
Faden et al., supra note 23, at S20.
Nir Eyal, Review Essay: Is the Body Special? Review of Cecile Fabre, Whose Body Is It Anyway? Justice and the Integrity of the Person, 21 Utilitas 233 (2009).
Id. at 237.
Kasper Lippert-Rasmussen, Against Self-Ownership: There are No Fact-Insensitive Ownership Rights Over One's Body, 36 Phil & Pub Aff 86 (2008).
Robert Nozick, Anarchy, State, and Utopia (1974).
Charles Fethe, Beyond Voluntary Consent: Hans Jonas on the Moral Requirements of Human Experimentation, 19 J Med Ethics 99 (1993).
Charles Fried, Medical Experimentation: Personal Integrity and Social Policy (1974).
Matt Zwolinski, The Separateness of Persons and Liberal Theory, 42 J Value Inquiry 147 (2008).
Daniel J. Wakin, Not Everyone's in Tune over Precious Violins, New York Times, Jan. 28, at SR4 (2012).
Cecile Fabre, Whose Body Is It Anyway? (2006).
David Sobel, Self-Ownership and the Conflation Problem, in 3 Oxford Studies in Normative Ethics 98–122 (Mark Timmons ed., 2013).
James Griffin, On Human Rights (2008).
Judith J. Thomson, The Realm of Rights (1990).
Fried, supra note 55, at 23.
Jonas, supra note 17, at 245.
McFall v. Shimp, 10 Pa. D. & C. 3d 90 (1978).
Fried, supra note 55, at 23.
John Locke, Second Treatise of Civil Government, § 119, 1690.
Edward M. Gramlich & Larry L. Orr, The Ethics of Social Experimentation, in Ethical and Legal Issues of Social Experimentation 105 (Alice Rivlin et al. eds., 1975).
John S. Mill, On Liberty, c. 1 (2005).
Id.
Lochner v. New York, 198 U.S. 45 (1905).
John S. Mill, Principles of Political Economy (1848, book 5, c. 11, § 12).
David Brooks, The Great Divorce, New York Times, Jan. 31, at A 75 (2012).
When Seinfeld and friends were arrested for violating a good Samaritan law, their lawyer, Jackie Chiles, remarked: ‘You don't have to help anybody; that's what this country's all about’.
Arthur Ripstein, Three Duties to Rescue: Moral, Civil, and Criminal, 19 Law & Phil 751 (2000).
Faden et al., supra note 23, at S22.
I paraphrase an example from David Wendler, What We Worry about when We Worry about the Ethics of Clinical Research, 32 Theoretical Med & Bioethics 161 (2011).
Jonas, supra note 17, at 238.
Robert Wachbroit & David Wasserman, Research Participation: Are We Subject to a Duty?, 5 Am J Bioethics 48 (2005).
Frank R. Lichtenberg, The Impact of New Drug Launches on Longevity: Evidence from Longitudinal, Disease-Level Data from 52 Countries, 1982–2001, National Bureau of Economic Research Working Paper No. 9754 (Cambridge, MA: NBER, June 2003).
Anthony Miller et al., Twenty Five Year Follow-Up for Breast Cancer Incidence and Mortality of the Canadian National Breast Screening Study: Randomised Screening Trial, 348 BMJ g366 (2014).
Marilyn C. Morris & Robert M. Nelson, Randomized, Controlled Trials as Minimal Risk: An Ethical Analysis, 35 Crit Care Med 940 (2007).
Wendler, supra note 76.
Nir Eyal, Reconciling Informed Consent with Prescription Drug Requirements, 38 J Med Ethics 589 (2012).
Alex J. London, A Non-Paternalistic Model of Research Ethics and Oversight: Assessing the Benefits of Prospective Review, 40 J Law Med Ethics 930 (2012).
Tadros, supra note 26.
Ken Picard, UVM Will Make People Sick to Test a Cholera Vaccine, Seven Days, Aug. 14, 2013. http://www.sevendaysvt.com/vermont/uvm-will-make-people-sick-to-test-an-experimental-cholera-vaccine/Content?oid=2265749 (accessed May 20, 2014).
Peter A. Ubel, Doctor, First Tell Me What It Costs, New York Times, Nov. 3, 2013, at A25.
Emily Largent et al., Misconceptions about Coercion and Undue Influence: Reflections on the Views of IRB Members, 27 Bioethics 500 (2013).
Alan Wertheimer & Franklin Miller, Payment for Research Participation: A Coercive Offer?, 34 J Med Ethics 389 (2008).
Richard Titmuss, The Gift Relationship (1971).
Nicola Lacetera, Mario Macis & Robert Slonim, Economic Rewards to Motivate Blood Donations, 340 Science 927 (2013).
Frederick F. Schauer, Playing by the Rules (1991).
Even here, Jon Elster thinks there are good reasons to favor rules over something like the ‘best interest’ standard; see Solomonic Judgments (1989).
Mill, supra note 68, c. 4.
See Alan Wertheimer, Exploitation, c. 6 (1996).
Alan P. Wertheimer, In Defense of Compulsory Voting, in Nomos XVI: Participation in Politics 276–296 (J. Roland Pennock & John W. Chapman eds., 1975).
Aaron Spital, Conscription of Cadaveric Organs for Transplantation: A Stimulating Idea Whose Time Has Not Yet Come, 14 Camb Q Healthc Ethics 107 (2005).