Author manuscript; available in PMC: 2021 Nov 1.
Published in final edited form as: Bioethics. 2020 Apr 7;34(9):899–905. doi: 10.1111/bioe.12743

Using, Risking, and Consent: Why Risking Harm to Bystanders is Morally Different from Risking Harm to Research Subjects

Alec Walen 1
PMCID: PMC7541549  NIHMSID: NIHMS1578643  PMID: 32266732

Abstract

Subjects in studies on humans are used as a means of conducting the research and achieving whatever good would justify putting them at risk. Accordingly, consent must normally be obtained before subjects are exposed to any substantial risks to their welfare. Bystanders are also often put at risk, but they are not used as a means. Accordingly—or so I argue—consent is more often unnecessary before bystanders are exposed to similar substantial risks to their welfare.

Keywords: Risk, Bystanders, Informed consent, Means principle, Research ethics, Human subjects research


Subjects in studies on humans are used as a means of conducting the research and achieving whatever good would justify putting them at risk. Accordingly, consent must be obtained before subjects are exposed to any substantial risks to their welfare. Bystanders are also often put at risk, but they are not used as a means.1 Accordingly—or so I will argue—consent often need not be obtained before bystanders are exposed to any substantial risks to their welfare.

I am not suggesting that the risks imposed on bystanders are insufficiently grave to merit moral concern. Those risks can include the extremes of illness and death, which might impact entire communities—or even all of humanity, if deadly new communicable diseases are created and released.2 Nor am I suggesting that proposals to inform bystanders of the risks they face and to seek their consent where possible are misguided. My point is that subjects’ claims not to be put at risk of harm are significantly stronger, all else equal, than the claims of bystanders, and that, as a result, it is often permissible to proceed without the consent of bystanders even though it would be impermissible to proceed without the consent of subjects, all else equal.

The case for treating subjects and bystanders differently cuts in both directions: if the claims of bystanders are wrongly elevated and treated as being on a par with the claims of subjects, that might put unnecessary hurdles in the way of important research. Conversely, if the claims of subjects are wrongly denigrated and treated as being on a par with the claims of bystanders, that might undermine the importance of seeking the consent of subjects.

I proceed in six sections: First, I introduce a test case to illustrate my thesis. Second, I demonstrate that my thesis is not already generally accepted by others who write on the ethics of risk to bystanders. Third, citing examples and discussing moral theory, I explain why, absent consent, harmfully using another as a means is harder to justify than causing the same sort of harm as a side effect. Fourth, I explain why that distinction applies to research ethics, which deals in risk rather than straightforwardly causing harm. Fifth, I discuss two cases that may seem to be problematic for my thesis. Finally, I explain why my thesis does not show morally objectionable disrespect to bystanders exposed to risk from research.

TEST CASES

Compare two scenarios in which subjects and bystanders might each seek to exercise a veto over a research project insofar as it affects them. Assume that the research project promises to find a cure for a disease, a cure that would save thousands of lives a year. If the research is not carried out successfully, development and approval of the treatment will be delayed for at least a few years, and many thousands would likely die who otherwise would live. Against that backdrop, compare the following two cases:

Unwilling Subjects

The research would produce robust results only if 100 subjects participate in the study, and of the thousands of potential subjects (those who have some relevant condition), only 50 consent to being subjects. As a result, either the study’s results would not suffice to get the approval of some regulatory agency whose approval is necessary before treatment can begin, or an extra 50 people will have to be forced into being subjects without their consent.

Unwilling Bystanders

The research must be conducted in 100 different locations, but of the thousands of potential locations, only 50 can be found where there is not a bystander who refuses to consent to the study being performed there. As a result, either the study’s results would not suffice to get the approval of some regulatory agency whose approval is necessary before treatment can begin, or the objecting bystanders in 50 locations (we can assume there is only one in each) will be forced either to suffer the risks associated with the study or to disrupt their lives in a very substantial way (assume that moving away would be a cost even more dire than accepting the risk, so that accepting the risk is, in the end, what the objectors would be effectively forced to do).

To finish fleshing out the thought experiment, assume that the risks to the subjects in Unwilling Subjects are expected to be the same as the risks to the bystanders in Unwilling Bystanders: a 10% chance of developing a serious rash, and a 1% chance of paralysis. Assume that neither the subjects nor the bystanders would benefit directly from the research, but that they have an equal chance of benefitting from any treatment that would result from the research. And assume that there is no feasible way, short of using coercive pressure, for the researchers either to induce more subjects to participate in Unwilling Subjects or to placate the concerns of the bystanders in Unwilling Bystanders.

I believe and will argue that it is easier to justify proceeding in Unwilling Bystanders than in Unwilling Subjects. If too few subjects are willing to participate in the research, we should accept that cost (i.e., the delay in developing and getting approval for the sought-after treatment), because humans are not guinea pigs.3 But if there are too many bystanders who object to the research, it might be morally reasonable to reject their veto. This is because bystanders’ claims to withhold their consent and thereby veto what researchers would otherwise have morally sufficient reason to do are weaker than the analogous claims of subjects.4

CONTRASTING MY THESIS WITH THE GENERAL LINE ON SUBJECTS AND BYSTANDERS

The standard line of writers discussing the ethics of imposing risks on bystanders is to note that bystanders can be put at risk just like subjects, and then to infer from that fact alone that the ethics of consent should be the same for both. Here are three examples:5

  • Evans, Lipsitch, and Levinson appeal to the fact that research can risk harm to both subjects and bystanders to ‘argue that it is a conceptual and ethical mistake to maintain a hard, binary distinction between research involving direct human participants,’ i.e., subjects, and research that only ‘poses risks to non-participants.’6

  • Barker and Polcrack argue that human moral status implies that ‘each person subjected to risk,’ i.e., bystanders as well as subjects, ‘must give informed consent to that risk.’7

  • Battin et al. treat ‘third parties who are at foreseeable and direct risk of contagion when research with human subjects involves a communicable agent’ as being in a position in many ways ‘analogous to direct subjects of the research.’8 The reason they cite is ‘that the research puts them at risk … in the same way direct subjects are put at risk.’9 They then propose a solution that is effectively the same for bystanders as the one accepted for subjects: if the risks to bystanders ‘are serious and unavoidable … their informed consent should be required’ just as it is for research subjects.10

In all three cases, the authors look to the principles that animate the guidelines protecting subjects—e.g., the Belmont Report—and seek to reason from those principles to the protection of bystanders. But in doing so they fail to note that the guidelines they appeal to are especially careful to protect subjects from harm because subjects are used as a means of doing research.11 In what follows, I argue that this assimilation of bystanders to subjects is misguided.

IN DEFENSE OF THE MEANS PRINCIPLE

The “means principle” holds that it is significantly harder to justify using people as a means and thereby causing harm to them than it is to harm them as a side effect of pursuing some otherwise valuable end.12 A common variation is the “Doctrine of Double Effect” (DDE) according to which it is significantly harder to justify intending harm to people than producing it as a merely foreseen side effect of pursuing some otherwise valuable end.13 Because I am skeptical that intentions have the sort of significance the DDE ascribes to them, I will focus my argument here on the means principle. If it is morally sound, then it follows—or so I will argue in the next section—that it is harder to justify doing research that imposes risks on subjects without their consent than it is to do research that imposes equivalent risks on bystanders without their consent.14 My task in this section is to argue for the means principle.

There are two ways to argue for the means principle: by appeal to the cases that it seems to explain, and by appeal to deeper moral theory. I propose to canvass both here.

Here are three pairs of cases which seem to call out for something like the means principle:

  • Attacking noncombatants as a means of winning a war vs. killing noncombatants as collateral damage caused by pursuing other means to win a war.

  • Framing and punishing innocent people as a means of deterring crime vs. foreseeing that one is punishing a certain number of innocent people (though not knowing who they are) as an inevitable side effect of using an imperfect system to convict and sentence the guilty.

  • Withholding treatment from sick people so that they will die and their organs may be used to save a number of others vs. withholding treatment from some so that the scarce resources can be used to save a greater number.15

In all three cases, the first option involves using people as a means without their consent, and it is intuitively impermissible even if the good to be achieved substantially outweighs the harm that would befall those used as a means, but the second option is permissible as long as the good to be achieved clearly outweighs the harm caused as a side effect.

But the question has been raised: how can mere causal role carry so much moral weight?16 To answer that question, I offer here a quick sketch of my defense of the means principle in terms of another principle that I call the “restricting claims principle” (RCP).17

The essence of the connection between the means principle and the RCP can be spelled out as follows: Those who are harmed as a side effect of an agent pursuing some end have what I call restricting claims not to be harmed; those who are used as a means of pursuing some end have what I call non-restricting claims not to be harmed as a result of that use.18 Restricting claims (and only restricting claims) push to restrict an agent relative to her baseline freedom of action. And this explains why restricting claims (those of people harmed as a side effect) are weaker, all else equal, than non-restricting claims (those of people harmed through being used as a means).

The following is a useful heuristic for understanding how some claims can push to restrict an agent relative to her baseline freedom of action, and others not:19 Ask what difference it makes if a person is present with a claim that has to be respected as a right, where the baseline against which the difference is measured is the claimant being absent from the scene. If having to respect the claim as a right would restrict the agent against that baseline, then the claim is restricting, otherwise not.

The moral significance of a claim being restricting is that it pushes to make others normatively worse off than they would be if the agent were permitted to act on her baseline freedom. As a result, restricting claims (and only restricting claims) push to impose something morally like a negative externality on others. Just as agents must accept limits on their freedom so that they do not impose excessive costs on others, so the claims of those affected by agents must be limited so that they do not impose excessive costs on others. The moral point appeals to basic fairness: if a person’s claim pushes to make others worse off, then it should be weaker than otherwise identical claims that do not push to make others worse off. To avoid imposing costs to an excessive degree, restricting claims must be substantially weaker than otherwise similar non-restricting claims.

To illustrate, let us return to our test cases. Start with Unwilling Bystanders. If they were absent, then there would be no reason not to do the research. Against that baseline, if the bystanders are present and have claims not to be exposed to the risk of a rash or paralysis, claims that must be respected as rights, that would restrict the researchers from developing and getting approval for a treatment expected to save the lives of thousands. It would impose something like a negative externality on those potential beneficiaries. It would make the potential beneficiaries worse off by changing their normative status from people who could and should be saved to people who may not, given the facts, be saved.

Compare that to Unwilling Subjects. If they were absent, the researchers could get no closer to the 100 willing subjects they need to develop and get approval for the treatment they seek. Against that baseline, if the subjects are present and have claims not to be exposed to the risk of a rash or paralysis, claims that must be respected as rights, that would not restrict the researchers relative to their baseline freedom. The potential subjects’ claims not to be exposed to risk would not impose something like a negative externality on those who stand to benefit from the research. The potential beneficiaries could not be saved without those subjects, and they still cannot (permissibly) be saved if the researchers must respect the subjects’ rights not to be used as a means of performing the research.

Because the bystanders’ claims push to impose something like a negative externality, whereas the subjects’ claims do not, the bystanders’ claims should be seen as substantially weaker than the subjects’ claims.20

APPLICATION TO RESEARCH AND RISK

The implications of the means principle for research ethics seem straightforward, but one might object that harm to research subjects is normally not a means to achieving research aims; it is merely risked for the sake of those aims. Moreover, one might then argue that when harms do occur, they are an undesired side effect of the research. Thus, one might argue that the claims of subjects not to be harmed are claims not to be harmed as a side effect of doing the research.21 That would imply that the claims of subjects and bystanders are on a par, and that whatever protections subjects are owed are also owed to bystanders.

Here is an analogy by which to develop that objection: consider innocents who are harmed when juries or judges convict them by mistake.22 Assuming that they were not framed, we cannot say that they were harmfully used for the sake of the goods achieved by the criminal justice system. It is just an unfortunate side effect of having a criminal justice system that some people will be falsely convicted and thereby harmed by it. The thought is that the same could be said of those research subjects who are harmed: unless their harm is an aim of the research, it is merely a side effect of human subject research.

There are two problems with this objection. First, it misunderstands the nature of non-restricting claims; it mistakenly takes the harm or risk of harm as the thing that has to be the means to some end. But the nature of a non-restricting claim is that it is the claim of a person who would be used as a means to an end,23 and thereby harmed or put at risk of harm. The relevance of the harm or risk of harm is that its magnitude determines the weight of the non-restricting claim not to be used. Whether the harm is certain or risked, the claim not to be used as a means, and thereby exposed to it without consent, is quite strong compared to the claim not to suffer or be exposed to an equivalent risk of harm as a side effect.

Second, the analogy with punishment actually supports the use of the means principle to distinguish imposing risks on subjects and bystanders. To see this, start with the punishment analog. Mistaken convictions are inevitable, but harm to the innocent is not a means by which the system operates; it is a side effect of operating the system.24 Some amount of such harm is tolerable as long as the guilty have forfeited their right not to be punished, the good achieved by the system overall is sufficiently great, and there is no less harmful way to achieve it. That is quite different from justifying the framing of innocent people (those who have not forfeited their right not to be punished) as a means of achieving certain goals of the criminal law, such as deterrence.

The analogy in the research context arises from the fact that screening for valid consent is imperfect. I assume that most subjects have validly waived their right not to be used as subjects, but even so, some subjects will not have validly waived their right not to be so used—for example, they might not have been sufficiently well informed about the risks.25 Those who become subjects without validly consenting to do so would be put at risk as a side effect of running a research program that aimed to use only validly consenting subjects. Still, some amount of such risk is tolerable as long as most subjects have validly consented to participate, the good achieved by the research is sufficiently great, and there is no less harmful way to achieve it. And that is quite different from justifying the use of subjects, without their consent, as a means of achieving the goals of research.

In sum, the means principle does support a meaningful distinction between exposing research subjects to risk without their consent and exposing bystanders to the same degree of risk without their consent.

PROBLEMATIC CASES AND REPLIES

One may still wish to push back against my thesis by offering problematic cases that seem to present counter-examples. I discuss two sets of cases here.

First, consider three variations on a case in which Jane has to expose an innocent person to some risk to rescue a group of five others from kidnappers who threaten to kill them.26

Rescue 1

The kidnappers have credibly told Jane that if she plays Russian Roulette with Victor, pointing a gun at his head, spinning the barrel, and pulling the trigger, they will let the five go. The gun has five chambers and only one is loaded, imposing on Victor a 20% risk of being killed.

Rescue 2

The kidnappers are sitting under a tree limb, and Jane could shoot the limb, causing it to crash down on the kidnappers, thereby allowing the five to escape. But she sees that if she does so, there is a 20% chance that the limb will fall and kill Victor, who happens to be walking by, as well.27

Rescue 3

The kidnappers have credibly told Jane that if she gives Victor a paper cut on his thumb, they will let the five go. The problem is that Victor has just stuck his hand in water infused with a virulent bacterium, and there is a 20% chance that he will contract a fatal infection if he is cut before he washes his hands.

The means principle, as I have interpreted it, seems to imply that Rescue 1 and Rescue 3 are fundamentally the same and hard to justify, while Rescue 2 is substantially easier to justify. But it might seem that Rescue 3 is more like Rescue 2, because the harm inflicted is just a paper cut, and the 20% risk of death may seem to be a side effect of that.

I think there is something to this objection. If the risk imposed is sufficiently detached from the using, then it should not be taken into account as part of the using. To see this, consider:

Rescue 4

The kidnappers have credibly told Jane that if she gives Victor a paper cut on his thumb, they will let the five go. But if they let the five go, their exit path will take them over a bridge, imposing a 20% risk that debris will fall on and kill Victor, who is being held temporarily under the bridge.

In Rescue 4, the 20% risk of death imposed on Victor is a downstream consequence of using him as a means, but it also seems to be, more fundamentally, a side effect of the release of the five. His claim not to be exposed to that risk seems like a restricting claim, given that Jane has the baseline freedom to impose a paper cut on him to save the five.28

The question, then, is really whether Rescue 3 is more like Rescue 1 or Rescue 4. As I read Rescue 3, the conditions of Victor suffering a 20% risk of death from a paper cut are so closely connected to the paper cut that I read the case as being more like Rescue 1. But insofar as others read the case differently, that strikes me as unproblematic. Morality, like the law, works with some vague concepts, and the concept of being sufficiently connected to the using is one such vague concept. We have, I think, no choice but to do our best to interpret these concepts in a plausible and consistent way.

Turning now to the second problematic case, suppose that a study is proposed that would involve releasing mosquitos infected with a certain virus into a community, and then monitoring how well a certain treatment prevents disease symptoms. Suppose the researchers need to monitor only a few dozen members of the community to conduct the study. Those people would count as the study subjects. The others in the community who might be infected by the mosquitos are mere bystanders. The bystanders face the same risk of harm, however, and therefore it seems that they should have the same right to object as subjects.29

I think what makes this case seem like a counter-example to my distinction between subjects and bystanders is that the subjects and the bystanders in this case seem so interchangeable. Indeed, we can imagine the researchers bypassing the objections of potential subjects by simply finding a few dozen people who do not object, signing them up as subjects, and then treating anyone who objects as a mere bystander. In that context, the distinction seems to allow the researchers to game the situation and effectively bypass the requirement of subject consent.

But in truth there is no reason to accept that the requirement of subject consent becomes meaningless. If the researchers cannot find enough willing subjects, then they may not conduct the study. The real threat here is that they will too easily be able to simply impose the risk on bystanders by finding a few people who are willing to serve as subjects. Thus, this case serves as a good segue to the last section: ensuring that there are sufficient safeguards for bystanders.

A POSITIVE VIEW OF THE DUTY OWED BYSTANDERS

Having said that bystanders have claims not to be put at risk that are weaker than the claims of subjects, I also want to be clear: risk to bystanders is ethically important. It should be imposed only when the expected gains from the research are proportionately great and when there is no alternative to achieving those gains at less cost to bystanders or the public at large. Moreover, accepting the standard thought that claims not to be harmed, even as a side effect, are stronger than claims to be aided, we see that the notion of proportionality still favors bystanders over those who stand to be benefited by research (those whose claims to be aided push to justify risk imposition in the first place). My point in developing the contrast between subjects and bystanders is only this: when claims not to be harmed are restricting claims, they should not be taken to be much stronger than claims to be aided, all else equal.30

One implication of the necessity condition is that bystanders should in general be informed of the risks so that they can take action to protect themselves and thereby lower the risk.31 Failure to inform people who could protect themselves is normally an ethical failure because it normally imposes on them unnecessary risk.

It is also often morally important to get the consent of bystanders: it makes what would otherwise be ethically problematic much less so, and endangering people is always ethically problematic. Moreover, insofar as communities are at risk, it makes sense to let them have input into whether and how research is done in the community. This can be achieved through some combination of informing leaders and members of the affected community of the risks posed by the proposed research, creating avenues for consultation with interested parties, and giving weight to the votes of representative bodies—weight that would normally be dispositive if they legally ban the research in question in their community.32 Such a consultative approach is not only politically smart and likely to protect the prospects for future research; it is also morally important, as it allows community members to have the kind of control over their lives that democratic theory rightly emphasizes.

But we should not lose sight of the fact that giving communities input, or even a veto, with regard to research that affects them is not the same as giving bystanders a veto. ‘Collective informed consent’ is not a strict analog to individual consent, as the former allows the refusal of consent by a minority to be simply overridden by the will of a majority. A commitment to true bystander consent would give bystanders the same right to consent or withhold consent as subjects. But providing for true bystander consent is not merely more logistically difficult than providing for subject consent—a point that many authors emphasize.33 The means principle shows that it is also, and more fundamentally, less well justified.

CONCLUSION

People have very strong claims not to be used as a means of doing research, at least when the research imposes on them non-trivial risks of suffering serious harms. Thus, unless they consent to being subjects of research, they normally may not be treated as research subjects when doing so risks seriously harming them. Human welfare matters whether the humans in question are bystanders or subjects. But the claims of bystanders not to have risks imposed on them are substantially weaker than the claims of subjects. Even mere bystanders may not be unnecessarily exposed to the risk of harm; nor may they be exposed to risks that are disproportionately large, given the good that the research aims to achieve. But if a significant good cannot be achieved without imposing risks on nonconsenting bystanders, it may be justifiable to proceed even though the same good would not justify imposing the same risk on nonconsenting subjects.

Acknowledgements

I would like to thank Nir Eyal for inviting me to contribute to this collection, for his guidance in the bioethics literature, and for his incisive and extensive comments on an earlier version of this article. I would like to thank Glenn Cohen, Nir Eyal, Helen Frowe, Matthew Hanser, and Meira Levinson for their comments on the version of this article presented at the workshop Medical Study Risks to Nonparticipants: Ethical Considerations. Finally, I would like to thank the anonymous reviewer for Bioethics and Kim Ferzan, the reviewer who revealed her identity, for very helpful comments on two earlier drafts.

Funding: This work was supported by NIAID Grant 1 R01 AI114617-01A1 (HIV Cure Studies: Risk, Risk Perception, and Ethics).

References

  • 1.I use a functional definition of a subject and bystander that might not track the distinction as articulated in U.S. federal regulations (see 45 CFR 46.102(f)). Consider, for example, a case in which some experimental treatment is given to one group, and the study depends on comparing that group’s response with people in a control arm. Suppose the experimenters ensure that no third parties offer those in the control arm any treatment, and then collect information about them from general census data, without collecting any ‘identifiable private information.’ It is unclear whether blocking third-party action would count as an intervention under the regulations. But for my purposes, it would. Such regulation of the environment in order to run the experiment, with the aim of collecting information from people in the environment, treats them, functionally, as subjects.
  • 2.For a discussion of the more extreme risks, see Evans NG, Lipsitch M, & Levinson M (2015). The Ethics of Biosafety Considerations in Gain-of-Function Research Resulting in Creation of Potential Pandemic Pathogens. Journal of Medical Ethics. 41: 901–908; and Barker J, & Polcrack L (2001). Respect for Persons, Informed Consent and the Assessment of Infectious Disease Risks in Xenotransplantation. Medicine, Health Care and Philosophy. 4: 53–70.
  • 3.Of course, there may be other options. Perhaps more money can be found to increase the incentives to consent to be subjects. Perhaps it would be best to pressure the regulatory agency to allow treatment to go forward on the basis of inadequate evidence. But if these sorts of alternatives are unavailable or unwise, then it would be only in the most extreme cases that forcing non-consenting subjects to endure non-trivial risk of serious harms would be morally justified. I assume this case does not meet that threshold.
  • 4.What if no one would be exposed to any meaningful risk of serious harm? Subjects would still have stronger claims not to be affected than bystanders; only subjects could assert a dignitary interest in not being used as subjects without their consent.
  • 5.A fourth example is the paper in this symposium by Holly Fernandez Lynch.
  • 6.Evans, Lipsitch, & Levinson, op. cit. note 2, p. 904.
  • 7.Barker & Polcrack, op. cit. note 2, p. 59.
  • 8.Battin M et al. (2009). The Ethics of Research in Infectious Disease: Experimenting on This Patient, Risking Harm to That One. In Battin MP et al. (Eds.), The Patient as Victim and Vector: Ethics and Infectious Disease (pp. 164–183). New York: Oxford University Press, p. 166.
  • 9.Ibid.
  • 10.Ibid.
  • 11.Writers in the field of research ethics were not always as oblivious to the importance of not using people merely as a means. See, e.g., Jonas H (1969). Philosophical Reflections on Experimenting with Human Subjects. Daedalus. 98(2): 219–247; reprinted in Jonas H (1980). Philosophical Essays: From Current Creed to Technological Man (pp. 105–135). Chicago: University of Chicago Press; Fried C (1974). Medical Experimentation: Personal Integrity and Social Policy. New York: North-Holland Publishing Company; and Donagan A (1977). Informed Consent in Therapy and Experimentation. Journal of Medicine and Philosophy. 2(4): 307–329.
  • 12.For an introduction to other contemporary sources on the means principle, see Alexander L & Moore M, “Deontological Ethics”, § 2.2, The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Zalta EN (ed.), URL = https://plato.stanford.edu/archives/win2016/entries/ethics-deontological/.
  • 13.For an introduction to the Doctrine of Double Effect, see McIntyre A, “Doctrine of Double Effect”, The Stanford Encyclopedia of Philosophy (Winter 2014 Edition), Zalta EN (ed.), URL = https://plato.stanford.edu/archives/win2014/entries/double-effect/.
  • 14.Forfeiture is another potential path to getting around the restrictions imposed by the means principle. It explains, for example, why criminal punishment is morally permissible. See Wellman C (2017). Rights Forfeiture and Punishment. New York: Oxford University Press. But it is of marginal value to research ethics, which is, for good reason, loath to take people to have forfeited their right not to serve as research subjects.
  • 15.In my view, using need not be active; it is a question of the justification for causing or allowing something unwanted to befall another.
  • 16.See Scanlon TM (2008). Moral Dimensions. Cambridge, MA: Harvard University Press, p. 118.
  • 17.The sketch I offer tracks the position I developed in AUTHOR 1; I have since revised the account in AUTHOR 2 and refined it further still in AUTHOR 3, chapter 3. Others who have developed a similar position on the importance of not using people as a means are Tadros V (2011). The Ends of Harm. New York: Oxford University Press; Øverland G (2014). Moral Obstacles: An Alternative to the Doctrine of Double Effect. Ethics 124: 481–506; and Ramakrishnan K (2016). Treating People as Tools. Philosophy and Public Affairs 44: 133–165. The approach I and these others take is meaningfully different from the traditional Kantian argument. For an introduction to Kant’s means principle, see Johnson R & Cureton A, “Kant’s Moral Philosophy”, The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), Zalta EN (ed.), URL = https://plato.stanford.edu/archives/spr2018/entries/kant-moral/.
  • 18.In AUTHOR 3, op. cit. note 17, I change the label from “non-restricting” claims to “property” claims. Spelling out why requires going deeper into the foundations of the idea of an agent’s baseline freedom than I have room to do here.
  • 19.This heuristic is somewhat misleading, but given the limits of space, it will have to do. A more adequate account can be found in AUTHOR 2 and AUTHOR 3, op. cit. note 17.
  • 20.In AUTHOR 3, op. cit. note 17, pp. 88–89, I develop a separate argument for the strength of claims not to be used, appealing to what I call the “autonomy principle” and its strength relative to what I call the “welfare principle.” But the argument given in the text here suffices for present purposes.
  • 21.I got this objection from Nir Eyal in conversation; it tracks a certain kind of move used against those who invoke the DDE: that the harm is not intended but merely foreseen.
  • 22.Approximately 3% of convictions seem to be false convictions. See Laudan L (2016). The Law’s Flaws: Rethinking Trials and Errors? Milton Keynes, United Kingdom: College Publications, pp. 54–56.
  • 23.I also think a person has non-restricting claims with regard to the use of her property, but that is less relevant to research ethics.
  • 24.See AUTHOR 4.
  • 25.See Sreenivasan G (2003). Does Informed Consent to Research Require Comprehension? The Lancet 362: 2016–2018. If this problem is grave enough, then it undermines the assumption that most subjects have validly waived their right not to be used as subjects.
  • 26.I am grateful to Kim Ferzan for pressing me to explore this case. The reviewer’s versions of the case were closely connected to a case originally offered by Bernard Williams in Smart JJC & Williams B (1973). Utilitarianism: For and Against. New York: Cambridge University Press, p. 98. In those versions, the person exposed to risk faces a worse risk if the agent does not act. That makes it too easy to justify using as a means, because doing so is a Pareto superior alternative.
  • 27.In this case, I assume the kidnappers have forfeited their right not to be crushed by the tree limb, so their claims not to be killed carry very little weight.
  • 28.I describe a case like this in AUTHOR 3, op. cit. note 17, p. 92.
  • 29.I owe this example to Matthew Hanser.
  • 30.As I say in AUTHOR 3, op. cit. note 17, p. 85, restricting claims not to be harmed have strength “in the same ballpark” as competing claims to be given equivalent aid.
  • 31.For authors who emphasize providing information, see Kimmelman J (2007). Missing the Forest: Further Thoughts on the Ethics of Bystander Risk in Medical Research. Cambridge Quarterly of Healthcare Ethics 16: 483–490, p. 488; Resnik D & Sharp R (2006). Protecting Third Parties in Human Subjects Research. IRB: Ethics & Human Research 28: 1–7, p. 5.
  • 32.For authors who emphasize consulting with the community, see Barker and Polcrack, op. cit. note 2, pp. 65–66; Kimmelman, op. cit. note 31, p. 488. See also the article by Shah S et al. in this symposium. Note, I qualify the claim that the law should be dispositive because unjust laws may sometimes be ignored.
  • 33.For authors who emphasize the logistical difficulties of getting bystander consent, see Resnik and Sharp, op. cit. note 31, p. 5; Hausman D (2007). Third-Party Risks in Research: Should IRBs Address Them? IRB: Ethics & Human Research 29: 1–5, p. 4.
