Introduction
Cutting-edge neural device research -- such as thought control of a robotic arm,1 deep brain stimulation for psychiatric conditions,2 and brain-computer interfaces for communication -- holds great promise for improving health and well-being. Along the way to achieving this promise, neural device research involves human beings who put their brains and health in the hands of multidisciplinary teams, sometimes for the hope of clinical benefit, sometimes to advance science or help others. We might rightly think of them as brain pioneers.
Standard research ethics aims to ensure that patients are safe and that research is conducted only with their informed consent. But neural device research is on the cutting edge of several evolving issues in research ethics. A paradigmatic example of a cutting-edge research ethics issue in neural device research is the question of what happens to trial participants when a research study ends. Such studies typically require a significant investment of the participant’s time, and brain surgery raises a special set of concerns. In agreeing to brain surgery, participants agree not only to take on the risks of surgery on an organ that is vital to their functioning and well-being but also to allow researchers access to a part of their body that is responsible for mental experience, in ways that can feel intrusive or intimate.
Brain pioneers thus make a significant personal commitment to research. Some may also benefit significantly. They may find that experimental therapy controls their symptoms or gives them an improved sense of agency. Study protocols vary widely, based on conditions addressed, type of technology employed, and participant population, to name just a few factors, but often, at the end of a study, a protocol calls for explantation of the experimental neural device through another surgical procedure.3 Other protocols may offer the option of keeping the device implanted but only within a limited window of time, or with narrow restrictions on maintenance, which can be as little as a single battery change.4 If a participant has volunteered to participate in a trial, had hardware implanted in their brain, played a key role in figuring out how to make the implanted system work, and generally made themselves vulnerable for the sake of the research team and science, should they be cut loose at the end of the study?
For many neural device researchers working with human participants, what should happen at the end of the study is a troubling question without a clear answer. Individual researchers with a strong sense of obligation to their participants have sometimes worked hard to ensure continued access, even after the study ends or a company fails.5 As one researcher notes, “There’s no greater shame in our world than offering a service and taking it away. That’s always what keeps researchers up at night […] I think that you have to find a way to keep it in if the patient finds it to be beneficial.”6 But despite widespread recognition of the problem of post-trial obligations for neural device research, and despite a growing literature on the conceptual resources that might be brought to bear on it,7 the problem remains unresolved.
Understanding the why of post-trial obligations might help in figuring out the what and how questions. Rather than trying to resolve all the practical issues regarding how to allocate post-trial access across the wide variety of neural device trial types -- a complex task that requires input from stakeholders across many different organizations and areas of expertise and needs to be contextualized for the kind of device in question -- here we explore the basis for such obligations. We aim to provide a justification for post-trial obligations in implanted neurotechnology trials by considering what Henry Richardson calls moral entanglements.8
How moral entanglements matter in research ethics was explored by Richardson and Leah Belsky under the label of “ancillary care,” or care that “goes beyond the requirements of scientific validity, safety, keeping promises or rectifying injuries.”9 They argued that such care becomes an obligation of researchers based on features of their relationship with participants, who partially entrust the research team with access to their bodies and thus to information that would otherwise be private. Relationships of moral entanglement render participants vulnerable to and dependent on researchers’ expertise and create an obligation in researchers for gratitude, compassion, and engagement.10 Richardson and Belsky’s framework provides an excellent starting place for understanding how the relationships of vulnerability and dependence that develop in the context of first-in-human neurotechnology trials create new researcher responsibilities that extend beyond the typical period of the study. Richardson and Belsky held that “the rationale for providing [ancillary] care … is strongest when subjects are particularly vulnerable to how researchers exercise their discretion, are particularly dependent on the researchers for care, or have been particularly willing to offer themselves up for risky, painful, or inconvenient studies without reward to themselves.”11 In an early review of Richardson’s book, Joseph Fins recognized the aptness of this line of thinking for neurotechnology research. He noted that participants in his brain stimulation studies for minimally conscious patients were also vulnerable and dependent, and he confirmed the sense of “justness” felt by a researcher who provides ancillary care.12 Our aim here is to expand on that initial invocation of the moral entanglement argument for neurotechnology research.
Moral Entanglements in Research
Richardson illustrates the idea of moral entanglements with a story he calls “Old Man and Groceries.”13 Imagine that you see an older man in the neighborhood struggling with his groceries and you offer to help carry them into his house. He accepts the offer and lets you into his home, which waives some of his privacy rights. You, in turn, in accepting the offer to enter his home, are also accepting a certain kind of intimacy and opening yourself to a moral entanglement with him. If you enter and find that his house is a mess, filled with moldy foods and rat feces, then (Richardson claims) you now have an additional special duty of beneficence, not just to witness what is going on, but to act on that information. You might offer to get help for him, make sure he knows the dangers of his current situation, or even help him clean the place up. Ignoring what you have learned about his home seems to shirk your moral duty to some degree; you would feel guilty if you did nothing. According to Richardson, accepting the invitation into the house amounts to accepting a privacy waiver -- a partial entrustment, to use his terminology-- and that waiver creates the conditions for a heightened responsibility for what one might discover through the moral entanglement.
One might think it would be better to mind one’s own business and maintain tactful silence. Richardson argues, though, that intimacies typically cancel any moral inhibition on a duty to warn and indeed introduce a special obligation of beneficence. We may not owe complete strangers many duties of beneficence, but when we become morally entangled with others through a partial entrustment of privacy rights, our obligations expand. The idea, then, is that moral entanglements occur when one party entrusts another with information that would otherwise be private, the entrusted party accepts this “waiver of privacy rights,” and the two parties find themselves with a “created intimacy.”14 The created intimacy introduces new responsibilities for the entrusted party because the exposure of private personal information -- the created vulnerability -- introduces a special obligation of beneficence. And while the two parties enter into the relationship voluntarily, they may not realize the extent of their responsibilities. As Richardson puts it, the new duty, “while grounded in a voluntary transaction, was never voluntarily undertaken.”15
Richardson uses this framework to argue that when researchers gain access to private information about their research participants by offering participation and being entrusted with private details about the participant’s body, health, and life, they create a kind of intimacy with them and take on a duty to warn them about threats they may not otherwise recognize but that the researchers can detect. Incidental findings exemplify that threat. For Richardson, moral entanglement not only justifies the duty to warn, but also creates a duty to provide ancillary care if the participant needs help. The duty is shouldered even if researchers are not expected to take on the role of physicians, who have the primary duty of promoting the patient’s health and warning them about potential threats to health. For example, HIV researchers may be obligated not only to inform a participant in an antiretroviral trial that they are HIV positive, but also to provide care -- or help in finding care -- for HIV. Indeed, in 2005, the NIH issued a guidance memorandum encouraging post-trial access to antiretrovirals for participants in trials of those drugs.16
The extent of the researcher’s obligations depends on a variety of factors, including the participant’s vulnerability with respect to access to continued care, the participant’s dependence on the research team for that access, whether the participant has taken on any uncompensated risks or burdens, and the depth of the relationship between the researcher and participant. The deeper the relationship -- given the intensity, duration, and longevity of the study -- the greater the obligation on the part of the researcher.17
It is not true, in other words, that informed consent at the start of a study articulates and delimits the researcher’s obligations and that a participant who gives informed consent has accepted a contract of sorts and must accept all its terms. The moral entanglement view recognizes that some responsibilities cannot be foreseen, and that even if they could be foreseen, cannot be set aside. In part, this is because the research relationship is not symmetric in terms of vulnerability. There is an unequal knowledge base and vulnerability differential between participant and researcher (neural devices are a great example) that can never be fully rectified through informed consent processes.18
Richardson recognizes that ancillary duties associated with newly created intimacies will require limits, so that they do not undermine incentives to do important research. Both the capacity of the researcher and the burden on the researcher are important for articulating those limits. Understanding the degree of such obligations requires balancing competing goals: caring for research participants and not taking advantage of them, gaining new knowledge through maintenance of the scientific research endeavor, and recognizing special obligations of beneficence that arise through moral entanglements of particular forms of research. Not every researcher will have the capacity to provide all that a participant needs or deserves, given constraints such as reliance on industry, grant funding, and access to expertise. But researchers in studies that involve moral entanglement at least have an obligation to work toward a solution and to push back against a status quo that fails to recognize special duties owed to participants. Similarly, not every company is well positioned to fully address post-trial needs. Some companies may not have the resources, or the device may not be approved by the FDA. But they at least have an obligation to help develop regulatory or other solutions.
The Context of Neural Device Trials
Richardson and Belsky are primarily interested in what researchers are responsible for within a study. In neurotechnology research, we suggest that while the added responsibility similarly arises through moral entanglements created in the context of the study, the expansion of obligations includes forms of support that are owed after the study ends.
Richardson and Belsky propose four factors that help delimit the scope of ancillary care obligations: participants’ vulnerability, uncompensated risks or burdens, the depth (intensity, duration, and longevity) of the researcher-participant relationship, and participants’ dependence on the researchers. These factors are not unique to neural device research, but they overlap and intersect in distinctive ways in that domain.
Vulnerability
Participation in neural device trials often involves being vulnerable in distinct and noteworthy ways. Participants offer researchers access to some of their most intimate spaces: the brain, data related to their feelings and thoughts, and, in some cases, the ability to control or modulate their feelings and thoughts. These may be exactly what the intervention is meant to target, but they create a kind of exposure that can render participants susceptible to feelings of uncertainty and doubt. A participant who is particularly vulnerable can be in a precarious position, amplifying researchers’ obligation to treat participants with beneficence. And their vulnerability may stem both from their participation and from pre-existing background vulnerabilities.19
Qualitative studies exploring the experiences of participants suggest that some individuals have difficulty assessing how they are affected by the device. A participant in a qualitative study we ran with individuals using implanted deep brain stimulation (DBS) devices for psychiatric conditions notes: “There are parts of this where you just wonder how much is YOU any more, and you wonder kind of, ‘How much of it is my thought pattern? How would I deal with this if I didn’t have the stimulation system?’ You kind of feel artificial.”20 This experience can also be exacerbated by how people around the participant respond to the presence of the device. Participants may feel uncertain and frustrated about being seen as controlled or managed through stimulation. Another study participant notes, “If things are bad, and I’m feeling bad about what’s going on at home, [my father’s] solution is always to get it [the device] turned up, ‘Why can’t you get turned up?’ … Yeah, just because there’s, like, a little candyman in your head, I don’t have to be nice to you, I just have to turn it up and you’ll be OK with me yelling at you.”21 Being in the study can both ameliorate their symptoms and make them vulnerable to uncertainty about their own agency and to problematic treatment by others.
Indeed, qualitative research suggests that neural devices can have a significant impact on participants’ agency, sense of self, and identity.22 A recent study by Peter Zuk and colleagues exploring this question quotes a researcher whose participant – using a DBS device that had just been reprogrammed – had a radical response:
[H]e just becomes a completely different person. [He] becomes very hypomanic, and disinhibited, and giggles [like] a little child in a candy store. At some point he jumps on, or tries to grab one [of] the female neuroscientists. Then hides behind the door giggling about it, and that is not his personality. After the device is reprogrammed, we see him coming back to himself, and he becomes this kind of quiet and reserved person, and says, “Oh, I’m feeling much better. That was a really strange feeling.”23
Even when participants know that device settings will be readjusted before they leave the room, the possibility of loss of control – and the embarrassment, confusion, and dismay that might accompany it – can make them more vulnerable. The mere knowledge that the brain and its functioning – part of the body that is particularly personal and contributes centrally to who one is – will be exposed to direct intervention that could radically alter mood or behavior puts one in a precarious position that requires significant trust in the research team. Not all participants will have such experiences, of course; some may even feel more authentic and empowered following brain stimulation.24 Attending to the particularities of each kind of neural device trial will be important, and the vulnerabilities in question will need to be contextualized to the condition and form of intervention.
Pre-existing vulnerabilities caused by oppression, social and economic inequalities, health inequities, and so forth must also be taken into account. People who enroll in implanted neural device trials are typically either “treatment search fatigued”25 or searching for a way to altruistically benefit science or to make a difference in care for people with their condition – in some cases because their conditions (and widespread ableism) have constrained their abilities to actively participate in work or community life.26 “Treatment search fatigue” refers to the exhaustion, perhaps despair, experienced by people who have treatment-resistant forms of certain conditions, such as epilepsy, obsessive compulsive disorder, and major depressive disorder. A similar form of fatigue may also be felt by people with conditions that have limited therapeutic options (such as essential tremor, spinal cord injury, and Alzheimer disease).
Treatment search fatigue might undermine the ability of individuals to give their consent to participate in neural device trials,27 but it also creates problems for what happens at the end of the study. As Gabriel Lazaro-Muñoz writes, “some researchers were concerned that participants may not fully grasp the long-term implications of a lack of post-trial access, even though it is discussed during the informed consent process.”28 Participant interviews support this concern. As one notes, remembering the informed consent period, “I was at a really low point and I was looking for something – anything – that would give me some relief. I wasn’t really thinking about longer term things.”29 Participants’ desperate focus on the short term at the expense of long-term planning places them in a vulnerable state.
Prior to implantation, participants may understand that the device might provide them with temporary symptomatic relief or a novel way of interacting with the world, but such an understanding often comes from a place of limited awareness. They may not entirely recognize how the experience within the study may change them or what the end of the study agreement will feel like when they arrive there. Consider this researcher’s assessment: “They’re told, but I don’t think they really care in the moment. And, then, later on when they’re faced with a $20,000 bill […] because your battery died and we can’t give you a new one, then they get upset.”30
Many neural device participants describe themselves as having been in a state of depression and isolation prior to participating in the trial – even when they were not entering the study to seek treatment for depression.31 The viability of one’s existence, the extent of one’s depression, the severity of one’s impairment, the impact of ableism on one’s livelihood and social connections – these all, of course, come in degrees. The non-disabled person who has not experienced that struggle would not have the same degree of background vulnerability (and would not be eligible to participate in neural device trials).
Given the intimate access granted for neural device research and the connection to central issues of identity and agency, as well as the generalized forms of oppression (especially ableism) and the difficulties faced by prospective participants for neural device trials, participants who enter such studies have a deep vulnerability that deserves attention. Recognizing their vulnerability gives weight to the idea that researchers are morally entangled with them and bear special responsibilities toward them. Hendriks and colleagues, who have noted that “posttrial responsibilities may be greater when participants… are particularly vulnerable,”32 have also argued that the relative novelty of such devices and lack of proven track records create additional prospective vulnerabilities. “Experience with other invasive devices suggests that neural devices’ complexity, limited knowledge about their long-term effects, and expected rapid evolution are also sources of vulnerability for participants that warrant consideration and long-term planning.”33
Uncompensated Risks and Burdens
Richardson and Belsky also recognize that researchers often owe a debt of gratitude to their research participants, given that they are integral to making the research possible and often take on significant burdens in order to participate. The extent of the special obligation of beneficence, they argue, is at least in part dependent on whether participants have shouldered uncompensated risks and burdens.
To what extent do participants in neural device trials face uncompensated risks and burdens? They are typically not remunerated in significant ways, although the time they spend (as much as two to three sessions per week, for two to three hours each session, over three to five years34) can be equivalent to a part-time job. Funders, institutional review boards, and research teams set limits on what kinds and amounts of compensation are permitted. As a result, research teams may offer financial help with transportation, parking, and perhaps vouchers for lunch, sometimes in addition to modest stipends to recognize time spent, but the amounts are often quite small. The incentives are set relatively low for several reasons – to avoid concerns about undue inducement to enroll,35 to help ensure participants are there primarily to contribute and learn rather than to earn,36 and to avoid exceeding the strict yearly income limits faced by participants who receive government support. In enrolling in a trial, research participants also give up other opportunities. For example, participants who enroll in neural device studies may forgo the possibility of enrollment in stem cell studies or other experimental treatments.
Yet the risks are considerable. First, there are the risks of neurosurgery for implantation and, if necessary, explantation. In addition, there are risks related to the unknown long-term effects of implanting hardware in neural tissue and of maintaining the hardware37; mental health issues that can arise from explantation,38 abrupt end of study,39 or feelings of abandonment if the device is taken away40; and risks related to privacy and security.41 Some of the risks related to implantable devices are not unique to the neural context,42 but when they arise in relation to the brain, at least some of them may feel more momentous, given the brain’s central role in maintaining human well-being.
The lack of significant compensation, coupled with the risks and the investment of time and energy, suggests that neural device trial participants shoulder a substantial burden. To be clear, participants often embrace the opportunity to experience a novel technology that provides a new way of interacting with the world,43 and they gain a sense of contributing to a valuable shared endeavor. Those nonfinancial benefits deserve acknowledgement and should not be discounted. Nonetheless, what participants take on for the sake of scientific progress is considerable and is widely understood to constitute a burden.44 In addition, even if research practices changed significantly and participants were better compensated,45 we would argue that continuing access to the device or to a supportive transition out of the study would be an obligation, given how participants may come to rely on the devices, such that their loss amounts to a harm rather than simply a return to baseline.
Depth of relationship
Recognizing that not all research studies offer similar opportunities for moral entanglement, Richardson and Belsky describe “depth of relationship” as a factor for assessing the degree of responsibility that arises for researchers. As they point out, “different protocols demand interactions of varying intensity, duration, and longevity. Researchers have a stronger moral responsibility to engage with the full range of participants’ needs when the relationship is deeper.” This depth need not be a particularly personal engagement, such as if a researcher and a participant met outside the lab. Rather, it is focused on their interactions within the context of the research, such as the length of time spent together and the intensity of their exchange. Depth relates both to the dimensions of a study and its protocols and to the particular individuals within it.
In many neural device trials, research participants spend significant time with the research team, working together to try to tune the device, test its functionality, investigate its range of use, explore its constraints and limitations, and troubleshoot challenges. This close mode of interaction creates conditions under which individuals almost inevitably get to know each other well. One participant in our research notes, “I was sad when it came to an end, you know? I got to be here with all these people, you know. They became like your family.”46 Participants may take on their own moral entanglements. Our interviews with research participants revealed that some feel obligated to see a study through or to support the work of doctoral students they know through the study. Conversely, a researcher acknowledges the structural pressures toward developing relationships: “I think it’s important to keep some professional distance. That being said, you interact with these folks. I see them on a monthly basis for years. And the people who actually run the sessions see them twice a week for years. It’s very difficult not to develop some form of personal relationship.”47
Research participants are in many ways both the object of study and active contributors to the investigation. Researchers rely on the fact that they can get first-person reports about participants’ experience in the middle of the study and thus have access to insights that would not be available in studies using non-human subjects. Participants acknowledge this key role. “That was what was striking to me. I had some input about different things. I felt like they took that seriously.”48 Many participants feel like they are “part of the team.”49 Participants even have an obligation to relay how they are feeling and what they are experiencing with the device. And because the device is often intervening on their functioning in a way that can feel intense -- whether because it helps to treat their condition or because it opens up new ways for interacting with the world “just by thinking” – these interactions provide ripe opportunities for created intimacies. Given the significant shared experience, the neural device forms an anchor point from which researchers and participants become “relational agents who rely on each other.”50
In addition, because of the brain’s centrality in behavior and agency, using the device can sometimes alter the participants’ sense of self.51 Participants may experience both delight in using the device and uncertainty about authorship, control, and authenticity, knowing that their actions or mental states are influenced by the device. In many studies, the content for the experiment, then, is something that is both integral to how they perceive themselves and open to alteration through the experiment.
When a study involves such intense and intimate interactions over a long period, the context is conducive to the formation of deeper relationships between the participants and the research team. They not only rely on each other but may come to trust each other. Based on a shared endeavor and the presumption of good will and competence from the other party, they each entrust a valuable thing to the other, who has to use discretion in choosing how to protect it.52 The researchers trust the participant to show up, make a good effort, and report their experiences faithfully, and the participants trust the researchers to protect them when they are vulnerable, make the best use of their shared time, and value their integral role in the scientific study. One party’s failure to live up to expectations may constitute not just disappointment, but betrayal.53
Dependence on researchers
Finally, dependence matters because it may indicate that researchers – including research funders – are in a unique position to help participants. Participants who were already vulnerable prior to the study may become newly dependent on researchers (and the research ecosystem more broadly) to ensure that their implanted neural devices remain safe and operational and maintain the benefits they received in the study.
The experience of using the neural device, especially over a longer period, can be transformative, reshaping the research participant’s “habitus,” in Pierre Bourdieu’s terminology.54 A person’s habitus is a set of embodied habits, including ways of perceiving, acting, and understanding, that are shaped by the social and material world and form a way of life. Explantation at the end of a trial is more significant, then, than just removing a piece of hardware; it interrupts the participant’s new way of being in the world. This alteration comes in degrees, of course, and may vary from study to study, but when it occurs, and especially when it is significant, participants may find themselves dependent on the research team to maintain their new habitus.
Consider a person with treatment-resistant depression and a long history of failed treatments who finds relief through an implanted experimental neural device and gains a vitality and level of functioning that was absent prior to the study. Their whole way of engaging the world may change for the better. The neural device plays a key role in that individual’s habitus, and the prospect of losing access can feel tantamount to a threat of death. One of Lauren Sankary’s interview participants reports, “I felt like, like I’d be dead without this treatment.”55 Similar concerns are also apparent in the context of DBS devices used to treat minimally conscious patients, helping them gain conscious awareness and an ability to interact with loved ones.56
In other cases of less directly device-related change, such as when the neural device is not used outside the lab, the long-term intense and scientifically valuable interactions within the lab space may nonetheless rejuvenate a sense of purpose in the individual participant in ways that significantly alter their sense of self and self-respect. Exit from the study, especially when it occurs abruptly and without good support or counseling, can feel like abandonment or betrayal to the participant.
Consider one individual’s claim about using an implanted brain-computer interface in the lab through the course of a study: “it really changed my self-image. It changed, as I said, the empowerment. The feeling, ‘I did this, look what I can do.’ It helped me realize that -- I have a saying up on my wall, ‘You are more than the body you live in.’ I just realized the truth of that statement, that my brain was the most important part of me, and that working meant I could do a lot.”57 This participant’s way of being in the world was improved through the study; loss of that newly normalized gain could undermine her self-confidence and self-image. She experienced important gains through her participation and thereby became vulnerable to losing what she values dearly.
In such a case, what is owed to the participant post-trial may not be continued access to use of the device, particularly if it is not easily transported to be used outside the lab or requires highly trained personnel for tuning and adjustment. But the moral entanglements developed in the course of the study may at the very least require more attention to transitions out of the study, with support for figuring out possibilities for maintenance of the gains developed within the study. As one participant remarked in a study about exiting from neural device studies, “I let you implant things in my brain, I’m walking around with this, you have some responsibility to um, you know, to look after me.”58
If other alternatives for support were widely available, making participants less dependent on the research team, then the post-trial obligation would be less weighty. In most cases, however, such support cannot be provided by regularly trained physicians. Worse, device maintenance and adjustment can be time-consuming and may not be covered by insurance.59 Given uncertainty about follow-up care, participants may feel abandoned or “dropped” by the research team.60 If a principal investigator moves to a new institution, retires, or otherwise changes course, participants in their study may feel moved to stay in contact, even to follow them; if they cannot do that, they require a very clear hand-off to a new researcher. Our informal conversations related to this issue point to a strong sense of felt responsibility on the part of many researchers – but a frustrating lack of the understanding, material resources, or structural support necessary to act on it.
The Scope and Duration of Post-Trial Obligations
Here, then, is the argument from moral entanglement for post-trial obligations in neural device research trials. Given participants’ vulnerabilities, their uncompensated risks and burdens, the depth of relationships that form over the course of neural device trials, and the dependence participants have on highly skilled teams to maintain their devices, researchers have a particularly weighty obligation of care to their participants. They are obligated to engage with participants as whole people with whom they become morally entangled, and not merely as passive subjects of research. Researchers and funders of neural device trials owe something to participants that, we insist, exceeds the usual benefits of participating. Exactly what is owed depends on the degree of each of these four factors, but in many cases, we suggest, it includes ensuring participants’ continued access to neural devices. Moreover, we think, these responsibilities arise not only from the personal relationships between researchers and participants, but also from the research ecosystem as a whole. The responsibilities are not limited to the research team; they also apply to the institutions in which they work and the funding agencies that support the work.
On Richardson and Belsky’s view, it is the trust relationship that the participant has to the researcher and the researcher’s discovery of private information about the participant that gives the researcher a new special obligation. But in research on neural devices, it’s not obvious that participants’ partial entrustment of otherwise private information is the principal locus of ethical concern. The moral entanglements framework might be applied exactly as Richardson and Belsky suggest if, for instance, neuroimaging required for electrode placement or monitoring reveals a new health problem, such as a tumor. This would constitute a classic incidental finding of the sort that interested Richardson and Belsky. But in the cases that most interest us, the post-trial obligations in question are often not related to previously undiscovered kinds of information; they have to do, rather, with the forms of vulnerability and dependence created by the participant’s entrance into the study and the reality that they take on significant burdens that primarily benefit the research team.61
In addition to his discussion of privacy waivers in research, Richardson notes that researchers must avoid using participants as “mere sources of biological materials”62 and must show respect for their autonomy. What respect entails is more complicated in the context of lengthy studies in which participants form relationships with the study team. As Alex London writes, reviewing Moral Entanglements, “It seems difficult to see how a researcher could show respect for a participant as more than a mere data point, by entering into a relationship with that person—asking that participant to contribute time and perhaps to experience discomfort in order to advance the ends of the researcher—and then remaining indifferent to the impact of that participant’s health need on his ability to function.”63 Building on this, London argues that creating the moral entanglement may involve sharing some otherwise private information, but that vulnerability and dependence may be more significant than newly discovered private information in creating new obligations.
Privacy rights get their normative force, on Richardson’s account, because they protect areas that are sensitive and closely connected with human agency. Granting researchers access to those areas may be what creates a special relationship of intimacy and vulnerability. But once that relationship has been established, why wouldn’t concern for participant autonomy and the background concern for general beneficence then kick in to generate a duty to respond to significant needs that lie within the sphere to which the researcher has been granted access (personal health), regardless of whether those needs are private or public?64
Researchers ask people to enter their study and to become vulnerable to the researchers by allowing them to intervene in an organ closely linked to consciousness, identity, and agency. When those people become participants, they become morally entangled with the research team, and that entanglement increases with the growing vulnerability, depth of the relationship, and the participant’s dependence on the research team, particularly where the participant is not well compensated for the risks and burdens they take on.
In other words, when neural device trial participants willingly take on brain surgery and its risks to enter a study, commit to regular lab visits and sometimes to exhausting sessions to test and adjust the device, and gain benefits in the form of therapeutic improvement, individual gain, or social well-being, their well-being becomes entangled with that of the researchers even if nothing distinctly private is shared.
The moral entanglements framework is helpful for extending the scope of researcher obligations, but it focused on obligations during a trial, not after, and said little about the duration of obligations beyond a trial’s end. Richardson and Belsky do not discuss, for example, whether researchers should recontact HIV-positive participants years after a trial to inform them about new HIV therapy developments. London notes, accordingly, that researchers’ special obligations may pertain primarily to care that happens within the timeframe of the main study:
“For a particular need to give rise to an ancillary-care obligation, it must fall within the relevant scope and meet a variety of conditions that relate to the “strength” or significance of the need itself.… [E]xplicit permissions required to conduct a study determine the scope of the permissions granted and thus the scope of the researcher’s ancillary-care obligations.”65
In many ways, neural device trials are better positioned than many other kinds of research to attend to participant needs during the study. Because the number of study enrollees is often small and the projects often include neuropsychologists or other dedicated study staff, participants’ clinical or personal needs are much more likely to come to attention and, where possible, be addressed by the study.66
But many obligations of care that moral entanglements in neural device trials seem to demand are different. They are demands that emanate from the study itself but extend forward in time. This is why vulnerability is such a central point. The vulnerability that comes from researchers learning new information about a participant largely ends when new information is no longer being gathered. But the vulnerability of having a device implanted in one’s head continues as long as the device is present. In many ways, in fact, the participant is more vulnerable after leaving a trial, when they are no longer under the watchful eye of the research team.
Acknowledgements
We are grateful to our reviewers and to the UW neuroethics research group for their helpful discussions of the ideas presented here. This work was supported by the NIH Award Number R01MH130457. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
References
- 1. Hochberg L, Bacher D, Jarosiewicz B, et al. 2012. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485: 372–375. doi:10.1038/nature11076
- 2. Widge AS. 2023. Closing the loop in psychiatric deep brain stimulation: physiology, psychometrics, and plasticity. Neuropsychopharmacology. doi:10.1038/s41386-023-01643-y
- 3. Sankary L, et al. 2021. Exit from Brain Device Research: A Modified Grounded Theory Study of Researcher Obligations and Participant Experiences. AJOB Neuroscience 13(4): 215–226; Sankary L, et al. 2020. Publication of Study Exit Procedures in Clinical Trials of Deep Brain Stimulation: A Focused Literature Review. Frontiers in Human Neuroscience. doi:10.3389/fnhum.2020.581090
- 4. Underwood E. 2017. Researchers Grapple with the Ethics of Testing Brain Implants. Science. doi:10.1126/science.aar3698
- 5. See Bergstein B. 2015. Paralyzed Again. MIT Technology Review. https://www.technologyreview.com/2015/04/09/168424/paralyzed-again/; also Dobbs D. 2018. Why a ‘Lifesaving’ Depression Treatment Didn’t Pass Clinical Trials. The Atlantic. https://www.theatlantic.com/science/archive/2018/04/zapping-peoples-brains-didnt-cure-their-depression-until-did/558032/
- 6. Lazaro-Muñoz G, et al. 2022. Post-Trial Access in Implanted Neural Device Research: Device Maintenance, Abandonment, and Cost. Brain Stimulation 15(5): 1029–1036; p. 1030.
- 7. Lazaro-Muñoz G, et al. 2018. Continued Access to Investigational Brain Implants. Nature Reviews Neuroscience 19: 317–318; see also Hendriks et al. 2019. Ethical Challenges of Risk, Informed Consent and Post-trial Responsibilities in Human Research with Neural Devices: A Review. JAMA Neurology 76(12): 1506–1514; Sankary et al. 2021.
- 8. Richardson H. 2012. Moral Entanglements. Oxford University Press.
- 9. Richardson H and Belsky L. 2004. The Ancillary Care Responsibilities of Medical Researchers: An Ethical Framework for Thinking about the Clinical Care Researchers Owe Their Subjects. Hastings Center Report 34(1): 25–33; p. 26.
- 10. Ibid.
- 11. Ibid., p. 32.
- 12. Fins J. 2013. Review of Moral Entanglements. Notre Dame Philosophical Reviews. https://ndpr.nd.edu/reviews/moral-entanglements/
- 13. Richardson, Moral Entanglements, p. 68.
- 14. Richardson, Moral Entanglements, p. 63.
- 15. Richardson H. 2012. Moral Entanglements: Ad Hoc Intimacies and Ancillary Duties of Care. Journal of Moral Philosophy. https://brill.com/view/journals/jmp/9/3/article-p376_5.xml
- 16. Richardson, Moral Entanglements, pp. 7–8.
- 17. Belsky L and Richardson H. 2004. Medical researchers’ ancillary clinical care responsibilities. British Medical Journal 328: 1494–1496.
- 18. Fins J. 2013.
- 19. Belsky and Richardson 2004.
- 20. Klein E, et al. 2016. Brain–computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces 3(3): 140–148.
- 21. Ibid.
- 22. Zuk et al. 2023. Researcher Views on Changes in Personality, Mood, and Behavior in Next-Generation Deep Brain Stimulation. AJOB Neuroscience; see also Klein et al. 2016.
- 23. Zuk et al. 2023, p X.
- 24. Klein E, et al. 2016. Brain–computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces 3(3): 140–148; Fins J. 2015. Rights Come to Mind: Brain Injury, Ethics and the Struggle for Consciousness. Cambridge University Press.
- 25. Zuk and Lazaro-Muñoz 2021.
- 26. Kögel et al. 2020.
- 27. Dunn LB, Holtzheimer PE, Hoop JG, Mayberg HS, Appelbaum PS. 2011. Ethical Issues in Deep Brain Stimulation Research for Treatment-Resistant Depression: Focus on Risk and Consent. AJOB Neuroscience 2(1): 29–36; p. 32.
- 28. Lazaro-Muñoz G, et al. 2022. Post-Trial Access in Implanted Neural Device Research: Device Maintenance, Abandonment, and Cost. Brain Stimulation 15(5): 1029–1036; p. 1031.
- 29. Sankary et al. 2021, 5.
- 30. Lazaro-Muñoz et al. 2022, 1031.
- 31. Kögel J, Jox R and Friedrich O. 2020. What is it like to use a BCI? Insights from an Interview Study with Brain-Computer Interface Users. BMC Medical Ethics 21(2).
- 32. Hendriks et al. 2019, p. 1511.
- 33. Ibid.
- 34. See Lazaro-Muñoz et al. 2018.
- 35. Macklin R. 1981. ‘Due’ and ‘undue’ inducements: On paying money to research subjects. IRB: A Review of Human Subjects Research 3(5): 1–6.
- 36. Lemmens T and Elliott C. 1999. Guinea pigs on the payroll: the ethics of paying research subjects. Accountability in Research 7(1): 3–20.
- 37. Sankary et al. 2021; Hendriks et al. 2019.
- 38. White D and Whittaker D. 2022. Post-Trial Considerations for an Early Phase Optogenetic Trial in the Human Brain. Open Access Journal of Clinical Trials: 1–9. doi:10.2147/OAJCT.S345482; p. 5.
- 39. Sankary et al. 2021, 4.
- 40. White and Whittaker 2022, 2; Lavery JV. 2008. The Obligation to Ensure Access to Beneficial Treatments for Research Participants at the Conclusion of Clinical Trials. In Emanuel EJ, et al., eds., The Oxford Textbook of Clinical Research Ethics. New York: Oxford University Press, pp. 697–710.
- 41. Hendriks et al. 2019.
- 42. See, e.g., Hutchison K and Sparrow R. 2016. What Pacemakers Can Teach Us About the Ethics of Maintaining Artificial Organs. Hastings Center Report 46: 14–24.
- 43. Kögel et al. 2020.
- 44. See, e.g., White and Whittaker 2022, 5; Sankary et al. 2021, 4.
- 45. Gelinas L, et al. 2018. A Framework for Ethical Payment to Research Subjects. New England Journal of Medicine 378(8): 766–771.
- 46. Sankary et al. 2021, 5.
- 47. Peabody Smith et al. 2023. From Guidelines to Tools: NIH BRAIN Investigators’ Perspectives on the Ethics of Intracranial Research. BRAIN Initiative meeting poster, Bethesda, MD.
- 48. Sankary et al. 2021, 5.
- 49. Kögel et al. 2020.
- 50. Goering S, Brown T and Klein E. 2021. Neurotechnology Ethics and Relational Agency. Philosophy Compass. doi:10.1111/phc3.12734; p. 6.
- 51. See Klein et al. 2016 for a discussion of ambiguous agency; Goering, Brown and Klein 2021 for a discussion of relational agency in the context of neural devices.
- 52. Baier A. 1986. Trust and Anti-Trust. Ethics 96(2): 231–260.
- 53. Baier 1986; see also Jones 1996. Trust as an Affective Attitude. Ethics 107(1): 4–25.
- 54. Bourdieu P. 1977. Outline of a Theory of Practice (Nice R, trans.). Cambridge Studies in Social and Cultural Anthropology. Cambridge: Cambridge University Press.
- 55. Sankary et al. 2021, 4; we have heard similarly striking statements in unpublished interviews.
- 56. Fins J. 2015. Rights Come to Mind: Brain Injury, Ethics and the Struggle for Consciousness. Cambridge University Press.
- 57. Kögel et al. 2020, 10.
- 58. Sankary et al. 2021, 8.
- 59. Lazaro-Muñoz et al. 2022.
- 60. Sankary et al. 2021, 8.
- 61. Though see Kögel et al. 2020 for a discussion of the social benefits participants receive.
- 62. Richardson, Moral Entanglements, 97.
- 63. London A. 2013. Review of Moral Entanglements. Ethics 124(1): 206–209.
- 64. London 2013, 208.
- 65. London 2013, 206.
- 66. Kubu CS, Ford PJ. 2017. Clinical Ethics in the Context of Deep Brain Stimulation for Movement Disorders. Archives of Clinical Neuropsychology 32(7): 829–839.
