Author manuscript; available in PMC 2021 Sep 15. Published in final edited form as: Philos Compass, 16(4), e12734 (2021). doi: 10.1111/phc3.12734

Neurotechnology ethics and relational agency

Sara Goering 1, Timothy Brown 1, Eran Klein 1,2
PMCID: PMC8443241  NIHMSID: NIHMS1736462  PMID: 34531923

Abstract

Novel neurotechnologies, like deep brain stimulation and brain-computer interfaces, offer great hope for treating, curing, and preventing disease, but raise important questions about the effects these devices may have on human identity, authenticity, and autonomy. After briefly assessing recent narrative work in these areas, we show that agency is a phenomenon key to all three goods and highlight the ways in which neural devices can help to draw attention to the relational nature of our agency. Drawing on insights from disability theory, we argue that neural devices provide a kind of agential assistance, similar to that provided by caregivers, family, and others. As such, users and devices participate in a kind of co-agency. We conclude by suggesting the need to develop relational agency-competencies—skills for reflecting on the influence of devices on agency, for adapting to novel circumstances ushered in by devices, and for incorporating the feedback of loved ones and others about device effects on agency.

1. INTRODUCTION

The funding of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative in 2014, and similar efforts around the world (for a good summary, see Yuste & Bargmann, 2017), stimulated an intensive push to advance basic neuroscience research and translate any gained knowledge into novel neurotechnologies. These neurotechnologies offer not only opportunities to study and understand brain and spinal cord function, but also potentially to “treat, cure, or even prevent” (https://braininitiative.nih.gov/) disorders and injuries. Still, given the significance of our brains for our understanding of ourselves—for example, our identity, authenticity, autonomy, and agency—neuroscientists and ethicists have raised a variety of concerns about the effects these technologies may have on individuals and society more broadly (e.g., Hendriks et al., 2019; Yuste et al., 2017). In this study, we introduce neurotechnologies and summarize three main areas of neuroethical interest in relation to them: identity, authenticity, and autonomy competency. After briefly assessing the work in these areas, we show that agency is a phenomenon key to all three goods and highlight the ways in which neural devices can help to draw attention to the relational nature of our agency.

1.1. Novel neurotechnologies

A number of neurotechnologies are already in use for therapeutic purposes. Among the most common are deep brain stimulators (DBS), which consist of a signal-generating computer that directs electricity to electrodes implanted in the user's brain. DBS has become an effective treatment for Parkinson's disease (PD), essential tremor (ET), and epilepsy. DBS is currently being explored as an intervention for a wider variety of conditions, like obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). Newer DBS models include rechargeable batteries and more user control over settings, and researchers are exploring the possibility of “closed-loop” formats that both record from and stimulate the brain, implementing a feedback loop.
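To make the closed-loop idea concrete, consider a minimal, purely illustrative sketch of such a feedback rule: a sensed biomarker (for example, beta-band power, which tracks motor symptoms in PD) is compared to a target level, and stimulation amplitude is nudged up or down in response. The biomarker, gain, and safety bounds below are hypothetical stand-ins of our own, not any device's actual algorithm, parameters, or API; real systems use clinically validated sensing and control methods.

```python
# Illustrative sketch of a closed-loop stimulation policy (hypothetical values).
# A sensed biomarker is compared to a target, and stimulation amplitude is
# adjusted in proportion to the error, clamped to safety bounds.

from dataclasses import dataclass

@dataclass
class ClosedLoopController:
    target: float         # desired biomarker level (arbitrary units; hypothetical)
    gain: float = 0.1     # how aggressively amplitude tracks the error
    amp: float = 1.0      # current stimulation amplitude (mA; hypothetical)
    amp_min: float = 0.0  # safety bounds on amplitude
    amp_max: float = 3.0

    def update(self, biomarker: float) -> float:
        """Nudge amplitude toward reducing the sensed error, within safe limits."""
        error = biomarker - self.target
        self.amp = min(self.amp_max, max(self.amp_min, self.amp + self.gain * error))
        return self.amp

controller = ClosedLoopController(target=0.5)
for sensed in [0.9, 0.8, 0.6, 0.5, 0.4]:  # simulated biomarker readings
    print(controller.update(sensed))
```

The point of the sketch is only the structure: sensing and stimulation are coupled in a loop, so the device's behavior depends on the user's ongoing neural activity rather than on a fixed setting.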

At the same time, brain-computer interfaces (BCIs) have been designed to record neural activity, and after significant training of the user, these devices can identify and use neural patterns associated with a person's particular motor intentions to control external objects, like computer cursors, robotic arms, or wheelchairs. For instance, research participants have successfully used BCIs for communication (Wolpaw et al., 2018) and to reach and grab with a robotic arm (Hochberg et al., 2012), simply through “thinking”—or focusing on specific motor imagery that creates the neural activity that triggers the actuator in the communication device or robotic arm. As such, BCI research participants are now “doing things with thoughts” (Steinert et al., 2019). Combining existing capacities with machine learning systems promises the possibility of having devices that “learn” their user, making control of BCI-mediated movement more immediate and fluid (Rao, 2019).
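The decoding step can likewise be pictured schematically: a classifier is trained on labeled examples of a user's neural features and then maps new activity to an intended action. The sketch below uses synthetic data and a simple logistic-regression decoder purely for illustration; real BCI decoders are trained on recorded neural signals and are considerably more sophisticated (see Rao, 2019).

```python
# Illustrative sketch of BCI-style intention decoding on synthetic data.
# The features stand in for band-power measurements from neural recordings;
# the labels stand in for imagined-movement classes ("left" vs. "right").

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "neural features": two classes with shifted means, a stand-in for
# the user-specific patterns a real decoder must learn during training.
X_left = rng.normal(loc=-1.0, scale=1.0, size=(200, 8))
X_right = rng.normal(loc=+1.0, scale=1.0, size=(200, 8))
X = np.vstack([X_left, X_right])
y = np.array([0] * 200 + [1] * 200)  # 0 = "left", 1 = "right"

decoder = LogisticRegression().fit(X, y)

# Decoding a new trial: in a real system, the predicted class would drive an
# actuator (a cursor, robotic arm, or speller).
new_trial = rng.normal(loc=1.0, scale=1.0, size=(1, 8))
print("decoded intention:", "right" if decoder.predict(new_trial)[0] else "left")
```

Here the “significant training of the user” and the training of the decoder meet: the user learns to produce separable patterns of activity, and the system learns to read them.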

These fascinating advances promise opportunities to enhance the agency of device users, either by directly addressing impairments caused by disease (e.g., via closed-loop DBS) or by creating a novel mode of enacting one's intentions on the world (e.g., via BCI). Indeed, the literature highlights impressive stories of function restored or gained: a man with amyotrophic lateral sclerosis (ALS) used a non-invasive EEG device that enabled him to communicate via email and continue running his lab (Sellers et al., 2010); a woman with spinal cord injury learned to use a robotic arm to feed herself chocolate (Wodlinger et al., 2015); a woman who lived with debilitating and treatment-resistant PTSD used DBS to gain control of her life (https://sunnybrook.ca/media/item.asp?c=2&i=1859&f=ptsd-deep-brain-stimulation-research). As Kögel and colleagues report from an interview study, BCI users felt active in their use of the devices. As one interviewee noted, “It really changed my self-image. It changed… empowerment. The feeling, ‘I did this, look what I can do.’ It helped me realize that … ‘You are more than the body you live in’” (Kögel et al., 2020, p. 10).

Still, the picture is not entirely positive. Some DBS users experience unwanted side effects of brain stimulation, both significant and minor (Agid et al., 2006; Schüpbach et al., 2006), and find themselves feeling unsure about the authenticity or authorship of their feelings and behaviors (de Haan et al., 2015; Klein et al., 2016). Similarly, some BCI users seem less sure about their agency over the system, including feeling uncertain about responsibility for actions, especially when the outcome is not what they intended (as a user in Kögel et al., 2020, p. 5, says, “Well, I guess I did not concentrate hard enough. But it also may be that the measuring was not optimal.”). Given the centrality of brain function for personal identity and agency, and the capacity of neural technologies to alter or augment brain function in both intended and unintended ways, this rapidly advancing field calls out for ethical analysis.

In this study, we briefly summarize three arenas that philosophers and neuroethicists have focused on as key to understanding the ethics of neurotechnology in relation to agency: identity, authenticity, and autonomy competency. In all three, authors have identified potentially troubling ways in which individual device users may experience changes to their conceptions of themselves through interaction with neurotechnology. Our review of the literature, however, also points to ways in which some of the concerns raised by neurotechnologies—about device influence over desires, values, behaviors, and self-conceptions—may highlight the relational nature of human agency writ large. Rather than simply applying philosophical analyses to neurotechnology, we suggest that philosophers may use the insights gleaned from neurotechnology to better understand human agency itself, as a relational phenomenon.

1.2. Three arenas of influence: identity, authenticity, and autonomy competency

In most cases, DBS treats motor disorders successfully with minor side effects—e.g., temporary tingling sensations and perhaps some speech problems. In other cases, device users report behavioral effects, such as increased impulsivity, gambling, or aggressiveness (Smeding et al., 2007). In an extreme case, a man using DBS for PD faced a choice between debilitating motor symptoms without DBS and on-going mania with DBS (Glannon, 2009; Leentjens et al., 2004). The potential for such side effects may leave many prospective users concerned about their future. We often think of the brain as a very important center of our values, desires, behaviors, and self-conceptions. We understandably are possessive of the neural activity that makes us who we are, even as we may seek neural interventions to ward off unwanted effects of disease or injury. In addition to concerns about stimulation side effects, prospective users may worry about coming to terms with their augmented selves. An influential study found that some people with PD reported that DBS made them feel artificial or robotic—with some describing feeling like “an electronic doll,” or saying they “feel like […] ‘Robocop’” (Agid et al., 2006; Schüpbach et al., 2006). Cases like these demonstrate how DBS might be a potential negative influence on, or even threat to, the user's identity; as Glannon asks, “How much disruption can one's life narrative accommodate without threatening the integrity of the whole?” (2009, p. 291).

These issues are made more complicated when we consider the experiences of people using DBS to treat psychiatric disorders—e.g., OCD and MDD. DBS, used this way, is meant to help the user escape patterns of behavior that make life difficult (or harmful) for themselves and others. In these circumstances, people are seeking to alter their behavior or mental states, but they also want to recognize themselves after treatment, as the same people who sought treatment, just without the obsessive thoughts or compulsions, or the depression. Individuals who have tried DBS for these conditions as part of research trials or through FDA humanitarian device exemptions sometimes report feeling much better—“I still like the same things. I don't have, like, different values or anything. I just enjoy things more. I'm me without the depression” (Klein et al., 2016)—but others also articulate that even while some of the targeted behaviors have improved, other issues have arisen anew. For instance, one DBS user with MDD explains, “I have begun to wonder what is me and what is the depression, and what is the stimulator” (Goering et al., 2017a, p. 63). Although he can recognize himself, and he's able to act, we might say that he has “ambiguous agency” (Klein et al., 2016) because he's unsure about his authorship of things that matter to him and that he would typically take to be under his control.

Along these lines, Felicitas Kraemer argues that we should understand DBS (and neurotechnologies broadly) in terms of how they move users closer to and further from their “authentic selves” (Kraemer, 2011). On this view, each user must decide if the changes they experience are authentic or alienating—that is, in line with who they want to become and how they want to become that person. And so “when evaluating the ethical … implications of behavior changes that result from DBS, the subjective state of felt-authenticity and felt-alienation should be taken into consideration” (Kraemer, 2011, p. 496) so that we can better attend to how each patient's experience using DBS may be a process of “recognizing, exploring, and enacting what they regarded as their ‘true selves’” (Kraemer, 2011, p. 496). Erik Parens, in his Shaping Our Selves (2014), recognizes how notions of authenticity involve both self-discovery and self-making. He notes that people use restorative (or even enhancement) technologies in order to “change what it feels like to be in the world” and “improve [one's] experience of [one's] self” (p. 33).

Neural devices that help to address a significant bodily or cognitive problem may nonetheless introduce new problematic effects that make them appear to be a threat to personal identity. A person might try a neural device in order to improve her situation—to treat her Parkinson's or regain mobility after spinal cord injury—and make herself more like she wants to be, but then become changed in the process (see, e.g., a hypothetical case considered by Schechtman, 2009). If a DBS user's values and behaviors seem to change significantly after DBS—if the changes seem to be “the effect of direct electrical stimulation of the brain” (Schechtman, 2009, p. 78)—then we may be less inclined to think of the person's commitments as her own. That is, we might be inclined to think of her commitments as the result of stimulation. And given the centrality of our commitments to our personal identities (Frankfurt, 2006), we might consider the person's identity to be under threat. As one neural device user notes, “You just wonder how much is you any more. How much of it is my thought pattern? How would I deal with this if I didn't have the stimulation system? You kind of feel artificial” (Klein et al., 2016).

Françoise Baylis, however, argues that these (side-)effects of DBS generally can alter but do not threaten personal identity, because personal identity is simply too dynamic (Baylis, 2011). Certainly, she argues, DBS can play an important role in how users understand and perform their identities. But DBS side effects are not unlike the “side effects” of other consequential life events (e.g., severe injury, relationship loss, etc.). In any of these scenarios, a person has to make decisions about how they will (re-)form and perform their narratives. If DBS constitutes a threat to identity, then, it is only through threatening the person's ability to perform and contribute to these narratives. As such, neurotechnologies like DBS are only a threat to identity when they make it difficult for users to “contribute meaningfully to the authoring of [their lives]” (Baylis, 2011, p. 525)—or when they disrupt users' agency over their lives.

When someone's agency—i.e., their ability to enact their intentions on the world through authoring their actions—is disrupted, that individual will also experience a shift in autonomy. We might think of agency as a lower-level phenomenon through which autonomy is made possible. Susan Sherwin, for instance, claims that agency is the bare ability to “exercise reasonable choice,” while autonomy, at a minimum, requires agency plus the capacity “to resist oppression” (Sherwin, 1998, pp. 32–33). If one's agency is significantly undermined, one's autonomy will be diminished as well. Here it is helpful to consider feminist work on what Diana Meyers calls autonomy competencies—the introspective, interpersonal, and volitional skills we use to understand what we want, communicate it to others, and motivate ourselves to act (Meyers, 2000, p. 166). These skills admit of degrees and are employed in the service of autonomy. Bringing this more theoretical discussion to neurotechnologies like DBS, Catriona Mackenzie and Mary Walker argue that such devices are not threats to identity but can disrupt agency and autonomy insofar as they diminish these autonomy competencies (Mackenzie & Walker, 2015; see also Goddard, 2017). On this view, a DBS user who is significantly more impulsive, is told by her loved ones that she is not acting like herself, and has difficulty acknowledging what she wants or acting accordingly has clearly diminished autonomy. She maintains her identity, but with less capacity for autonomy.

Our assessment, from this growing field of neuroethics focused on neural devices, is that many of the cherished human goods that stand out as potentially threatened—identity, authenticity, and autonomy—can be collected under the umbrella term of agency. We may want to change features of our personal identities, participate in the on-going process of self-creation, and build our autonomy competencies, but these are all versions of seeking to be agential. As Baylis suggests, we care about ensuring that we are authoring our actions, that they flow from our thoughts and beliefs, and not from direct brain manipulation (2011, p. 524).

1.3. Lessons about relationality

One of the lessons from work in this area of neuroethics, though, is that as we seek to understand how neural technology may disrupt an individual's sense of agency, we start to realize that we do not work alone in making up our identities. We are relational, narrative selves, at least partially constituted through relationships. Our parents, and their communities, initiate us into personhood through all of the practices that surround pregnancy and parenthood. We rely on these others not only to help make us who we are, but also to “hold” us in our identities when we feel unsure (Lindemann, 2014). Particular kinds of holding can be welcome and supportive, but they might also be unwanted and oppressive, as when others impose rigid norms or stereotypes in how they view us (Baylis, 2012; Lindemann, 2001). We act in ways that are predicated on other people recognizing our intentions and giving them uptake in a kind of “social choreography of agency” (Bierria, 2014, p. 131). Our local and broader social institutions enable and support our actions through these forms of recognition and uptake (Lugones, 2003). On this view, we interact and engage with many others in developing our sense of ourselves—not just in terms of our identities, but also in terms of our agency and how we express it (Brown, 2020; Goering et al., 2017a; Goering et al., 2017b).

Agency, then, very often involves receiving uptake for one's intentions and having assistance in enacting them. This much is well known to disabled people, who already use devices and/or caregivers to help enact their wills on the world. A person who relies on a caregiver for feeding, for instance, does not fail to be an agent simply because she cannot lift a spoon to her mouth independently. Instead, she expresses her agency through choosing a caregiver and directing their action. A key philosophy of the independent living movement helps to articulate this distinction: “Independent living for disabled people means … having choices about who helps you and the ways they help. It is not necessarily about doing things for yourself, it is about having control over your day to day life” (independentliving.org). The person directs the action by making choices, even if the intention is put into effect with assistance from the caregiver. People with cognitive impairments may also engage with others—perhaps even as “prosthetic thinkers” (Silvers & Francis, 2009)—to exercise their agency. Disability experience calls our attention to the social and environmental factors that influence how agency gets enacted (Timpe, 2019).

Interestingly, not all of the relevant agential relationships are human-to-human. A wheelchair user may form a kind of agential relationship with her chair, the chair helping her to perform her intentions. Exploring the “animacy” of wheelchairs, Julia Watts Belser notes how “in the intimate relationality that emerges between wheelers and their wheels, wheelchairs are rarely experienced as inanimate objects” (Belser, 2016, p. 6); rather, the interaction between the two leads to a “queer animacy” (p. 9) that demonstrates the “transgressive potential of human relations with technology” (p. 9). If we look at neurotechnology, then, from the perspective of a disabled person using a BCI device, we may learn to see the BCI not as a threat to her independent action (or to her separate self), but as part of a co-agency, or as a supportive helper that works to enact her intentions and make them legible. For example, one woman with ALS who uses a BCI to paint with neural signals “sees herself in a ‘collegial’ work relationship with the BCI, which she says ‘is actually more of a friendly helper than just a technical thing’” (Kögel et al., 2020).

Of course, when we rely on other people and external devices for assistance, we can more easily identify the roles they play in our lives. Because they are external, we are better positioned to recognize their influence over us. Neural devices, however, are so intimately connected (implanted) that they can manipulate us in ways we might not notice, even if they more typically support us as we aim to do what we want to do (Goering et al., 2017a). Whether or not they qualify as part of what we take to be an extended mind (Buller, 2011; Levy, 2007), these devices present us with the question of how we know when and how we are being manipulated—how we know we are being unfairly influenced in a way that bypasses rational deliberation, or involves trickery or pressure, without being coercive (Noggle, 2018). A person who is manipulated can still act and choose, but her decisions, preferences, and goals may be perverted in a way that makes them questionably her own. But we must remember that we can also be manipulated by caregivers, friends, family members, devices that don't stimulate our brains (like smartphones and wearables), or environmental influences that act like nudges. We are always already influenced in significant ways by others in our actions—perhaps our very agency is co-constituted by these others (Doris, 2015). Even though these influences are external, we find it difficult to determine, and easy to forget, how and how much they play a role in our expressions of agency. Relational agency isn't relevant only in the disabled world; it is a feature of all human agency (Timpe, 2019).

In other words, we are relational beings caught up in a web of influence. Relying or being dependent on others to enact our agency is not somehow a diminishment, but the normal state of affairs. In cultivating relationships of love, for instance, we often rely on forms of behavior that might be considered manipulative—excessive praise, emotional appeals, turning on the charm, and so on (Buss, 2005). Some of these kinds of influence may be relatively benign, even as they shift our behaviors. When we recognize that we are relational agents who rely on others, we also see that we care less about the mere fact that neural devices influence users' actions. Instead, we care more about how each user wants to be influenced, whether the device helps make the user's intentions legible, whether the user has the ability to push back on (or resist) those influences, and so on. What really matters, then, is that we chart the terrain of possible relationships between users and devices, explore how users will navigate them, and decide how stakeholders can improve these relationships in the future (Brown, 2020).

In these relationships, it is especially important that we preserve our ability to push back on, or outright resist, others when we disapprove of how they influence us. Rather than proposing a model of uninfluenced individual agency, then, we can shift to a model of relational agency, which recognizes how others (people and devices) may shift our thinking and our actions. We can then resolve to be vigilant—not only on our own, but also with the help of others—in identifying and working against problematic influences. This could mean developing agency-competencies: being more reflective than we have been in the past to make sure we are comfortable with how we have been influenced, learning to improvise agency in novel circumstances (Brown, 2020), and relying on our friends and families to tell us when we have been influenced in ways that make us act poorly, irresponsibly, or out of character (Goering et al., 2017a). The agency we seek to protect and secure is not a solo phenomenon (see also Doris, 2015), but a deeply relational one. The promise—and threat—of neurotechnologies can help us recognize that reality.

ACKNOWLEDGMENTS

We are grateful for many helpful conversations with our neuroethics research team at the University of Washington (Asad Beck, Ishan Dasgupta, Kate MacDuffie, Darcy McCusker, Michelle Pham, Andreas Schönau, and Paul Tubig) as well as undergraduate research assistants. Our work is funded by the NSF (EEC-1028725) and the NIH (1RF1MH117800-01).

Funding information

National Science Foundation, Grant/Award Number: EEC-1028725; National Institute of Mental Health, Grant/Award Number: 1RF1MH117800-01

Biography

Sara Goering is a Professor of Philosophy at the University of Washington, Seattle, as well as a member of the Program on Ethics and the Disability Studies Program. Since 2012, she has led a neuroethics research team through the University of Washington Center for Neurotechnology.

Timothy Brown is a Postdoctoral Research Associate in the Department of Philosophy at the University of Washington, Seattle, where he recently finished a dissertation on neuromodulation and agency.

Eran Klein is a neurologist at Oregon Health & Science University and the Portland VA Medical Center in Portland, Oregon, and also an Affiliate Assistant Professor of Philosophy at the University of Washington, Seattle, where he co-leads the neuroethics research team at the University of Washington Center for Neurotechnology.

REFERENCES

1. Agid Y, Schüpbach M, Gargiulo M, & Mallet L (2006). Neurosurgery in Parkinson's disease: The doctor is happy, the patient less so? Journal of Neural Transmission, 70, 409–414.
2. Baylis F (2011). “I am who I am”: On the perceived threats to personal identity from deep brain stimulation. Neuroethics, 6, 513–526. 10.1007/s12152-011-9137-1
3. Baylis F (2012). The self in situ: A relational account of personal identity. In Downie J & Llewellyn J (Eds.), Being relational: Reflections on relational theory and health law (pp. 109–131). University of British Columbia Press.
4. Belser JW (2016). Vital wheels: Disability, relationality, and the queer animacy of vibrant things. Hypatia, 31(1), 5–21.
5. Bierria A (2014). Missing in action: Violence, power and discerning agency. Hypatia, 29(1), 129–145.
6. Brown T (2020). Building intricate partnerships with neurotechnology: Deep brain stimulation and relational agency. International Journal of Feminist Approaches to Bioethics, 13(1), 134–154.
7. Buller T (2011). Neurotechnology, invasiveness and the extended mind. Neuroethics, 6(3), 593–605.
8. Buss S (2005). Valuing autonomy and respecting persons: Manipulation, seduction and the basis of moral constraints. Ethics, 115(2), 195–235.
9. Doris J (2015). Talking to ourselves: Reflection, ignorance and agency. Oxford University Press.
10. de Haan S, Rietveld E, Stokhof M, & Denys D (2015). Effects of deep brain stimulation on the lived experience of obsessive-compulsive disorder patients: In-depth interviews with 18 patients. PLoS ONE, 10(8), e0135524. 10.1371/journal.pone.0135524
11. Frankfurt H (2009). The reasons of love. Princeton University Press.
12. Glannon W (2009). Stimulating brains, altering minds. Journal of Medical Ethics, 35(5), 289–292. 10.1136/jme.2008.027789
13. Goddard E (2017). Deep brain stimulation through the ‘lens of agency’: Clarifying threats to personal identity from neurological intervention. Neuroethics, 10(3), 325–335. 10.1007/s12152-016-9297-0
14. Goering S, Brown T, & Alsarraf J (2017b). Others' contributions to an individual's narrative identity matter. AJOB Neuroscience, 8(3), 176–178.
15. Goering S, Klein E, Dougherty DD, & Widge AS (2017a). Staying in the loop: Relational agency and identity in next-generation DBS for psychiatry. AJOB Neuroscience, 8(2), 59–70.
16. Hendriks S, Grady C, Ramos KM, Chiong W, Fins J, Ford F, & Wexler A (2019). Ethical challenges of risk, informed consent, and posttrial responsibilities in human research with neural devices: A review. JAMA Neurology, 76(12), 1506–1514. 10.1001/jamaneurol.2019.3523
17. Hochberg L, Bacher D, Jarosiewicz B, Massey N, Simeral J, Vogel J, & Donoghue J (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485, 372–375.
18. Independent living website. https://www.independentliving.org/docs6/hasler2003.html#3
19. Klein E, Goering S, Gagne J, Shea C, Franklin R, Zorowitz S, … Widge A (2016). Brain–computer interface-based control of closed-loop brain stimulation: Attitudes and ethical considerations. Brain-Computer Interfaces, 3(3), 140–148. 10.1080/2326263X.2016.1207497
20. Kögel J, Jox R, & Friedrich O (2020). What is it like to use a BCI? BMC Medical Ethics, 21(2). 10.1186/s12910-019-0442-2
21. Kraemer F (2011). Me, myself, and my brain implant. Neuroethics, 6, 483–497.
22. Leentjens AFG, Visser-Vandewalle V, Temel Y, & Verhey FR (2004). Manipulation of mental competence: An ethical problem in case of electrical stimulation of the subthalamic nucleus for severe Parkinson's disease. Nederlands Tijdschrift voor Geneeskunde, 148(28), 1394–1398.
23. Levy N (2007). Rethinking neuroethics in the light of the extended mind thesis. American Journal of Bioethics, 7(9), 3–11. 10.1080/15265160701518466
24. Lindemann H (2001). Damaged identities, narrative repair. Cornell University Press.
25. Lindemann H (2014). Holding and letting go: The social practice of personal identities. Oxford University Press.
26. Lugones M (2003). Pilgrimages/Peregrinajes: Theorizing coalition against multiple oppressions. Rowman & Littlefield.
27. Mackenzie C, & Walker M (2015). Neurotechnologies, personal identity, and the ethics of authenticity. In Clausen J & Levy N (Eds.), Handbook of neuroethics (pp. 372–392). Springer.
28. Meyers D (2000). Intersectional identity and the authentic self: Opposites attract! In Mackenzie C & Stoljar N (Eds.), Relational autonomy: Feminist perspectives on autonomy, agency and the social self (pp. 151–180). Oxford University Press.
29. Noggle R (2018). The ethics of manipulation. In Zalta EN (Ed.), Stanford encyclopedia of philosophy. Stanford University. https://plato.stanford.edu/entries/ethics-manipulation/
30. Parens E (2014). Shaping our selves: On technology, flourishing, and a habit of thinking. Oxford University Press.
31. Rao RP (2019). Towards neural co-processors for the brain: Combining decoding and encoding in brain-computer interfaces. Current Opinion in Neurobiology, 55, 142–151. 10.1016/j.conb.2019.03.008
32. Schechtman M (2009). Getting our stories straight. In Mathews DJH, Bok H, & Rabins PV (Eds.), Personal identity and fractured selves (pp. 65–92). Johns Hopkins University Press.
33. Schüpbach M, Gargiulo M, Welter ML, & Mallet L (2006). Neurosurgery in Parkinson disease: A distressed mind in a repaired body? Neurology, 66, 1811–1816.
34. Sellers EW, Vaughan TM, & Wolpaw JR (2010). A brain-computer interface for long-term independent home use. Amyotrophic Lateral Sclerosis, 11(5), 449–455. 10.3109/17482961003777470
35. Sherwin S (1998). A relational approach to autonomy in health care. In Sherwin S & the Feminist Health Care Ethics Research Network (Eds.), The politics of women's health: Exploring agency and autonomy (pp. 19–47). Temple University Press.
36. Silvers A, & Francis LP (2009). Thinking about the good: Reconfiguring liberal metaphysics (or not) for people with cognitive disabilities. Metaphilosophy, 40(3-4), 475–498.
37. Smeding HMM, Goudriaan AE, Foncke EMJ, Schuurman PR, Speelman JD, & Schmand B (2007). Pathological gambling after bilateral subthalamic nucleus stimulation in Parkinson disease. Journal of Neurology, Neurosurgery & Psychiatry, 78(5), 517–519.
38. Steinert S, Bublitz C, Jox R, & Friedrich O (2019). Doing things with thoughts: Brain-computer interfaces and disembodied agency. Philosophy and Technology, 32, 457–482.
39. Timpe K (2019). Moral ecology, disabilities and human agency. Res Philosophica, 96(1), 17–41.
40. Wodlinger B, Downey JE, Tyler-Kabara EC, Schwartz AB, Boninger ML, & Collinger JL (2015). Ten-dimensional anthropomorphic arm control in a human brain-machine interface: Difficulties, solutions and limitations. Journal of Neural Engineering, 12(1), 016011.
41. Wolpaw J, Bedlack R, Reda D, Ringer R, Banks P, Vaughan T, & Ruff R (2018). Independent home use of a brain-computer interface by people with amyotrophic lateral sclerosis. Neurology, 91(3), e258–e267. 10.1212/WNL.0000000000005812
42. Yuste R, & Bargmann C (2017). Toward a global BRAIN initiative. Cell, 168(6), 956–959.
43. Yuste R, Goering S, & the Morningside Group (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551, 159–163.
