Abstract
Brain-computer interfaces (BCIs) are systems in which a user’s real-time brain activity is used to control an external device, such as a prosthetic limb. BCIs have great potential for restoring lost motor functions in a wide range of patients. However, this futuristic technology raises several ethical questions, especially concerning the degree of agency a BCI affords its user and the extent to which a BCI user ought to be accountable for actions undertaken via the device. This paper examines these and other ethical concerns found at each of the three major parts of the BCI system: the sensor that records neural activity, the decoder that converts raw data into usable signals, and the translator that uses these signals to control the movement of an external device.
Keywords: brain-computer interfaces, neuroprosthetics, neural implants, agency, responsibility
Introduction
Bionic enhancements—artificial body parts that can be controlled directly by their user’s brain—are a common sci-fi trope. Recent advancements in machine learning and electrode sensitivity have brought these fictional devices remarkably close to reality in the form of implantable brain-computer interface (BCI) assistive technology. This technology has enabled people with quadriplegia to perform robust seven-degree-of-freedom movements with a prosthetic arm (Collinger 2013) and late-stage ALS patients to communicate rapidly via a BCI-driven cursor on a computer interface (Gilja 2015), among many other preliminary success stories.
With this technology comes a range of ethical concerns, many of which relate to BCI users’ capacity for autonomy and self-agency. To appreciate these concerns, it is important to understand that the basic BCI paradigm of “using brain signals to control an external device” is, in practice, far less straightforward than it sounds. Accomplishing this goal requires three complex systems: a sensor that records neural activity, a decoding algorithm that turns the raw voltage data into meaningful signals such as the activity of specific neurons, and a translation method that turns these processed signals into movement commands for the external device. Developing each of these systems is an extensive design undertaking in which engineers need to make hundreds of decisions, made more complicated by the ethical issues that arise when considering each process. This paper explores the ethical concerns that need to be addressed by these three systems—the sensor, the decoder, and the translator—and some solutions that have been proposed and implemented by BCI designers.
Throughout this paper, the term BCI or brain-computer interface refers to any system in which neural activity is measured and used to directly control an external device, such as a robotic limb or a cursor on a screen. Agency refers to a person’s power to influence their own actions, exercise control over their thoughts and movements, and produce their desired or intended results. Sense of agency refers to a person’s feeling that they are in control of themselves and their actions. Accountability is the extent to which a person can be held responsible for certain outcomes; it is a product of how much responsibility a person has over a particular situation and how much the outcome of that situation was influenced by that person’s actions or inactions. These concepts all come into play in various aspects of the BCI system.
Discussion
The Sensor
Voluntary muscle movement in humans is regulated by neurons in the motor cortex, a strip of cortical tissue located at the back of the frontal lobe, directly adjacent to the parietal lobe. The motor cortex receives input from the prefrontal cortex, which plans complex behaviors such as coordinated movement; the sensory cortex, which sends it data about the world as gathered by sight, touch, hearing, etc.; and other areas such as the cerebellum and basal ganglia, which help regulate force and timing of movements and correct errors. Neurons in the motor cortex compile all of this input into signals they then transmit to the spinal cord, where motor neurons connecting the spinal cord to muscles throughout the body translate these signals into muscle contractions. Paralysis is typically a result of damage to the spinal cord or disease-induced degradation of motor neurons. Critically, in the vast majority of paralysis cases the motor cortex is fully functional.
BCIs capitalize on this by outfitting the body with new hardware that harnesses the activity in the intact motor cortex to drive an external device that augments or replaces a missing or paralyzed body part. The first step in this chain of hardware is the sensor that records neuron activity data from the motor cortex. Most intracortical BCIs use compact arrays of approximately 100 silicon needle-shaped electrodes in a 4x4mm configuration, such as the Utah array in which probes are 0.5-1.5mm long or the Michigan array with up to 15mm long probes for targeting deeper structures (Choi 2018). In each case, only the last 0.05mm of each electrode is exposed and capable of recording signals. These signals have traditionally been transmitted via subcutaneous wires, but concerns about infection and movement restriction have led to recent developments in wireless cortical recording technology (Rajangam 2016).
Implanting the BCI sensor into the brain raises the same ethical concerns of nonmaleficence (avoiding harm to the patient) as implanting anything into the body does—specifically, the need to balance the risk of infection and tissue damage against the potential benefits provided by the implant, which are referred to as the implant’s beneficence. Some have argued that, given the uncertain longevity of the BCI sensor and the high likelihood of repeated surgeries to maintain BCI functionality, implanting a BCI sensor represents an unnecessary and unethical risk to the patient when non-invasive alternative treatments exist (McGie 2013). Others counter that these alternative treatments, which range from electromyographic prosthetic limbs to head-tilt sensors for manipulating a screen, pale in comparison to the improved functionality BCIs offer, and that it would be unconscionable to deprive the millions of people who are unable to fully use their bodies of an opportunity for rapid restoration of function (Society for Neuroscience 2017). It is true that invasive BCIs have demonstrated superior performance in giving patients with quadriplegia or ALS full control over computer software for communication (Gilja 2015) and in enabling amputees and paralyzed patients to control the artificial arms with the most degrees of freedom, producing more complex and functional movements than any other prosthetic arms (Chaudhary 2016). It is also true that any neurosurgery carries extensive risks, including CSF leakage, pulmonary embolism, hemorrhage, seizures, and infection; studies of deep brain stimulation implantations reveal that over 12% of patients experience serious adverse effects (Ryu 2009). While efforts to reduce these risks are underway, including improved biocompatibility and the development of modularity standards (Bowsher 2016), patients and their physicians need to weigh the benefits and drawbacks of BCI use carefully. It is easy for both patients and clinicians to want to try the “next greatest thing” as soon as it is available, but it is critical from an ethical perspective to temper this excitement with a realistic evaluation of the available alternative treatments.
Related to this is the need to ensure full patient agency in acquiring consent for treatment with a BCI. Many potential BCI patients suffer from diseases such as ALS or Parkinson’s disease which, in some cases, are comorbid with dementia and other cognitive deficiencies; even if they don’t have these symptoms at the onset of treatment, they may develop them over the course of treatment, rendering them unable to provide continuous consent. In addition, implanting any device in the brain has the potential to disrupt an individual’s identity or sense of self in unpredictable ways (Klein 2016); this has been documented in deep brain stimulation and, though it seems less likely to occur with a motor cortex implant, must always be a consideration. Ensuring full decision-making capacity and confirming consent at each stage of the BCI treatment process is critical for preserving users’ agency and autonomy.
Other ethical concerns about the sensor hardware relate to its durability and longevity. Human tissues have pre-programmed reactions to the insertion of any non-native object; this is known as the foreign body response. In the brain, the foreign body response manifests as inflammation within the first two weeks after implantation of a foreign device, followed by the continual activation of astrocytes, which wrap long strands of a structural protein called glial fibrillary acidic protein around the object in an attempt to isolate it from the surrounding tissue. This process, called gliosis, is to blame for many cases of neural implant failure (Campbell 2018). As the glial scar builds up, it insulates the electrode and pushes neurons further from its surface, diminishing the electrode’s effectiveness. The electrodes are also subject to corrosion, cracking, and other material failures, as well as mechanical failures such as wire damage or dislodgement of the electrode by external forces, which might happen if the user were to fall. Together, these effects contribute to one-year microelectrode failure rates that can exceed 50% (Barrese 2013). Putting a patient through the rigors of device implantation and training is ethically questionable given how likely sensor failure is and how devastating the subsequent loss of restored function could be. The clearest way to ease this ethical burden is to reduce the device failure rate. Fortunately, research into improving the biocompatibility and longevity of neural implants is well underway (Ferguson 2019). This work focuses on improving the material properties of implants and designing features such as hydrogel coatings and bioactive interfaces to reduce the intensity of the foreign body response.
The Decoder
The 4x4x0.05mm volume of brain tissue surrounding the exposed electrode tips of the Utah array contains about 24,000 neurons and half a billion synapses (Stieglitz 2009), yet all of this activity is captured by only about 100 electrodes. Each electrode can only clearly record from neurons within ~0.05mm of its tip—on average, 3.4 neurons per electrode (Nordhausen 1996)—so only about 340 neurons, or ~1.5% of the neurons in the array’s theoretical volume, are recorded with enough clarity to isolate their action potentials. The rest of the nearby neurons contribute to background noise, making raw microelectrode data notoriously noisy. The decoder algorithm that extracts these ~340 neurons’ action potentials from the background noise of thousands of others and converts them into a single output command represents a serious challenge, both technically and ethically. In employing such an algorithm, engineers are claiming that their processing of this small neuron population is accurate enough to represent the activity of the many thousands of neurons that typically generate movement commands, and precise enough that it should be allowed to control a prosthetic device that most would consider an extension of its user’s body. Meeting that standard, from both a technical and an ethical perspective, requires a highly accurate decoder—one that captures a person’s motor agency from recordings of a few hundred of their neurons.
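The scale of this sampling problem can be made concrete with a quick back-of-the-envelope calculation in Python. This is a sketch using only the figures cited in this paper; the neuron density is the ~30,000 neurons/mm³ estimate used later in the text:

```python
# Rough estimate of how little of the local circuit a Utah-style array records,
# using only the figures cited in the text.

ARRAY_AREA_MM2 = 4 * 4             # 4 x 4 mm array footprint
RECORDING_DEPTH_MM = 0.05          # only the last 0.05 mm of each electrode is exposed
NEURON_DENSITY_PER_MM3 = 30_000    # approximate neuron density in motor cortex

N_ELECTRODES = 100
NEURONS_ISOLATED_PER_ELECTRODE = 3.4   # average reported by Nordhausen (1996)

volume_mm3 = ARRAY_AREA_MM2 * RECORDING_DEPTH_MM                  # 0.8 mm^3
neurons_in_volume = NEURON_DENSITY_PER_MM3 * volume_mm3           # ~24,000
neurons_recorded = N_ELECTRODES * NEURONS_ISOLATED_PER_ELECTRODE  # ~340

print(f"Neurons in recording volume: {neurons_in_volume:,.0f}")
print(f"Neurons cleanly isolated:    {neurons_recorded:,.0f}")
print(f"Fraction recorded:           {neurons_recorded / neurons_in_volume:.1%}")
# -> roughly 1.4%, the ~1.5% figure quoted above
```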
Neurons communicate with each other and with other cells via action potentials: rapid changes in voltage across the cell’s membrane that propagate down the cell. In an extracellular recording, action potentials appear as brief, sharp voltage deflections—“spikes”—in the recorded trace. From there, assigning the spikes in each segment of the recording to particular neurons (a process known as spike sorting) is a large-scale classification problem, simplified in part by thresholding operations that allow a spike to be counted and analyzed regardless of drifts in amplitude, which can arise from sub-micrometer movements of the electrode relative to the neuron (Homer 2014). Only very recently have algorithms emerged that are reliable enough to perform this analysis without manual human verification; the current state of the art reaches sensitivity and precision of around 95%, which it achieves by breaking the recordings into smaller samples of about 9 electrodes each and randomly selecting only 50,000 spikes from each sample (Diggelmann 2018).
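To illustrate the thresholding step, here is a minimal sketch of threshold-based spike detection on a single channel. It assumes a bandpass-filtered voltage trace, and the specific threshold rule (a multiple of a median-based noise estimate) is a common convention rather than the exact method of any system cited here:

```python
import numpy as np

def detect_spikes(voltage, fs, threshold_sd=4.5, refractory_ms=1.0):
    """Detect threshold crossings in a (bandpass-filtered) voltage trace.

    voltage : 1-D array of samples (arbitrary units)
    fs      : sampling rate in Hz
    Returns spike times in seconds. The noise level is estimated with the
    median absolute deviation, which is robust to the spikes themselves.
    """
    voltage = np.asarray(voltage, dtype=float)
    noise_sd = np.median(np.abs(voltage)) / 0.6745  # MAD-based noise estimate
    threshold = -threshold_sd * noise_sd            # extracellular spikes are negative-going
    crossings = np.flatnonzero((voltage[1:] < threshold) & (voltage[:-1] >= threshold))

    # Enforce a refractory period so one spike is not counted twice
    min_gap = int(refractory_ms * 1e-3 * fs)
    spike_samples, last = [], -min_gap
    for c in crossings:
        if c - last >= min_gap:
            spike_samples.append(c)
            last = c
    return np.asarray(spike_samples) / fs
```

In a full spike-sorting pipeline, short waveform snippets around each detected crossing would then be clustered to assign spikes to individual neurons.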
Since neuron spikes last under 1 ms, the electrode’s sampling rate must exceed 10 kHz (10,000 samples a second) to ensure spike activity is captured. This requires a large amount of energy and makes the system very sensitive to tiny movements of the electrodes relative to the neurons, which inevitably occur due to micromotion in brain tissue from respiration and blood circulation. As such, some researchers have shifted focus from spike activity to local field potentials (LFPs)—summations of voltage changes in the tissue extending 0.2mm or more around each electrode tip. LFP oscillations are commonly grouped into frequency bands referred to as alpha, beta, and gamma waves, and can be captured at sampling rates in the low hundreds of hertz—orders of magnitude lower than spike recording requires. This is promising for reducing energy load and extending the life of the BCI device. The concern, however, is that the precise origin of these LFP signals is as yet unknown (Jackson 2017). While it is clear that spikes arise from neurons firing action potentials, lower-frequency LFP signals likely reflect many different underlying processes, and it is entirely unclear which of these processes relate to volitional movement control. Despite this, a recent primate study showed that when LFP data is fed into a signal processing algorithm that analyzes voltage changes in 10Hz frequency bands up to 80Hz, it provides information about monkeys’ reach targets almost as accurately as neuron spikes alone; combining the two types of data also yields significantly better accuracy than either alone (Hwang 2013).
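The band-based analysis described above might be sketched as follows. This is an illustrative example, not the pipeline of Hwang and Andersen (2013): it splits an LFP trace into successive 10 Hz bands up to 80 Hz with a Butterworth bandpass filter and computes the mean power in each band:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lfp_band_powers(lfp, fs, band_width=10.0, max_freq=80.0):
    """Mean power of an LFP trace in successive 10 Hz bands up to 80 Hz.

    lfp : 1-D array of local field potential samples
    fs  : sampling rate in Hz (must exceed 2 * max_freq)
    Returns a dict mapping (low, high) band edges to mean power.
    """
    lfp = np.asarray(lfp, dtype=float)
    powers = {}
    low = 0.5   # start just above 0 Hz; a bandpass filter cannot have a 0 Hz edge
    while low < max_freq:
        high = min(low + band_width, max_freq)
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        powers[(low, high)] = float(np.mean(filtfilt(b, a, lfp) ** 2))
        low = high
    return powers

# Usage (hypothetical trace): powers = lfp_band_powers(lfp_trace, fs=500)
```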
To summarize the technical information thus far: BCIs record information from a 4x4x0.05mm volume within the user’s primary motor cortex. This information can consist of discrete action potentials from about 340 of the neurons within this volume, plus changes in low-frequency signals—alpha, beta, and gamma waves—whose precise origins are unknown, throughout the volume. This is all of the data available for the translator to use to evoke motion in the prosthetic device. Compared to the amount of data the brain normally has at its disposal—signals from all ~30,000 neurons/mm3 throughout a motor cortex that is roughly 2cm wide and 2.5mm thick, plus the millions of neurons in other brain regions that are also involved in movement planning and execution—this is minuscule. Is it ethical for BCI engineers to claim that their devices accurately reflect their users’ brain state when they record only such a small percentage of it? Relatedly, is it ethical to hold BCI users accountable for actions performed via the prosthetic device if those actions do not reflect their complete current brain state? Before diving deeper into these questions, let’s take a closer look at how the translator turns these decoded signals into actual movements.
The Translator
The translator is an algorithm that turns the signal extracted by the decoder into commands for the BCI external device, which, in present applications, is generally either a prosthetic limb or an interface for communicating, such as a computer application in which the BCI user controls a cursor on the screen to select and type words. The translator is essentially a large matrix of constants called a linear filter that, when multiplied by a matrix of recorded neuron firing rates, produces velocities that are imparted onto the BCI device.
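In its simplest form, that translation step is a single matrix-vector product. The sketch below is illustrative only—the channel count, baseline subtraction, and random values are stand-ins, not the specifics of any cited system:

```python
import numpy as np

# Minimal linear-filter translation step: binned firing rates in, cursor
# velocities out. All sizes and values here are hypothetical stand-ins.
rng = np.random.default_rng(0)

n_channels = 96                              # recording channels (assumed)
W = rng.normal(size=(2, n_channels))         # decoding matrix, fit during calibration
baseline = np.full(n_channels, 10.0)         # assumed per-channel baseline rate

firing_rates = rng.poisson(lam=12, size=n_channels)  # spike counts this time bin
velocity = W @ (firing_rates - baseline)             # (vx, vy) command for the cursor
print(velocity)
```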
One critical aspect of the translator is the calibration process, which is how the constants in the linear filter are determined. Traditionally, to calibrate the BCI, users are first asked to imagine performing relevant movements—for example, moving a cursor—while their neural activity is recorded. To aid the imagining process, the user is shown a screen with visual targets while a technician moves a cursor systematically through the targets. These initial recordings are used to “seed” the translator with the first version of the filter, generating a user-controlled cursor on the screen (Hochberg 2006). The user then undergoes several sessions in which they manipulate their cursor to match the movements of the technician’s cursor. After each session, the linear filter is updated to bring its predicted cursor velocities into closer agreement with the velocities the user actually achieved. Some algorithms, such as the Kalman filter, add a step that integrates over all previous neural signals to further refine the translation matrix based on long-term signaling patterns (Kim 2008). The process for calibrating a prosthetic limb is similar: the participant watches the limb make pre-recorded movements during the initial imagining step, then gains control of the limb and operates it with gradually less error attenuation (Hochberg 2012).
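A minimal sketch of such a calibration update follows, assuming paired recordings of binned firing rates and cursor velocities from a session. Ridge-regularized least squares is a standard choice for fitting a linear filter; the cited systems use their own variants (such as the Kalman filter mentioned above):

```python
import numpy as np

def calibrate_filter(rates, velocities, ridge=1e-3):
    """Fit a linear filter W so that W @ rates[t] approximates velocities[t].

    rates      : (T, n_channels) binned firing rates from a calibration session
    velocities : (T, n_dims) cursor velocities shown or achieved in each bin
    Returns W with shape (n_dims, n_channels).
    """
    X, Y = np.asarray(rates), np.asarray(velocities)
    n = X.shape[1]
    # Ridge-regularized least squares: W^T = (X^T X + lambda*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ Y).T
    return W

# After each closed-loop session, refit on the newly collected pairs so the
# filter's predicted velocities track the velocities the user actually achieved.
```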
This calibration technique raises an interesting ethical point related to user agency. During the training process, especially early on, much of the movement of the BCI device is controlled by a technician or by a computer and not by the user. Despite this, it is very possible for the user to experience a sense of agency over the device. This effect has been investigated in more detail in an experiment where users were connected to an EEG system and told that their EEG signals were going to be used to control the gestures of a virtual hand: thinking about moving their left hand would make the virtual hand give a “thumbs up” and thinking about moving their right hand would give a virtual “okay sign” (Vlek 2014). They were given a live feed of their EEG signals and shown how their hand movement thoughts affected the signals. They were instructed to follow the commands of the facilitator to give either a “thumbs up” or “okay”. However, the virtual hand movements they observed were pre-recorded, conforming to the instructions with a 90% success rate, and had nothing to do with their EEG signals. Despite this, when asked to rate their sense of control of the hand on a scale of 1-7, participants gave an average rating of 5.0. The authors noted that this situation fulfilled the three preconditions for a sense of agency: the priority principle (the thought preceded the action), the consistency principle (the thought was compatible with the reaction), and the exclusivity principle (the thought was the only apparent cause of the action), even though these participants actually had zero agency over the virtual hand.
In calibrating actual BCIs, of course, users are gradually given full control over the devices. However, this only strengthens the reinforcement of the user’s sense of agency, and could well lead users to judge themselves the originators of actions their BCI device performed even when those actions did not reflect their mental state. This can affect the BCI user beyond the simple consequences of the mistaken action. Imagine, for example, that Erin, a bilateral upper limb amputee who uses a BCI to control a prosthetic hand, goes to shake an executive’s hand at a dinner party; her hand, for whatever reason (inconsistent calibration, lack of recording from the appropriate neurons, etc.), grips the executive’s hand far too tightly and seriously injures it. Not only is this a major problem for the executive, but Erin is also horrified and feels extremely guilty. She may or may not be legally responsible for the injury, but she certainly feels personally responsible for it—it was, after all, done by her own arm, one she has worked with and lived with for years. This greatly affects her emotional state and is bound to affect her personal and professional life; it may also affect her legally and financially. Multiple people are harmed despite none of them having any malicious intent—an ethically troubling situation. Yet without her BCI, Erin lacks the fine motor control that enables her to perform her job and manage other aspects of her full life, and would be at a serious disadvantage. Is having this device worth the risk of it failing to operate properly? The answer is unclear, but reducing the potential for failure would certainly increase the viability of BCI use.
Linear filter methods produce movement accuracy of about 80%, which is excellent in a laboratory setting but not enough to make these devices viable for constant everyday use, especially considering the possibility of cases like Erin’s. To improve calibration, and therefore improve the integration of the BCI with its user, BCI researchers have begun turning to deep learning. A neural network is a pattern-recognition algorithm that categorizes raw data based on its features; a deep learning model is a neural network with multiple layers of nodes, each capturing features at an increasing level of abstraction. Neural networks are made up of interconnected nodes, each of which has a set of coefficients, or weights, that it assigns to features of the input data. Based on the sum of its weighted inputs, a node either activates or stays silent, and the final state of the nodes corresponds to the network’s classification of the data. Neural networks are used today in thousands of applications, such as identifying objects in images, distinguishing spam emails from non-spam emails, and recognizing emotions portrayed in photos of faces. To perform any of these tasks, a neural network must be trained with a labeled dataset—a series of images labeled with the objects in them, for example, or a set of emails pre-sorted into spam and non-spam. The network attempts to sort the data into the correct categories, checks its answers against the labels, and adjusts weights throughout the network to minimize the error between its answers and the labels. The network repeats this process many times, approaching a final steady state. The additional layers of nodes in a deep network allow it to produce more precise, fine-tuned predictions. In the context of BCI, a deep learning translator can be trained on neural activity labeled with a particular movement or other device output; the trained translator can then convert new neural signals into the corresponding appropriate movements.
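The training loop just described can be made concrete with a toy example: a small two-layer network learning to classify synthetic firing-rate vectors into one of four intended movement directions. Everything here—the data, the architecture, the sizes—is illustrative, not the design of any cited system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled dataset: 500 synthetic firing-rate vectors (96 channels), each
# labeled with one of 4 intended movement directions.
X = rng.normal(size=(500, 96))
labels = rng.integers(0, 4, size=500)
Y = np.eye(4)[labels]                      # one-hot targets

# Two layers of weights -- the "coefficients" described in the text.
W1 = rng.normal(scale=0.1, size=(96, 32))
W2 = rng.normal(scale=0.1, size=(32, 4))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                       # repeat: predict, check, adjust
    H = np.maximum(0, X @ W1)              # hidden nodes turn on or stay off (ReLU)
    P = softmax(H @ W2)                    # predicted class probabilities
    dZ2 = (P - Y) / len(X)                 # error between answers and labels
    dW2 = H.T @ dZ2                        # backpropagate the error...
    dW1 = X.T @ ((dZ2 @ W2.T) * (H > 0))
    W1 -= 0.5 * dW1                        # ...and adjust weights to shrink it
    W2 -= 0.5 * dW2

print(f"Training accuracy: {(P.argmax(axis=1) == labels).mean():.0%}")
```

On real calibration data, the labels would come from instructed or imagined movements rather than random draws, and the trained network would replace the linear filter in the translation step.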
Though deep learning translators are more accurate than linear filters (Thomas 2017), they raise considerable ethical issues. There is a poignant irony in using an enigmatic software neural network to decipher an enigmatic wetware neural network. With linear filters, operators know that each coefficient corresponds to a particular recorded neural signal; the algorithm is a fairly straightforward conversion from signal to output, and it is possible to take individual components of the algorithm’s output and trace them back to particular signals. This is not the case with deep learning, where the weights and nodes have no physical counterpart and are set with no operator input. Deep learning BCIs insert another “black box” into the already complex task of converting neural input into actionable output, which is troubling in terms of user accountability. Not only are the user’s intentions derived from a minuscule subset of their overall neural activity, but the algorithm that converts this activity to device action is also opaque and effectively impossible to explain. This makes it difficult to definitively ascribe blame to a user if the BCI device acts contrary to the user’s intentions.
There is an argument, however, that this ultimately doesn’t matter in terms of accountability (Holm 2011). A BCI, after all, is essentially a tool, and most people use tools like cars and computers every day without knowing exactly how they work. In the same way that people wouldn’t use a computer if it constantly crashed and wouldn’t drive a car if the steering wheel often locked, people wouldn’t use a BCI if it didn’t work the way the user intended the vast majority of the time. By choosing to use the device, the user accepts responsibility for it in much the same way a driver is responsible for their car, and rare occurrences of BCI malfunction can be treated the same way rare automobile malfunctions are treated. If someone drives their car into a building, most of the time it can be determined whether this was an accident due to unforeseen, uncontrollable mechanical error, an accident due to a person’s willful error (e.g. drunk driving), or a malicious act, based on the circumstances of the event, the occurrence of similar mechanical errors in other vehicles, and the driver’s motivations. Though it may be impossible to get a perfect analysis of exactly what caused the event, it is typically possible to determine whether the driver should be held accountable. The same logic can apply to accidents caused by BCI use or misuse.
The real difference between controlling a car and controlling a BCI, however, is that one is done with the body and the other is done with the mind. People have a lot of experience controlling and manipulating objects with their bodies; virtually everyone has done this every day since birth. Barring medical conditions like epilepsy and paralysis, people have near-complete control of their limbs and it is valid to assume that the movements a person makes with their body are those they intended to make. Conversely, people have no experience controlling objects with their minds. To a large extent, people do not have control over their thoughts; though it is of course possible to actively think about something in particular, thoughts often arise from the subconscious without willful intent, and some researchers have gone so far as to suggest that the vast majority or even all of our conscious thoughts are non-consciously generated (Oakley 2017). BCI sensors, however, likely have access to activity generated by these subconscious processes, and this activity may well contribute to driving the action of the external device. It is probably true that subconscious activity affects movement in able-bodied people’s limbs; however, most people seem to have the ability to override any unwanted effects and generally feel in complete control of their bodies’ movement. It is unclear whether this override ability is rapid enough to prevent unwanted subconscious activity from affecting device movement. Even if it is, the sheer amount and intensity of noise in the sensor’s recorded signal likely means that neural activity unrelated to movement will be fed to the decoder and translator, where it may be interpreted into movements that contradict the user’s intentions. Ought BCI users be held accountable for the translated outcome of their subconscious neural activity when they have no control over that activity?
Consideration should also be given to the questions raised earlier in this paper, at the end of the decoder discussion. Drivers may not have direct access to all of the parts of the car, but they have gauges and controls for monitoring the processes that keep the car running. BCI users, in contrast, are controlling their device with only a few hundred of the billions of neurons in their brain. Assuming there are roughly a billion neurons in the motor cortex, this is something like trying to navigate a car while looking through a 0.35mm square hole in a blacked-out windshield (for reference, a human hair is about 0.1mm thick). Though neurons are plastic and may, through training, adapt to control the device more precisely, the persistent foreign body response will continually alter the signals coming from the neurons, changing the output of the device; continual recalibration is necessary. Even with these regular updates, the data being fed to the device is such a small fraction of total brain activity—or even of total motor cortex activity—that it seems unreasonable to expect a BCI device to respond to all of its user’s intentions. In other words, the user’s full agency is not being captured; a user’s attempt to override the device or control it in a different way may originate in an area of the motor cortex that the device simply cannot access. Until a larger percentage of brain activity can be captured and recorded, it seems disingenuous to presume that a user should or will have full agency over their device—and without agency, it cannot be ethically sound to assign accountability to the user.
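The windshield figure follows directly from the recorded fraction of neurons. A quick check—the windshield dimensions are an assumption made here purely for illustration:

```python
import math

NEURONS_RECORDED = 340
NEURONS_MOTOR_CORTEX = 1e9        # rough estimate used in the text
WINDSHIELD_AREA_M2 = 0.6 * 0.6    # hypothetical windshield size, assumed here

fraction = NEURONS_RECORDED / NEURONS_MOTOR_CORTEX
hole_side_mm = math.sqrt(WINDSHIELD_AREA_M2 * fraction) * 1000
print(f"Visible hole: {hole_side_mm:.2f} mm per side")   # -> ~0.35 mm
```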
Conclusion
Brain-computer interfaces are a promising and exciting technology, but at present they are not developed enough to be used outside of strictly controlled clinical trials. Improvements to BCI technology should focus on increasing implant biocompatibility, expanding recording capability and sensitivity, further refining calibration techniques, establishing methods for continual device adaptation and learning throughout use, and reducing surgical risks. From a basic research standpoint, more knowledge about LFPs and about the impact of subconscious thought on neural recordings would help in building more efficient and accurate decoders. There is also substantial room for algorithmic improvement, both in decoding raw neural signals and in the calibration and translation processes. These improvements will help alleviate the ethical concerns related to agency, accountability, and safety that currently make BCIs untenable.
Figure 1: An overview of the components of a BCI system.
References
- Barrese JC et al. (2013). Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates. Journal of Neural Engineering 10(6), 066014.
- Bowsher K et al. (2016). Brain-computer interface devices for patients with paralysis and amputation: a meeting report. Journal of Neural Engineering 13, 023001.
- Campbell A & Wu C (2018). Chronically Implanted Intracranial Electrodes: Tissue Reaction and Electrical Changes. Micromachines 9(9), 430.
- Chaudhary U, Birbaumer N, & Ramos-Murguialday A (2016). Brain-computer interfaces for communication and rehabilitation. Nature Reviews Neurology 12, 513–25.
- Choi JR, Kim SM, Ryu RH, Kim SP, & Sohn JW (2018). Implantable Neural Probes for Brain-Machine Interfaces – Current Developments and Future Prospects. Experimental Neurobiology 27(6), 453–71.
- Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, McMorland AJ, Velliste M, Boninger ML, & Schwartz AB (2013). High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 381(9866), 557–64.
- Diggelmann R, Fiscella M, Hierlemann A, & Franke F (2018). Automatic spike sorting for high-density microelectrode arrays. Journal of Neurophysiology 120(6), 3155–71.
- Ferguson M, Sharma D, Ross D, & Zhao F (2019). A Critical Review of Microelectrode Arrays and Strategies for Improving Neural Interfaces. Advanced Healthcare Materials 1900558.
- Gilja V, Pandarinath C, Blabe CH, Nuyujukian P, Simeral JD, Sarma AA, Sorice BL, Perge JA, Jarosiewicz B, Hochberg LR, Shenoy KV, & Henderson JM (2015). Clinical translation of a high-performance neural prosthesis. Nature Medicine 21(10), 1142–5.
- Hochberg LR et al. (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–71.
- Hochberg LR et al. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–7.
- Holm S & Voo T (2011). Brain-Machine Interfaces and Personal Responsibility for Action – Maybe Not As Complicated After All. Studies in Ethics, Law, and Technology 4(3), 7.
- Homer ML, Nurmikko AV, Donoghue JP, & Hochberg LR (2014). Implants and decoding for intracortical brain computer interfaces. Annual Review of Biomedical Engineering 15, 383–405.
- Hwang EJ & Andersen RA (2013). The utility of multichannel LFP for brain-machine interfaces. Journal of Neural Engineering 10(4), 046005.
- Jackson A & Hall TM (2017). Decoding local field potentials for neural interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering 25(10), 1705–14.
- Kim SP et al. (2008). Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. Journal of Neural Engineering 5(4), 455–76.
- Klein E & Ojemann J (2016). Informed consent in implantable BCI research: identification of research risks and recommendations for development of best practices. Journal of Neural Engineering 13, 043001.
- McGie SC, Nagai MK, & Artinian-Shaheen T (2013). Clinical ethical concerns in the implantation of brain-machine interfaces. IEEE Pulse, doi: 10.1109/MPUL.2012.222881.
- Nordhausen CT, Maynard EM, & Normann RA (1996). Single unit recording capabilities of a 100-microelectrode array. Brain Research 726, 129–40.
- Oakley DA & Halligan PW (2017). Chasing the Rainbow: The Non-Conscious Nature of Being. Frontiers in Psychology 8, 1924.
- Rajangam S, Tseng PH, Yin A, Lehew G, Schwarz D, Lebedev MA, & Nicolelis MA (2016). Wireless cortical brain-machine interface for whole-body navigation in primates. Scientific Reports 6, 22170.
- Ryu SI & Shenoy KV (2009). Human cortical prostheses: lost in translation? Neurosurgical Focus 27(1), E5.
- Society for Neuroscience (2017, Nov 14). Engineering tomorrow's responsive, adaptable neuroprosthetics and robots [Press release]. Retrieved from https://www.eurekalert.org/pub_releases/2017-11/sfn-etr111017.php
- Stieglitz T, Rubehn B, Henle C, Kisban S, Herwik S, Ruther P, & Schuettler M (2009). Brain-computer interfaces: an overview of the hardware to record neural signals from the cortex. Progress in Brain Research 175, 297–315.
- Thomas J, Maszczyk T, Sinha N, Kluge T, & Dauwels J (2017). Deep Learning-based Classification for Brain-Computer Interfaces. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 234–9.
- Vlek R, van Acken JP, Beursken E, Roijendijk L, & Haselager P (2014). BCI and a User's Judgment of Agency. In Grübler G & Hildt E (eds), Brain-Computer Interfaces in Their Ethical, Social, and Cultural Contexts. The International Library of Ethics, Law and Technology 12.

