eLife. 2017 Feb 21;6:e18554. doi: 10.7554/eLife.18554

High performance communication by people with paralysis using an intracortical brain-computer interface

Chethan Pandarinath 1,2,3,4,5, Paul Nuyujukian 1,3,6,7, Christine H Blabe 1, Brittany L Sorice 8, Jad Saab 9,10,11, Francis R Willett 12,13, Leigh R Hochberg 8,9,10,11,14, Krishna V Shenoy 2,3,6,15,16,17,*, Jaimie M Henderson 1,3,*
Editor: Sabine Kastner
PMCID: PMC5319839  PMID: 28220753

Abstract

Brain-computer interfaces (BCIs) have the potential to restore communication for people with tetraplegia and anarthria by translating neural activity into control signals for assistive communication devices. While previous pre-clinical and clinical studies have demonstrated promising proofs-of-concept (Serruya et al., 2002; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015; Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012; Leuthardt et al., 2004; Taylor et al., 2002; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013; Townsend and Platsko, 2016; Vansteensel et al., 2016; Nuyujukian et al., 2016; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012), the performance of human clinical BCI systems is not yet high enough to support widespread adoption by people with physical limitations of speech. Here we report a high-performance intracortical BCI (iBCI) for communication, which was tested by three clinical trial participants with paralysis. The system leveraged advances in decoder design developed in prior pre-clinical and clinical studies (Gilja et al., 2015; Kao et al., 2016; Gilja et al., 2012). For all three participants, performance exceeded previous iBCIs (Bacher et al., 2015; Jarosiewicz et al., 2015) as measured by typing rate (by a factor of 1.4–4.2) and information throughput (by a factor of 2.2–4.0). This high level of performance demonstrates the potential utility of iBCIs as powerful assistive communication devices for people with limited motor function.

Clinical Trial No: NCT00912041

DOI: http://dx.doi.org/10.7554/eLife.18554.001

Research Organism: Human

eLife digest

People with various forms of paralysis not only have difficulty getting around, but are also less able to use many communication technologies, including computers. In particular, strokes, neurological injuries, or diseases such as ALS can lead to severe paralysis and make it very difficult to communicate. In rare instances, these disorders can result in a condition called locked-in syndrome, in which the affected person is aware but completely unable to move or speak.

Several researchers are looking to help people with severe paralysis to communicate again, via a system called a brain-computer interface. These devices record activity in the brain either from the surface of the scalp or directly using a sensor that is surgically implanted. Computers then interpret this activity via algorithms to generate signals that can control various tools, including robotic limbs, powered wheelchairs or computer cursors. Such tools would be invaluable for many people with paralysis.

Pandarinath, Nuyujukian et al. set out to study the performance of an implanted brain-computer interface in three people with varying forms of paralysis and focused specifically on a typing task. Each participant used a brain-computer interface known as “BrainGate” to move a cursor on a computer screen displaying the letters of the alphabet. The participants were asked to “point and click” on letters – similar to using a normal computer mouse – to type specific sentences, and their typing rate in words per minute was measured. With recently developed computer algorithms, the participants typed faster using the brain-computer interface than anyone with paralysis has ever managed before. Indeed, the highest performing participant could, on average, type nearly 8 words per minute.

The next steps are to adapt the system so that brain-computer interfaces can control commercial computers, phones and tablets. These devices are widely available, and would allow paralyzed users to take advantage of a range of applications that can be easily downloaded and customized. This development might enable brain-computer interfaces to not only allow people with neurological disorders to communicate, but also assist other people with paralysis in a number of ways.

DOI: http://dx.doi.org/10.7554/eLife.18554.002

Introduction

Communication is an important aspect of everyday life, achieved through diverse methods such as conversing, writing, and using computer interfaces that increasingly provide an important means to interact with others through channels such as e-mail and text messaging. However, the ability to communicate is often limited by conditions such as stroke, amyotrophic lateral sclerosis (ALS), or other injuries or neurologic disorders which can cause paralysis by damaging the neural pathways that connect the brain to the rest of the body. BCIs offer a potential solution to restore communication by harnessing intact neural signals. Many candidate BCIs have been developed for this purpose, including those based on electroencephalography (Wolpaw et al., 1998; Hwang et al., 2012; Spüler et al., 2012), electrocorticography (Leuthardt et al., 2004; Schalk et al., 2008; Moran, 2010; Brunner et al., 2011; Wang et al., 2013), and intracortical electrical signals (Serruya et al., 2002; Taylor et al., 2002; Carmena et al., 2003; Musallam et al., 2004; Santhanam et al., 2006; Hochberg et al., 2006; Ganguly et al., 2011; O’Doherty et al., 2011; Gilja et al., 2012; Simeral et al., 2011; Bacher et al., 2015; Nuyujukian et al., 2015; Aflalo et al., 2015; Gilja et al., 2015; Jarosiewicz et al., 2015). Intracortical BCIs (iBCIs), for the purposes of communication in particular, have shown promise in pilot clinical studies (Bacher et al., 2015; Jarosiewicz et al., 2015). However, iBCIs have not yet reached a level of performance that would support widespread adoption by people with motor impairments that interfere with communication. Further, it is unclear whether current BCI approaches can support high performance during cognitively demanding tasks, such as communicating text.

We recently developed a high-performance iBCI for communication. The BCI provided point-and-click control of a computer cursor (illustrated in Figure 1a). Briefly, neural signals (action potentials and high-frequency local field potentials [Gilja et al., 2012, 2015]) were recorded from motor cortex using intracortical microelectrode arrays. These signals were then translated into point-and-click commands using two algorithms developed through prior pre-clinical and clinical research: the ReFIT Kalman Filter for continuous two-dimensional cursor control (Gilja et al., 2012, 2015), and a Hidden Markov Model (HMM)-based state classifier for discrete selection (‘click’) (Kao et al., 2016). To evaluate this interface, we used two approaches: one that represents day-to-day communication use, and one that more rigorously quantifies performance.
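To make the decoding pipeline concrete, the sketch below illustrates, in simplified form, how a steady-state velocity filter and an HMM-based state classifier can be combined in a single point-and-click decode loop. This is an illustrative sketch only: the feature dimension, bin width, matrices (M1, M2, w_click), transition probabilities, and thresholds are hypothetical placeholders, not the fitted parameters or exact formulation used in the study.

```python
# Minimal sketch of a point-and-click iBCI decoding loop (illustration only;
# all parameters below are hypothetical, not the study's fitted values).
import numpy as np

rng = np.random.default_rng(0)
n_features = 192          # e.g., 96 channels x (threshold crossings + HF-LFP power)
dt = 0.015                # assumed 15 ms bins

# Steady-state velocity filter: v_t = M1 @ v_{t-1} + M2 @ z_t
M1 = 0.9 * np.eye(2)                                # smoothing of previous velocity
M2 = rng.normal(scale=0.01, size=(2, n_features))   # mapping from features to velocity

# Two-state HMM (move vs. click) with Gaussian emissions on a 1-D projection
w_click = rng.normal(size=n_features)               # click-discriminating projection
trans = np.array([[0.99, 0.01],                     # state transition probabilities
                  [0.05, 0.95]])
means, stds = np.array([0.0, 3.0]), np.array([1.0, 1.0])

pos, vel = np.zeros(2), np.zeros(2)
belief = np.array([0.99, 0.01])                     # P(move), P(click)

for t in range(200):                                # one simulated run of bins
    z = rng.normal(size=n_features)                 # stand-in for binned neural features
    # Continuous control: update velocity and integrate to move the cursor
    vel = M1 @ vel + M2 @ z
    pos = pos + vel * dt
    # Discrete selection: HMM forward step on the click projection
    x = w_click @ z / np.sqrt(n_features)
    lik = np.exp(-0.5 * ((x - means) / stds) ** 2) / stds
    belief = (trans.T @ belief) * lik
    belief /= belief.sum()
    if belief[1] > 0.95:                            # click when the click state is confident
        print(f"bin {t}: click at cursor position {pos.round(2)}")
        belief = np.array([0.99, 0.01])             # reset after a selection
```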

Figure 1. Experimental setup and typing rates during free-paced question and answer sessions.

(a) Electrical activity was recorded using 96-channel silicon microelectrode arrays implanted in the hand area of motor cortex. Signals were filtered to extract multiunit spiking activity and high frequency field potentials, which were decoded to provide ‘point-and-click’ control of a computer cursor. (b) Performance achieved by participant T6 over the three days that question and answer sessions were conducted. The width of each black bar represents the duration of that particular block. The black bands along the gray bar just below the black blocks denote filter calibration times. The average typing rate across all blocks was 24.4 ± 3.3 correct characters per minute (mean ± s.d.). Video 1 shows an example of T6’s free typing. The filter calibration and assessment stages that preceded these typing blocks are detailed in Figure 1—figure supplement 3.

DOI: http://dx.doi.org/10.7554/eLife.18554.003

Figure 1.

Figure 1—figure supplement 1. Participant T6’s typed responses during the question and answer sessions.

Figure 1—figure supplement 1.

During these sessions, T6 was prompted with questions and formulated responses de novo. She could engage and disengage the interface using a play/pause button (see Video 1). In these sessions, two successive spaces resulted in the insertion of the ‘.’ character in place of the first space. Consent was obtained to release T6’s typed text.

Figure 1—figure supplement 2. Participant T6’s character selection during the question and answer sessions.

Figure 1—figure supplement 2.

Same as previous table, but also includes all errors and backspace characters entered.

Figure 1—figure supplement 3. Filter calibration, assessment, and typing blocks for the ‘free typing’ sessions performed with participant T6.

Figure 1—figure supplement 3.

This expanded figure shows the free typing data presented in Figure 1, as well as the filter calibration and assessment stages that preceded the collection of those data. These sessions were not optimized for filter calibration; rather, performance was achieved and maintained by running evaluation or recalibration blocks at the participant's or researchers' discretion. The black bands along the gray bar just above the x-axis denote filter calibration time. Red bars denote evaluation blocks or blocks that T6 stopped early. Blue bars denote uninterrupted blocks of free typing.

Results

An important real-world application for a communication interface is typing messages in a conversation. We tested whether the BCI could support such an application with T6, a participant in the BrainGate2 pilot clinical trial (http://www.clinicaltrials.gov/ct2/show/NCT00912041). T6 is a 51-year-old woman who was diagnosed with ALS (see Materials and methods: Participants). In these ‘free typing’ sessions, to simulate use of the BCI in a typical conversation, T6 was prompted with questions and asked to formulate responses at her own pace. Once presented with a question, she was able to think about her answer, move the cursor and click on a button at the bottom right corner of the screen to enable the keyboard, and then type her response (detailed in Materials and methods: Free typing task). T6 typed her responses using an optimized keyboard layout (OPTI-II) (Rick, 2010), in which characters are arranged to minimize the travel distance of the cursor while typing English text. T6's mean free typing rate over three days of testing, spanning 96 min of typing, was 24.4 ± 3.3 correct characters per minute (ccpm). (Figure 1b; an example free typing video is included as Video 1; Figure 1—figure supplements 1 and 2 list the questions and typed answers from all free typing blocks, and Figure 1—figure supplement 3 details the filter calibration and assessment stages that preceded the free typing blocks.)

Video 1. Example of participant T6’s free-paced, free choice typing using the OPTI-II keyboard.

Download video file (25.8MB, mp4)
DOI: 10.7554/eLife.18554.007

T6 was prompted with questions and asked to formulate an answer de novo. Once presented with a question, she was able to think about her answer, move the cursor and click on the play button to enable the keyboard (bottom right corner), and then type her response. In this example, the participant typed 255 characters in ~9 min, at just over 27 correct characters per minute. One of two audible ‘beeps’ followed a target selection, corresponding to the two possible selection methods: T6 could select targets using either the Hidden Markov Model-based ‘click’ selection (high-pitched noises) or by ‘dwelling’ in the target region for 1 s (low-pitched noises). The plot at the bottom of the video tracks the typing performance (correct characters per minute) with respect to time in the block. Performance was smoothed using a 30 s symmetric Hamming window. The scrolling yellow bar indicates the current time of that frame. During the free typing task, T6 was asked to suppress her hand movements as best as possible. (During the quantitative performance evaluations, T6 was free to make movements as she wished.) This video is from participant T6, Day 621, Block 17. Additional ‘free typing’ examples for T6 are detailed in Figure 1—figure supplements 1 and 2.

DOI: http://dx.doi.org/10.7554/eLife.18554.007
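For readers curious how the smoothed performance trace at the bottom of Video 1 might be produced, the following is a minimal sketch of smoothing an instantaneous typing-rate signal with a 30 s symmetric Hamming window. The sampling rate and the synthetic selection events are assumptions for illustration; only the window choice comes from the legend above.

```python
# Sketch of 30 s symmetric Hamming-window smoothing of a typing-rate trace
# (illustrative; sampling rate and input data are assumed, not the study's).
import numpy as np

fs = 10.0                                   # samples per second (assumed)
t = np.arange(0, 9 * 60, 1 / fs)            # ~9 min block
selections = np.random.default_rng(1).random(t.size) < 0.05   # fake selection events

# Instantaneous rate in characters per minute, then smooth with a 30 s window
inst_rate = selections * fs * 60.0
win = np.hamming(int(30 * fs))
win /= win.sum()                            # normalize so the output stays in cpm
smoothed = np.convolve(inst_rate, win, mode="same")
```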

These free typing sessions demonstrated, in a realistic use case, what to our knowledge is the highest typing rate to date by a person with a physical disability using a BCI. However, in the human-computer interface literature, typing speeds are conventionally measured not with a free typing task, but rather with a ‘copy typing’ assessment, in which a subject is asked to type pre-determined phrases (reviewed in MacKenzie and Soukoreff, 2002). We performed such copy typing assessments with three participants: T6, T5 (a 63-year-old man with tetraplegia due to spinal cord injury), and T7 (a 54-year-old man, also diagnosed with ALS). Each research session followed a rigorous protocol that aimed to measure peak performance rather than robustness (detailed in Materials and methods: Quantitative performance evaluation and Figure 2—figure supplements 1 and 2). Participants were asked to type one of seven sentences (Figure 2—figure supplement 3), which were prompted on the screen. Performance was quantified by the number of correct characters typed within each two-minute evaluation block. T6's and T5's performance was assessed using the OPTI-II layout described above as well as a conventional QWERTY layout (Figure 2a,b). For participant T7, who had minimal previous typing experience, the QWERTY keyboard was replaced by an alternative layout (ABCDEF; Figure 2c), which had the same geometry but with letters arranged in alphabetical order. Figure 2d shows examples of prompted and typed text for each participant. We performed five days of testing with T6 (Figure 2e; 21 typing evaluation blocks for each keyboard), two days of testing with T5 (Figure 2f; 14 typing evaluation blocks for each keyboard), and two days of testing with T7 (Figure 2g; 5–6 typing blocks for each keyboard). Example videos that demonstrate cued typing for all participants are included as Videos 2–7. T6's average performance using the QWERTY keyboard was 23.9 ± 6.5 correct characters per minute (ccpm; mean ± s.d.), and her average performance using the OPTI-II keyboard was 31.6 ± 8.7 ccpm, 1.3 times faster than her performance with the QWERTY layout. Participant T5 averaged 36.1 ± 0.9 and 39.2 ± 1.2 ccpm for the QWERTY and OPTI-II keyboards, respectively. Participant T7 averaged 13.5 ± 1.9 and 12.3 ± 4.9 ccpm for the ABCDEF and OPTI-II keyboards, respectively. These results represent a 3.4x (T6, OPTI-II), 4.2x (T5, OPTI-II), and 1.4x (T7, ABCDEF) increase over the previous highest-performing BCI report that did not include word completion (9.4 ccpm [Bacher et al., 2015]; p<0.01 for all three participants, single-sided Mann-Whitney U tests). Adding word completion or prediction should further increase the effective typing rates.
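The speedup factors quoted above follow directly from the reported mean typing rates relative to the 9.4 ccpm benchmark; a quick arithmetic check:

```python
# Quick check of the reported speedup factors over the 9.4 ccpm benchmark
# (Bacher et al., 2015), using the mean rates reported above.
baseline = 9.4
for participant, ccpm in [("T6, OPTI-II", 31.6), ("T5, OPTI-II", 39.2), ("T7, ABCDEF", 13.5)]:
    print(f"{participant}: {ccpm / baseline:.1f}x")   # 3.4x, 4.2x, 1.4x
```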

Video 2. Example of participant T6’s ‘copy typing’ using the OPTI-II keyboard.

Download video file (6.6MB, mp4)
DOI: 10.7554/eLife.18554.012

In the copy typing task, participants were presented with a phrase and asked to type as many characters as possible within a two-minute block. T6 preferred that the cursor remain under her control throughout the task – i.e., no re-centering of the cursor occurred after a selection. This video is from participant T6, Day 588, Blockset 2. Performance in this block was 40.4 ccpm.

DOI: http://dx.doi.org/10.7554/eLife.18554.012

Video 3. Example of participant T6’s ‘copy typing’ using the QWERTY keyboard.

Download video file (6.5MB, mp4)
DOI: 10.7554/eLife.18554.013

Same as Video 2, but using the QWERTY keyboard layout. This video is from participant T6, Day 588, Blockset 4. Performance in this block was 30.6 ccpm.

DOI: http://dx.doi.org/10.7554/eLife.18554.013

Video 4. Example of participant T5’s ‘copy typing’ using the OPTI-II keyboard.

Download video file (9.6MB, mp4)
DOI: 10.7554/eLife.18554.014

Same as Video 2, but for participant T5. This video is from participant T5, Day 68, Blockset 4. Performance in this block was 40.5 ccpm.

DOI: http://dx.doi.org/10.7554/eLife.18554.014

Video 5. Example of participant T5’s ‘copy typing’ using the QWERTY keyboard.

Download video file (11.2MB, mp4)
DOI: 10.7554/eLife.18554.015

Same as Video 4, but using the QWERTY keyboard layout. This video is from participant T5, Day 68, Blockset 2. Performance in this block was 38.6 ccpm.

DOI: http://dx.doi.org/10.7554/eLife.18554.015

Video 6. Example of participant T7’s ‘copy typing’ using the OPTI-II keyboard.

Download video file (4.8MB, mp4)
DOI: 10.7554/eLife.18554.016

Same as Video 2, but for participant T7. T7 selected letters by dwelling on targets only. In addition, T7 preferred that the cursor re-center after every selection (i.e., following a correct or an incorrect selection). These across-participant differences are detailed in Materials and methods: Quantitative performance evaluations (under ‘Target selection and cursor re-centering’). This video is from participant T7, Day 539, Blockset 3. Performance in this block was 10.6 ccpm.

DOI: http://dx.doi.org/10.7554/eLife.18554.016

Video 7. Example of participant T7’s ‘copy typing’ using the ABCDEF keyboard.

Download video file (4.3MB, mp4)
DOI: 10.7554/eLife.18554.017

Same as Video 6, but using the ABCDEF keyboard layout. This video is from participant T7, Day 539, Blockset 1. Performance in this block was 16.5 ccpm.

DOI: http://dx.doi.org/10.7554/eLife.18554.017

Figure 2. Performance in copy typing tasks.

(a) Layout for the OPTI-II keyboard. (b) Layout for the QWERTY keyboard. (c) Layout for the ABCDEF keyboard. (d) Examples of text typed during three copy typing evaluations with participants T6, T5, and T7. Each example shows the prompted text, followed by the characters typed within the first minute of the two-minute evaluation block. The width of the box surrounding each character denotes the time it took to select the character. The ‘<’ character denotes selection of a backspace key. Colored symbols on the left correspond to blocks denoted in lower plots. (e) Performance in the copy typing task with the QWERTY (blue) and OPTI-II (black) keyboards across 5 days for participant T6. QWERTY performance was 23.9 ± 6.5 correct characters per minute (ccpm; mean ± s.d.), while OPTI-II performance was 31.6 ± 8.7 ccpm. X-axis denotes number of days since array was implanted. (f) Performance in the copy typing task with the QWERTY (blue) and OPTI-II (black) keyboards across 2 days for participant T5. Average performance was 36.1 ± 0.9 and 39.2 ± 1.2 ccpm for the QWERTY and OPTI-II keyboards, respectively. (g) Performance in the copy typing task with the ABCDEF (blue) and OPTI-II (black) keyboards across 2 days for participant T7. Average performance was 13.5 ± 1.9 and 12.3 ± 4.9 ccpm for the ABCDEF and OPTI-II keyboards, respectively. *Participant T7 did not use an HMM for selection.

DOI: http://dx.doi.org/10.7554/eLife.18554.008

Figure 2.

Figure 2—figure supplement 1. Data collection protocol for quantitative performance evaluation sessions.

Figure 2—figure supplement 1.

Each research session followed a strict calibration and data collection protocol. This flow diagram shows the research protocol for T6; T5 and T7's protocols were very similar. T6 started with a glove-controlled movement calibration block (112 trials, 100 pixel target diameter, 750 ms hold time). T5 and T7 did not use datagloves; instead, that calibration block was substituted with one in which the cursor moved automatically while they attempted to move along with it (‘open-loop’ calibration). This was followed by HMM click decoder calibration for T6. This step was omitted for T7, as no HMM was used for sessions with him, but the cursor decoder was recalibrated at this stage. T5 did not need this step either, as the HMM was trained on the same block as the cursor movement decoder. If the output of these two calibration steps resulted in a controllable cursor, then the data blocksets were started. Here, the decoders were recalibrated again based on a closed-loop control block, and data was collected under a strict timing protocol (see Figure 2—figure supplement 2). Blocksets were repeated as time allowed, at the discretion of the participant. Once a blockset was started for a particular research session, starting over with initial calibration was prohibited.

Figure 2—figure supplement 2. Example of the blockset structure for quantitative performance evaluation sessions.

Figure 2—figure supplement 2.

Each blockset began with a 3 min calibration or bias calculation block (purple). Breaks (green) were interspersed throughout the data collection. Each blockset consisted of three evaluation blocks (orange), two minutes each, that tested either copy typing or grid performance. Copy typing performance was evaluated with two keyboards (T6 and T5: QWERTY and OPTI-II; T7: ABCDEF and OPTI-II). Evaluation blocks were presented in a pseudo-randomized order (detailed in Materials and methods: Quantitative performance evaluations). In this example, T6 was first evaluated on a grid block, followed by a QWERTY cued typing block, followed by an OPTI-II cued typing block.

Figure 2—figure supplement 3. Sentences used to evaluate performance in copy typing tasks.

Figure 2—figure supplement 3.

Seven sentences were chosen for the copy typing task (presented in Figure 2). Sentence 1 is an English-language pangram, i.e., it contains all the letters of the English alphabet, and is traditionally used to evaluate typing performance (e.g. Silfverberg et al., 2000). Sentences 2–4 are common English phrases that were easy for the participants to remember. Sentence 5 is the beginning of the ‘Rainbow Passage,’ commonly used in the speech pathology field to evaluate speech quality/deficits ([Fairbanks, 1960], pp. 124–139). Sentences 6 and 7 are conversational and were chosen to simulate the types of phrases an assistive communication device might be used to type in a conversation.

A limitation of the ‘copy typing’ task is that the performance measurement is affected by the degree of difficulty of each phrase given the specific keyboard being used, as well as the participant's familiarity with the keyboard layouts (e.g., both T5 and T7 had much less familiarity with the keyboard layouts than T6). To explicitly quantify the information throughput of the BCI itself (independent of a phrase or keyboard layout), performance was also measured using a cued-target acquisition task (‘grid task' [Hochberg et al., 2006; Nuyujukian et al., 2015]), in which square targets were arranged in a 6 × 6 grid, and a randomly selected target was cued on each trial. Performance was quantified using ‘achieved bitrate’ (detailed in Nuyujukian et al. (2015) and Materials and methods: Achieved bitrate), which is a conservative measure used to quantify the total amount of information conveyed by the BCI. Briefly, the number of bits transmitted is the net number of correct ‘symbols’ multiplied by log2(N - 1), where N is the total number of targets. The net number of correct symbols is taken as the total number of correct selections minus the total number of incorrect selections, i.e., each incorrect selection requires an additional correct selection to compensate (analogous to having to select a keyboard’s backspace key). For example, on an eight-target task, if the net rate of correct target selections (after compensating for incorrect selections) were one target / s, the achieved bitrate would be 2.8 bits / s.
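In code, the achieved bitrate described above can be computed as follows. This is a minimal sketch; the function name and arguments are illustrative, but the formula follows the definition given in the text and in Nuyujukian et al. (2015).

```python
# Achieved bitrate as described above: bits = max(correct - incorrect, 0) * log2(N - 1),
# divided by the elapsed time.
import math

def achieved_bitrate(n_correct, n_incorrect, n_targets, seconds):
    """Conservative throughput: each error must be undone by a correct selection."""
    net_correct = max(n_correct - n_incorrect, 0)
    return net_correct * math.log2(n_targets - 1) / seconds

# Example from the text: 8 targets, net one correct selection per second
print(achieved_bitrate(n_correct=60, n_incorrect=0, n_targets=8, seconds=60))  # ~2.8 bits/s
```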

Over 5 days of testing with T6 (Figure 3a; 21 grid evaluation blocks), 4 days of testing with T5 (Figure 3b; 29 grid evaluation blocks), and 2 days of testing with T7 (Figure 3c; six grid evaluation blocks), average performance was 2.2 ± 0.4 bits per second (bps; mean ± s.d.), 3.7 ± 0.4 bps, and 1.4 ± 0.1 bps, respectively. This is a substantial increase over the previous highest achieved bitrates for people with motor impairment using a BCI (Table 1), which were achieved by two of the same participants in an earlier BrainGate study (T6: 0.93 bits / s, T7: 0.64 bits / s, from Jarosiewicz et al. (2015); p<0.01 for both participants, single-sided Mann-Whitney U tests). For T6 and T7, who participated in the previous study, performance of the current methods represents a factor of 2.4 (T6) and 2.2 (T7) increase. For T5, the current performance represents a factor of 4.0 increase over the highest performing participant in the previous study. (The previous study measured performance using a free typing task, which includes the cognitive load of word formation [Jarosiewicz et al., 2015]. However, the effects of cognitive load in the current study (i.e., comparing T6’s free typing vs. copy typing) only accounted for a ~30% performance difference, rather than the 2–4x performance difference between studies. Thus, cognitive load is unlikely to account for the differences in performance.) The performance increase over previous work is unlikely to be due to experience with BCI, as participants in the current study had a similar range of experience using the BCI as those in comparable studies (Table 2). Example videos that demonstrate the grid task for all participants are included as Videos 8–11. In addition, comparisons of the HMM’s performance against the previous highest-performing approach for discrete selection are presented in Figure 3—figure supplement 1. We performed additional grid measurements with T5 in which targets were arranged in a denser grid (9 × 9). This task allows the possibility of higher bitrates than the 6 × 6 grid used above, with the tradeoff that selecting these smaller targets requires higher control fidelity. Across two days of testing with T5 (Figure 3—figure supplement 2 and Video 10; 8 evaluation blocks), average performance was 4.16 ± 0.39 bps, which was significantly greater than the 6 × 6 performance (p<0.01, Student’s t test) and represents, to our knowledge, the highest documented BCI communication rate for a person with motor impairment.
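For reference, block-wise comparisons of the kind reported above can be carried out with a single-sided Mann-Whitney U test, as sketched below. The per-block bitrate arrays shown are placeholder values for illustration, not the study's measurements.

```python
# Sketch of a single-sided Mann-Whitney U comparison between per-block bitrates
# (placeholder data; not the study's measurements).
import numpy as np
from scipy.stats import mannwhitneyu

current_blocks = np.array([2.1, 2.3, 2.0, 2.4, 2.2])    # hypothetical bitrates (bps)
previous_blocks = np.array([0.90, 1.00, 0.85, 0.95])     # hypothetical bitrates (bps)

stat, p = mannwhitneyu(current_blocks, previous_blocks, alternative="greater")
print(f"U = {stat}, one-sided p = {p:.4f}")
```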

Video 8. Example of participant T6’s performance in the grid task.

Download video file (5.7MB, mp4)
DOI: 10.7554/eLife.18554.022

This video is from participant T6, Day 588, Blockset 3. Performance in this block was 2.65 bps.

DOI: http://dx.doi.org/10.7554/eLife.18554.022

Video 9. Example of participant T5’s performance in the grid task.

Download video file (19.7MB, mp4)
DOI: 10.7554/eLife.18554.023

This video is from participant T5, Day 56, Blockset 4 (Block 28). Performance in this block was 4.01 bps.

DOI: http://dx.doi.org/10.7554/eLife.18554.023

Video 10. Example of participant T5’s performance in the dense grid task (9 × 9).

Download video file (10.6MB, mp4)
DOI: 10.7554/eLife.18554.025

This video is from participant T5, Day 56, Blockset 4 (Block 30). Performance in this block was 4.36 bps.

DOI: http://dx.doi.org/10.7554/eLife.18554.025

Video 11. Example of participant T7’s performance in the grid task.

Download video file (4.3MB, mp4)
DOI: 10.7554/eLife.18554.026

This video is from participant T7, Day 539, Blockset 2. Performance in this block was 1.57 bps.

DOI: http://dx.doi.org/10.7554/eLife.18554.026

Figure 3. Information throughput in the grid task.

(a) Performance in the grid task across 5 days for participant T6. T6 averaged 2.2 ± 0.4 bits per second (mean ± s.d.). (b) Performance in the grid task across 4 days for participant T5. T5 averaged 3.7 ± 0.4 bits per second. (c) Performance in the grid task across 2 days for participant T7. T7 averaged 1.4 ± 0.1 bits per second. X-axis denotes number of days since array was implanted. *Participant T7 did not use an HMM for selection.

DOI: http://dx.doi.org/10.7554/eLife.18554.018

Figure 3.

Figure 3—figure supplement 1. Performance of the HMM-based classifier during grid tasks with participants T6 and T5.

Figure 3—figure supplement 1.

This figure replicates the analysis from Kim et al. (2011) (their Figure 5), which demonstrated the previous best discrete selection algorithm for a communication BCI. (a) Performance of the HMM for participant T6. The panel on the left shows all the clicks that occurred in the grid task across 5 days of quantitative performance evaluations. Each dot plots the position of a click relative to the center of the cued target (blue - correct click, red - false click), and the green square shows the size of a single grid target. The right panel shows a histogram of the distance between click location and the center of the target for all clicks. T6 clicked on the correct target 92.6% of the time (1007/1087 total clicks). (b) Performance of the HMM for participant T5 in the grid task over 4 days of quantitative performance evaluations. T5 clicked on the correct target 97.7% of the time (2325/2379 total clicks). As shown, for both participants, when incorrect clicks did occur, they primarily occurred close to the edge of the desired target.

Figure 3—figure supplement 2. Information throughput for participant T5 when using a dense grid.

Figure 3—figure supplement 2.

Performance in the grid task for participant T5 when measured with a denser grid (9 × 9, as opposed to the 6 × 6 grid used for all participants for the data shown in Figure 3). An example of T5’s performance in the 9 × 9 grid is shown in Video 10.

Table 1.

Survey of BCI studies that measure typing rates (correct characters per minute; ccpm), bitrates, or information transfer rates for people with motor impairment. Number ranges represent performance measurements across all participants for a given study. Communication rates could be further increased by external algorithms such as word prediction or completion. As there are many such algorithms, the current work excluded word prediction or completion to focus on measuring the performance of the underlying system. The most appropriate points of comparison, when available, are bitrates, which are independent of word prediction or completion algorithms. Similarly, information transfer rates are also a meaningful point of comparison, though they are less reflective of practical communication rates than bitrate (which takes into account the need to correct errors; detailed in Nuyujukian et al. (2015); Townsend et al. (2010)). For the current work, and for Jarosiewicz et al. (2015), we also break down performance by individual participant to facilitate direct comparisons (denoted by italics). As shown, performance in the current study exceeds that of all previous BCIs tested with people with motor impairment. *These numbers represent performance when measured using a denser grid (9 × 9; Figure 3—figure supplement 2 and Video 10). **For this study, reported typing rates included word prediction / completion algorithms. ***Number range represents the range of performance reported for the single study participant. ****Other reported numbers included word prediction / completion algorithms. †Acronyms used: ReFIT-KF: Recalibrated Feedback Intention-trained Kalman Filter. HMM: Hidden Markov Model. CLC: Closed-loop Calibration. LDA: Linear Discriminant Analysis. RTI: Retrospective Target Inference. DS: Dynamic Stopping.

DOI: http://dx.doi.org/10.7554/eLife.18554.021

Study | Participant(s) | Recording modality | Control modality | Etiology of motor impairment | Average typing rate (ccpm) | Average bitrate (bps) | Average ITR (bps)
This study | average (N = 3) | intracortical | ReFIT-KF+HMM | ALS (2), SCI (1) | 28.1 | 2.4 | 2.4
‘‘ | T6 | ‘‘ | ‘‘ | ALS | 31.6 | 2.2 | 2.2
‘‘ | T5 | ‘‘ | ‘‘ | SCI | 39.2 | 3.7 | 3.7
‘‘ | ‘‘ | ‘‘ | ‘‘ | ‘‘ | - | 4.2* | 4.2*
‘‘ | T7 (No HMM) | ‘‘ | ‘‘ | ALS | 13.5 | 1.4 | 1.4
Bacher et al., 2015 | S3 | intracortical | CLC+LDA | brainstem stroke | 9.4 | - | -
Jarosiewicz et al., 2015 | average (N = 4) | intracortical | RTI+LDA | ALS (2), brainstem stroke (2) | n/a** | 0.59 | -
‘‘ | T6 | ‘‘ | ‘‘ | ALS | ‘‘ | 0.93 | -
‘‘ | T7 | ‘‘ | ‘‘ | ALS | ‘‘ | 0.64 | -
‘‘ | S3 | ‘‘ | ‘‘ | brainstem stroke | ‘‘ | 0.58 | -
‘‘ | T2 | ‘‘ | ‘‘ | brainstem stroke | ‘‘ | 0.19 | -
Nijboer et al., 2008 | N = 4 | EEG | P300 | ALS | 1.5–4.1 | - | 0.08–0.32
Townsend et al., 2010 | N = 3 | EEG | P300 | ALS | - | 0.05–0.22 | -
Münßinger et al., 2010 | N = 3 | EEG | P300 | ALS | - | - | 0.02–0.12
Mugler et al., 2010 | N = 3 | EEG | P300 | ALS | - | - | 0.07–0.08
Pires et al., 2011 | N = 4 | EEG | P300 | ALS (2), cerebral palsy (2) | - | - | 0.24–0.32
Pires et al., 2012 | N = 14 | EEG | P300 | ALS (7), cerebral palsy (5), Duchenne muscular dystrophy (1), spinal cord injury (1) | - | - | 0.05–0.43
Sellers et al., 2014 | N = 1 | EEG | P300 | brainstem stroke | 0.31–0.93*** | - | -
McCane et al., 2015 | N = 14 | EEG | P300 | ALS | - | - | 0.19
Mainsah et al., 2015 | N = 10 | EEG | P300-DS | ALS | - | - | 0.01–0.60
Vansteensel et al., 2016 | N = 1 | subdural ECoG | Linear Classifier | ALS | 1.15**** | - | 0.21

Table 2.

Participants’ prior BCI experience and training for studies considered in Table 1. The experience column details the number of participants in the respective study that had prior experience with BCIs at the time of the study and, if reported, the duration of that prior experience and/or training.

DOI: http://dx.doi.org/10.7554/eLife.18554.024

Study | Participant(s) | BCI experience/training
This study | average (N = 3) | 1 year
‘‘ | T6 | 1.5 years
‘‘ | T5 | 9 prior sessions (1 month)
‘‘ | T7 | 1.5 years
Bacher et al., 2015 | S3 | 4.3 years
Jarosiewicz et al., 2015 | average (N = 4) | 2 years
‘‘ | T6 | 10 months to 2.3 years
‘‘ | T7 | 5.5 months to 1.2 years
‘‘ | S3 | 5.2 years
‘‘ | T2 | 4.6 months
Nijboer et al., 2008 | N = 4 | At least 4–10 months
Townsend et al., 2010 | N = 3 | All had prior P300 BCIs at home; two had at least 2.5 years with BCIs
Münßinger et al., 2010 | N = 3 | Two of three had prior experience; training not reported
Mugler et al., 2010 | N = 3 | Average experience of 3.33 years
Pires et al., 2011 | N = 4 | No prior experience; training not reported
Pires et al., 2012 | N = 14 | Not reported
Sellers et al., 2014 | N = 1 | Prior experience not reported; thirteen months of continuous evaluation
McCane et al., 2015 | N = 14 | Not reported
Mainsah et al., 2015 | N = 10 | Prior experience not reported; two weeks to two months of evaluation
Vansteensel et al., 2016 | N = 1 | 7 to 9 months

We note that in both sets of quantitative performance evaluations (copy typing and grid tasks), participant T6, who retained significant finger movement abilities, continued to move her hand while controlling the BCI. Further research sessions, in which T6 was asked to suppress her natural movements to the best of her abilities, showed similar performance in both copy typing and grid tasks (detailed in Figure 4 and supplements, which quantify her performance and the degree to which she was able to suppress movements). As might be expected, T6 found that suppressing her natural movement was a challenging, cognitively demanding task. Though she did this to the best of her abilities, the act of imagining finger movement still elicited minute movements, both during ‘open-loop’ decoder calibration where she was imagining movements, and during closed-loop control of the BCI. While we were unable to record EMG activity (as permission to do so had not previously been sought), we were able to record the movements of her fingers using a commercially-available ‘dataglove’ sensor system. This was also used for research sessions in which decoder calibration was based on her physical movements. Overall, when T6 actively attempted to suppress movements, her movement was reduced by a factor of 7.2–12.6 (Figure 4—figure supplement 1). Despite this factor of 7.2–12.6 in movement suppression, performance was quite similar to performance when T6 moved freely - across all three quantitative evaluation types (Grid, OPTI-II, QWERTY), the performance differences were within 0–20% and not significant (p>0.2 in all cases, Student’s t test).

Figure 4. Performance of the BCI with movements suppressed.

A potential concern is that the demonstrated performance improvement for participant T6 relative to previous studies is due to her retained movement ability. Participant T6 was capable of dexterous finger movements (as opposed to participants T5 and T7, who retained no functional movements of their limbs). To control for the possibility that physical movements underlie the demonstrated improvement in neural control, we measured T6’s BCI performance during the same quantitative performance evaluation tasks, but asked her to suppress her movements as best as she could. In these sessions, decoders were calibrated based on imagined (rather than attempted) finger movements. (a) During copy typing evaluations with movements suppressed, T6’s average performance using the OPTI-II keyboard was 28.6 ± 2.0 ccpm (mean ± s.d.), and her average performance using the QWERTY keyboard was 19.9 ± 4.3 ccpm (as discussed in the main text, her performance while moving freely was 31.6 ± 8.7 ccpm and 23.9 ± 6.5 ccpm for the OPTI-II and QWERTY keyboards, respectively). (b) During grid evaluations with movements suppressed, T6’s achieved bitrate was 2.2 ± 0.17 bps (compared to 2.2 ± 0.4 bps while moving freely). We note that using the BCI while suppressing movements is a more difficult and cognitively demanding task - since the participant’s natural, intuitive attempts to move actually generate physical movements, she needed instead to imagine movements, and restrict her motor cortical activity to patterns that do not generate movement. (This is supported by the participant’s own comment regarding the difficulty in controlling the BCI while imagining movement without actually moving: ‘It is a learning curve for me to not move while imagining.’) Despite this additional cognitive demand, performance with movements suppressed was quite similar to performance when the participant moved freely (within 0–20%) - in all three cases, the differences in performance were not significant (p>0.2 in all cases, Student’s t test). Data are from T6’s trial days 595 and 598.

DOI: http://dx.doi.org/10.7554/eLife.18554.027

Figure 4.

Figure 4—figure supplement 1. Participant T6’s movements are greatly reduced when movements are actively suppressed.

Figure 4—figure supplement 1.

In the previous analysis (Figure 4), we demonstrated that T6’s performance was largely unchanged even when she actively suppressed her movements. Here we quantified the degree to which movements were suppressed during those sessions. We first analyzed the participant’s movements during decoder calibration (panels a and b) and then closed-loop BCI control (panel c). For decoder calibration, we compared freely moving sessions and sessions in which movements were suppressed (see Materials and methods: Quantifying movement suppression). Decoders were calibrated using a center-out-and-back task, with the cursor’s position tied to the measured finger position (freely moving sessions) or with the cursor’s position following pre-programmed movements (i.e., ‘open-loop’ calibration) and finger movements were imagined (movement suppressed sessions). For each condition (i.e., freely moving vs. suppressed movement), we measured finger position as a function of time (relative to the starting position for each trial), and averaged these positions across all trials for a given target direction (the position of each pair of traces denotes the target’s position relative to the center target). (a) During movement-based decoder calibration (freely moving sessions), thumb movements (red) controlled the vertical axis, while index finger movements (blue) controlled the horizontal axis. Horizontal scale bars represent 200 ms, and the vertical scale bar represents 100 units on the glove sensor scale (arbitrary units). (b) During open-loop decoder calibration (movement suppressed sessions), in which T6 was asked to simply imagine finger movements but avoid moving to the best of her abilities, finger movements were largely suppressed but minute movement was still detectable. Scale bars match the previous panel. Overall, during decoder calibration, movements were greatly reduced (p<0.01, paired Student’s t test), and the median suppression ratio was a factor of 7.2 (index finger) and 12.6 (thumb). (c) We also quantified the amount of movement during closed-loop BCI control (grid task) in sessions in which movements were suppressed. Because individual trials were highly variable (targets appeared in random locations during the grid task), we grouped trials by the target direction (i.e., the angle between the previous target and the prompted target for the current trial). The position of each pair of traces in the circle denotes the target direction. To ensure that any minute movements were captured in the analysis, the absolute value (rather than the signed value) of the finger position was taken prior to averaging across trials. Scale bars match the previous panel. As shown, movement during closed-loop BCI control was comparable to or less than movement during decoder calibration (panel b), which itself was a factor of 7.2–12.6 times less than movement during movement-based decoder calibration (panel a).
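A minimal sketch of how a suppression ratio of this kind could be computed from glove traces is shown below. The array layout, helper names, and the peak-excursion definition of movement extent are assumptions for illustration; the study's exact analysis is described in Materials and methods: Quantifying movement suppression.

```python
# Sketch of a suppression-ratio computation in the spirit of the analysis above
# (illustrative; array names and trial structure are assumed).
import numpy as np

def movement_extent(trials):
    """Peak absolute excursion from each trial's starting position.

    `trials` is an (n_trials, n_samples) array of glove sensor positions."""
    rel = trials - trials[:, :1]            # re-reference each trial to its start
    return np.abs(rel).max(axis=1)

def suppression_ratio(free_trials, suppressed_trials):
    """Ratio of median movement extent: freely moving vs. movement suppressed."""
    return np.median(movement_extent(free_trials)) / np.median(movement_extent(suppressed_trials))

# Hypothetical usage with glove traces for one finger:
# ratio = suppression_ratio(glove_free, glove_suppressed)   # e.g., ~7.2 for the index finger
```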

Discussion

The high-performance BCI demonstrated here has potential utility as an assistive communication system. The average copy typing rates demonstrated in this study were 31.6 ccpm (6.3 words per minute; wpm), 39.2 ccpm (7.8 wpm), and 13.5 ccpm (2.7 wpm) for T6, T5, and T7, respectively. In a survey of people with ALS, Huggins et al. (2011) found that 59% of respondents would be satisfied with a communication BCI that achieved 10–14 ccpm (2–2.8 wpm), while 72% would be satisfied with 15–19 ccpm (3–3.8 wpm). Thus, the current performance would likely be viewed positively by many people with ALS. Current performance still falls short of typical communication rates for able-bodied subjects using smartphones (12–19 wpm [Hoggan et al., 2008; Lopez et al., 2009]), touch typing (40–60 wpm [MacKenzie and Soukoreff, 2002]), and speaking (90–170 spoken wpm [Venkatagiri, 1999]); continued research is directed at restoring communication to rates that match those of able-bodied subjects.

Previous clinical studies of intracortical BCIs have either used generalized (task-independent) measures of performance (Simeral et al., 2011; Gilja et al., 2015) or application-focused (task-dependent) measures (Bacher et al., 2015; Jarosiewicz et al., 2015; Hochberg et al., 2012; Collinger et al., 2013a). While application-focused measurements are crucial in demonstrating clinical utility, performance might be heavily dependent on the specific tasks used for assessment. By rigorously quantifying both generalized performance (grid task) and application-specific performance (copy typing task) with all three participants, we aim to provide helpful benchmarks for continued improvement in neural decoding and BCI communication interface comparisons.

Another critical factor for demonstrating clinical utility is characterizing the day-to-day variability often seen in BCI performance. To do so, we approached the quantitative performance evaluation sessions (grid and copy typing) with a strict measurement protocol (similar to Simeral et al., 2011) and did not deviate from this protocol once the session had begun. Inclusion of detailed measurement protocols will help in demonstrating the repeatability (or variability) of various BCI approaches and establish further confidence as BCIs move closer to becoming more broadly available for people who would benefit from assistive communication technologies. The grid task and bitrate assessment described previously and in this manuscript may serve as a valuable task and metric to document further progress in BCI decoding.

As mentioned earlier, our quantitative performance evaluation protocol was designed to measure peak performance in a repeatable manner rather than measuring the system’s stability. To standardize the performance measurements, explicit decoder recalibration or bias re-estimation blocks were performed prior to each measurement set (as detailed in Materials and methods: Quantitative performance evaluation and Figure 2—figure supplements 1 and 2). A key additional challenge for clinically useful BCIs is maintaining system stability, and future work will combine our performance-driven approach with complementary approaches that focus on achieving long-term stability without explicit recalibration tasks (Jarosiewicz et al., 2015).

The typing rates achieved in this study were performed without any word completion or prediction algorithms. While such algorithms are commonly used in input systems for mobile devices and assistive technology, our aim in this report was to explicitly characterize the performance of the intracortical BCI, without confounding the measurement by the choice of a specific word completion algorithm (of which there are many). Important next steps would be to apply the BCI developed here to a generalized computing interface that includes word completion and prediction algorithms to further boost the effective communication rates of the overall system. Regardless of the assistive platform chosen, all systems would benefit from higher performing BCI algorithms. We also note that the data for participants T6 and T7 was collected 1.5 years after neurosurgical placement of the intracortical recording arrays. This, along with other recent reports (Gilja et al., 2012; Simeral et al., 2011; Nuyujukian et al., 2015; Gilja et al., 2015; Hochberg et al., 2012; Chestek et al., 2011; Bishop et al., 2014; Flint et al., 2013; Nuyujukian et al., 2014), demonstrates that intracortical BCIs may be useful for years post-implantation.

Central to the results demonstrated with participants T6 and T5 was the identification of independent control modalities to simultaneously support high performance continuous control and discrete selection. Specifically, we found that activity on T6’s array had the highest neural modulation when attempting or imagining movements of her contralateral thumb and index finger, and further, that these two independent effectors could be merged to provide closed-loop control of a single effector (cursor). We also found that this thumb and index finger-based control modality increased system robustness and yielded decoders that were more resilient to nonstationarities. Finally, we found that a separate behavioral approach, ipsilateral hand squeeze, provided an independent, readily-combined control dimension to support discrete selection. We performed a similar protocol for evaluating behavioral imagery strategies with participant T5 and found his highest neural modulation was elicited when imagining movements of the whole arm. We combined this imagery strategy with ipsilateral hand squeeze (mirroring findings from participant T6) to yield simultaneous high performance continuous control and discrete selection.

The BCI approach demonstrated here was first developed with participant T6 and then adapted for participant T7. However, initially, we often found that instabilities would appear in T7’s control on shorter timescales (i.e., across tens of minutes). In these instances, biases in the cursor’s velocity would develop that impeded high performance control. To counteract these effects, we introduced a variant of the bias correction method used in Jarosiewicz et al. (2015); Hochberg et al. (2012) with T7 (detailed in Materials and methods), which continuously estimated and corrected biases during closed-loop BCI use and resulted in more stable control. Further, instead of calibrating a new decoder between measurement sets (as was done for T6), we found it was sufficient to keep the decoder constant and simply perform a short target acquisition task to estimate and update the underlying bias estimate. We therefore incorporated this revised protocol (holding decoders constant, and simply updating the underlying bias estimate) for sessions with T5.
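As a rough illustration of this idea (not the exact method of Jarosiewicz et al. (2015) or Hochberg et al. (2012)), a running velocity-bias estimate can be maintained during closed-loop use and subtracted from the decoded velocity, as in the sketch below; the forgetting factor, speed threshold, and class interface are hypothetical.

```python
# Minimal sketch of a running velocity-bias estimate and correction
# (simplified assumptions; not the study's exact method).
import numpy as np

class VelocityBiasCorrector:
    def __init__(self, alpha=0.995, speed_threshold=0.1):
        self.alpha = alpha                  # forgetting factor for the running mean
        self.speed_threshold = speed_threshold
        self.bias = np.zeros(2)

    def correct(self, decoded_velocity):
        v = np.asarray(decoded_velocity, dtype=float)
        # Update the bias estimate only when the decoded speed is above threshold,
        # so that resting periods do not dominate the estimate.
        if np.linalg.norm(v) > self.speed_threshold:
            self.bias = self.alpha * self.bias + (1 - self.alpha) * v
        return v - self.bias                # bias-corrected velocity sent to the cursor
```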

The performance achieved by all participants in this study exceeded that of all previous BCIs for communication tested with people with motor impairment. However, we note that T6’s and T5’s communication rates were substantially better than those of T7. Many factors could have contributed to this difference in performance. Certainly, with any skilled motor task, one expects to see variation in performance across participants, even among able-bodied subjects (e.g. playing sports or musical instruments). As ALS is a disease with a large degree of variance in its effects, participant-specific differences in disease effects or progression may play a role in the differences in performance between T6 and T7. Interestingly, we note that in the center-out-and-back task (where reaction times can be most easily measured), T7 demonstrated increased response latency relative to T6. Specifically, the time between the appearance of a cued target and neural modulation corresponding to a movement attempt was more than 100 ms later for T7 than for T6. It is unclear whether this additional latency was due to variability in the effects of ALS across participants. Differences in the participants’ prior experiences may have also played a role: T6 was much more familiar with computing devices, while T7 rarely used them. This difference in familiarity and comfort with text entry may have contributed to the difference in typing rates.

At the time of this study, participant T6 still retained the ability to make dexterous movements of her hands and fingers, which may raise the question of whether her high level of performance was related to the generation of movement. As described in the Results section, to test the effects of movement generation on BCI control, we performed separate sessions in which T6 suppressed her movement to the best of her ability, and found no measurable effect on BCI performance. This result is consistent with previous studies that have evaluated the effects of movement on BCI control. For example, Gilja*, Nuyujukian* et al. (Gilja et al., 2012) compared BCI performance in non-human primates while their arms were either restrained or able to move freely, and found little difference in performance. Additionally, Ethier et al. (2012) showed that monkeys whose grasping movements were prevented using a paralytic agent were still able to reliably generate grasping-related cortical activity, which could then be decoded to activate a functional electrical stimulation system that restored grasping ability. Multiple participants with no movement of their limbs have also successfully controlled a computer cursor or other external device through this intracortical BCI (e.g., ref. [Hochberg et al., 2006], participants S1 and S2, and ref. [Hochberg et al., 2012], participant T2). Finally, we recently investigated the effects of movement on cursor control quality in detail with clinical trial participants and found no decrease in performance when movements were suppressed (Gilja et al., 2015).

In this study we have controlled for the potential issue of movement generation as closely as is possible given the proper boundaries of clinical research. We have presented data from three participants, two of whom had no ability to make functional arm or hand movements, and one who suppressed her movements to the best of her abilities, below a range in which the movements could be functionally useful. All three cases are representative examples of arm and hand movement capabilities of the severely motor impaired population, and, in all three cases, the participants communicated with the BCI at rates that exceeded any previous study of people with motor impairment. Further, there was little if any correspondence between the participants’ movement abilities and BCI performance.

Both participants T6 and T5 used the HMM decoder for discrete selection. Our goal was to also use the HMM with participant T7. However, he passed away (from causes unrelated to the trial) before we were able to perform those research sessions. As mentioned above, we initially found that neural features with participant T7 exhibited drifts in baseline firing rates over time, which necessitated the integration of strategies to mitigate the effects of these baseline drifts on continuous cursor control. Thus, our plan for data collection was to first develop these strategies and carefully document performance with T7 using continuous cursor control only, and subsequently add the HMM for discrete selection. The first part was successful – as shown, T7 achieved high quality continuous control, and the resultant communication performance was double that of the previous highest-performing approach. Unfortunately, however, T7 passed away before the HMM sessions could be conducted.

Previous work with non-human primates from our lab and others (Musallam et al., 2004; Santhanam et al., 2006; Shenoy et al., 2003) demonstrated that BCI strategies which leverage discrete classification can achieve high communication rates. The ‘point-and-click’ approach demonstrated in the current paper (i.e., continuous control over a cursor’s movement, plus a decoder for discrete selection [Simeral et al., 2011; Bacher et al., 2015; Jarosiewicz et al., 2015]) was investigated instead because it has certain practical advantages over the classification approach. In particular, developing a robust point-and-click controller provides a flexible interface that can be applied to a wide variety of computing devices. A point-and-click controller could be integrated with mobile computing interfaces (i.e., smartphones or tablets) that would dramatically increase what is achievable with the BCI, without the need for the development of custom software for each function (as would be needed for a discrete interface). Finally, and perhaps most fundamentally, as this approach enables both continuous movements and selections, it is more general as long as performance is high (as reported here). Thus, point-and-click interfaces are a key step to creating BCIs that allow flexible, general-purpose computing use.

The discrete classification approach provides a promising alternative strategy for communication BCIs. However, there are multiple technical challenges to the previously demonstrated approaches, and multiple unknowns when translating these approaches to people. From a technical standpoint, one of the primary challenges is that a multi-class discrete classifier may need a specified time window over which to classify neural features into a discrete selection. In the earlier high-performance study with non-human primates (Santhanam et al., 2006), this necessitated a ‘fixed pace’ design in which the monkeys were prompted to make a sequence of selections at a fixed timing interval. Such an approach may prove more difficult with people typing messages, which requires the flexibility to actively think about what to type and to type at a free pace. A potential approach to enable a ‘free-paced’ BCI was demonstrated in follow-on studies (Achtman et al., 2007; Kemere et al., 2008), which showed in offline analyses that state transitions could be inferred automatically from neural activity, thus automatically detecting the necessary time window for classification. But this has not been demonstrated in closed-loop experiments by our group or, to our knowledge, by other groups. Thus there are multiple technical and scientific challenges to address, and developing these approaches for clinical trial participants is an active area of research.

The collaborative approach reported here involved carrying out the same investigative protocol by independent teams at sites across the country. This approach supported replication with multiple participants, going beyond initial proof of principle, but presented its own challenges, particularly in designing and implementing closed-loop BCI approaches remotely. Specifically, iterating on decoder designs and troubleshooting performance issues greatly benefits from real-time access to system performance and data. To this end, the development of a framework for remote, real-time performance monitoring (detailed in Materials and methods: System design) was critical to understanding and iteratively addressing performance issues during research sessions with remote participants.

The participants’ comments provided insight on the subjective experience of using the BCI. All three participants commented on the ease-of-use of the system. Participant T6 compared the BCI system to other assistive communication devices, remarking: ‘The one I like is this one as opposed to an eye gaze system... It is quite intuitive.’ Similarly, participant T7 noted: ‘When things go well, it feels good.’ – this statement was a comment on the improvement in control after weeks of development and testing (using the approach outlined above). Additionally, participant T5 compared his typing performance to his standard typing interface (a head mouse-based tracking system), noting ‘After I typed using BrainGate for 2 days, the weekend came and I went back to my existing typing system and it was ponderously slow.’ Interestingly, participant T6 also noted that the motor imagery used for filter calibration did not match the imagery she found most effective during closed-loop BCI control. Specifically, while T6’s continuous control was calibrated based on index finger and thumb movement imagery, T6 commented that during closed-loop BCI control, ‘It feels like my right hand has become a joystick.’

The question of the suitability of implanted versus external BCI systems (or any other external AAC system) for restoring function is an important one. Any technology (or any medical procedure) that requires surgery will be accompanied by some risk; the most immediate risks that should be considered with any neurosurgery involving a craniotomy include bleeding, infection, seizure, and headache. That risk is not viewed in isolation, but is compared – by the individual contemplating the procedure – to the potential benefit (Hochberg and Cochrane, 2013; Hochberg and Anderson, 2012). There are several important factors one might take into consideration, for example ease-of-use, cosmesis, and performance. Any externally applied BCI system (EEG, for example) will require donning and doffing, meaning that it could not be used continuously 24 hr a day. A future self-calibrating, fully implanted wireless system could in principle be used without caregiver assistance, would have no cosmetic impact, and could be used around the clock. Such a system may be achievable by combining the advances in this report with previous advances in self-calibration and in fully-implantable wireless interfaces (Jarosiewicz et al., 2015; Borton et al., 2013). Additional discussion of these topics is found in refs. (Ryu and Shenoy, 2009; Gilja et al., 2011).

In a recent survey of people with spinal cord injury (Blabe et al., 2015), respondents with high cervical spinal cord injury indicated they would be more likely to adopt a hypothetical wireless intracortical system than an EEG cap with wires, by a margin of 52% to 39%. In another survey, over 50% of people with spinal cord injury indicated they would ‘definitely’ or ‘very likely’ undergo implant surgery for a BCI (Collinger et al., 2013b). Thus, there is a clear willingness among people with paralysis to undergo a surgical procedure if it could provide significant improvements in their daily functioning.

In summary, we demonstrated a BCI that achieved high performance communication in both free typing and copy typing, leveraging system design and algorithmic innovations demonstrated in prior pre-clinical and clinical studies (Gilja et al., 2012, 2015; Kao et al., 2016). Using this interface, all three participants achieved the highest BCI communication rates for people with movement impairment reported to date. These results suggest that intracortical BCIs offer a promising approach to assistive communication systems for people with paralysis.

Materials and methods

Permission for these studies was granted by the US Food and Drug Administration (Investigational Device Exemption) and Institutional Review Boards of Stanford University (protocol # 20804), Partners Healthcare/Massachusetts General Hospital (2011P001036), Providence VA Medical Center (2011–009), and Brown University (0809992560). The three participants in this study, T6, T7, and T5, were enrolled in a pilot clinical trial of the BrainGate Neural Interface System (http://www.clinicaltrials.gov/ct2/show/NCT00912041). Informed consent, including consent to publish, was obtained from the participants prior to their enrollment in the study. Additional permission was obtained to publish participant photos and reproduce text typed by the participants.

Participants

Participant T6 is a right-handed woman, 51 years old at the start of this work, who was diagnosed with Amyotrophic Lateral Sclerosis (ALS) and had resultant motor impairment (functional rating scale (ALSFRS-R) measurement of 16). In Dec. 2012, a 96-channel intracortical silicon microelectrode array (1.0 mm electrode length, Blackrock Microsystems, Salt Lake City, UT) was implanted in the hand area of dominant motor cortex as previously described (Simeral et al., 2011; Hochberg et al., 2012). T6 retained dexterous movements of the fingers and wrist. Data reported in this study are from T6’s post-implant days 570, 572, 577, 588, 591, 602, 605, and 621.

A second study participant, T7, was a right-handed man, 54 years old at the time of this work, who was diagnosed with ALS and had resultant motor impairment (ALSFRS-R of 17). In July 2013, participant T7 had two 96-channel intracortical silicon microelectrode arrays (1.5 mm electrode length, Blackrock Microsystems, Salt Lake City, UT) implanted in the hand area of dominant motor cortex. T7 retained very limited and inconsistent finger movements. Data reported are from T7’s post-implant days 539 and 548. Unfortunately, T7 passed away, for reasons unrelated to the research, before additional research sessions could be performed.

A third study participant, T5, is a right-handed man, 63 years old at the time of this work, with a C4 ASIA C spinal cord injury that occurred approximately 9 years prior to study enrollment. He retains the ability to weakly flex his left (non-dominant) elbow and fingers; these are his only reproducible movements of his extremities. He also retains some slight residual movement which is inconsistently present in both the upper and lower extremities, mainly seen at ankle dorsiflexion and plantarflexion, wrist, fingers and elbow, more consistently present on the left than on the right. Occasionally, the initial slight voluntary movement triggers involuntary spastic flexion of the limb. In Aug. 2016, participant T5 had two 96-channel intracortical silicon microelectrode arrays (1.5 mm electrode length, Blackrock Microsystems, Salt Lake City, UT) implanted in the upper extremity area of dominant motor cortex. During BCI control, the only observed movement of the extremities (besides involuntary spastic flexion) is finger flexion on the non-dominant hand during discrete selection attempts. Data reported are from T5’s post-implant days 56, 57, 68, and 70.

System design

Data were collected using the BrainGate2 Neural Interface System. This modular platform, standardized across clinical trial sites, supports multiple operating systems and custom real-time software, and allows multiple studies to be performed by different researchers without hardware modification. The framework enables rapid prototyping and facilitates replication of real-time closed-loop studies with multiple trial participants. For the present study, neural control and task cuing closely followed ref. (Gilja et al., 2015) and were controlled by custom software running on the Simulink/xPC real-time platform (The Mathworks, Natick, MA), enabling millisecond-timing precision for all computations. Neural data were collected by the NeuroPort System (Blackrock Microsystems, Salt Lake City, UT) and were available to the real-time system with 5 ms latency. Visual presentation was provided by a computer via a custom low-latency network software interface to Psychophysics Toolbox for Matlab and an LCD monitor with a refresh rate of 120 Hz. Frame updates from the real-time system appeared on screen with a latency of approximately 13 ± 5 ms.

During design and development research sessions leading up to the quantitative performance evaluations, a framework for remote, real-time performance monitoring and debugging was critical to iteratively improving system performance remotely. This was implemented using lightweight, custom MATLAB software that monitored performance via network packets from the real-time system and provided insight to researchers located in the laboratory (i.e., away from the participants’ homes, where data were collected). Researchers accessed the remote monitoring and troubleshooting system in real time using TeamViewer (Tampa, FL). In addition, at the end of each evaluation block, summary data were immediately transferred from the participants’ homes to researchers to facilitate rapid analysis, debugging, and iteration.

Neural feature extraction

The neural signal processing framework closely followed ref. (Gilja et al., 2015). The NeuroPort System applies an analog 0.3 Hz to 7.5 kHz band-pass filter to each neural channel and samples each channel at 30 kSamples per second. These broadband samples were processed via software on the Simulink/xPC real-time platform. The first step in this processing pipeline was to subtract a common average reference (CAR) from each channel (intended to remove noise common to all recorded neural channels). For each time point, the CAR was calculated simply by taking the mean across all neural channels.

Band-pass filters split the signal into spike and high frequency local field potential (HF-LFP) bands. To extract neural spiking activity, a cascaded infinite impulse response (IIR) and finite impulse response (FIR) high-pass filter were applied. A threshold detector was then applied every millisecond to detect the presence of a putative neural spike. Choice of threshold was specific to each array (T6: −50 μV; T5: −95 μV, medial and lateral arrays; T7: −70 μV, lateral array, and −90 μV, medial array). HF-LFP power features refer to the power within the 150–450 Hz band-pass filtered signal. For continuous control, T6 sessions used both spike and HF-LFP features (hybrid decoding), while T5 and T7 sessions used only spike-based features. Figure 5 demonstrates the signal quality for all three participants.
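
The following offline sketch illustrates these feature-extraction steps in Python, under stated assumptions: the actual system ran cascaded IIR/FIR filters on the Simulink/xPC real-time platform, whereas the Butterworth designs, cutoff choices, and the `broadband` array layout below are illustrative stand-ins rather than the study's exact implementation.

```python
# Minimal offline sketch of the feature-extraction steps described above.
# Assumes `broadband` is an (n_samples x n_channels) array in microvolts,
# sampled at 30 kHz; filter designs are illustrative, not the study's filters.
import numpy as np
from scipy import signal

FS = 30000  # samples per second

def common_average_reference(broadband):
    """Subtract the across-channel mean from every channel at each time point."""
    return broadband - broadband.mean(axis=1, keepdims=True)

def threshold_crossings(car_data, thresholds_uv, bin_ms=15):
    """Count negative-threshold crossings per channel in non-overlapping bins."""
    # High-pass filter to isolate spiking activity (illustrative 250 Hz cutoff).
    b, a = signal.butter(4, 250 / (FS / 2), btype='highpass')
    hp = signal.lfilter(b, a, car_data, axis=0)
    below = hp < thresholds_uv[np.newaxis, :]       # below the (negative) threshold
    onsets = below[1:] & ~below[:-1]                # threshold-crossing onsets
    bin_len = int(FS * bin_ms / 1000)
    n_bins = onsets.shape[0] // bin_len
    return onsets[:n_bins * bin_len].reshape(n_bins, bin_len, -1).sum(axis=1)

def hf_lfp_power(car_data, bin_ms=15):
    """Power in the 150-450 Hz band, averaged within non-overlapping bins."""
    b, a = signal.butter(4, [150 / (FS / 2), 450 / (FS / 2)], btype='bandpass')
    bp = signal.lfilter(b, a, car_data, axis=0)
    bin_len = int(FS * bin_ms / 1000)
    n_bins = bp.shape[0] // bin_len
    return (bp[:n_bins * bin_len] ** 2).reshape(n_bins, bin_len, -1).mean(axis=1)
```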

Figure 5. Signal quality on the participants’ electrode arrays.

Each panel shows the recorded threshold crossing waveforms for all 96 channels of a given array for a 60 s period during the participant’s first quantitative performance evaluation block. T6 had a single implanted array, while T5 and T7 had two implanted arrays. Scale bars (lower left corner of each panel) represent 150 µV (vertical) and 0.5 milliseconds (horizontal). Voltages were analog band-pass filtered between 0.3 Hz and 7.5 kHz, then sampled by the NeuroPort system at 30 kHz. The resulting signals were then digitally high-pass filtered (500 Hz cutoff frequency) and re-referenced using common average referencing. Thresholds were set at −4.5 times the root-mean-squared (r.m.s.) voltage value for each channel. Channels without a corresponding trace did not have any threshold crossing events during this time period. Data are from sessions 570, 56, and 539 days post-implant for T6, T5, and T7, respectively.

DOI: http://dx.doi.org/10.7554/eLife.18554.029

Figure 5—figure supplement 1. HF-LFP signals have similar time course and condition dependence to spiking activity.

Control algorithms for T6 incorporated high-frequency LFP power signals (HF-LFP; see Materials and methods). A potential concern with a power signal is that it may pick up artifacts related to EMG from eye movements. Here we analyze activity during a decoder calibration block to show that HF-LFP signals have a strikingly similar time course and condition dependence as spiking activity. (a) Sample of the signals recorded on T6’s array during a decoder calibration block. Some channels show discernible single or multiunit activity (threshold crossings), while others do not. Neural data was processed as in Figure 5, with thresholds set to −4 times the r.m.s. voltage value for each channel. Scale bars (lower left corner) represent 150 µV (vertical) and 0.5 milliseconds (horizontal). (b) Target positions and corresponding colors used to label each condition in the subsequent panels. (c) Threshold crossing rates as a function of time for each condition (movement to a given target location), beginning at the time of target onset, for five example channels with discernable threshold crossing activity. Each trace represents the mean ± s.e.m. threshold crossing rate for a given condition, computed across seven trials for each condition. Horizontal scale bar represents 100 ms, vertical scale bar represents 40 threshold crossings / sec. Traces from individual trials were smoothed by convolving with a Gaussian kernel with 50 ms s.d. prior to mean / standard deviation calculations. (d) Same plots, but depicting HF-LFP power instead of threshold crossing rates, for a different set of example channels that did not have discernible multiunit activity. Horizontal scale bar is again 100 ms, vertical scale (HF-LFP power) is in arbitrary units. The same trials as panel (c) above were used. As shown, HF-LFP power signals display a similar time course following target onset, as well as degree of condition dependence, as threshold crossing activity. Data are from T6’s post-implant day 570.
Figure 5—figure supplement 2. HF-LFP signals show a similar time course and condition dependence to spiking activity during auditory-cued tasks in which the participant had her eyes closed.

Following Figure 5—figure supplement 1, to further rule out the possibility that HF-LFP signals are related to eye movements, we include data recorded as T6 performed an auditory-cued task with her eyes closed as she attempted multiple single-joint movements. The task included a delay period in which she was prompted (via an auditory cue) about the upcoming movement attempt, but was asked to not attempt the movement until receiving a go cue. (a) Sample of the signals recorded on T6’s array during the attempted movement. Some channels show discernible single or multiunit activity (threshold crossings), while others do not. Neural data was processed as in Figure 5, with thresholds set to −4 times the r.m.s. voltage value for each channel. Scale bars (lower left corner) represent 150 µV (vertical) and 0.5 milliseconds (horizontal). (b) Threshold crossing rates as a function of time for attempted single-joint flexion movements (index finger: red, thumb: yellow, wrist: light green, elbow: darker green) for five example channels with discernable threshold crossing activity. Each trace represents the mean ± s.e.m. threshold crossing rate for a given condition, computed across 20 trials for each condition. Horizontal scale bar represents 500 ms, vertical scale bar represents 20 threshold crossings / sec. Red box denotes the time each movement was prompted, and blue box denotes the time of the go cue (break in the traces is due to the randomized delay period across trials). As shown, activity is indicative of both planning and movement attempt epochs. Traces from individual trials were smoothed by convolving with a Gaussian kernel with 50 ms s.d. prior to mean / standard deviation calculations. (d) Same plots, but depicting HF-LFP power instead of threshold crossing rates, for a different set of example channels that did not have discernible multiunit activity. Horizontal scale bar is again 500 ms, vertical scale (HF-LFP power) is in arbitrary units. The same trials as panel (c) above were used. Because there were no visual cues and the participant had her eyes closed, it is unlikely that the participant was making condition dependent eye movements. However, as shown, even in the absence of visual cues, HF-LFP power signals display a similar time course following target onset, degree of planning- and movement-related activity, and degree of condition dependence, as threshold crossing activity. Data are from T6’s post-implant day 488.

A potential concern with decoding a power signal such as these high frequency LFP (HF-LFP) signals (which were used for participant T6) is that they may pick up artifacts related to EMG from eye movements. In intracranial studies, such artifacts have previously been shown in electrocorticographic (ECoG) recordings (e.g., Kovach et al., 2011). However, as demonstrated in Kovach et al., the magnitude of this phenomenon falls sharply with distance from the ventral temporal cortical surface. Further, the same study demonstrated that these artifacts are highly correlated across scales less than 1 cm, and that rereferencing on these local scales eliminates the artifacts outside of the immediate ventral temporal cortical surface (Kovach et al., Fig. 9). In our study, data are collected in motor cortical areas, which are fairly medial in the precentral gyrus (frontal lobe), and are rereferenced using the common average across the intracortical array (4 mm x 4 mm). Given the large distance between the recording site and the ventral temporal cortical surface, and the common average rereferencing, any minor eye movement-related EMG artifacts are expected to be essentially eliminated.

In order to be certain that these artifacts do not play a role, we provide additional lines of evidence that rule out EMG due to eye movements as being the driver of the observed high performance. First, we include data from an additional participant (T5) in which HF-LFP signals were not used for control (Figures 2 and 3). We found T5’s performance was greater than T6’s – this demonstrates that high performance iBCI control is achievable using spiking activity alone. Second, we analyzed T6’s HF-LFP signals during decoder calibration blocks and show that they have a similar time course and condition dependence as recorded spiking activity (Figure 5—figure supplement 1). Third, we include additional data recorded as T6 performed an auditory-cued task with her eyes closed as she attempted movements of her fingers, wrist, and elbow (Figure 5—figure supplement 2). Because there are no visual cues and the participant has her eyes closed, it is unlikely that the participant is making condition-dependent eye movements. However, even in the absence of visual cues, the HF-LFP signals are quite similar to recorded spiking activity in their time course and condition dependence. These lines of evidence make the possibility that HF-LFP signals are eye movement-related highly unlikely.

During sessions with participant T7, neural features exhibited drifts in baseline firing rates over time. To account for these nonstationarities, baseline rates were computed de novo prior to each block, during a 30 s period in which the participant was asked simply to relax.

Neural control algorithms

Two-dimensional continuous control of the cursor used the ReFIT Kalman Filter (detailed in refs. [Gilja et al., 2012, 2015]). For participants T6 and T5, discrete selection (‘click’) was achieved using a Hidden Markov Model (HMM)-based state classifier, which was previously developed with non-human primates (Kao et al., 2016) and adapted for the current work. At each timestep, the HMM calculated the probability that the participant’s intended state was either movement or click. For T6, only HF-LFP features were used in the HMM, while only spike features were used for T5. Features were pre-processed with a dimensionality reduction step using Principal Components Analysis (PCA). The HMM classified the probability of state sk as:

$$p(s_{k,t}) = p(s_k \mid z_t) \sum_i p(s_{k,t} \mid s_{i,t-1})\, p(s_{i,t-1}),$$

where $p(s_k \mid z_t)$ is the probability of being in state $s_k$ given the current (dimensionality-reduced) neural features $z_t$ at time $t$, and where $p(s_{k,t} \mid s_{i,t-1})$ denotes the probability of transitioning from state $s_i$ to state $s_k$. $p(s_k \mid z_t)$ was modeled as a multivariate Gaussian distribution with separate mean and covariance for each state. The current state was classified as ‘click’ when $p(s_{k,t})$ exceeded a pre-determined threshold that was calculated in an unsupervised fashion (threshold choice is outlined in the task descriptions).

In this framework, there is a key tradeoff between including more PCs (and potentially more relevant information) and overfitting / mis-estimating the mean and covariance of the Gaussian distribution for each state as more dimensions (PCs) are added. Overfitting these parameters results in poor decoding on ‘out-of-sample’ data. Empirically, we found that 3–4 PCs resulted in an HMM that accurately classified states without overfitting on the limited training data. Therefore, the top four eigenvalue-ranked PCs were kept and used as inputs to the HMM.
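
As a rough illustration of how the PCA projection and the recursive state update above fit together, the following minimal Python sketch implements a two-state (move/click) HMM with Gaussian emissions. The class and parameter names are hypothetical, and the actual decoder ran in the real-time MATLAB/xPC environment.

```python
# Hedged sketch of the HMM click classifier: PCA to a few dimensions followed
# by a two-state (move / click) HMM with Gaussian emissions, updated once per
# 15 ms bin as in the equation above. All names and values are illustrative.
import numpy as np

class ClickHMM:
    def __init__(self, pca_components, means, covs, transition, prior):
        self.W = pca_components   # (n_features x n_pcs) PCA projection matrix
        self.means = means        # per-state emission means, shape (2, n_pcs)
        self.covs = covs          # per-state emission covariances, shape (2, n_pcs, n_pcs)
        self.T = transition       # T[i, k] = p(state k at t | state i at t-1)
        self.belief = prior       # current state probabilities [p_move, p_click]

    def _emission_likelihood(self, z):
        lik = np.empty(2)
        for k in range(2):
            d = z - self.means[k]
            inv = np.linalg.inv(self.covs[k])
            norm = np.sqrt(np.linalg.det(2 * np.pi * self.covs[k]))
            lik[k] = np.exp(-0.5 * d @ inv @ d) / norm
        return lik

    def step(self, features):
        """One 15 ms update: project features with PCA, then apply the HMM recursion."""
        z = features @ self.W
        predicted = self.T.T @ self.belief           # sum_i p(s_k | s_i) p(s_i)
        posterior = self._emission_likelihood(z) * predicted
        self.belief = posterior / posterior.sum()    # normalize to probabilities
        return self.belief[1]                        # probability of the click state
```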

Algorithm parameters were calibrated using training data collected during the same research session as the evaluation of neural control performance. All calibration data were collected with a center-out-and-back target configuration. For the quantitative evaluations, initial filters were calibrated based on data collected during a center-out-and-back task performed under motor control (T6) or automated open-loop control (T5 and T7). During motor control tasks, T6 controlled the cursor’s x and y velocities using index finger and thumb movements, respectively, and acquired targets by holding the cursor still over the target (dwell tasks) or squeezing her left hand (ipsilateral to the implanted array) when the cursor was over the target (click tasks). T6’s physical movements were recorded using left- and right-handed datagloves (5DT, Irvine, CA). This was not performed for participants T7 and T5 because they did not have functional use of their arms or hands. During automated open-loop calibration (T5 and T7), the cursor’s movements followed pre-programmed trajectories, and the participants attempted movements to follow the cursor’s movement. In addition, during open-loop calibration, T5 attempted to squeeze his left hand to acquire targets. After initial filter calibration, both continuous control and discrete filters were then recalibrated using closed-loop neural control data. This closed-loop recalibration block closely followed (Gilja et al., 2015), with the addition of discrete selection for T6 and T5. Because the quality of the initial velocity Kalman filter (VKF) varied from day to day, the recalibration blocks for T6 and T7 also used error attenuation (Hochberg et al., 2012; Velliste et al., 2008) to ensure that the participant could reach all targets.

For participant T6, to control for the possibility that her ability to generate movements led to her high performance, we performed additional sessions in which she was asked to suppress her movements as best as she could (outlined in Figure 4 and supplements). For these sessions, to calibrate the initial continuous and discrete filters, T6 performed an automated open-loop filter calibration protocol as described above. This protocol was also followed for the free typing evaluations (outlined in Figure 1 and supplements).

Neural features used in each filter were selected during the filter calibration process. For the ReFIT-KF, features were first ranked by tuning significance (i.e., p-value of the linear regression between binned neural data and cursor velocity). Features were then added one by one in order of tuning significance to the neural control algorithm, and an offline assessment of directional control was used to predict online control quality. The set of features chosen was the one that minimized the number of features used while maximizing cross-validated decoding accuracy. The discrete decoder (HMM) used all available HF-LFP features.
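
A hedged sketch of this feature-selection procedure is given below; the specific regression call, cross-validation scheme, and scoring used here are illustrative assumptions, not the study's exact offline assessment.

```python
# Illustrative sketch: rank features by tuning significance (p-value of a
# linear regression between binned feature activity and cursor velocity),
# then add features one at a time and keep the smallest set that maximizes
# cross-validated offline decoding accuracy. Details are assumptions.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def rank_by_tuning(features, velocity):
    """features: (n_bins x n_features); velocity: (n_bins x 2). Smaller p = stronger tuning."""
    pvals = []
    for f in features.T:
        # use the smaller of the x- and y-velocity regression p-values
        p = min(stats.linregress(velocity[:, d], f).pvalue for d in range(2))
        pvals.append(p)
    return np.argsort(pvals)

def select_features(features, velocity, max_features=None):
    order = rank_by_tuning(features, velocity)
    best_score, best_set = -np.inf, order[:1]
    for n in range(1, (max_features or len(order)) + 1):
        subset = order[:n]
        score = cross_val_score(LinearRegression(), features[:, subset], velocity, cv=5).mean()
        if score > best_score:
            best_score, best_set = score, subset
    return best_set
```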

For both the continuous cursor-positioning ReFIT-KF decoder and the discrete click-state HMM decoder, neural data were binned every 15 ms and sent through the decoders. Thus, for the ReFIT-KF decoder, updated cursor velocity estimates were provided every 15 ms for use in the rest of the BMI system. This velocity was integrated to update the cursor position estimate every 1 ms, and the most recent cursor position was therefore sent to the display every 1 ms. The computer monitor was updated every 8.3 ms (i.e., at the 120 Hz frame rate of the monitor) with the most recent estimate of the desired cursor position. This high update rate is important to avoid inadvertently and deleteriously adding latency into the BMI, which is a closed-loop feedback control system (Cunningham et al., 2011), and was made possible by using a commercially available high-speed monitor. This system design and these timings are consistent with our previous work (Gilja*, Nuyujukian* et al. Nat Neurosci 2012 [Gilja et al., 2012] binned neural data and used the ReFIT-KF to decode every 50 ms; Gilja*, Pandarinath*, et al. Nat Med 2015 [Gilja et al., 2015] binned neural data and used the ReFIT-KF to decode every 10–50 ms depending on the experiment; a 120 frame/s monitor was also employed). This system operates faster, and at a higher level of performance, than a recent report claims is possible with a Kalman filter (Shanechi et al., 2017).
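
The multi-rate structure described above (decode every 15 ms, integrate position every 1 ms, refresh the display at roughly 120 Hz) can be summarized with a simple simulation loop; `decode_velocity` is a hypothetical stand-in for the ReFIT-KF update and the loop timing is only a schematic of the real-time system.

```python
# Illustrative timing sketch of the update loop described above, not the
# real-time implementation: velocity is re-decoded every 15 ms, position is
# integrated every 1 ms from the most recent velocity, and the display is
# refreshed roughly every 8 ms (~120 Hz) with the most recent position.
import numpy as np

def run_loop(decode_velocity, duration_ms=1000):
    position = np.zeros(2)
    velocity = np.zeros(2)
    frames = []
    for t_ms in range(duration_ms):
        if t_ms % 15 == 0:
            velocity = decode_velocity(t_ms)        # new neural bin -> new velocity
        position = position + velocity * 0.001      # integrate position every 1 ms
        if t_ms % 8 == 0:                           # approximate 120 Hz frame updates
            frames.append(position.copy())
    return frames
```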

As the HMM click decoder facilitates a discrete decision, a threshold criterion for selection was needed. This threshold value was set after each retraining block at the 93rd quantile of state estimates for the respective retraining block. When running in closed loop, after two consecutive 15 ms bins in which the HMM click state probability was above this threshold value, the system generated a click and selected the target under the cursor.
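
A minimal sketch of this click rule, assuming the click-state probabilities from the retraining block are available as an array, is shown below; the names are illustrative and the reset-after-click behavior is an assumption.

```python
# Hedged sketch of the click decision rule: threshold at the 93rd percentile
# of click-state probabilities from the retraining block; a click is issued
# only after two consecutive 15 ms bins exceed that threshold.
import numpy as np

def compute_click_threshold(training_click_probs):
    return np.quantile(training_click_probs, 0.93)

class ClickDetector:
    def __init__(self, threshold):
        self.threshold = threshold
        self.consecutive = 0

    def update(self, click_prob):
        """Call once per 15 ms bin; returns True when a click should be generated."""
        self.consecutive = self.consecutive + 1 if click_prob > self.threshold else 0
        if self.consecutive >= 2:
            self.consecutive = 0   # assumed reset after generating a click
            return True
        return False
```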

As mentioned above, during sessions with participant T7, neural features exhibited drifts in baseline firing rates over time. On short timescales, these drifts manifested as biases in decoded velocities. Biases were reduced using a variant of the bias correction method of Jarosiewicz et al. (2015) and Hochberg et al. (2012), with the addition of a magnitude term that corrects for the frequency of observed speeds (i.e., low speeds are generally observed for longer time periods than high speeds). Specifically, for the x direction (and analogously for y), velocity bias was estimated as:

$$B_x(t) = B_x(t-1) + \left(V_x(t) - B_x(t-1)\right) \times \left|V_x(t) - B_x(t-1)\right| \times \Delta t / \tau,$$

where $B_x(t)$ represents the bias estimate for the x direction at time $t$, $V_x(t)$ represents the velocity estimate for the x direction at time $t$, $\Delta t$ is the time step of adaptation (0.001 s), and $\tau$ controls the adaptation rate (we set $\tau$ to 30 s). A larger $\tau$ makes the system slower to respond to changes in bias, but reduces the size of transient fluctuations in the bias estimate when no actual bias is present. The current bias estimate was only updated when speed exceeded a threshold (the threshold was set to roughly the 10th–20th percentile of the speeds typically observed for T7).
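
A direct transcription of this update into Python might look as follows; applying the speed threshold to the norm of the decoded velocity is an assumption consistent with, but not spelled out in, the text.

```python
# Sketch of the velocity bias estimator in the equation above, applied
# element-wise to x and y. dt and tau follow the text; gating the update on
# the norm of the decoded velocity is an assumption.
import numpy as np

def update_bias(bias, velocity, speed_threshold, dt=0.001, tau=30.0):
    """bias, velocity: 2-vectors (x, y); returns the updated bias estimate."""
    if np.linalg.norm(velocity) <= speed_threshold:
        return bias                                   # only adapt at higher speeds
    error = velocity - bias
    return bias + error * np.abs(error) * dt / tau    # element-wise, per the equation
```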

Data from participant T5 were collected after data collection from participants T6 and T7, and the scientific protocol used with participant T5 reflected the advances made with the prior two participants. Table 3 highlights these changes. As participant T5’s and T7’s arrays had a large number of highly modulated spiking channels, no HF-LFP features were necessary to build their decoders. After participant T6’s data collection was completed, it was discovered that both cursor movement and click decoders could be calibrated during the initial open-loop block, and this approach was used with participant T5. Similarly, after data collection for participant T6 was completed, it was discovered that the cursor movement decoder did not need to be retrained after every blockset (a bias correction update block would suffice); this bias correction algorithm was implemented for participant T7, and the same time-saving approach was also used with participant T5.

Table 3.

Summary of the decoding and calibration approaches used with each participant.

DOI: http://dx.doi.org/10.7554/eLife.18554.032

  • Continuous decoding algorithm. T6: ReFIT Kalman Filter (threshold crossings and HF-LFP); T7: ReFIT Kalman Filter (threshold crossings); T5: ReFIT Kalman Filter (threshold crossings).

  • Discrete decoding algorithm. T6: Hidden Markov Model (HF-LFP); T7: n/a; T5: Hidden Markov Model (threshold crossings).

  • Dwell time. T6: 1 s (reset on target exit); T7: 1.5 s (cumulative); T5: 1 s (reset on target exit).

  • Bias estimation. T6: no; T7: yes; T5: yes.

  • Cursor recentering. T6: no; T7: yes; T5: no.

  • Recalibration blocks. T6: recalibrated continuous and discrete decoders; T7: only updated bias estimates; T5: only updated bias estimates.

  • Error attenuation in recalibration block. T6: yes; T7: yes; T5: no.

Free typing task

The aim of this task was to create a natural, familiar, and conversational environment to demonstrate the potential for iBCIs to be used as communication devices. In this task, conducted only with participant T6, questions were presented at the top of the monitor. These questions were tailored to topics that T6 enjoys discussing. At the start of a block, one of these questions was chosen from a pool of questions that had not been used before. After reading the question and considering her response, T6 started the block counter and enabled the keyboard inputs by selecting the play button in the bottom right corner of the screen. She then used the BCI to type her response to the question by selecting one letter at a time.

During the free typing task, T6 was asked to suppress her hand movements as best as possible. During the quantitative performance evaluations, T6 was free to make movements as she wished.

Quantitative performance evaluations

The quantitative measurement experiments were performed with all three participants. These experimental days were explicitly structured and carefully timed so that each piece of data could be compared and measured independently. The experimental flow diagram for participant T6 is shown in Figure 2—figure supplement 1. With participants T6 and T5, the calibration protocol resulted in two BCI decoders: one for cursor movement and one for click. With participant T7, only a cursor movement decoder was calibrated. After decoders were calibrated and confirmed to be working successfully in a brief (less than 30 s) evaluation, the experimental data were then collected. Once the data portion of the experiment was started, the blockset structure was repeated until the participant ended the research session. Starting over with the calibration portion of the protocol was not permitted once the blockset data collection portion of the research day began.

Blocksets

Each blockset was collected in a strict, timed, randomized fashion. Each blockset was considered a complete and independent unit, equally weighted, and statistically identical to all other blocksets. Blockset timing structure is defined in Figure 2—figure supplement 2. Each blockset began with a recalibration block, which resulted in new cursor movement and click decoders for participant T6. For participants T5 and T7, the movement decoder was held constant and the recalibration block was simply used to create an updated estimate of the underlying velocity bias. This recalibration protocol was used to maximize the performance of the data collected in the time-locked blocks that followed. Three data blocks were then collected in a randomized fashion, constituting a blockset. Each blockset consisted of one block each of three tasks. The three tasks with participants T6 and T5 were the grid task, the QWERTY task, and the OPTI-II task. With participant T7, the QWERTY task was substituted with the ABCDEF task, since he had minimal experience with the conventional (QWERTY) keyboard layout. The task order in each blockset was randomized subject to the constraint that the two copy typing tasks were always adjacent. This constraint minimized the amount of elapsed time between the copy typing blocks, in order to minimize any confounding effects on measured typing rate. The prompted sentence to copy in both keyboard tasks for a given blockset was identical. Following the completion of a blockset, participants were given as long a break (to request a drink from a caregiver, etc.) as desired before starting the subsequent blockset. Breaks within a blockset were minimized as much as possible.

Target selection and cursor re-centering

For all three participants, selections could be made by holding the cursor over the target for a fixed period of time (1 s for T6 and T5, 1.5 s for T7). For T6 and T5, leaving a given target area would reset the hold time counter to 0 – thus they were required to remain over the same target for a full second to select via holding. For T7, who could only select targets by dwelling on them, selection used a strategy called ‘cumulative dwell time’ – each target had a separate hold time counter, and the cumulative time spent over a target counted towards the 1.5 s requirement (i.e., it was not required that the 1.5 s be contiguous). All hold time counters were reset to 0 after any target selection. Additionally, T6 and T5 could also select targets using the HMM-based click decoder, which was typically a faster method of selecting targets. Thus, T6 and T5 had two methods for target selection.
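
The two dwell rules can be summarized with a small sketch; the class below is illustrative rather than the study's implementation, and treats the 'reset on target exit' and 'cumulative' behaviors as two configurations of the same counter logic.

```python
# Hedged sketch contrasting the two dwell-selection rules described above:
# reset-on-exit (T6, T5: the counter clears when the cursor leaves the target)
# versus cumulative dwell (T7: each target accumulates time across visits, and
# all counters clear after any selection). Timing constants follow the text.
class DwellSelector:
    def __init__(self, dwell_s, cumulative=False):
        self.dwell_s = dwell_s
        self.cumulative = cumulative
        self.counters = {}          # target id -> accumulated dwell time (s)

    def update(self, hovered_target, dt):
        """Call every timestep with the target under the cursor (or None)."""
        if not self.cumulative:
            # reset-on-exit: only the currently hovered target keeps its counter
            self.counters = {k: v for k, v in self.counters.items() if k == hovered_target}
        if hovered_target is not None:
            self.counters[hovered_target] = self.counters.get(hovered_target, 0.0) + dt
            if self.counters[hovered_target] >= self.dwell_s:
                self.counters = {}  # reset all counters after a selection
                return hovered_target
        return None
```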

Participant T7 preferred that the cursor re-center to the middle of the screen after each selection, which allowed him to better focus on one trajectory at a time. (In contrast, participants T6 and T5 preferred continuous cursor control instead of re-centering, as it allowed them to plan out a series of keystrokes and achieve faster typing rates.)

For T7, after each selection, the cursor was centered relative to the targets and held in place for 500 ms – during this time, target selection was disabled. This approach minimized the ‘worst case’ path lengths (i.e., eliminated the potential of having to move from one corner of the keyboard to another while typing a phrase); this is beneficial in the case where instability causes control biases, which make long trajectories that oppose the bias more difficult. We note that, as re-centering was completely unsupervised (i.e., it occurred regardless of whether the selection made was correct), it did not compromise the typing or achieved bitrate measurements in any way.

Grid task

The purpose of this task was to measure performance using information theoretic metrics. In this grid task (Hochberg et al., 2006; Nuyujukian et al., 2015), the workspace was divided into a 6 × 6 grid of equal gray squares. Each square was selectable, and one square at a time was randomly prompted as the target by illuminating it in green. After a selection was made, a new target was immediately prompted. This task ran for two minutes (fixed duration).

QWERTY task

The purpose of this task was to measure typing rates using a conventional keyboard layout. In this task, a sentence was prompted at the top of the screen, and participants were instructed to copy this sentence as quickly and accurately as possible. Selection methods were identical to that described in the grid task. This task ended when the participants typed the last letter of the prompted sentence or two minutes had elapsed, whichever occurred first.

ABCDEF task

The purpose of this task was identical to that of the QWERTY task; the task was specific to participant T7. Since he was not very familiar with the QWERTY layout, the letters were rearranged alphabetically from left to right, top to bottom. This alphabetical ordering allowed T7 to more easily determine where a given letter was located. The keyboard geometry of the ABCDEF task was identical to that of the QWERTY task, and the same task timing and prompting were employed as described in the QWERTY task.

OPTI-II task

The purpose of this task was to provide a potentially more efficient keyboard layout than the QWERTY or ABCDEF layouts for a continuous cursor communication interface. The conventional QWERTY layout is not ideal for selecting letters via continuous cursor navigation; a more efficient keyboard layout that minimizes the average distance travelled between letters should therefore increase the typing rate. We used the OPTI-II keyboard layout described in the HCI literature (Rick, 2010) as an optimized layout for text entry with a continuous cursor. This task was used with all participants, with timings and prompting identical to the QWERTY and ABCDEF tasks. For participant T6, a programming error caused the accessible workspace for the OPTI-II task (copy typing) to stop in the middle of the bottom row of keys (contrary to other tasks, where the accessible workspace extended past the keyboards).

Metrics

The performance of each task was measured using one of two metrics, depending on the task. Performance on the grid task was measured via achieved bitrate, measured in bits per second, and performance on the typing tasks (QWERTY, ABCDEF, OPTI-II) was measured via correct characters per minute.

Achieved bitrate

The grid task, representing a stable, memoryless, discrete communication channel with random, uniformly-distributed prompted targets, satisfies information theoretic criteria for measuring achieved bitrate (Nuyujukian et al., 2015). Achieved bitrate is a conservative measure of the actualized throughput of a communication channel. The achieved bitrate, B, is calculated via the following equation:

$$B = \frac{\log_2(N-1)\,\max(S - 2E,\ 0)}{t}$$

where N is the number of targets on the screen, S is the number of selections, E is the number of errors, and t is the time elapsed. The floor of this value is 0, since bitrate cannot be less than zero. Note that trials in which the participant timed out and made no selection are not counted in S or E, but are included in the value for time elapsed. This metric, in bits per second, represents the minimum expected throughput achievable from the system.
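
For reference, the bitrate computation reduces to a one-line function; as a hypothetical example, a 6 × 6 grid block (N = 36) with 60 selections, 3 errors, and 120 s elapsed would yield roughly 2.3 bits/s under this formula.

```python
# Direct transcription of the achieved-bitrate formula above. Timed-out trials
# contribute to elapsed time t but not to S or E, as noted in the text.
import numpy as np

def achieved_bitrate(num_targets, selections, errors, elapsed_s):
    """Achieved bitrate in bits per second; floored at zero."""
    return np.log2(num_targets - 1) * max(selections - 2 * errors, 0) / elapsed_s

# Hypothetical example: 6 x 6 grid, 60 selections, 3 errors, 120 s elapsed.
print(achieved_bitrate(36, 60, 3, 120.0))   # ~2.3 bits/s
```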

Correct characters per minute

Typing rates were measured by calculating the number of correct characters transmitted over time (correct characters per minute [Bacher et al., 2015]). Correct characters were defined as those that were not subsequently deleted by the participant using the delete key. This measure, C, is defined by the following equation:

$$C = \frac{\max(S - 2D,\ 0)}{t}$$

where S is the number of selections, D is the number of delete key selections, and t is the elapsed time.

We note that this metric labels typographical or spelling errors as correct characters. However, as it is not clear whether the participant was aware of a given spelling error, we considered as errors only those characters that were actually deleted. This metric also parallels achieved bitrate, in that it only measures the net characters transmitted over time.
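
The corresponding computation for typing rate is equally simple; note that elapsed time should be expressed in minutes to obtain correct characters per minute. This is a sketch of the formula above, not an excerpt of the study's analysis code.

```python
# Transcription of the correct-characters metric above; the result is a rate
# per unit of elapsed time, so pass the elapsed time in minutes to obtain
# correct characters per minute.
def correct_characters_per_minute(selections, deletions, elapsed_min):
    return max(selections - 2 * deletions, 0) / elapsed_min
```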

Quantifying movement suppression

For sessions in which participant T6 was asked to suppress her movements to the best of her abilities (Figure 4), we first quantified the degree to which movements were suppressed during decoder calibration (Figure 4—figure supplement 1). As mentioned earlier, finger movements were measured using a dataglove (5DT, Irvine, CA). For each condition (i.e., freely moving vs. suppressed movement), we measured the finger position as a function of time (relative to the starting position for each trial), and averaged these positions across all trials for a given target direction. To robustly evaluate the degree of suppression between the freely moving and suppressed movement conditions, we compared the time epochs spanning 600–1200 ms after target onset, which was well after movement was detectable but before movements became more variable across trials (i.e., before corrective movements were made as T6 approached the target). Movement suppression was estimated only for target directions in which movement of a given finger was expected, in order to avoid comparing values expected to be zero (e.g., as index finger movements were related to control of the horizontal dimension, index finger movements were not compared for the vertical targets, where they would be expected to be 0). Data compared are from T6’s trial days 570 (freely moving) and 595 (movements suppressed). We next quantified the degree to which movements were suppressed during closed-loop BCI control (grid task). Individual trials were grouped by the target direction (i.e., the angle between the previous target and the prompted target for the current trial; eight possible directions) and finger positions were averaged across all trials of a given direction. Trials that lasted less than 1200 ms were excluded from the analysis. To ensure that any minute movements were captured, movements were quantified using the absolute value (rather than the signed value) of the finger position at each time point relative to the starting position for each trial. Analysis includes all grid task data from T6’s trial days 595 and 598 (movements suppressed). Unfortunately, for freely moving sessions, finger positions were not recorded during closed-loop BCI control, so data are unavailable for the specific comparison of finger movements during closed-loop BCI control for freely moving vs. movement suppressed sessions.

Code availability

Code, which is platform specific and implemented in xPC, may be made available upon request to the corresponding author.

Acknowledgements

The authors would like to thank participants T6, T5, T7, and their families and caregivers; EN Eskandar for T7 implantation surgery; B Davis, B Pedrick, E Casteneda, M Coburn, S Patnaik, P Rezaii, B Travers, and D Rosler for administrative support; SI Ryu for surgical assistance; L Barefoot, S Cash, J Menon, and S Mernoff for clinical assistance; A Sarma, and N Schmansky for technical assistance; V Gilja, JD Simeral, JA Perge and B Jarosiewicz for technical assistance and helpful scientific discussions; JP Donoghue for helpful scientific discussions.

This work was supported by: Stanford Office of Postdoctoral Affairs; Craig H Neilsen Foundation; Stanford Medical Scientist Training Program; Stanford BioX-NeuroVentures, Stanford Institute for Neuro-Innovation and Translational Neuroscience; Larry and Pamela Garlick; Samuel and Betsy Reeves; NIH-NIDCD R01DC014034; NIH-NINDS R01NS066311; NIH-NIDCD R01DC009899; NIH-NICHD-NCMRR (N01HD53403 and N01HD10018); Rehabilitation Research and Development Service, Department of Veterans Affairs (B6453R); MGH-Deane Institute for Integrated Research on Atrial Fibrillation and Stroke; Executive Committee on Research, Massachusetts General Hospital.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, the Department of Veterans Affairs, or the United States Government. CAUTION: Investigational Device. Limited by Federal Law to Investigational Use.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Funding Information

This paper was supported by the following grants:

  • Craig H. Neilsen Foundation Postdoctoral Fellowship to Chethan Pandarinath.

  • Stanford Medical Scientist Training Program to Paul Nuyujukian.

  • U.S. Department of Veterans Affairs B6453R to Leigh R Hochberg.

  • Massachusetts General Hospital Deane Institute for Integrated Research on Atrial Fibrillation and Stroke to Leigh R Hochberg.

  • National Institute on Deafness and Other Communication Disorders R01DC009899 to Leigh R Hochberg.

  • Eunice Kennedy Shriver National Institute of Child Health and Human Development N01HD10018 to Leigh R Hochberg.

  • Eunice Kennedy Shriver National Institute of Child Health and Human Development N01HD53403 to Leigh R Hochberg.

  • National Institute of Neurological Disorders and Stroke R01NS066311 to Krishna V Shenoy, Jaimie M Henderson.

  • National Institute on Deafness and Other Communication Disorders R01DC014034 to Krishna V Shenoy, Jaimie M Henderson.

  • Stanford University BioX-NeuroVentures to Krishna V Shenoy, Jaimie M Henderson.

  • Stanford Institute for Neuro-Innovation and Translational Neuroscience to Krishna V Shenoy, Jaimie M Henderson.

  • Larry and Pamela Garlick to Jaimie M Henderson.

  • Samuel and Betsy Reeves to Jaimie M Henderson.

Additional information

Competing interests

The authors declare that no competing interests exist.

Author contributions

CP, Responsible for study design, research infrastructure development, algorithm design, data collection, analysis, and manuscript preparation.

PN, Responsible for study design, research infrastructure development, algorithm design, data collection, analysis, and manuscript preparation.

CHB, Contributed to study design and data collection for participants T6 and T5.

BLS, Contributed to study design and data collection from participant T7.

JS, Contributed to technical development.

FRW, Contributed to algorithm design.

LRH, Contributed to study design and is the sponsor-investigator of the multi-site pilot clinical trial.

KVS, Was involved in all aspects of the study.

JMH, Was responsible for surgical implantation for study participants T6 and T5 and was involved in all aspects of the study.

Ethics

Clinical trial registration NCT00912041.

Human subjects: Permission for these studies was granted by the US Food and Drug Administration (Investigational Device Exemption) and Institutional Review Boards of Stanford University (protocol # 20804), Partners Healthcare/Massachusetts General Hospital (2011P001036), Providence VA Medical Center (2011-009), and Brown University (0809992560). The three participants in this study, T5, T6 and T7, were enrolled in a pilot clinical trial of the BrainGate Neural Interface System (http://www.clinicaltrials.gov/ct2/show/NCT00912041). Informed consent, including consent to publish, was obtained from the participants prior to their enrollment in the study.

References

  1. Achtman N, Afshar A, Santhanam G, Yu BM, Ryu SI, Shenoy KV. Free-paced high-performance brain-computer interfaces. Journal of Neural Engineering. 2007;4:336–347. doi: 10.1088/1741-2560/4/3/018. [DOI] [PubMed] [Google Scholar]
  2. Aflalo T, Kellis S, Klaes C, Lee B, Shi Y, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, Liu C, Andersen RA. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science. 2015;348:906–910. doi: 10.1126/science.aaa5417. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bacher D, Jarosiewicz B, Masse NY, Stavisky SD, Simeral JD, Newell K, Oakley EM, Cash SS, Friehs G, Hochberg LR. Neural Point-and-Click communication by a person with incomplete Locked-In Syndrome. Neurorehabilitation and Neural Repair. 2015;29:462–471. doi: 10.1177/1545968314554624. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bishop W, Chestek CC, Gilja V, Nuyujukian P, Foster JD, Ryu SI, Shenoy KV, Yu BM. Self-recalibrating classifiers for intracortical brain-computer interfaces. Journal of Neural Engineering. 2014;11:026001. doi: 10.1088/1741-2560/11/2/026001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Blabe CH, Gilja V, Chestek CA, Shenoy KV, Anderson KD, Henderson JM. Assessment of brain-machine interfaces from the perspective of people with paralysis. Journal of Neural Engineering. 2015;12:043002. doi: 10.1088/1741-2560/12/4/043002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Borton DA, Yin M, Aceros J, Nurmikko A. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates. Journal of Neural Engineering. 2013;10:026010. doi: 10.1088/1741-2560/10/2/026010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brunner P, Ritaccio AL, Emrich JF, Bischof H, Schalk G. Rapid communication with a "P300" matrix speller using electrocorticographic signals (ECoG) Frontiers in Neuroscience. 2011;5:5. doi: 10.3389/fnins.2011.00005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology. 2003;1:E42. doi: 10.1371/journal.pbio.0000042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Chestek CA, Gilja V, Nuyujukian P, Foster JD, Fan JM, Kaufman MT, Churchland MM, Rivera-Alvidrez Z, Cunningham JP, Ryu SI, Shenoy KV. Long-term stability of neural prosthetic control signals from silicon cortical arrays in rhesus macaque motor cortex. Journal of Neural Engineering. 2011;8:045005. doi: 10.1088/1741-2560/8/4/045005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Collinger JL, Boninger ML, Bruns TM, Curley K, Wang W, Weber DJ. Functional priorities, assistive technology, and brain-computer interfaces after spinal cord injury. The Journal of Rehabilitation Research and Development. 2013b;50:145. doi: 10.1682/JRRD.2011.11.0213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, McMorland AJC, Velliste M, Boninger ML, Schwartz AB. High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet. 2013a;381:557–564. doi: 10.1016/S0140-6736(12)61816-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Cunningham JP, Nuyujukian P, Gilja V, Chestek CA, Ryu SI, Shenoy KV. A closed-loop human simulator for investigating the role of feedback control in brain-machine interfaces. Journal of Neurophysiology. 2011;105:1932–1949. doi: 10.1152/jn.00503.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Ethier C, Oby ER, Bauman MJ, Miller LE. Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature. 2012;485:368–371. doi: 10.1038/nature10987. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Fairbanks G. Voice and Articulation: Drillbook. New York: Harper and Row; 1960. [Google Scholar]
  15. Flint RD, Wright ZA, Scheid MR, Slutzky MW. Long term, stable brain machine interface performance using local field potentials and multiunit spikes. Journal of Neural Engineering. 2013;10:056005. doi: 10.1088/1741-2560/10/5/056005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Ganguly K, Dimitrov DF, Wallis JD, Carmena JM. Reversible large-scale modification of cortical networks during neuroprosthetic control. Nature Neuroscience. 2011;14:662–667. doi: 10.1038/nn.2797. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Gilja V, Chestek CA, Diester I, Henderson JM, Deisseroth K, Shenoy KV. Challenges and opportunities for next-generation intracortically based neural prostheses. IEEE Transactions on Biomedical Engineering. 2011;58:1891–1899. doi: 10.1109/TBME.2011.2107553. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Gilja V, Nuyujukian P, Chestek CA, Cunningham JP, Yu BM, Fan JM, Churchland MM, Kaufman MT, Kao JC, Ryu SI, Shenoy KV. A high-performance neural prosthesis enabled by control algorithm design. Nature Neuroscience. 2012;15:1752–1757. doi: 10.1038/nn.3265. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Gilja V, Pandarinath C, Blabe CH, Nuyujukian P, Simeral JD, Sarma AA, Sorice BL, Perge JA, Jarosiewicz B, Hochberg LR, Shenoy KV, Henderson JM. Clinical translation of a high-performance neural prosthesis. Nature Medicine. 2015;21:1142–1145. doi: 10.1038/nm.3953. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Hochberg L, Cochrane T. Implanted Neural Interfaces. In: Neuroethics in Practice; 2013. pp. 235–250. [DOI] [Google Scholar]
  21. Hochberg LR, Anderson KD. BCI Users and Their Needs. In: Wolpaw JR, Wolpaw EW, editors. Oxford University Press; 2012. [Google Scholar]
  22. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, Haddadin S, Liu J, Cash SS, van der Smagt P, Donoghue JP. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature. 2012;485:372–375. doi: 10.1038/nature11076. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–171. doi: 10.1038/nature04970. [DOI] [PubMed] [Google Scholar]
  24. Hoggan E, Brewster SA, Johnston J. Investigating the effectiveness of tactile feedback for mobile touchscreens. Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 5-10, 2008); ACM; 2008. pp. 1573–1582. [Google Scholar]
  25. Huggins JE, Wren PA, Gruis KL. What would brain-computer interface users want? opinions and priorities of potential users with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis. 2011;12:318–324. doi: 10.3109/17482968.2011.572978. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Hwang HJ, Lim JH, Jung YJ, Choi H, Lee SW, Im CH. Development of an SSVEP-based BCI spelling system adopting a QWERTY-style LED keyboard. Journal of Neuroscience Methods. 2012;208:59–65. doi: 10.1016/j.jneumeth.2012.04.011. [DOI] [PubMed] [Google Scholar]
  27. Jarosiewicz B, Sarma AA, Bacher D, Masse NY, Simeral JD, Sorice B, Oakley EM, Blabe C, Pandarinath C, Gilja V, Cash SS, Eskandar EN, Friehs G, Henderson JM, Shenoy KV, Donoghue JP, Hochberg LR. Virtual typing by people with tetraplegia using a self-calibrating intracortical brain-computer interface. Science Translational Medicine. 2015;7:313ra179. doi: 10.1126/scitranslmed.aac7328. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Kao JC, Nuyujukian P, Ryu SI, Shenoy KV. A high-performance neural prosthesis incorporating discrete state selection with hidden Markov models. IEEE Transactions on Biomedical Engineering. 2016:1. doi: 10.1109/TBME.2016.2582691. [DOI] [PubMed] [Google Scholar]
  29. Kemere C, Santhanam G, Yu BM, Afshar A, Ryu SI, Meng TH, Shenoy KV. Detecting neural-state transitions using hidden Markov models for motor cortical prostheses. Journal of Neurophysiology. 2008;100:2441–2452. doi: 10.1152/jn.00924.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Kim SP, Simeral JD, Hochberg LR, Donoghue JP, Friehs GM, Black MJ. Point-and-click cursor control with an intracortical neural interface system by humans with tetraplegia. IEEE transactions on neural systems and rehabilitation engineering. 2011;19:193–203. doi: 10.1109/TNSRE.2011.2107750. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Kovach CK, Tsuchiya N, Kawasaki H, Oya H, Howard MA, Adolphs R. Manifestation of ocular-muscle EMG contamination in human intracranial recordings. NeuroImage. 2011;54:213–233. doi: 10.1016/j.neuroimage.2010.08.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Leuthardt EC, Schalk G, Wolpaw JR, Ojemann JG, Moran DW. A brain-computer interface using electrocorticographic signals in humans. Journal of Neural Engineering. 2004;1:63–71. doi: 10.1088/1741-2560/1/2/001. [DOI] [PubMed] [Google Scholar]
  33. Lopez MH, Castelluci S, MacKenzie IS. Text entry with the Apple iPhone and the Nintendo Wii. Proceedings of the Twenty-Seventh Annual SIGCHI Conference on Human Factors in Computing Systems; Boston, USA, April 4-9, 2008. New York, USA: ACM; 2009. [Google Scholar]
  34. MacKenzie IS, Soukoreff RW. Text entry for mobile computing: models and methods, theory and practice. Human-Computer Interaction. 2002;17:147–198. doi: 10.1207/S15327051HCI172&3_2. [DOI] [Google Scholar]
  35. Mainsah BO, Collins LM, Colwell KA, Sellers EW, Ryan DB, Caves K, Throckmorton CS. Increasing BCI communication rates with dynamic stopping towards more practical use: an ALS study. Journal of Neural Engineering. 2015;12:016013. doi: 10.1088/1741-2560/12/1/016013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. McCane LM, Heckman SM, McFarland DJ, Townsend G, Mak JN, Sellers EW, Zeitlin D, Tenteromano LM, Wolpaw JR, Vaughan TM. P300-based brain-computer interface (BCI) event-related potentials (ERPs): People with amyotrophic lateral sclerosis (ALS) vs. age-matched controls. Clinical Neurophysiology. 2015;126:2124–2131. doi: 10.1016/j.clinph.2015.01.013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Moran D. Evolution of brain-computer interface: action potentials, local field potentials and electrocorticograms. Current Opinion in Neurobiology. 2010;20:741–745. doi: 10.1016/j.conb.2010.09.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Mugler EM, Ruf CA, Halder S, Bensch M, Kubler A. Design and implementation of a P300-based brain-computer interface for controlling an internet browser. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2010;18:599–609. doi: 10.1109/TNSRE.2010.2068059. [DOI] [PubMed] [Google Scholar]
  39. Musallam S, Corneil BD, Greger B, Scherberger H, Andersen RA. Cognitive control signals for neural prosthetics. Science. 2004;305:258–262. doi: 10.1126/science.1097938. [DOI] [PubMed] [Google Scholar]
  40. Münßinger JI, Halder S, Kleih SC, Furdea A, Raco V, Hösle A, Kübler A. Brain painting: first evaluation of a new brain-computer interface application with ALS-patients and healthy volunteers. Frontiers in Neuroscience. 2010;4:1–11. doi: 10.3389/fnins.2010.00182. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Nijboer F, Sellers EW, Mellinger J, Jordan MA, Matuz T, Furdea A, Halder S, Mochty U, Krusienski DJ, Vaughan TM, Wolpaw JR, Birbaumer N, Kübler A. A P300-based brain-computer interface for people with amyotrophic lateral sclerosis. Clinical Neurophysiology. 2008;119:1909–1916. doi: 10.1016/j.clinph.2008.03.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Nuyujukian P, Fan JM, Kao JC, Ryu SI, Shenoy KV. A high-performance keyboard neural prosthesis enabled by task optimization. IEEE Transactions on Biomedical Engineering. 2015;62:21–29. doi: 10.1109/TBME.2014.2354697. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Nuyujukian P, Kao JC, Fan JM, Stavisky SD, Ryu SI, Shenoy KV. Performance sustaining intracortical neural prostheses. Journal of Neural Engineering. 2014;11:066003. doi: 10.1088/1741-2560/11/6/066003. [DOI] [PubMed] [Google Scholar]
  44. Nuyujukian P, Kao JC, Ryu SI, Shenoy KV. A nonhuman primate brain-computer typing interface. Proceedings of the IEEE. 2016;105:66–72. doi: 10.1109/JPROC.2016.2586967. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. O'Doherty JE, Lebedev MA, Ifft PJ, Zhuang KZ, Shokur S, Bleuler H, Nicolelis MA. Active tactile exploration using a brain-machine-brain interface. Nature. 2011;479:228–231. doi: 10.1038/nature10489. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Pires G, Nunes U, Castelo-Branco M. Statistical spatial filtering for a P300-based BCI: tests in able-bodied, and patients with cerebral palsy and amyotrophic lateral sclerosis. Journal of Neuroscience Methods. 2011;195:270–281. doi: 10.1016/j.jneumeth.2010.11.016. [DOI] [PubMed] [Google Scholar]
  47. Pires G, Nunes U, Castelo-Branco M. Comparison of a row-column speller vs. a novel lateral single-character speller: assessment of BCI for severe motor disabled patients. Clinical Neurophysiology. 2012;123:1168–1181. doi: 10.1016/j.clinph.2011.10.040. [DOI] [PubMed] [Google Scholar]
  48. Rick J. Performance optimizations of virtual keyboards for stroke-based text entry on a touch-based tabletop. Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology; New York, USA — October 03 - 06, 2010. New York, USA: ACM; 2010. pp. 77–86. [Google Scholar]
  49. Ryu SI, Shenoy KV. Human cortical prostheses: lost in translation? Neurosurgical Focus. 2009;27:E5. doi: 10.3171/2009.4.FOCUS0987. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV. A high-performance brain-computer interface. Nature. 2006;442:195–198. doi: 10.1038/nature04968. [DOI] [PubMed] [Google Scholar]
  51. Schalk G, Miller KJ, Anderson NR, Wilson JA, Smyth MD, Ojemann JG, Moran DW, Wolpaw JR, Leuthardt EC. Two-dimensional movement control using electrocorticographic signals in humans. Journal of Neural Engineering. 2008;5:75–84. doi: 10.1088/1741-2560/5/1/008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Sellers EW, Ryan DB, Hauser CK. Noninvasive brain-computer interface enables communication after brainstem stroke. Science Translational Medicine. 2014;6:257re7. doi: 10.1126/scitranslmed.3007801. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP. Instant neural control of a movement signal. Nature. 2002;416:141–142. doi: 10.1038/416141a. [DOI] [PubMed] [Google Scholar]
  54. Shanechi MM, Orsborn AL, Moorman HG, Gowda S, Dangi S, Carmena JM. Rapid control and feedback rates enhance neuroprosthetic control. Nature Communications. 2017;8:13825. doi: 10.1038/ncomms13825. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Shenoy KV, Meeker D, Cao S, Kureshi SA, Pesaran B, Buneo CA, Batista AP, Mitra PP, Burdick JW, Andersen RA. Neural prosthetic control signals from plan activity. NeuroReport. 2003;14:591–596. doi: 10.1097/00001756-200303240-00013. [DOI] [PubMed] [Google Scholar]
  56. Silfverberg M, MacKenzie IS, Korhonen P. Predicting text entry speed on mobile phones. Proceedings of the ACM Conference on Human Factors in Computing Systems - CHI 2000; The Hague, The Netherlands — April 01 - 06, 2000. New York: ACM; 2000. pp. 9–16. [Google Scholar]
  57. Simeral JD, Kim SP, Black MJ, Donoghue JP, Hochberg LR. Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array. Journal of Neural Engineering. 2011;8:025027. doi: 10.1088/1741-2560/8/2/025027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Spüler M, Rosenstiel W, Bogdan M. Online adaptation of a c-VEP Brain-computer Interface(BCI) based on error-related potentials and unsupervised learning. PLoS One. 2012;7:e51077. doi: 10.1371/journal.pone.0051077. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Taylor DM, Tillery SIH, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002;296:1829–1832. doi: 10.1126/science.1070291. [DOI] [PubMed] [Google Scholar]
  60. Townsend G, LaPallo BK, Boulay CB, Krusienski DJ, Frye GE, Hauser CK, Schwartz NE, Vaughan TM, Wolpaw JR, Sellers EW. A novel P300-based brain-computer interface stimulus presentation paradigm: moving beyond rows and columns. Clinical Neurophysiology. 2010;121:1109–1120. doi: 10.1016/j.clinph.2010.01.030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Townsend G, Platsko V. Pushing the P300-based brain–computer interface beyond 100 bpm: extending performance guided constraints into the temporal domain. Journal of Neural Engineering. 2016;13:026024. doi: 10.1088/1741-2560/13/2/026024. [DOI] [PubMed] [Google Scholar]
  62. Vansteensel MJ, Pels EG, Bleichner MG, Branco MP, Denison T, Freudenburg ZV, Gosselaar P, Leinders S, Ottens TH, Van Den Boom MA, Van Rijen PC, Aarnoutse EJ, Ramsey NF. Fully implanted brain-computer interface in a locked-in patient with ALS. New England Journal of Medicine. 2016;375:2060–2066. doi: 10.1056/NEJMoa1608085. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453:1098–1101. doi: 10.1038/nature06996. [DOI] [PubMed] [Google Scholar]
  64. Venkatagiri HS. Clinical measurement of rate of reading and discourse in young adults. Journal of Fluency Disorders. 1999;24:209–226. doi: 10.1016/S0094-730X(99)00010-8. [DOI] [Google Scholar]
  65. Wang W, Collinger JL, Degenhart AD, Tyler-Kabara EC, Schwartz AB, Moran DW, Weber DJ, Wodlinger B, Vinjamuri RK, Ashmore RC, Kelly JW, Boninger ML. An electrocorticographic brain interface in an individual with tetraplegia. PLoS One. 2013;8:e55344. doi: 10.1371/journal.pone.0055344. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Wolpaw JR, Ramoser H, McFarland DJ, Pfurtscheller G. EEG-based communication: improved accuracy by response verification. IEEE Transactions on Rehabilitation Engineering. 1998;6:326–333. doi: 10.1109/86.712231. [DOI] [PubMed] [Google Scholar]
eLife. 2017 Feb 21;6:e18554. doi: 10.7554/eLife.18554.033

Decision letter

Editor: Sabine Kastner1

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "High performance communication by people with ALS using an intracortical brain-computer interface" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by Sabine Kastner as the Reviewing Editor and Senior Editor. The following individual involved in review of your submission has agreed to reveal his identity: Hagai Lalazar (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary: This study examines spelling performance for two patients with ALS using an intracortical brain computer interface. This is the most comprehensive and elegant study that has been performed on this topic up to this point, and the editors agreed that it merits further consideration for publication at eLife.

Essential revisions:

1) Please discuss why the classification approach (vs. regression underlying cursor control) has not yet been explored.

2) Please provide some evidence that during the sessions when subject T6 was instructed not to use her hand, there was no overt movement. If you don't have EMG or motion capture, even an analysis of the video might satisfy this concern.

3) Please provide additional data (ideally EMG/high resolution eye tracking) to rule out muscle artifacts, or, alternatively, careful documentation of raw LFP traces that were used for control, demonstrating that they appear "naturalistic" rather than "artefactual".

4) Please explain why the HMM was not used with subject T7.

5) Please provide a detailed training history of the subjects (including previous experiments) and obtain as complete as possible training history for the studies they are comparing with. Please explain why the comparison is justified.

Reviewer #1:

Overview: The current study assesses spelling performance for two patients with ALS using an intracortical Brain computer interface (BCI) and a specific algorithm (refit-KF for cursor movement and an HMM for click selection). The main claim of the study is that the exhibited performance is better than previous spelling BCI studies with disabled patients.

General assessment: My main concern is whether the study's approach and result are innovative enough for publication in eLife. The authors previously published a study with the same participants (Gilja et al., 2015) using the same algorithm for cursor movement control. In the current study this is additionally combined with click selection (via an HMM) for spelling, which in the attached IEEE paper is shown to provide a modest but significant improvement on target selection. However, in one of the 2 subjects in the current study, click selection wasn't used. Thus, it is unclear how important this addition is for the exhibited performance. Relatedly, a recent noninvasive BCI spelling study on healthy subjects (Townsend & Platsko, 2016) appears to exhibit similar performance to the current study (1.73 bps in their "free spelling" task, see their table 5). That study should be referred to and the comparison discussed. While the current study appears to exhibit the best spelling accuracy to date as measured on ALS patients using a BCI, another study (McCane et al., 2015) showed that ALS patients and age-matched controls exhibit similar performance while using the noninvasive BCI speller used in the Townsend & Platsko study. Thus, that study's results arguably apply to ALS patients as well.

Other issues:

1) Intracranial LFP signals can be affected by activity of head muscles, including eye muscles. I'm concerned that such artifacts took place in the higher-performing subject (T6), especially given that for that subject LFP frequencies of up to 450 Hz were used, as such high frequencies are more prone to artifacts than low frequencies. If possible, the authors should perform an additional experimental session where EMG signals from head muscles are recorded, or at least eye movements (including micro-saccades) are tracked, and show that the relevant control signals are uncorrelated with the EMG/eye movements.

2) Given that subject T6 has used an eye gaze speller before, it should be feasible to compare the spelling performance of that system vs. the one in the current study. This should be done to substantiate the main conclusion of the study that "intracortical BCIs offer a promising approach to assistive communication systems for people with paralysis."

3) Related to the above, and given that the paper is relatively short, a section detailing a cost-benefit analysis of invasive methods vs. noninvasive ones should be added to the Discussion. If the authors believe (as I assume they do) that the exhibited performance of their BCI outweighs the risks of invasive surgery and chronic implants, then that claim should be made explicit and substantiated.

4) The number of subjects (2) is standard for this type of study. However, there are significant differences in the procedures involving each of the subjects, particularly regarding: 1 – the calibration procedure of the algorithm, which in one subject (T6) involved performing motor movements with the fingers. 2 – one participant (T6) performed clicks which were decoded using an HMM, the other (T7) did not. Thus, it is unclear if the subjects could actually be pooled together.

5) Related to the above, the best subject in the study seemed to exhibit a capacity for natural motor control not attained by the other subject in the study, and possibly by some subjects in previous studies that are being compared. Thus, this capacity may underlie the exhibited high performance. In order to address this concern, the authors performed an additional two sessions with this subject where the movements were required to be suppressed. However, it is unclear if movements were actually suppressed in those sessions, as no data to that effect (such as EMG) is shown. Thus, it is unclear if this control is adequate. The authors should substantiate the claim of this control by showing such data.

6) I couldn't find references for several of the studies that are being compared in the table (Figure 3—figure supplement 3).

7) The best-performing subject (T6) barely has any spiking signal in the array (Figure 3—figure supplement 5). Thus, it is unclear what type of activity is being extracted and used. The authors should provide traces of the threshold crossings from a small number of channels (e.g. 10) which were most used for control, for example as assessed by an off-line channel-dropping analysis. Additionally, they should provide traces of the LFP features used for control, assessed in the same way. There should also be an analysis detailing which features were more influential for control (spiking or LFP) for both cursor movement and click selection.

8) There should be a summarized but detailed history of BCI experience for both subjects, as this is directly relevant to their capacity to control the BCI. Differences in training can make it hard to compare between studies. Subjects in intracortical BCI studies typically train for much longer periods than in EEG BCI studies. For example, in (Mainsah et al., 2015), which is a study that should be added to the comparison table, subjects trained for 3 days, while subjects in the current study may well have been training for months in different BCI experiments. The authors should (when possible) add that information to the study comparison table and explain, in cases where previous studies had much shorter training periods than the current study, why the comparison is still justified.

References:

Townsend, G. & Platsko, V. Pushing the P300-based brain-computer interface beyond 100 bpm: extending performance guided constraints into the temporal domain. J Neural Eng., 13(2):026024. (2016).

McCane, L.M., Heckman, S.M., McFarland, D.J., Townsend. G., Mak, J.N., Sellers, E.W., Zeitlin, D., et al. P300-based brain-computer interface (BCI) event-related potentials (ERPs): People with amyotrophic lateral sclerosis (ALS) vs. age-matched controls. Clinical neurophysiology 126 (11): 2124-31. (2015).

Mainsah, B.O., Collins, L.M., Colwell, K.A., Sellers, E.W., Ryan, D.B., Caves, K. & Throckmorton, C.S. Increasing BCI communication rates with dynamic stopping towards more practical use: an ALS study. J Neural Eng., 12(1): 016013. (2015).

Reviewer #3:

A closed-loop BMI for typing was tested with 2 ALS patients. Performance showed substantial improvement compared to previous studies, on both realistic and quantitative tests. This study is thorough, its details are, for the most part, explained well, and it has important clinical implications. I have made suggestions to improve the clarity and presentation of the manuscript.

1) Improving typing speeds can be evaluated both on an absolute scale (relative to the typing speeds of able-bodied typers) and in terms of the improvement relative to previous studies. This study makes a substantial improvement in both regards; however, both points can be made more clearly and earlier in the paper.

The comparison to able-bodied typers (main text, twelfth paragraph) should be stated in the Abstract (perhaps just comparing to texting, which is the most comparable to cursor control) and mentioned earlier in the manuscript (e.g. after the fourth paragraph of the main text). This is critical for a non-expert reader to be able to assess this work in the general context of the state-of-the-art of assistive communication devices.

Additionally, the table in Figure 3—figure supplement 3, should be made a full figure. This comparison gives a broad picture of the improved performance achieved in the current study (i.e. the variance across many other studies and the 1-2 orders of magnitude improvement over EEG based approaches). This table should have an additional column, describing in a few words the algorithm and type of user-interface used in each study (e.g. "p300-speller", "ReFIT-KF & HMM", etc.). Also, all the papers cited in the table should appear in the references.

2) Typing is inherently a (multi-class) classification problem, and not the regression problem underlying 2D cursor control. BCI typing speeds will eventually be limited only by the mean time of each cursor trajectory (between each letter selected), and any time used for the selection itself (dwell times, or "click"). This difference explains why the texting speeds of able-bodied subjects (which usually use only one finger) are lower than their keyboard typing speeds (which have discretized the alphabet into the 10 fingers). This issue has not been explored in the current paper, nor, to my knowledge, in the series of studies on "typing" with humans or monkeys (except for Andersen et al. (2004)). If the authors have done any offline data-analysis to test this idea, it would be very helpful to describe it briefly. Otherwise, why this approach has not yet been explored should be explained.

3) Figure 3—figure supplement 1 discusses that performance (in both the copy-typing and grid tasks) was not significantly different when the subject was asked to suppress any overt movements. However, there is no measurement of the movement and its ensuing reduction after the instruction. The main text should be more forthcoming about this, and mention that this is only what the subject was trying to do, but that it was not measured or quantified. Moreover, as EMG from the arm was not measured, it should be mentioned that an effect of nascent muscle commands (that did not elicit observable movements) on the performance cannot be ruled out. This is especially important, as subject T6 had better results, and her remaining finger movements were used to train her initial decoders. As different patients may or may not have specific residual movements, this makes performance comparisons between them less precise. This point should also be mentioned in the paragraph hypothesizing about the reasons for the performance differences between the subjects (main text, fifteenth paragraph).

4) The analysis in Figure 3—figure supplement 4 suffers from all the weaknesses of analyzing the tuning of M1 neurons by fitting them to a cosine-tuning function for movement direction, reported in many studies (some even by authors of the current study). For example, (i) the percent of neurons that show a change in preferred direction depends on the goodness-of-fit of the cosine model across the population (which the authors don't report), (ii) preferred directions have been shown to change as a function of time during the movement (Churchland & Shenoy (2007), Figure 13), (iii) there are high frequency deviations from cosine tuning, which, in addition to the cosine tuning component, may be expected from random connectivity (Lalazar, Abbott, Vaadia (2016)), etc. This figure and the associated sentence in the main text (main text, eighth paragraph; and Methods subsection) do not contribute to the manuscript and only detract from its otherwise compelling level of rigor. I suggest removing them.

5) Why was the HMM decoder not used for subject T7? This should be explained.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "High performance communication by people with paralysis using an intracortical brain-computer interface" for further consideration at eLife. Your revised article has been favorably evaluated by Sabine Kastner (Senior Editor) and one of the previous reviewers.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

1) The paragraph discussing the costs and benefits of invasive vs. noninvasive BCI strategies should be more balanced. The potential risk of brain surgery (e.g. infection, tissue damage, brain swelling, seizures) needs to be explicitly stated, especially the risk of infection, given that there is an implant providing a physical connection from the brain to outside the scalp.

2) In several instances, the authors stress the benefits of a fully self-calibrating, fully wireless implantable system. They need to make clear that theirs is not such a system.

eLife. 2017 Feb 21;6:e18554. doi: 10.7554/eLife.18554.034

Author response


Essential revisions:

1) Please discuss why the classification approach (vs. regression underlying cursor control) has not yet been explored.

We agree that classification (i.e., estimating the endpoint goal) is a very promising alternative approach for use in communication interfaces. We have added the text included below to the Discussion section of the main manuscript in order to clarify why the current approach was investigated:

“Previous work with non-human primates from our lab and others (e.g., Shenoy et al., NeuroReport 2003; Musallam et al., Science 2004; Santhanam et al., Nature 2006) demonstrated that BCI strategies which leverage discrete classification can achieve high communication rates. […] Thus, there are multiple technical and scientific challenges to address, and developing these approaches for clinical trial participants is an active area of research.”

2) Please provide some evidence that during the sessions when subject T6 was instructed not to use her hand, there was no overt movement. If you don't have EMG or motion capture, even an analysis of the video might satisfy this concern.

This request refers to sessions with participant T6, who at the time of this study still retained the ability to make dexterous movements with her fingers. A potential concern was that T6’s high performance might be due to her ability to make movements. To address this, in the original submission, we performed control experiments in which the participant was asked to suppress her natural movements as best as possible and control the BCI. In these sessions, decoders were calibrated with physical movements suppressed (i.e., “open-loop” calibration), and the participant imagined movements of her fingers. These experiments showed that performance in closed-loop BCI control with movements suppressed was comparable to performance when we did not explicitly ask her to suppress hand movements.

Further, to address questions regarding the original data on movement suppression with participant T6, we have added quantification to show the degree to which T6’s movements were suppressed. This quantification is based on hand movement data from a commercially-available “data glove” sensor system, which was used to track the position of the participant’s fingers during the session. This analysis is now presented in detail in Figure 4 and its supplements.

Overall, when T6 actively attempted to suppress movements, her movement was reduced by a factor of 7.2 – 12.6 (Figure 4—figure supplement 1). As described in the main text, despite this factor of 7.2 – 12.6 in movement suppression, performance was quite similar to performance when T6 moved freely. Across all three quantitative evaluation types (OPTI-II, QWERTY & Grid), the performance differences were within 0-20% and not significant (p > 0.2 in all cases, Student’s t test).
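To make the comparison above concrete, a minimal Python sketch of the two computations (the movement-suppression factor from glove data and the Student's t test on performance) is given below. This is an illustration only: the glove displacements, per-block performance values, and variable names are hypothetical placeholders, not the study's actual data.

import numpy as np
from scipy import stats

# Hypothetical per-block mean glove displacement (arbitrary units) in each condition.
glove_free = np.array([1.10, 0.95, 1.30, 1.05, 1.20])        # moving freely
glove_suppressed = np.array([0.12, 0.09, 0.15, 0.11, 0.10])  # movements suppressed
suppression_factor = glove_free.mean() / glove_suppressed.mean()
print(f"movement reduced by a factor of {suppression_factor:.1f}")

# Hypothetical per-block performance (bits/s) for one evaluation task.
perf_free = np.array([2.1, 2.3, 2.0, 2.2])
perf_suppressed = np.array([1.9, 2.2, 2.1, 2.0])
# Two-sample Student's t test, mirroring the comparison reported above.
t_stat, p_value = stats.ttest_ind(perf_free, perf_suppressed)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")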

The concern that movement ability is required to achieve high-performance BCI control is addressed by data from a new participant (T5). Unlike T6, T5 had no ability to make any functional movements of his upper and lower extremities, and retained only very limited volitional movements of his non-dominant elbow and fingers (detailed in Methods). This level of movement ability is representative of the severely motor impaired population. For T5, continuous cursor control was based on attempted movement of the dominant hand/arm (for which no consistent movements were possible), and discrete selection was based on attempted movement of the non-dominant hand. Despite T5’s large decrease in movement ability relative to participant T6, his performance was in fact higher than participant T6. This further supports the idea that the ability to generate movements is not required for high performance cursor control.

As requested by the reviewers, we have adjusted the manuscript to make the comparison between T6’s performance with or without suppressing movements more prominent. We moved the text describing this topic into its own paragraph in the Results section, we moved the comparison to its own dedicated figure with two new supplementary figures (Figure 4 and Figure 4—figure supplements 1 and 2), and we have added text to the Results and Discussion to highlight this issue and detail our conclusions.

Results:

“As might be expected, T6 found that suppressing her natural movement was a challenging, cognitively demanding task. […] Despite this factor of 7.2 – 12.6 in movement suppression, performance was quite similar to performance when T6 moved freely – across all three quantitative evaluation types (Grid, OPTI-II, QWERTY), the performance differences were within 0-20% and not significant (p > 0.2 in all cases, Student’s t test).”

Discussion paragraphs:

“At the time of this study, participant T6 still retained the ability to make dexterous movements of her hands and fingers, which may raise the question of whether her high level of performance was related to the generation of movement. […] Further, there was little if any correspondence between the participants’ movement abilities and BCI performance.”

3) Please provide additional data (ideally EMG/high resolution eye tracking) to rule out muscle artifacts, or, alternatively, careful documentation of raw LFP traces that were used for control, demonstrating that they appear "naturalistic" rather than "artefactual".

We appreciate the reviewer’s question regarding high-frequency LFP signals (which were used for BCI control with participant T6) and whether these might contain artifacts related to EMG from eye movements. As requested, we have provided both additional data and analyses to rule out this possibility. We have added the following text to the Methods that discusses the possibility and our new data/analyses in detail:

“A potential concern with decoding a power signal such as these high frequency LFP (HF-LFP) signals (which were used for participant T6) is that they may pick up artifacts related to EMG from eye movements. […] These lines of evidence make the possibility that HF-LFP signals are eye movement-related highly unlikely.”

Last, as a piece of anecdotal evidence, we note that we originally aimed to conduct these performance evaluations with T6 seated at a closer distance to the display (56 cm). However, at this distance, we found that T6 had to make large eye movements to scan the workspace in order to find letters and targets, which hampered her performance and comfort. Instead, we performed the evaluations at a farther distance from the display (90 cm), greatly reducing her eye movements and improving her performance and comfort. If eye movement-related signals were mediating performance, we would expect performance to suffer as eye movements were reduced. However, this was not the case.
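For orientation, the HF-LFP features discussed above are band-limited power estimates computed from the broadband recording. The sketch below shows one common way such a feature could be computed; the 125-450 Hz passband, 30 kHz sampling rate, and 20 ms bins are illustrative assumptions, not necessarily the parameters used in this study.

import numpy as np
from scipy import signal

FS = 30_000          # assumed broadband sampling rate (Hz)
BAND = (125, 450)    # assumed HF-LFP passband (Hz)
BIN_MS = 20          # assumed feature bin width (ms)

def hf_lfp_power(broadband):
    """Band-pass one channel of broadband data and return binned power."""
    sos = signal.butter(4, BAND, btype="bandpass", fs=FS, output="sos")
    filtered = signal.sosfiltfilt(sos, broadband)
    samples_per_bin = int(FS * BIN_MS / 1000)
    n_bins = len(filtered) // samples_per_bin
    trimmed = filtered[: n_bins * samples_per_bin]
    return (trimmed ** 2).reshape(n_bins, samples_per_bin).mean(axis=1)

# Example on one second of synthetic noise: yields 50 bins of 20 ms each.
power = hf_lfp_power(np.random.default_rng(0).standard_normal(FS))
print(power.shape)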

4) Please explain why the HMM was not used with subject T7.

Both reviewers brought up the issue that the HMM was not replicated with participant T7, and we understand and appreciate this question. We have added text to the Discussion (replicated below) that explains why this was not possible. Further, to directly address this question, in the revised submission we added data from a third participant (T5), which allowed further replication of our methods, including the HMM. Overall performance with T5 was even higher than the two previous participants, and further, the HMM was even more accurate with T5 than with T6 (as documented in Figure 3—figure supplement 1).

The following was added to the Discussion to explain the lack of HMM with participant T7:

“Both participants T6 and T5 used the HMM decoder for discrete selection. Our goal was to also use the HMM with participant T7. […] Unfortunately, however, T7 passed away before the HMM sessions could be conducted.”

5) Please provide a detailed training history of the subjects (including previous experiments) and obtain as complete as possible training history for the studies they are comparing with. Please explain why the comparison is justified.

We appreciate the reviewer’s comments regarding training history for our study and the studies we compare to in Table 1. To address these comments, we added a table to summarize the training history as best we can for all studies in Table 1 (new Table 1).

At the time of the sessions presented in this paper, both T6 & T7 had performed 2-3 research sessions per week for 1.5 years. Sessions consisted mainly of tasks related to controlling computer cursors, typing and communication, and attempted / imagined movement imagery, with additional sessions dedicated to robotic arm control. We did not perform detailed experiments to track learning or the effects of BCI experience on performance. Fortunately, for T6 & T7, a direct comparison is possible with Jarosiewicz et al., Science Translational Medicine 2015 (which, prior to this study, had the highest reported bitrates for a BCI used by a person with motor impairment) as both T6 & T7 were participants in that study as well. For T7, later sessions from Jarosiewicz et al. were performed within 3 months of the current study. For T6, the sessions from Jarosiewicz et al. were collected both before and after the current sessions. For both participants, as noted in the text, performance in the current study was approximately a factor of 2 higher than Jarosiewicz et al. This rules out training time as a main driver of the differences in performance.

Further, we have now added data from a new, recently-implanted participant (T5; implanted in Aug 2016, first BCI session Sept 2016, and data collected in Oct 2016) which shows that higher performance than T6 could be achieved within 9 sessions of first controlling the BCI. We were not trying to minimize this time in any way, and spent many of those first sessions simply characterizing the types of intended movements that elicit neural modulation in the areas recorded by the arrays.

While it may be possible, as the reviewer suggests, that EEG studies may increase performance with longer exposure to the BCI, this has yet to be shown. As mentioned in Table 1, the Sellers and colleagues paper tracked an EEG participant for > 1 year, and they did not see an increase in performance over this time. Similarly, the average participant in the Mugler and colleagues study had 3+ years of experience with BCIs, yet performance was not markedly higher than the other P300 EEG-based studies in the literature.

As mentioned above, we did not perform detailed experiments to track learning. In our experience, large changes in performance were achieved by trying out new algorithms or behavioral imagery paradigms, but we did not generally see slow gradual increases in BCI control quality from repeatedly using the same decoding strategies. A recent non-human primate report from our group is consistent with the idea of seeing little if any change in performance across months and years of BCI use (Gilja et al., Nature Neuroscience 2012, see Figure 2) when the decoder algorithm provides high performance early on and thus there is little “pressure” on the system to engage neural adaptation (Shenoy & Carmena, Neuron, 2014).

Reviewer #1:

Overview: The current study assesses spelling performance for two patients with ALS using an intracortical Brain computer interface (BCI) and a specific algorithm (refit-KF for cursor movement and an HMM for click selection). The main claim of the study is that the exhibited performance is better than previous spelling BCI studies with disabled patients.

General assessment: My main concern is whether the study's approach and result are innovative enough for publication in eLife. The authors previously published a study with the same participants (Gilja et al., 2015) using the same algorithm for cursor movement control. In the current study this is additionally combined with click selection (via an HMM) for spelling, which in the attached IEEE paper is shown to provide a modest but significant improvement on target selection. However, in one of the 2 subjects in the current study, click selection wasn't used. Thus, it is unclear how important this addition is for the exhibited performance.

Thank you for this helpful question. We agree that this work builds on previous methodologies, but there are several reasons why the current work represents an important milestone in the development of clinically useful BCIs. While our previous manuscript (Gilja et al., 2015) showed improved continuous control performance over earlier approaches, as the reviewer points out, the current study adds a parallel decoding method to enable discrete selection (the HMM), which is a critical step in developing point-and-click interfaces that would be suitable to control a general purpose computing device. While it is unfortunate that the HMM could not be tested with the participant T7 (as described in Essential revisions #4 above), we now replicate the HMM results with an additional participant (T5) and show even higher performance than in the original manuscript.
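To illustrate the parallel-decoding idea in concrete terms, a minimal sketch of a two-state ("move" vs. "click") HMM forward filter is given below. It is not the study's implementation: the transition probabilities, the diagonal-Gaussian observation model, and the posterior threshold are all illustrative assumptions, and in practice such a discrete decoder would run alongside the continuous cursor decoder on the same binned neural features.

import numpy as np

# Assumed transition matrix and prior: clicks are rare and brief.
A = np.array([[0.995, 0.005],
              [0.100, 0.900]])   # rows: from-state (move, click)
PRIOR = np.array([0.99, 0.01])

def gaussian_loglik(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def hmm_click_filter(features, means, variances, threshold=0.95):
    """Forward-filter the click posterior per time bin; report bins whose
    click probability exceeds `threshold`. `features` is (time, channels)."""
    belief = PRIOR.copy()
    click_bins = []
    for t, x in enumerate(features):
        belief = A.T @ belief                              # predict step
        lik = np.array([np.exp(gaussian_loglik(x, means[s], variances[s]))
                        for s in range(2)])                # observation update
        belief = belief * lik
        belief /= belief.sum()
        if belief[1] > threshold:
            click_bins.append(t)   # discrete selection; cursor decoding continues in parallel
    return click_bins

# Synthetic example: 10-channel features, with the last 5 bins shifted toward the "click" state.
rng = np.random.default_rng(1)
means, variances = np.stack([np.zeros(10), np.ones(10)]), np.ones((2, 10))
feats = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(1, 1, (5, 10))])
print(hmm_click_filter(feats, means, variances))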

Importantly, following the Gilja et al. study, it was unknown whether high performance BCI control based on motor cortical signals could be maintained in the face of increasingly complex tasks and cognitive loads. Here we explicitly show this. In all tasks presented, the participants had to select the correct targets without accidentally selecting distractor targets. Further, in the copy typing tasks here, the participants were required to copy sequences of words, keep track of their position within the word and sentence, recall their intended letter’s position (across multiple keyboard layouts), and navigate to the correct letter. In the free typing task, the participant had to actively construct their intended sentences while controlling the BCI. In all cases, the system demonstrated unparalleled performance for a BCI with subjects with motor impairment.

Further, this study documents a communication BCI with people with motor impairment whose performance would satisfy the majority of users’ desires. Specifically, as mentioned in the text, a previous survey of the communication desires of people with ALS (Huggins et al., Amyotroph Lateral Scler 2011) found that 59% of respondents would be satisfied with a communication speed of 10-14 characters per minute (and 72% with 15-19 characters per minute). The typing performance reported in the current study (31.6 ccpm, 39.2 ccpm (7.8 wpm), and 13.5 ccpm for T6, T5, and T7, respectively) would, at least as documented by Huggins’ study, satisfy the desires of the majority of this population (and this performance would further increase with the addition of common word completion algorithms).

Relatedly, a recent noninvasive BCI spelling study on healthy subjects (Townsend & Platsko, 2016) appears to exhibit similar performance to the current study (1.73 bps in their "free spelling" task, see their table 5). That study should be referred to and the comparison discussed. While the current study appears to exhibit the best spelling accuracy to date as measured on ALS patients using a BCI, another study (McCane et al., 2015) showed that ALS patients and age-matched controls exhibit similar performance while using the noninvasive BCI speller used in the Townsend & Platsko study. Thus, that study's results arguably apply to ALS patients as well.

Thank you for raising the Townsend and Platsko work. That study and a few other previous BCI studies (e.g., Brunner et al., Front Neurosci 2011, Spüler et al., PLoS ONE 2012, Chen et al., PNAS 2015) nominally report higher typing rates than those listed in the table when measured with healthy subjects. However, we exclude these from the comparison as, at present, we do not know how their reported performance would translate to subjects with motor impairment.

Other issues:

1) Intracranial LFP signals can be affected by activity of head muscles, including eye muscles. I'm concerned that such artifacts took place in the higher-performing subject (T6), especially given that for that subject LFP frequencies of up to 450 Hz were used, as such high frequencies are more prone to artifacts than low frequencies. If possible, the authors should perform an additional experimental session where EMG signals from head muscles are recorded, or at least eye movements (including micro-saccades) are tracked, and show that the relevant control signals are uncorrelated with the EMG/eye movements.

We appreciate this question. The detailed response to this question is in Essential revisions #3 above.

2) Given that subject T6 has used an eye gaze speller before, it should be feasible to compare the spelling performance of that system vs. the one in the current study. This should be done to substantiate the main conclusion of the study that "intracortical BCIs offer a promising approach to assistive communication systems for people with paralysis."

We appreciate this important question. First, participant T6 is no longer a part of the clinical trial; therefore, collecting such data is not possible. Second, we hold a slightly different perspective than that posed in this reviewer question. We believe that the results presented support the conclusion that intracortical BCIs offer a promising approach to assistive communication systems for people with paralysis. The results show that intracortical BCIs are capable of high typing rates (12-40 correct characters per minute (ccpm)) and communication rates (1.4-3.7 bits per second), and were demonstrated in complex, real-world tasks. As mentioned in the paper and above (B1), the levels of communication performance demonstrated in our study would satisfy the majority of people with ALS. We believe that this is sufficient to conclude that intracortical BCIs offer a promising approach to assistive communication systems for people with paralysis.
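As a rough guide to how the figures quoted above relate to each other, the sketch below converts correct characters per minute to words per minute (using the standard five-characters-per-word convention) and computes an achieved bitrate, assuming the definition used in prior work from this group (Nuyujukian et al., 2015); the example selection counts and timing are placeholders.

import math

def achieved_bitrate(n_targets, correct, incorrect, seconds):
    """Achieved bitrate (bits/s): log2(N - 1) * max(Sc - Si, 0) / t."""
    return math.log2(n_targets - 1) * max(correct - incorrect, 0) / seconds

def ccpm_to_wpm(ccpm, chars_per_word=5.0):
    """Correct characters per minute -> words per minute."""
    return ccpm / chars_per_word

# Placeholder example: a 6x6 grid block lasting two minutes.
print(achieved_bitrate(n_targets=36, correct=70, incorrect=5, seconds=120))  # ~2.8 bits/s
print(ccpm_to_wpm(39.2))  # ~7.8 wpm, matching the conversion quoted above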

In a separate study, we are actively comparing available AAC systems directly to the current decoders used with our iBCI technology. This turns out to be a much more difficult study to perform than one would imagine (due largely to the many ways in which eye gaze and other AAC systems fail and require recalibration, and due to challenges in assessing the benefits provided by the “input” (eye gaze vs. neural control) as compared to the interface (screen keyboard differences, dwell time vs. click selection, etc.)).

3) Related to the above, and given that the paper is relatively short, there should be added to the Discussion a section detailing a cost-benefit analysis of invasive methods vs. noninvasive ones. If the authors believe (as I assume they do) that the exhibited performance of their BCI outweighs the risks of invasive surgery and chronic implants then that claim should be made explicit and substantiated.

As requested, we have added the following to the Discussion section:

“The question of the suitability of implanted versus external BCI systems (or any other external AAC system) for restoring function is an important one. Any technology (or any medical procedure) that requires surgery will be accompanied by some risk. […] Thus, there is a clear willingness among people with paralysis to undergo a surgical procedure if it could provide significant improvements in their daily functioning.”

4) The number of subjects (2) is standard for this type of study. However, there are significant differences in the procedures involving each of the subjects, particularly regarding: 1 – the calibration procedure of the algorithm, which in one subject (T6) involved performing motor movements with the fingers. 2 – one participant (T6) performed clicks which were decoded using an HMM, the other (T7) did not. Thus, it is unclear if the subjects could actually be pooled together.

We appreciate this question. The detailed response is in Essential revisions #4 above, including the addition of a new participant (T5) that addresses these questions quite directly.

5) Related to the above, the best subject in the study seemed to exhibit a capacity for natural motor control not attained by the other subject in the study, and possibly by some subjects in previous studies that are being compared. Thus, this capacity may underlie the exhibited high performance. In order to address this concern, the authors performed an additional two sessions with this subject where the movements were required to be suppressed. However, it is unclear if movements were actually suppressed in those sessions, as no data to that effect (such as EMG) is shown. Thus, it is unclear if this control is adequate. The authors should substantiate the claim of this control by showing such data.

We appreciate this question. The detailed response is in Essential revisions #2 above.

6) I couldn't find references for several of the studies that are being compared in the table (Figure 3—figure supplement 3).

This is now fixed. We apologize for this error in referencing.

7) The best-performing subject (T6) barely has any spiking signal in the array (Figure 3—figure supplement 5). Thus, it is unclear what type of activity is being extracted and used. The authors should provide traces of the threshold crossings from a small number of channels (e.g. 10) which were most used for control, for example as assessed by an off-line channel-dropping analysis. Additionally, they should provide traces of the LFP features used for control, assessed in the same way. There should also be an analysis detailing which features were more influential for control (spiking or LFP) for both cursor movement and click selection.

We appreciate this question. The detailed response is in Essential revisions #3 above.

8) There should be a summarized but detailed history of BCI experience for both subjects, as this is directly relevant to their capacity to control the BCI. Differences in training can make it hard to compare between studies. Subjects in intracortical BCI studies typically train for much longer periods than in EEG BCI studies. For example, in (Mainsah et al., 2015), which is a study that should be added to the comparison table, subjects trained for 3 days, while subjects in the current study may well have been training for months in different BCI experiments. The authors should (when possible) add that information to the study comparison table and explain, in cases where previous studies had much shorter training periods than the current study, why the comparison is still justified.

We appreciate this question. The detailed response is in Essential revisions #5 above. Also, we appreciate the additional reference and have now added this study to the comparison table.

Reviewer #3:

A closed-loop BMI for typing was tested with 2 ALS patients. Performance showed substantial improvement compared to previous studies, on both realistic and quantitative tests. This study is thorough, its details are, for the most part, explained well, and it has important clinical implications. I have made suggestions to improve the clarity and presentation of the manuscript.

1) Improving typing speeds can be evaluated both on an absolute scale (relative to the typing speeds of able-bodied typers) and in terms of the improvement relative to previous studies. This study makes a substantial improvement in both regards; however, both points can be made more clearly and earlier in the paper.

The comparison to able-bodied typers (main text, twelfth paragraph) should be stated in the Abstract (perhaps just comparing to texting, which is the most comparable to cursor control) and mentioned earlier in the manuscript (e.g. after the fourth paragraph of the main text). This is critical for a non-expert reader to be able to assess this work in the general context of the state-of-the-art of assistive communication devices.

We appreciate this suggestion. However, we purposefully decided not to discuss absolute performance numbers (i.e., raw typing speed or communication rates) in the Abstract because our goal in the Abstract is to put this work in context in comparison to previous work, in order to highlight the advance of this paper. We chose instead to reserve the raw performance metrics for the Results. We believe that the most appropriate place to relate these performance metrics to broader communication methods (e.g., typing by healthy subjects) is the Discussion, and have now moved the text relating to communication rates for healthy subjects to the first paragraph of the Discussion. In sum, we can certainly appreciate how this suggestion makes sense and could help many appreciate this work even more, but on balance we believe that the present approach has some key advantages.

Additionally, the table in Figure 3—figure supplement 3, should be made a full figure. This comparison gives a broad picture of the improved performance achieved in the current study (i.e. the variance across many other studies and the 1-2 orders of magnitude improvement over EEG based approaches).

We agree. This is now Table 1.

This table should have an additional column, describing in a few words the algorithm and type of user-interface used in each study (e.g. "p300-speller", "ReFIT-KF & HMM", etc.).

This has been added to the table. Thank you for the suggestion.

Also, all the papers cited in the table should appear in the references.

We apologize for this error in referencing. This is now fixed.

2) Typing is inherently a (multi-class) classification problem, and not the regression problem underlying 2D cursor control. BCI typing speeds will eventually be limited only by the mean time of each cursor trajectory (between each letter selected), and any time used for the selection itself (dwell times, or "click"). This difference explains why the texting speeds of able-bodied subjects (which usually use only one finger) are lower than their keyboard typing speeds (which have discretized the alphabet into the 10 fingers). This issue has not been explored in the current paper, nor, to my knowledge, in the series of studies on "typing" with humans or monkeys (except for Andersen et al. (2004)). If the authors have done any offline data-analysis to test this idea, it would be very helpful to describe it briefly. Otherwise, why this approach has not yet been explored should be explained.

We appreciate this suggestion. The detailed response is in Essential revisions #1 above. Our response notes that we are quite familiar with this approach (so we doubly appreciate / enjoy receiving this question), as one of the senior authors (Shenoy) co-developed this approach with Richard Andersen years ago (Shenoy, …, Andersen, NeuroReport 2003) and, to our knowledge, has achieved the highest level of performance with the classification approach to date (Santhanam, …, Shenoy, Nature 2006). In future work we anticipate investigating this approach in people, as Prof. Andersen has beautifully been doing, but there are many advantages – importantly including flexibility with various interfaces – with the regression approach (as described above).
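To make the classification framing concrete, the sketch below computes the standard (Wolpaw) information transfer rate for an N-class selection with symmetric errors, which is the usual way the throughput ceiling of a classification-style speller is expressed; the accuracy and selection-rate values are hypothetical.

import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR (bits per selection) for an N-class classifier with symmetric errors."""
    if accuracy >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + accuracy * math.log2(accuracy)
            + (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1)))

# Hypothetical example: 36 selectable characters, 90% accuracy, one selection per second.
bps = bits_per_selection(36, 0.90)
print(f"{bps:.2f} bits/selection -> {bps:.2f} bits/s at one selection per second")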

3) Figure 3—figure supplement 1 discusses that performance (in both the copy-typing and grid tasks) was not significantly different when the subject was asked to suppress any overt movements. However, there is no measurement of the movement and its ensuing reduction after the instruction. The main text should be more forthcoming about this, and mention that this is only what the subject was trying to do, but that it was not measured or quantified. Moreover, as EMG from the arm was not measured, it should be mentioned that an effect of nascent muscle commands (that did not elicit observable movements) on the performance cannot be ruled out. This is especially important, as subject T6 had better results, and her remaining finger movements were used to train her initial decoders. As different patients may or may not have specific residual movements, this makes performance comparisons between them less precise. This point should also be mentioned in the paragraph hypothesizing about the reasons for the performance differences between the subjects (main text, fifteenth paragraph).

We appreciate this question and the suggestions. The detailed response is in Essential revisions #2 above.

An additional point to address this question: as mentioned in the section regarding “Performance of the BCI with movements suppressed” (Figure 4), we did not use finger movements to calibrate initial decoders in the “movement suppressed” sessions with T6. Rather, these decoders were calibrated in an “open-loop” fashion as the participant imagined movements. The degree of movement during this calibration is now detailed in Figure 4.

Last, regarding the impact of residual movements on performance – the addition of data from participant T5, who has no remaining functional movement but higher performance than either T6 or T7 – calls into question any potential relationship between movement ability and performance. Thus, we do not mention residual movement as a possible mediator of high performance, as the data do not support this conclusion. We have done our best to make the movement ability of each participant more clearly stated throughout the manuscript.

4) The analysis in Figure 3—figure supplement 4 suffers from all the weaknesses of analyzing the tuning of M1 neurons by fitting them to a cosine-tuning function for movement direction, reported in many studies (some even by authors of the current study). For example, (i) the percent of neurons that show a change in preferred direction depends on the goodness-of-fit of the cosine model across the population (which the authors don't report), (ii) preferred directions have been shown to change as a function of time during the movement (Churchland & Shenoy (2007), Figure 13), (iii) there are high frequency deviations from cosine tuning, which, in addition to the cosine tuning component, may be expected from random connectivity (Lalazar, Abbott, Vaadia (2016)), etc. This figure and the associated sentence in the main text (main text, eighth paragraph; and Methods subsection) do not contribute to the manuscript and only detract from its otherwise compelling level of rigor. I suggest removing them.

We agree with the reviewer, and we are happy to remove it. This analysis (cosine-tuning across the neural population) has been removed. We included it originally as we are sometimes asked about this point, but we fully agree with the reviewer that this is not as meaningful an analysis as some might initially think, and we surely wish to achieve and maintain a high level of rigor.

5) Why was the HMM decoder not used for subject T7? This should be explained.

This is detailed in Essential revisions #4 above.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance, as outlined below:

1) The paragraph discussing the costs and benefits of invasive vs. noninvasive BCI strategies should be more balanced. The potential risk of brain surgery (e.g. infection, tissue damage, brain swelling, seizures) needs to be explicitly stated, especially the risk of infection, given that there is an implant providing a physical connection from the brain to outside the scalp.

We appreciate the desire to make the paragraph appropriately balanced, and have modified the text to list some of the immediate risks associated with the specific neurosurgical procedure. The paragraph already contains multiple references to guide readers to an in-depth discussion on the risks and risk-benefit analysis of an implanted neural interface.

“The question of the suitability of implanted versus external BCI systems (or any other external AAC system) for restoring function is an important one. […] That risk is not viewed in isolation, but is compared – by the individual contemplating the procedure – to the potential benefit [44,45]. […] Additional discussion of these topics is found in refs. [47,48].”

2) In several instances, the authors stress the benefits of a fully self-calibrating, fully wireless implantable system. They need to make clear that theirs is not such a system.

Thank you for this comment. There are two points in the paper that mention the prospect of a wireless and/or self-calibrating interface. We have modified them as below to prevent any confusion.

“A future self-calibrating, fully implanted wireless system could in principle be used without caregiver assistance, would have no cosmetic impact, and could be used around the clock. Such a system may be achievable by combining the advances in this report with previous advances in self-calibration and in fully-implantable wireless interfaces [15, 46].”

“In a recent survey of people with spinal cord injury [49], respondents with high cervical spinal cord injury would be more likely to adopt a hypothetical wireless intracortical system compared to an EEG cap with wires, by a margin of 52% to 39%.”

References cited above:

15) Jarosiewicz, B., Sarma, A. A., Bacher, D., Masse, N. Y., Simeral, J. D., Sorice, B., Oakley, E. M., Blabe, C., Pandarinath, C., Gilja, V., Cash, S. S., Eskandar, E. N., Friehs, G., Henderson, J. M., Shenoy, K. V., Donoghue, J. P., & Hochberg, L. R. Virtual typing by people with tetraplegia using a stabilized, self-calibrating intracortical brain-computer interface. Science Translational Medicine, 7, 313ra179. (2015)

44) Hochberg, L. & Cochrane, T. Implanted neural interfaces. Neuroethics Pract, 235, (2013)

45) Hochberg, L. R. & Anderson, K. D. BCI users and their needs. Oxford University Press. (2012)

46) Borton, D. A., Yin, M., Aceros, J., & Nurmikko, A. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates. J Neural Eng, 10, 026010. 10.1088/1741-2560/10/2/026010. (2013)

47) Ryu, S. I. & Shenoy, K. V. Human cortical prostheses: lost in translation? Neurosurgical focus, 27, E5. (2009)

48) Gilja, V., Chestek, C. A., Diester, I., Henderson, J. M., Deisseroth, K., & Shenoy, K. V. Challenges and opportunities for next-generation intracortically based neural prostheses. IEEE Transactions on Biomedical Engineering, 58, 1891--1899. (2011)

49) Blabe, C. H., Gilja, V., Chestek, C. A., Shenoy, K. V., Anderson, K. D., & Henderson, J. M. Assessment of brain--machine interfaces from the perspective of people with paralysis. Journal of neural engineering, 12, 043002. (2015)

