Abstract
Objective
To date, the majority of Brain Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding.
Approach
A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in Area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like “Face in a Crowd” task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the “Crowd”) using a neurally controlled cursor. We assessed whether the Crowd affected decodes of intended cursor movements by comparing it to a “Crowd Off” condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality.
Main Results
Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the Crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position.
Significance
Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUIs, e.g. personal computers, mobile devices, and tablet computers.
Keywords: Brain Machine Interface, Posterior parietal cortex, Neuroprosthetics, Neural prosthetics, Graphical User Interface
INTRODUCTION
Neural prosthetics hold great promise for allowing disabled individuals to regain agency over their environment by directly manipulating robotic limbs or computer interfaces. When tested in the laboratory, computer or motor-based interfaces tend to involve only series of individual targets (Collinger et al., 2012, Velliste et al., 2008, Gilja et al., 2012, Hauschild et al., 2012). When multiple targets have been used, the effects of the greater task complexity themselves were not evaluated (Ifft et al., 2013, O'Doherty et al., 2011) or the cognitive component of the tasks has been minor (Hochberg et al., 2006, Shanechi et al., 2012), limiting clinical usefulness. These studies do not replicate the function of a modern GUI, i.e. selecting a remembered or desired target from a group of alternatives. Such function would be clinically relevant and useful to patients with motor deficits.
To this end, we designed a task for NHPs that incorporated these behavioral elements and assessed its effect on an interface driven by neural activity in the PPC. The “Face in a Crowd” task required selecting a single, icon-like stimulus, the “face,” from a group, the “crowd.” The correct target was indicated by an initial sample face stimulus. The targets were refined so as to naturally require a visual search of several saccades to locate the matching stimulus without imposing any artificial constraints on eye movements, i.e. during free gaze. After visually locating the matching target, the subject selected it by manipulating a manually or neurally controlled computer cursor. This task created a NHP analog to human use of a GUI. In the “Crowd Off” task condition, no Crowd appeared, reducing the behavior to a traditional center-out task.
Increasing task complexity from one to many possible targets may seem like a simple change, but the associated cognitive and behavioral requirements are not: more eye movements to and between visual stimuli, more complex decision making, and greater demands on working memory and attention. Prosthetics driven by neural activity in motor cortex may be influenced by these variables, as motor (Rao and Donoghue, 2014) and premotor areas (Pesaran et al., 2008) exhibit strong transients to the onset of visual stimuli. Posterior parietal cortex (PPC), which has also been used to drive cortical prosthetics (Musallam et al., 2004, Mulliken et al. 2008, Hauschild et al., 2012, Ifft et al., 2013) in NHPs, is well known to be sensitive to many of these behavioral variables (Buneo & Andersen, 2006, Colby and Goldberg, 1999, Louie et al., 2011, Pesaran et al., 2010). It is therefore relevant to determine if these added task demands and their neural correlates interfere with the signals upon which a useful neural prosthetic would depend.
Area 5d, a subregion of PPC, was chosen as the substrate for Brain Control due to its selectivity for arm kinematics (Bremner and Andersen, 2012, Cui and Andersen, 2012, Graziano et al., 2000). Neural decoders were repeatedly trained to transform neural activity from this region into cursor commands during both the Crowd On and Crowd Off task conditions to determine whether the Crowd's presence during training or thereafter adversely affected decoding performance.
Some of the behavioral variables mentioned above, e.g. attention and working memory, are difficult to measure directly. Eye movements are closely related to them (Soto et al., 2005) and much more readily recorded. Therefore, we examined eye movements during the various phases of the Face in a Crowd task as well as during a saccade-only task to a) assess the degree of eye tuning in the recorded population of neurons, b) ensure the Face in a Crowd task required a visual search, and c) examine whether task performance under Brain Control was impaired as a result. Furthermore, we sought to determine if cursor movement under Brain Control could be dissociated from eye movements as during natural hand eye coordination.
METHODS
A male rhesus monkey participated in this study. All procedures were approved by the California Institute of Technology Institutional Animal Care and Use Committee and were performed in accordance with NIH guidelines.
Behavioral Setup
The monkey was seated in a chair and viewed all visual stimuli on a vertical LCD monitor placed about 40 cm from the eyes. The NHP's head was held in place by a surgically implanted headpost. When Brain Control was performed, both arms were gently restrained to prevent large arm movements. Eye position was recorded using the ISCAN system (ISCAN Inc., Woburn, MA). Hand position was tracked at 120 Hz with a magnetic 6 DOF trakStar sensor (Ascension Technology Corporation, Milton, VT) affixed to the hand. View of the hand was blocked by an opaque plate placed at neck height. Stimulus presentation was performed with the PsychoPy psychophysics library for Python (Peirce, 2007). Task control and recordings were performed with the Simulink real-time system (The MathWorks Inc., Natick, MA).
Neural Recordings
The monkey was implanted with two 96-channel electrode Cereport arrays (Blackrock Microsystems, Salt Lake City, UT) on the convexity of the superior parietal lobule near the posterior half of the IPS, i.e. the approximate location of neurons functionally ascribed to Area 5d in previous studies (Bremner and Andersen, 2012, Cui and Andersen, 2011). The Cereport (formerly known as the “Utah” array) has been commonly used in human neuroprosthetic studies and was thus used in the present study to more closely mimic clinical techniques for recording extracellular potentials (Rothschild 2010). Neural activity was amplified, digitized, and recorded with the Cerebus neural signal processor. In the Central software suite (Blackrock Microsystems), thresholds for action potential detection on each channel were set daily at −4.5 times the root-mean-square of the raw signal sampled over a 1 second window. In real time, the times of threshold crossings were transmitted to Matlab software (The MathWorks Inc., Natick, MA) and counted in non-overlapping, 50ms time bins. No spike sorting was used, as spike sorting itself is significantly difficult to maintain from day to day in human trials (Franke et al., 2012), and has been reported to confer little benefit upon BMI performance (Fraser et al., 2009). From the two arrays combined, approximately 105 active channels were reliably recorded with spiking activity of some kind, as judged by the experimenter. Active channels that met the simple criterion of firing at an average rate of at least 1 crossing per second during the Training block were used in online decoding. This resulted in 85 ± 2 channels being used for decoding each day.
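The binning and channel-selection steps described above can be sketched as follows (a simplified illustration only; the function names and data layout are ours, not the study's actual real-time pipeline):

```python
import numpy as np

BIN_S = 0.050  # non-overlapping 50 ms bins, as in the recording setup

def bin_crossings(crossing_times, n_channels, t_start, t_stop):
    """Count threshold crossings per channel in non-overlapping 50 ms bins.

    crossing_times: one array of event times (seconds) per channel.
    Returns an (n_bins, n_channels) array of spike counts.
    """
    n_bins = int(round((t_stop - t_start) / BIN_S))
    edges = t_start + BIN_S * np.arange(n_bins + 1)
    counts = np.zeros((n_bins, n_channels), dtype=int)
    for ch, times in enumerate(crossing_times):
        counts[:, ch], _ = np.histogram(times, bins=edges)
    return counts

def active_channels(counts, duration_s, min_rate_hz=1.0):
    """Selection rule from the text: keep channels averaging at least one
    crossing per second during the Training block."""
    rates = counts.sum(axis=0) / duration_s
    return np.flatnonzero(rates >= min_rate_hz)
```

For example, a channel with 10 crossings over a 2 s window (5 Hz) would be kept, while one with a single crossing (0.5 Hz) would be excluded.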
Behavioral Task
A green cursor of radius 0.7cm was continuously presented on the screen. The cursor was controlled either by the monkey's hand moving in the horizontal plane above a flat, table-top surface immediately in front of his body (Manual Control mode), or by the output of a neural decoder (Brain Control mode) with hands gently restrained on the table surface in a relaxed position with elbows bent at approximately 90 degrees.
The purpose of the task was to create an animal paradigm that mimics human use of a GUI. The Crowd task naturally required a period of visual search for a cued stimulus via repeated saccades followed by a cursor movement to, and selection of, the chosen target. We chose visual stimuli/targets consisting of images of various human faces taken from the Psychological Image Collection at Stirling (PICS) database (http://pics.stir.ac.uk). Face targets consisted of a photographic head-on image of one of 3 facial expressions of 12 individuals. One individual was chosen for use as the “goal” face or individual for the current study. All faces were normalized for size with a red surrounding mask that obscured the overall shape of the head and hair. The faces were also normalized for total brightness. These manipulations made the stimuli subtle enough in their differences that they required fixation for correct identification of the goal individual. The goal individual's expression varied from trial to trial but not within a trial. The outer diameter of all the face stimuli, red mask included, was 3 cm. Acceptance windows for all targets and the cursor were identical in size to their respective visual representations.
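The stimulus normalization can be sketched in outline (a hypothetical illustration; the study's actual image processing is not specified beyond size and total-brightness normalization and the red surrounding mask):

```python
import numpy as np

def normalize_total_brightness(img, target_total):
    """Scale pixel intensities so the image's total brightness equals
    target_total, so that all face stimuli share the same summed intensity."""
    return img * (target_total / img.sum())

def apply_circular_mask(img, mask_value):
    """Replace pixels outside an inscribed circle with a uniform mask value,
    obscuring the overall shape of the head and hair (grayscale sketch;
    the study used a red mask)."""
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) / 2.0
    outside = (yy - (h - 1) / 2.0) ** 2 + (xx - (w - 1) / 2.0) ** 2 > r ** 2
    out = img.copy()
    out[outside] = mask_value
    return out
```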
A trial began when a sample face cue (Target 1) of the goal individual appeared at the center of the screen (Figure 1). The subject moved the cursor to overlap the cue for a contiguous Hold Period of 400ms. If overlap was broken during the Hold Period before 400ms elapsed, an entire new 400ms Hold Period would need to be performed. This rule was applied for all Hold Periods in the task. For the Crowd On condition, after the Hold Period, Target 1 disappeared, and a “crowd” of face stimuli of 8 individuals appeared. One of the 8 faces in the crowd (Target 2) was an identical match to the initial cue face, Target 1. Each face in the crowd was situated on a circle of radius 9cm centered on the middle of the screen and separated by 45 degrees on the circle (Figure 1). The monkey then had 20 seconds to locate the matching face and move the cursor to overlap it for another Hold Period of 400ms. After this second Hold Period, a juice reward was delivered via a tube placed in front of the monkey's mouth. Simultaneously, all targets disappeared and a reward beep was sounded. A new trial began after an intertrial interval (ITI) of 0.5s. Failure to locate, select, and Hold Target 2 within the 20s period resulted in termination of the trial: the disappearance of all targets, an auditory cue signifying trial failure, and a penalty ITI of 5-10s. Overlap with an incorrect target for 400ms or more also resulted in termination of the trial. An overlap of less than 400ms with an incorrect Target in the Crowd was permitted. The cursor was continuously controlled during the trials and ITI. In the “No Crowd” task condition, Target 2 appeared somewhere on the same circle described above, but with no other face stimuli present (Figure 1).
Figure 1. Face in the Crowd Task Trial Structure.
The timeline pictured schematizes the phases of the task and associated events. The start of each phase of the task is marked with a tick, labeled, and pictured above with a screenshot of the task display. The behavioral measures used and their corresponding temporal extents are also indicated below the timeline. Target 1 is the Cue Face, and Target 2 is the Match Face. In the Crowd On Condition, Target 2 is accompanied by 7 other faces of different individuals. In Crowd Off, it appears alone. The green dot represents the cursor.
Performance Measures
Task performance was assessed by the fraction of trials successfully completed and by measuring the time required to perform the various stages of the task (Figure 1). Time to Acquire, or TTA, spanned the time between Target 2 Onset (with or without the Crowd) and initial contact with Target 2. This period included the time required to visually locate the matching face whether the Crowd was present or not. The time from initial Acquisition of Target 2 to Reward, or Time to Hold (TTH), measured how long the subject took to “settle” the cursor down on the Target. Time to Hold could be no shorter than 400ms, but could be longer if overlap of the cursor and Target 2 was broken and reestablished before completing the trial. Time to Reward, or TTR, captured the time from Target 2 Onset to Reward and so would be the sum of TTA and TTH for a given trial.
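The three measures reduce to differences between trial event timestamps; a minimal sketch (variable names hypothetical):

```python
def trial_timing(t_target2_on, t_first_contact, t_reward):
    """Compute the three performance measures from trial event times (s).

    TTA: Target 2 onset to first contact with Target 2.
    TTH: first contact to reward (never less than the 0.4 s Hold Period).
    TTR: Target 2 onset to reward, so TTR = TTA + TTH on a given trial.
    """
    tta = t_first_contact - t_target2_on
    tth = t_reward - t_first_contact
    ttr = t_reward - t_target2_on
    return tta, tth, ttr
```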
In order to assess the effect of Brain Control on cursor control without the influence of task difficulty, we calculated the change in Time to Acquire, or ΔTTA, by subtracting average daily TTA in Manual Control from each subsequent Brain Control trial TTA, i.e.
ΔTTA_k = TTA_k^BC − mean(TTA^MC)    (1)
where superscripts MC and BC indicate Manual Control and Brain Control, respectively, and k indicates Brain Control trial k. The average TTA in Manual Control, mean(TTA^MC), was computed per day and per task condition (Crowd On or Off) and was only subtracted from Brain Control trials with the corresponding task condition on the same day. This calculation isolated the difference in TTA that was attributable solely to the use of Brain Control rather than Manual Control by eliminating time consumed by other aspects of the task, e.g. searching for and reacting to the presence of the correct target. It thereby gave a direct indication of the effectiveness of Brain Control of the cursor independent of the influence of other, task-related factors. This measure was then examined as a function of Assessment condition and Training condition (described below) in subsequent analyses.
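Equation (1) amounts to a per-trial subtraction of a per-day, per-condition Manual Control baseline; a minimal sketch (data layout hypothetical):

```python
import numpy as np

def delta_tta(tta_bc_trials, tta_mc_trials):
    """ΔTTA for each Brain Control trial, per Equation (1).

    Both inputs must come from the same day and the same task condition
    (Crowd On or Off), as required in the text; the Manual Control trials
    supply the average baseline that is subtracted.
    """
    return np.asarray(tta_bc_trials, float) - np.mean(tta_mc_trials)
```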
P values reported are the result of a non-parametric, two-sample Kolmogorov-Smirnov test for differences in distributions. Where reported, interquartile range (IQR) was computed by taking the difference between the third quartile (Q3) and the first quartile (Q1) of the data. Reported R-squared values were computed by taking the square of the Pearson's correlation coefficient (R) between the variables specified.
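The statistics named above map directly onto standard routines; a sketch using SciPy and NumPy (helper names are ours):

```python
import numpy as np
from scipy import stats

def compare_distributions(a, b):
    """Non-parametric, two-sample Kolmogorov-Smirnov test; returns the p value."""
    return stats.ks_2samp(a, b).pvalue

def iqr(x):
    """Interquartile range: Q3 minus Q1."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

def r_squared(x, y):
    """Square of Pearson's correlation coefficient between x and y."""
    return np.corrcoef(x, y)[0, 1] ** 2
```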
Decoder Training
Each day began with the monkey performing 160 trials of both the Crowd and No Crowd task conditions alternating every 20 trials under Manual Control. This allowed assessment of daily variation in basic task performance without the influence of Brain Control quality. Next, the monkey's hands were gently restrained.
A previously computed neural decoder, a Training Decoder, was used by the NHP to manipulate the cursor during an initial 250s Training Block in either the Crowd On or Crowd Off condition. The Training Decoder and computer assistance functioned like a set of “training wheels” on a bicycle, allowing the NHP to use neural activity to drive the cursor, though not fully independently.
The Training Decoder was computed in a previous behavioral session using the same methods described here. The task used during training of the Training Decoder was the Crowd Off task condition of the Face in a Crowd task. We attempted to use the same Training Decoder for every Training Block in the current study in order to keep initial conditions for each Training Block as similar as possible; however, after one to four days, Training Decoders stopped generating useful output even with substantial assistance during the Training. When that occurred, the most recently computed decoder (trained with the Crowd Off) was substituted in as a Training Decoder. The dataset for the current study spanned 7 days and 23 decoders. The decoders trained in the first 4 days all used the same Training Decoder during training. The next Training Decoder was used for 2 days, and the third for one day. All analyses described below were repeated on a restricted data set using only decoders trained with the first Training Decoder (days 1-4). The results of those analyses did not differ substantially from the results described below.
Furthermore, using a Training Decoder (itself trained on the Crowd Off task condition) to compute a new decoder with the Crowd On task condition could be considered a sort of worst-case scenario in which the task type changes from one training block to the next. We reasoned that if no impairment of decoding function was caused by the “switch” to the Crowd On condition, there would be no reason to expect an impairment to arise if the two tasks were fully segregated with respect to training and assessment. It is conceivable that the switch might even provide an advantage, but our main goal in the study was to examine whether or not these task contexts reduced decode performance.
During the Training Block, output of the Training Decoder was assisted by removing some fraction of the error in cursor movement in each time bin. Error was defined as the component of the instantaneous movement vector that did not point directly at the Target. When no target was present on the screen, any movement of the cursor was considered error. Typically, the assistance level was adjusted such that 30% of the error was removed in each time bin. During Training, the ITI was set to 0s. The neural activity and cursor kinematics during the Training Block were subsequently used as input to compute a new decoder.
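The assisted-control rule can be made concrete as follows (a sketch under the stated assumptions; the 30% figure is the typical value given above, the function names are ours, and the case of the cursor already sitting on the target is omitted for brevity):

```python
import numpy as np

ASSIST = 0.30  # fraction of the error component removed per time bin

def assist_velocity(v, cursor, target, assist=ASSIST):
    """Remove a fraction of the decoded velocity's error component.

    Error is the component of the instantaneous movement vector that does
    not point directly at the target; with no target on screen, any
    movement is considered error.
    """
    if target is None:
        return (1.0 - assist) * np.asarray(v, float)
    d = np.asarray(target, float) - np.asarray(cursor, float)
    d_hat = d / np.linalg.norm(d)          # unit vector toward the target
    v = np.asarray(v, float)
    v_on = np.dot(v, d_hat) * d_hat        # component toward the target
    v_err = v - v_on                       # error component
    return v_on + (1.0 - assist) * v_err
```

For a decoded velocity of (1, 1) with the target straight ahead along x, the on-target component (1, 0) is kept while the error component (0, 1) is attenuated to (0, 0.7).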
When computing the new decoder after performing the Training Block, the noisy velocities of the cursor during the Training Block were reoriented to point towards the instantaneous goal to more accurately capture the assumed intentions of the subject (Gilja et al., 2012). For each time bin, the intention of the NHP was assumed to be either to move the cursor towards the current correct face target or to hold the cursor steady if the cursor already overlapped the correct target. Reaction times were accounted for by assuming an intention to hold the cursor steady until 200ms after initial cue onset and after Target 2 onset in the No Crowd condition, and until 400ms after onset of Target 2 and the other targets in the Crowd condition. These values were chosen based on average reaction times during Manual Control.
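This intention-relabeling step, following the approach of Gilja et al. (2012), can be sketched as follows (function names are ours; the reaction-time handling described above is omitted for brevity):

```python
import numpy as np

def reorient_velocity(v, cursor, target, on_target):
    """Relabel a recorded training velocity with the assumed intent.

    If the cursor already overlaps the correct target, the assumed intent
    is to hold still (zero velocity); otherwise the recorded speed is kept
    but the direction is rotated to point straight at the target.
    """
    if on_target:
        return np.zeros(2)
    d = np.asarray(target, float) - np.asarray(cursor, float)
    return np.linalg.norm(v) * d / np.linalg.norm(d)
```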
Once the new decoder was computed, it was then used for Brain Control by the NHP without assistance to perform the task in both Crowd On and Off conditions in 10 alternating blocks of 20 trials, yielding a total of 200 Assessment trials per decoder. This process of Training and Assessment of performance was repeated so that the effect on performance of the Crowd (during Training and/or Decoding) could be measured. Twenty-three decoders were trained and assessed across seven days. The task condition of the first Training Block on a given day was alternated to remove any order effects.
Decoder Calculation
For transforming neural activity into cursor position and velocity, we used a linear decoding model coupled with a linear state space model of the cursor dynamics. The final decoder form closely resembled that described by Gilja and colleagues (Gilja et al., 2012).
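The decoder itself is not reproduced here, but the general shape of a linear readout coupled with a linear cursor state model can be sketched as follows. This is our own simplified stand-in (linear velocity readout with exponential smoothing and position integration), not the decoder of Gilja and colleagues:

```python
import numpy as np

DT = 0.050  # one 50 ms time bin

class LinearVelocityDecoder:
    """Minimal linear decoder with a linear cursor state model (sketch only).

    Velocity is read out linearly from binned spike counts and smoothed;
    position integrates velocity over successive bins.
    """
    def __init__(self, beta, alpha=0.8):
        self.beta = np.asarray(beta, float)   # (n_channels, 2) weights
        self.alpha = alpha                    # velocity smoothing factor
        self.vel = np.zeros(2)
        self.pos = np.zeros(2)

    def step(self, counts):
        v_obs = np.asarray(counts, float) @ self.beta
        self.vel = self.alpha * self.vel + (1.0 - self.alpha) * v_obs
        self.pos = self.pos + DT * self.vel
        return self.pos.copy()
```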
Saccade Task
To assess the correlation between eye kinematics and neural activity, the monkey was trained to perform a task in which a trial consisted of repeated fixation on a series of 4 yellow circular targets placed on a 2 by 2 equally spaced grid measuring 14cm square. After fixating a target for the required period of 500ms, the target would turn from yellow to grey. After successfully fixating on all 4 targets in any sequence, the targets would all disappear and juice reward would be delivered. An ITI of 0.5s followed. The position of the hand (which was not required to perform the task) was recorded along with neural signals during this task, though the hand rarely moved.
Two days, each consisting of approximately 1500 trials, were recorded. R-squared values between a) linear predictions of eye kinematics based on neuronal firing and b) actual eye kinematics were computed and validated using Leave One Out Cross-Validation (LOOCV) on 20 equally sized segments of the data.
Additionally, segments of neural data were used to decode the spatial locations of the endpoints of saccades during this task. Saccades that began on one of the four targets and ended on any of the other 3 targets were preselected from the data. Each saccade was labeled by the target at which the saccade ended. Observations were comprised of the total number of spikes summed for each neural channel across a window beginning 0.150s before saccade onset and ending 0.300s after. Linear discriminant analysis was used to classify the neural data into one of four possible targets/categories. LOOCV was used to obtain a measure of the performance of neural classification of saccade targets, whereby all observations save one were used as training data. The class of the “left out” trial was then predicted using the classifier. This was repeated using each available trial as the excluded trial. Performance was computed as the percentage of “left out” trials that were correctly classified. A permutation test, whereby target labels were randomly shuffled and LOOCV repeated, was used to generate a null distribution of performance in order to assess whether classification of the actual data exceeded chance levels (n = 10^3 permutations).
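The cross-validation and permutation logic can be sketched as follows. For simplicity, a nearest-class-mean rule stands in for the linear discriminant analysis used in the study, so this illustrates the validation scheme rather than the actual classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def loocv_accuracy(X, y, classify):
    """Leave-one-out cross-validation: hold out each trial in turn and
    predict its label from a classifier fit on the remaining trials."""
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        hits += classify(X[keep], y[keep], X[i]) == y[i]
    return hits / len(y)

def nearest_mean(X_train, y_train, x):
    """Simplified stand-in for LDA: assign x to the class whose mean
    firing pattern is closest."""
    classes = np.unique(y_train)
    means = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(means - x, axis=1))]

def permutation_null(X, y, n_perm=1000):
    """Null distribution of accuracy from randomly shuffled target labels."""
    return np.array([loocv_accuracy(X, rng.permutation(y), nearest_mean)
                     for _ in range(n_perm)])
```

The observed accuracy is then compared against the shuffled-label null to obtain a permutation p value.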
RESULTS
Saccade Task
Ideally, the recorded neurons would not be sensitive to eye movements at all. However, Area 5d neurons show some eye position tuning (Bremner and Andersen, 2012). To quantify the degree of eye position tuning in the population recorded for the current study, we recorded neural activity while the NHP performed a task involving saccades only. Cross-validated R-squared values between neural activity and eye movements were computed. Though highly significant for x and y eye position (px = 1.5e−4 and py = 6.4e−3), the R-squared values obtained were only 0.05 and 0.02, respectively, indicating a measurable but small relationship with eye position. P values were not significant (>> 0.05) for eye velocity.
When decoding the goal of individual saccades from amongst the 4 possible targets in the saccade task based only on neural data, 41.54% correct classification was achieved for n = 674 saccades. Though modest, this performance significantly exceeded the chance level of 25% (p < 10^−5, permutation test). Thus, although the influence of eye movements was measurable in the neural signals, it did not appear to be very strong.
Behavioral Task – Manual Control
During the Manual Control block of each day, the monkey successfully completed > 99% of the trials, i.e. selected the correct face before time ran out. This performance indicated that the animal had no difficulty in reliably finding the matching Face in the Crowd. Basic task performance statistics under Manual Control for all days (Crowd On n = 567 trials, Crowd Off n = 560 trials) revealed the desired effect (Figure 2). As expected, the presence of the Crowd significantly increased TTA (Crowd On Median = 0.82 s, IQR = 0.32; Crowd Off Median = 0.48 s, IQR = 0.11; p < 10^−16) and TTR (Crowd On Median = 1.23 s, IQR = 0.34; Crowd Off Median = 0.89 s, IQR = 0.12; p < 10^−16), but not TTH (Crowd On Median = 0.40 s, IQR = 1.4e−14; Crowd Off Median = 0.40 s, IQR = 2.8e−14; p = 0.08). This is because the Crowd necessitates a visual search, which lengthens Acquisition time but not the time required to Hold the target once it has been initially contacted. The delay in Acquisition of course results in slower overall task performance as measured by TTR.
Figure 2. Manual Control Performance.

Boxplots of performance measures during Manual Control in Crowd On (n = 567 trials) and Crowd Off (n = 560 trials) task conditions. Wide, middle band represents the middle two quartiles. Thinner bands on top and bottom represent the top and bottom quartiles, respectively. Circles with dots indicate medians. Outliers are small circles jittered in the horizontal axis for visibility. Filled (Crowd On) or empty (Crowd Off) bands and circles indicate the Crowd On or Crowd Off task condition. Values exceeding Q3 + 1.5 × (Q3 − Q1) are considered outliers, where Q1 and Q3 are the 25th and 75th percentiles, respectively. The Crowd On condition resulted in an increase in Acquisition and Reward time relative to the Crowd Off condition. Time to Hold was not significantly affected, as the vast majority of the Hold Times in both conditions were the minimum possible value of 400ms.
To verify that TTA was increased because the Crowd required the animal to search and identify the target face, we examined eye behavior prior to movement onset and between movement onset and target acquisition for each day. Eye positions for each trial were rotated to place Target 2 at the 3 O'clock position. Two-dimensional histograms of eye position for each task condition clearly reveal the monkey's tendency to visually search though the faces in the Crowd On task condition before initiating his hand movement (Figure 3). Histograms for the period between Movement Onset and Target 2 acquisition looked similar, indicating that the monkey continued scanning the faces even during and after movement to select the correct target (Figure 4, left panel).
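The rotation used to pool trials across the eight target locations is a standard 2D rotation; a minimal sketch (function name ours):

```python
import numpy as np

def rotate_to_three_oclock(eye_xy, target_angle):
    """Rotate trial eye positions so Target 2 lands at the 3 O'clock
    position (angle 0), allowing trials with different target locations
    to be pooled into one histogram.

    eye_xy: (n_samples, 2) eye positions in screen-centered coordinates.
    target_angle: angular position of Target 2 on the circle (radians).
    """
    c, s = np.cos(-target_angle), np.sin(-target_angle)
    R = np.array([[c, -s], [s, c]])       # rotation by -target_angle
    return np.asarray(eye_xy, float) @ R.T
```

For example, a gaze sample sitting on a target at 12 O'clock (9 cm straight up) maps to 9 cm to the right, i.e. the 3 O'clock position.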
Figure 3. Effect of the Crowd on Gaze During Manual Control.

Heat maps of eye position during Manual Control averaged across n = 80 trials in each panel. Data was taken between onset of Target 2 and movement onset of the Hand. Data for each trial was rotated such that the location of Target 2 falls on the 3 O'clock position. (During task performance, Target 2 appeared in any one of the 8 possible positions.) The data demonstrated the NHP's tendency to gaze around the screen before moving the cursor when the Crowd was On. When the Crowd was Off, the NHP was able to initiate his hand movement to the target even before making a saccade to it. This explains why the left panel does not capture the position of the target at the 3 O'clock position.
Figure 4. Gaze with Crowd on during Manual vs Brain Control.

Heat maps of eye position as in Figure 3 averaged across n = 160 trials and n = 100 in left and right panels, respectively. Data was again rotated for each trial to place the correct target at the 3 O'clock position. Here, data was taken during a 1 second window ending on Target Acquisition, guaranteeing that the cursor was in motion. We compared this epoch between Manual and Brain Control to confirm that the NHP was able to freely gaze at the targets even while maintaining straight cursor motion. For Brain Control, the trials with the straightest cursor trajectories were preselected by only analyzing the fastest 15% of trials. Average cursor trajectory during the same period for each control type is superimposed on the images in cyan. Video 2 also demonstrates dissociation of neural cursor movement from eye movements.
Behavioral Task – Brain Control
During the Brain Control sessions, the monkey successfully completed > 98% of the trials, i.e. selected the correct face before time ran out (Video 1). The fraction of trials that were completed successfully did not significantly differ between Manual and Brain Control blocks (p = 0.75).
Each decoder was trained during a Training Block either with the Crowd On or Crowd Off and then assessed for performance with the Crowd On or Off. This comprised a 2 × 2 factorial design with the “main effects” being Training Condition and Assessment Condition.
The task condition in which the decoders were trained, the Training Condition, did not significantly influence the TTA achieved (Crowd On Median = 1.65 s, IQR = 1.00, n = 1508 trials; Crowd Off Median = 1.69 s, IQR = 1.05, n = 1871 trials; p = 0.22; Figure 5, left panel). We also examined whether the task condition used during Training of a decoder had any systematic effect on the β weights for any channel or dimension (x or y velocity). After using the Bonferroni method to correct for multiple comparisons, there were no significant differences in the decoder weights among all channel/dimension combinations as a function of task condition. Thus, the task condition used during Training did not seem to affect subsequent performance or the decoders themselves.
Figure 5. Performance Measures Across Training and Assessment Conditions (Brain Control).

Boxplots as in Figure 2 of performance measures. Outliers have been excluded for clarity. Assessment Condition, indicated by the label on the abscissa, denotes the task used during full Brain Control with no assistance. Training Condition, indicated by filled (Crowd On) or empty (Crowd Off) bands, denotes the task used during Training of decoders. For Time to Acquire (left panel), only Assessment Condition reached significance (p < 10^−16). For ΔTime to Acquire, both the Training Condition (p = 0.03) and Assessment Condition (p = 1.3e−6) were significantly better in the Crowd On conditions, though only by small margins. No pairwise comparisons, indicated by lines joining adjacent medians, reached statistical significance.
As in Manual Control, and as expected, the presence of the Crowd during Assessment blocks slowed task performance (Crowd On Median = 1.75 s, IQR = 1.05, n = 1656 trials; Crowd Off Median = 1.55 s, IQR = 1.00, n = 1723 trials; p < 10^−16; Figure 5, left panel). While this effect on performance was almost certainly due in part to the visual search required when the Crowd was present, it was also possible that the presence of the Crowd impaired Brain Control of the cursor by interfering with the neural signals used to determine cursor position. To test this possibility, we devised a second measure, ΔTTA, to directly assess the quality of Brain Control under the various Training and Assessment conditions.
The ΔTTA was computed for all trials to quantify how Brain Control affected the Acquisition of Target 2 relative to Manual Control in isolation from other factors. We then used the same statistical comparisons that were computed for the unadjusted TTA values to determine whether or not the Crowd's presence during Training or Assessment influenced the ΔTTA. While the comparison revealed a significant effect of Training condition (Crowd On Median = 0.99, IQR = 1.00, n = 1508 trials; Crowd Off Median = 1.04, IQR = 1.03, n = 1871 trials; p = 0.03, Figure 5, right panel), and the trend favored the Crowd On condition, the small difference in the medians suggested only a negligible advantage. We found a significant influence of Assessment Condition on ΔTTA (Crowd On Median = 0.95, IQR = 1.04, n = 1656 trials; Crowd Off Median = 1.08, IQR = 0.97, n = 1723 trials; p = 1.3e−6, Figure 5, right panel), indicating that decoding with the Crowd On yielded slightly better brain control quality. Again, however, the magnitude of this difference was only 45ms, so we considered this difference small enough to be negligible.
Taken together, these results indicated that the additional eye movements and various behavioral demands of the Crowd On task condition did not interfere directly with decoding of a neurally controlled cursor. The more complex task condition might have conferred a small albeit negligible advantage to performance of neural decoding.
These results suggest the NHP was able to gaze freely around the screen while independently controlling cursor position. However, an alternative hypothesis, that saccades did negatively impact cursor control, would also account for this result if the NHP had simply learned to minimize extraneous saccades when the Crowd was present. To rule out this possibility, we once again examined 2D histograms of eye position during the phase of each trial when the cursor was actively being transported to the target (Figure 4). These histograms include eye positions across a one second window ending on Acquisition of Target 2. This window was chosen to capture the time when the cursor was still in motion in both Manual (left panel) and Brain Control (right panel). For Brain Control, the trials with the straightest cursor trajectories were preselected for this analysis by choosing the fastest 15% of trials. For this representative set, it is clear that, even during active cursor movement in both Manual and Brain Control, the animal made many saccades to the faces around the screen. Additionally, ample saccades during cursor movement are evident in videos of task performance under Brain Control, in which playback speed was slowed and the animal's gaze position was added post-hoc (Video 2).
Though the NHP was able to gaze around the screen with the Crowd On during Brain Control as well as in Manual Control, it should be noted that the overall number of saccades was reduced in Brain Control. We counted the saccades landing on a peripheral target (and not the correct target or the cursor) in a one second window ending on initial Acquisition of Target 2 for each trial. Comparing the occurrence of these saccades in Manual (mean = 2.04 saccades, s.d. = 1.54) vs. Brain Control (mean = 0.82 saccades, s.d. = 1.18) trials revealed significantly more in Manual Control (p < 10⁻¹⁶). One possible explanation for this outcome is the increased difficulty and imperfect accuracy of Brain Control, i.e. on trials where cursor control was worse, the subject would need to gaze at the cursor longer to maintain closed-loop control. An alternative explanation is that extraneous saccades reduced decode accuracy, and the subject learned to make fewer saccades to maintain cursor control. To distinguish these possibilities, we computed the correlation between the number of saccades to peripheral targets (as above) in each trial and the TTA across all n = 1718 trials. A correlation of r = −0.12 (p < 10⁻⁷, t-test) supports the former account and argues against the latter: trials with many saccades to locations not occupied by the cursor were among the shortest, while the longest trials involved prolonged periods of gazing at the cursor, presumably to accommodate feedback control.
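The correlation analysis above can be sketched as follows on synthetic data. The per-trial saccade counts and TTA values here are fabricated (a weak negative coupling is built in to mirror the reported r ≈ −0.12); only the procedure, a Pearson correlation with its significance test across n = 1718 trials, reflects the analysis described.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_trials = 1718

# Hypothetical per-trial counts of saccades to peripheral targets,
# roughly matching the reported Brain Control mean of ~0.8 saccades.
saccade_counts = rng.poisson(lam=0.8, size=n_trials).astype(float)

# Hypothetical Time-To-Acquire (s) with a weak negative dependence on
# saccade count, so more peripheral saccades -> slightly shorter trials.
tta = 1.8 - 0.1 * saccade_counts + rng.normal(0.0, 0.8, size=n_trials)

# Pearson correlation and its p-value (t-distributed under the null).
r, p = pearsonr(saccade_counts, tta)
```

A negative r here, as in the paper, indicates that saccade-heavy trials tended to be the faster ones, which is the signature of good trials freeing gaze rather than saccades degrading control.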
In a separate session, the Brain Control task was run with long (1 s) ITIs to determine whether the animal was able to move the cursor back to the middle of the screen, the location where Target 1 appears at the start of each trial, before there was any overt visual cue to do so. The animal's performance clearly demonstrated his ability to move the cursor back to the middle of the screen before Target 1 appeared, in anticipation of the upcoming trial (Video 3). Additionally, in separate sessions, we confirmed that the decoders trained in this center-out style task could generalize to a 3×3 grid of the same face targets spaced evenly on an 18 cm × 18 cm square. While performance in terms of trial length was inherently slower than in the circular, center-out task (p < 10⁻¹⁶) due to the longer cursor movements required (Time to Acquire: Median = 2.405 s, IQR = 1.433; Time to Reward: Median = 3.226 s, IQR = 1.540), control of the cursor itself was qualitatively no different (Video 4).
While the subject's arms were prevented from making large movements during Brain Control, he was still able to make small wrist and finger movements, though these movements as measured by the tracking sensor did not directly influence cursor position. On most days both hands were observably and measurably still (Figure 6); on other days, small residual movements were made during performance of the task under Brain Control. We used the measured hand kinematics (in the horizontal plane that would typically be used to control the cursor in Manual Control) to predict the kinematics of the neural cursor. Cross-validated R-squared values never exceeded 0.03 for either dimension on any day, indicating little influence of residual hand movements on the decoded cursor.
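The cross-validated prediction above can be sketched as follows. The regression model and fold count are assumptions (a plain linear regression with 5-fold cross-validation); the hand and cursor traces are synthetic and statistically unrelated, so the cross-validated R² should sit near or below zero, the situation the paper reports.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000  # hypothetical number of time bins in a session

# Hypothetical hand-sensor kinematics (x, y in the horizontal plane)
# and decoded neural-cursor kinematics; deliberately independent here.
hand = rng.normal(size=(n, 2))
cursor = rng.normal(size=(n, 2))

def cv_r2(X, y, k=5):
    """k-fold cross-validated R^2 of a linear regression of y on X."""
    idx = np.arange(len(y))
    ss_res = ss_tot = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        # Fit y ~ X + intercept on the training folds.
        A = np.column_stack([X[train], np.ones(len(train))])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        # Evaluate on the held-out fold.
        pred = np.column_stack([X[fold], np.ones(len(fold))]) @ coef
        ss_res += np.sum((y[fold] - pred) ** 2)
        ss_tot += np.sum((y[fold] - y[fold].mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2_x = cv_r2(hand, cursor[:, 0])  # horizontal cursor dimension
r2_y = cv_r2(hand, cursor[:, 1])  # vertical cursor dimension
```

When the hand carries no information about the cursor, held-out R² hovers near zero (and can go slightly negative), which is how an R² ceiling of 0.03 supports the claim that residual hand movements did not drive the decoder.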
DISCUSSION
Despite the known sensitivity of motor control areas to numerous cognitive and motor variables (Rao and Donoghue, 2014; Buneo and Andersen, 2006; Colby and Goldberg, 1999; Louie et al., 2011; Pesaran et al., 2010), we showed robust use of a neurally controlled cursor driven by signals from the parietal cortex in a context cognitively and visually richer than any created to date for use by primates. The task created a primate model of human use of GUI interfaces, e.g. tablet computers or smartphones. The performance measures indicated that training and decoding with the Crowd On did not impair neural cursor control, and may even have conferred a small (albeit negligible) advantage. These small differences may simply have arisen from motivational factors, i.e. more “interesting” stimuli being present on the screen with the Crowd On.
By targeting Area 5d for the implant, we were able to obtain neural signals that reflected intended movements of the limb. Though residual eye-related signals were measurable in a control task, it was clear that they did not interfere with the functioning of the interface, whether during use of an existing decoder or during training. To our knowledge, this study is the first confirmation that unconstrained gaze does not interfere with prosthetic control, even in a visually complex task environment. Furthermore, we demonstrated the ability of the subject to decouple gaze position, i.e. sensing, from control of the cursor, i.e. the motor intention. This is a crucial capability for providing natural, intuitive control.
This capability was further emphasized given the ability of the subject to manipulate the cursor even in the absence of overt visual targets during the trials with extended ITIs. This result suggests that neural activity in parietal cortex can capture motor intentions without the need for overt visual representations of movement goals.
We hope that, in upcoming clinical work, human subjects will be able to control parietal neuroprosthetics by naturally manipulating their internal representation of the limb. Alternatively, with training, patients may learn to manipulate the cursor or end effector mentally and directly, without remapping imagined actions or resorting to other indirect strategies. This capability could reasonably be expected given the mechanisms of tool use and extension of the body schema observed in parietal neurons by Iriki and colleagues (Iriki et al., 2001).
These findings as a whole strengthen the case for the use of the parietal cortex in human clinical neuroprosthetic applications. They suggest that a human subject controlling a neural cursor driven by spiking activity in the parietal cortex could elicit similar results: robust 2D control that is insensitive to the visual and behavioral nuances of a modern computing interface.
Supplementary Material
Video 1 – Brain Control performance of the task in the Crowd On condition. Video was regenerated from recorded behavioral data and played back in real-time. The images appear almost exactly as they were viewed by the animal, with one exception: the behavioral clock was superimposed in the top left corner post-hoc for reference.
Video 2 – Brain Control performance of the task in the Crowd On condition as in Video 1. Here, the Gaze Position has been superimposed post-hoc to demonstrate the animal's eye behavior. The video plays back once in real-time and then again at half speed for clarity.
Video 3 – Brain Control performance of the task in the Crowd On condition as in Video 1. In this sample, the Intertrial Intervals were set to 1s to demonstrate the subject's ability to return the cursor to the center of the screen in anticipation of the next trial and without visible visual goals. Unrelated to the current study, task performance in this video also included use of a State Decoder (Revechkis et al., 2013).
Video 4 – Brain Control performance of the task in the Crowd On condition with the targets on a 3 × 3 grid, generated as in Video 1.
Figure 6. Hand and Cursor Position During Decoding.

Heat maps of Hand and Cursor positions during Manual & Brain Control. Hotter colors indicate greater fraction of time spent in that location. Left panel: Hand and Cursor (which are causally linked and thus represented with one image) during Manual Control averaged across n = 161 trials. Hand position (middle panel) and Cursor position (right panel) averaged across the same set of n = 465 Brain Control trials. For the brain control session shown, the signal measured by the hand sensor was very close to its static measurement noise.
Table 1.
Statistics for All Combinations of Conditions and Measures. Table of data for all combinations of Training and Assessment conditions for both main measures. The layout of each group from left to right corresponds to the layout in Figure 5. Note that both measures (Time to Acquire and ΔTime to Acquire) were computed using the same trials, hence the correspondence in the number of trials between the left and right halves of the table.
| | Time to Acquire | | | | ΔTime to Acquire | | | |
|---|---|---|---|---|---|---|---|---|
| Assess. Condition | Crowd On | | Crowd Off | | Crowd On | | Crowd Off | |
| Train. Condition | Crowd Off | Crowd On | Crowd Off | Crowd On | Crowd Off | Crowd On | Crowd Off | Crowd On |
| Median (s) | 1.779 | 1.748 | 1.597 | 1.531 | 0.969 | 0.940 | 1.106 | 1.038 |
| IQR (s) | 1.062 | 1.010 | 1.000 | 0.984 | 1.055 | 1.012 | 0.999 | 0.978 |
| n (Trials) | 893 | 763 | 978 | 745 | 893 | 763 | 978 | 745 |
Acknowledgements
This work was supported by NIH Grant EY015545. We thank Drs. Eunjung Hwang and Chess Stetson for scientific discussion, Tessa Yao for editorial assistance, Kelsie Pejsa for animal care, and Viktor Shcherbatyuk for technical assistance.
References
- Bremner LR, Andersen RA. Coding of the Reach Vector in Parietal Area 5d. Neuron. 2012;75:342–351. doi: 10.1016/j.neuron.2012.03.041.
- Buneo C, Andersen R. The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia. 2006;44:2594–2606. doi: 10.1016/j.neuropsychologia.2005.10.011.
- Colby C, Goldberg M. Space and attention in parietal cortex. Annual Review of Neuroscience. 1999;22:319–349. doi: 10.1146/annurev.neuro.22.1.319.
- Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, McMorland AJC, Velliste M, Boninger ML, Schwartz AB. High-performance neuroprosthetic control by an individual with tetraplegia. Lancet. 2012;6736:61816–61819. doi: 10.1016/S0140-6736(12)61816-9.
- Crammond D, Kalaska J. Neuronal activity in primate parietal cortex area 5 varies with intended movement direction during an instructed-delay period. Experimental Brain Research. 1989;76:458–462. doi: 10.1007/BF00247902.
- Cui H, Andersen RA. Different Representations of Potential and Selected Motor Plans by Distinct Parietal Areas. Journal of Neuroscience. 2011;31(49):18130–18136. doi: 10.1523/JNEUROSCI.6247-10.2011.
- Franke F, Jackel D, Dragas J, Muller J, Radivojevic M, Bakkum D, Hierlemann A. High-density microelectrode array recordings and real-time spike sorting for closed-loop experiments: an emerging technology to study neural plasticity. Frontiers in Neural Circuits. 2012;6:105. doi: 10.3389/fncir.2012.00105.
- Fraser GW, Chase SM, Whitford A, Schwartz AB. Control of a brain–computer interface without spike sorting. Journal of Neural Engineering. 2009;6:1–8. doi: 10.1088/1741-2560/6/5/055004.
- Gilja V, Nuyujukian P, Chestek CA, Cunningham JP, Yu BM, Fan JM, Churchland MM, Kaufman MT, Kao JC, Ryu SI, Shenoy KV. A high-performance neural prosthesis enabled by control algorithm design. Nature Neuroscience. 2012;15(12):1752–1757. doi: 10.1038/nn.3265.
- Graziano MSA. Coding the Location of the Arm by Sight. Science. 2000;290(5497):1782–1786. doi: 10.1126/science.290.5497.1782.
- Hauschild M, Mulliken G, Fineman I. Cognitive signals for brain–machine interfaces in posterior parietal cortex include continuous 3D trajectory commands. Proceedings of the National Academy of Sciences. 2012;109(42):17075–17080. doi: 10.1073/pnas.1215092109.
- Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442(7099):164–171. doi: 10.1038/nature04970.
- Hwang E, Andersen R. The utility of multichannel local field potentials for brain–machine interfaces. Journal of Neural Engineering. 2013;10(4):046005. doi: 10.1088/1741-2560/10/4/046005.
- Ifft P, Shokur S, Li Z, Lebedev M, Nicolelis M. A Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys. Science Translational Medicine. 2013;5(210):210ra154. doi: 10.1126/scitranslmed.3006159.
- Iriki A, Tanaka M, Obayashi S, Iwamura Y. Self-images in the video monitor coded by monkey intraparietal neurons. Neuroscience Research. 2001;40:163–173. doi: 10.1016/s0168-0102(01)00225-5.
- Louie K, Grattan LE, Glimcher PW. Reward Value-Based Gain Control: Divisive Normalization in Parietal Cortex. Journal of Neuroscience. 2011;31(29):10627–10639. doi: 10.1523/JNEUROSCI.1237-11.2011.
- Mulliken GH, Musallam S, Andersen RA. Decoding Trajectories from Posterior Parietal Cortex Ensembles. Journal of Neuroscience. 2008;28(48):12913–12926. doi: 10.1523/JNEUROSCI.1463-08.2008.
- Musallam S, Corneil B, Greger B, Scherberger H. Cognitive control signals for neural prosthetics. Science. 2004;305:258–262. doi: 10.1126/science.1097938.
- O'Doherty JE, Lebedev MA, Ifft PJ, Zhuang KZ, Shokur S, Bleuler H, Nicolelis MA. Active tactile exploration using a brain-machine-brain interface. Nature. 2011;479(7372):228–231. doi: 10.1038/nature10489.
- Peirce JW. PsychoPy - Psychophysics software in Python. Journal of Neuroscience Methods. 2007;162(1-2):8–13. doi: 10.1016/j.jneumeth.2006.11.017.
- Pesaran B, Musallam S, Andersen RA. Cognitive neural prosthetics. Current Biology. 2006;16(3):R77–R80. doi: 10.1016/j.cub.2006.01.043.
- Pesaran B, Nelson MJ, Andersen RA. A Relative Position Code for Saccades in Dorsal Premotor Cortex. Journal of Neuroscience. 2010;30(19):6527–6537. doi: 10.1523/JNEUROSCI.1625-09.2010.
- Pesaran B, Nelson MJ, Andersen RA. Free choice activates a decision circuit between frontal and parietal cortex. Nature. 2008;453(7193):406–409. doi: 10.1038/nature06849.
- Rao N, Donoghue J. Cue to action processing in motor cortex populations. Journal of Neurophysiology. 2014;111:441–453. doi: 10.1152/jn.00274.2013.
- Revechkis B, Aflalo TNS, Andersen R. Use of a posterior parietal cortex brain-machine interface in a cognitive face-in-a-crowd task. Poster presented at: Society for Neuroscience; November 13, 2013; San Diego, CA.
- Rothschild RM. Neuroengineering tools/applications for bidirectional interfaces, brain-computer interfaces, and neuroprosthetic implants - a review of recent progress. Frontiers in Neuroengineering. 2010;3:112. doi: 10.3389/fneng.2010.00112.
- Shanechi MM, Hu RC, Powers M, Wornell GW, Brown EN, Williams ZM. Neural population partitioning and a concurrent brain-machine interface for sequential motor function. Nature Neuroscience. 2012;15(12):1715–1722. doi: 10.1038/nn.3250.
- Soto D, Heinke D, Humphreys GW, Blanco MJ. Early, Involuntary Top-Down Guidance of Attention From Working Memory. Journal of Experimental Psychology: Human Perception and Performance. 2005;31(2):248–261. doi: 10.1037/0096-1523.31.2.248.
- Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB. Cortical control of a prosthetic arm for self-feeding. Nature. 2008;453(7198):1098–1101. doi: 10.1038/nature06996.