The Journal of Neuroscience. 2017 Apr 19;37(16):4311–4322. doi: 10.1523/JNEUROSCI.3077-16.2017

Decoding Information for Grasping from the Macaque Dorsomedial Visual Stream

Matteo Filippini 1,*, Rossella Breveglieri 1,*, M Ali Akhras 1, Annalisa Bosco 1, Eris Chinellato 2, Patrizia Fattori 1
PMCID: PMC6596562  PMID: 28320845

Abstract

Neurodecoders have been developed by researchers mostly to control neuroprosthetic devices, but also to shed new light on neural functions. In this study, we show that signals representing grip configurations can be reliably decoded from neural data acquired from area V6A of the monkey medial posterior parietal cortex. Two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task in the dark and in the light toward objects of different shapes. Population neural activity was extracted at various time intervals during vision of the objects, the delay before movement, and grasp execution. This activity was used to train and validate a Bayes classifier used for decoding objects and grip types. Recognition rates were well over chance level for all the epochs analyzed in this study. Furthermore, we detected slightly different decoding accuracies depending on the task's visual condition. Generalization analysis was performed by training and testing the system during different time intervals. This analysis demonstrated that a change of code occurred during the course of the task. Our classifier was able to discriminate grasp types fairly well in advance of grasping onset. This feature might be important when timing is critical for sending signals to external devices before movement onset. Our results suggest that the neural signals from the dorsomedial visual pathway can be a good substrate to feed neural prostheses for prehensile actions.

SIGNIFICANCE STATEMENT Recordings of neural activity from nonhuman primate frontal and parietal cortex have led to the development of methods of decoding movement information to restore coordinated arm actions in paralyzed human beings. Our results show that the signals measured from the monkey medial posterior parietal cortex are valid for correctly decoding information relevant for grasping. Together with previous studies on decoding reach trajectories from the medial posterior parietal cortex, this highlights the medial parietal cortex as a target site for transforming neural activity into control signals to command prostheses to allow human patients to dexterously perform grasping actions.

Keywords: electrophysiology, grasping, medial parietal cortex, monkey, neuroprosthetics, off-line decoding

Introduction

While artificial systems have yet to match the ability of the primate hand to reach, grasp, and manipulate objects, research on humanoid robots, inspired by the fine performance of the human hand, has moved closer to achieving dexterous grasping and manipulation of objects (Mattar, 2013; Chinellato and del Pobil, 2016). Decoding neural population signals from motor-related areas of the monkey, and recently from human brains, constitutes a promising way to implement modern brain–computer interfaces (BCIs) able to finely control arm actions (Wessberg et al., 2000; Serruya et al., 2002; Taylor et al., 2002; Carmena et al., 2003; Musallam et al., 2004; Hochberg et al., 2006, 2012; Kim et al., 2006; Santhanam et al., 2006; Schwartz et al., 2006; Fetz, 2007; Mulliken et al., 2008; Velliste et al., 2008; Hatsopoulos and Donoghue, 2009; Nicolelis and Lebedev, 2009; Scherberger, 2009; Carpaneto et al., 2011; Shenoy et al., 2011; Townsend et al., 2011; Collinger et al., 2013; Sandberg et al., 2014; Aflalo et al., 2015; Milekovic et al., 2015; Schaffelhofer et al., 2015; Schwartz, 2016).

The medial subdivision of the dorsal visual stream (dorsomedial frontoparietal network; Galletti et al., 2003) has traditionally been considered to be involved in controlling the transport component of prehension (Caminiti et al., 1996; Jeannerod, 1997; Wise et al., 1997), and its neuronal activity has been successfully exploited to decode reach endpoints, goals, and trajectories (Hatsopoulos et al., 2004; Musallam et al., 2004; Santhanam et al., 2006; Mulliken et al., 2008; Aggarwal et al., 2009; Chinellato et al., 2011; Aflalo et al., 2015). However, the dorsomedial stream has also recently been identified as a candidate for encoding grasping (Raos et al., 2004; Stark et al., 2007; Fattori et al., 2010; Breveglieri et al., 2016). This opens new perspectives on the problem of neural signal decoding for hand configurations. In the present work, we analyzed the decoding potential of a parietal node of the dorsomedial stream, area V6A (Galletti et al., 1999), for grasping actions.

Neural decoding analyses typically have two complementary objectives: selecting potential brain areas for driving BCIs and achieving a deeper understanding of the function of neurons in the studied region. In this research, we wanted in particular to ascertain whether the same neural code is used throughout a grasping task, or whether it changes within the time course of action generation. To investigate this issue, we applied a generalization analysis in which the system was trained and tested during different time intervals, an approach that, to the best of our knowledge, has not been used before in related studies.

In addition, we wanted to investigate the dependence of the decoding performance of the proposed neurodecoder on the task condition; more precisely, when grasping is planned and executed either in the dark or in the light. Recent papers show that in V6A there is interplay between vision and movement, both in reaching (Bosco et al., 2010) and in grasping (Breveglieri et al., 2016), given that most V6A cells are modulated by both motor-related and visual components. We wanted to see whether there are differences in decoding performance when the visual information is present or absent before and during grasping and, in this case, to look for differences in the time course of the neural codes used by V6A cells during the preparation and execution of grasping actions in the dark and in the light.

The results of our analysis show that V6A neural signals can be reliably used to decode grasps, and that the neural code used by V6A cells during object vision is not maintained during the subsequent phases of the task (i.e., grasping preparation and execution), where a different code is used. We demonstrated that the neurodecoder performance is slightly influenced by the presence of visual information regarding the object to be subsequently grasped and regarding the hand–object interaction, which gives a clear view of the role of vision before and during grasping in V6A.

Materials and Methods

Experimental procedure.

The study was performed in accordance with the guidelines of EU Directives (EU 116-92; EU 63-2010) and Italian national law (D.L. 116-92, D.L. 26-2014) on the use of animals in scientific research. During training and recording sessions, particular attention was paid to any behavioral and clinical sign of pain or distress. We involved two male Macaca fascicularis monkeys, weighing 3.650 and 2.450 kg. A head-restraint system and a recording chamber were surgically implanted in asepsis and under general anesthesia (sodium thiopental, 8 mg/kg/h, i.v.) following the procedures reported by Galletti et al. (1995). Adequate measures were taken to minimize pain or discomfort. A full program of postoperative analgesia (ketorolac trometazyn, 1 mg/kg, i.m., immediately after surgery, and 1.6 mg/kg, i.m., on the following days) and antibiotic care [Ritardomicina (benzathine benzylpenicillin plus dihydrostreptomycin plus streptomycin), 1–1.5 ml/10 kg every 5–6 d] followed the surgery.

We performed extracellular recordings from the posterior parietal area V6A (Galletti et al., 1999) using single-microelectrode penetrations with home-made glass-coated metal microelectrodes (tip impedance of 0.8–2 MΩ at 1 kHz) and multiple electrode penetrations using a five-channel multielectrode recording minimatrix (Thomas Recording). The electrode signals were amplified (at a gain of 10,000) and filtered (bandpass between 0.5 and 5 kHz). Action potentials in each channel were isolated with a dual time–amplitude window discriminator (DDIS-1, Bak Electronics) or with a waveform discriminator (Multi Spike Detector, Alpha Omega Engineering). Spikes were sampled at 100 kHz and eye position was simultaneously recorded at 500 Hz with a Voss eyetracker. All neurons were assigned to area V6A following the criteria defined by Luppino et al. (2005) and described in detail by Gamberini et al. (2011).

Behavioral task

The monkey sat in a primate chair (Crist Instruments) with its head fixed in front of a personal computer-controlled rotating panel containing five different objects. The objects were presented to the animal one at a time, in a random order. During the intertrial period, the panel was reconfigured by the computer to present a new object at the next trial in the same spatial position occupied by the previous object (22.5 cm from the animal, in the midsagittal plane). The view of the remaining four objects was occluded. The same task has been used since we started this line of research in our laboratory (Fattori et al., 2010).

The reach-to-grasp movements were performed in the light and in the dark, in separate blocks. The reach-to-grasp task is sketched in Figure 1A and its time course in Figure 1B. In the dark condition (Fig. 1A, top), the animal was allowed to see the object to be grasped only for 0.5 s at the beginning of the trial, and then the grasping action was prepared and performed in the dark. In this way, the monkey was able to accomplish the reach-to-grasp movement, adapting the grip to the object shape using a memory signal based on the visual information it had received at the beginning of each trial, well before the go signal. In the light condition (Fig. 1A, bottom), two white LEDs illuminated a circular area (diameter, 8 cm) centered on the object to be grasped, so the monkey could see the object during grasping preparation, and the object and its own hand during grasp execution and object holding.

Figure 1.

Reach-to-grasp task. A, Sequence of events in the reach-to-grasp task in the dark (top) and in the light (bottom). The animal was trained to fixate at a constant location (fixation LED) shown as a small circle in front of the animal. It reached for and grasped an object (a ring, in this example) visible only in the OBJ-VIS epoch (dark condition) or in OBJ-VIS, DELAY, and GRASP epochs (light condition). In the dark, the reach-to-grasp action was executed in darkness, after a delay in darkness; in the light, the action preparation and execution were in the light with full vision of the object and of the hand interacting with the object. B, Time course of the reach-to-grasp task. The status of the home button, the color of the fixation point (Fixation LED), the status of the light illuminating the object (Illumination), and the status of the target object (Target object, pull and off) are shown. Below the scheme, typical examples of eye traces during a single trial and time epochs are shown. Dashed lines indicate task and behavioral markers: trial start (Home Button, push), fixation target appearance (Fixation LED, green), eye traces entering the fixation window, object illumination on and off (Illumination on and Illumination off, respectively), go signal for reach-to-grasp execution (fixation LED, red), start (Home Button, release) and end (Target object, pull) of the reach-to-grasp movement, go signal for return movement (Fixation LED, off), start of return movement to the Home Button (Target object, off). C, Drawing (derived from videoframes) of the five objects and grip types used by the monkey. The object to be grasped changed from trial to trial, thus requiring different hand preshaping to facilitate grip. The orientation of the objects was chosen so that wrist orientation was similar in all cases. The five objects were grasped with five different grips: from the left, the handle with fingers only, the stick-in-groove with an advanced precision grip with precise index-finger/thumb opposition, the ring with the index finger only (hook grip), the plate with a primitive precision grip with fingers/thumb opposition, and the ball with the whole hand.

The time sequence of the task is illustrated in Figure 1B. The trial began when the monkey pressed the home button in complete darkness. After button pressing, the animal awaited instructions in darkness (FREE). It was free to look around and was not required to perform any eye or arm movement. After 1 s, the fixation LED lit up green and the monkey had to wait for the LED to change color (to red) without performing any eye or arm movement. After a fixation period of 0.5–1 s, the two white lateral LEDs were turned on and the object was illuminated for a period of 0.5 s (OBJ-VIS); the lights were then switched off for the rest of the trial in the dark (Fig. 1A, top). For the task in the light (Fig. 1A, bottom), the lights stayed on for the rest of the trial [Fig. 1B, Illumination (light)]. After a delay period of 1–1.5 s, during which the monkey was required to maintain fixation on the LED without releasing the home button (DELAY), the LED color changed. This was the go signal for the monkey to release the button and perform a reach-to-grasp movement (GRASP) toward the object, to grasp it, and to keep hold of it till the LED switched off (after 0.8–1.2 s). The LED switch-off cued the monkey to release the object and to press the home button again. The press of the home button ended the trial, allowed the monkey to be rewarded, and started another trial (FREE) in which another object, randomly chosen, was presented.

In both task conditions, the monkey was required to look at the fixation point. If fixation was broken (5 × 5° electronic window), trials were interrupted on-line and discarded. The correct performance of movements was monitored by pulses from microswitches (monopolar microswitches, RS Components) mounted under the home button and the object. Button/object presses/releases were recorded with 1 ms resolution (for a detailed description of the control system of trial execution, see Kutz et al., 2005). In addition, the monkey's arm movements were continuously video-monitored by means of miniature, infrared-illumination-sensitive videocameras.

Tested objects

The objects and the grip types used for grasping are illustrated in Figure 1C.

The objects were chosen such that they could evoke reach-to-grasp actions with different hand configurations.

The handle was 2 mm thick, 34 mm wide, and 13 mm deep. Gap dimensions were 28 × 11 × 2 mm. It was grasped with finger prehension by inserting all the fingers (but not the thumb) into the gap.

The stick-in-groove was a cylinder with base diameter of 10 mm and length of 11 mm, in a slot 12 mm wide, 15 mm deep, and 30 mm long. It was grasped with the advanced precision grip, with the pulpar surface of the last phalanx of the index finger opposed to the pulpar surface of the last phalanx of the thumb.

The ring had an external diameter of 17 mm and internal diameter of 12 mm. It was grasped with the hook grip, in which the index finger was inserted into the object.

The plate was 4 mm thick, 30 mm wide, and 14 mm long. It was grasped with the primitive precision grip, using the thumb and the distal phalanges of the other fingers.

The ball had a diameter of 30 mm. It was grasped with whole-hand prehension, with all the fingers wrapped around the object and with the palm in contact with it.

Data analysis

The analyses were performed with customized scripts in Matlab (Mathworks; RRID:SCR_001622) and Python (using the open-source machine-learning toolkit scikit-learn, http://scikit-learn.org; RRID:SCR_002577). The neural activity was analyzed by quantifying the discharge in each trial in the following four different epochs: FREE, from button pressing to LED illumination; OBJ-VIS, response to object presentation, from object illumination onset to illumination offset (this epoch lasted 500 ms); DELAY, from the end of OBJ-VIS to movement onset (epoch duration assumed random values between 1 and 1.5 s); GRASP, from movement onset (defined as the time of home button release) to movement end (defined as the time of object pulling). The movement period was not fixed over trials, as it depended on the action execution time of the animal (average movement times: handle, 355.1 ms; stick-in-groove, 770.2 ms; ring, 421.7 ms; plate, 581.9 ms; ball, 576.1 ms).
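
As an illustration of the epoch-based quantification described above, the following Python sketch computes the mean firing rate of one neuron in each epoch from its spike times and the behavioral markers of a trial. The marker names, timings, and toy spike train are hypothetical placeholders, not the actual data format used in the study.

```python
import numpy as np

def mean_firing_rate(spike_times, t_start, t_end):
    """Mean firing rate (spikes/s) of one neuron in the window [t_start, t_end)."""
    spike_times = np.asarray(spike_times)
    n_spikes = np.sum((spike_times >= t_start) & (spike_times < t_end))
    return n_spikes / (t_end - t_start)

# Hypothetical behavioral markers for one trial (seconds from trial start).
markers = {"button_press": 0.0, "obj_light_on": 1.7, "obj_light_off": 2.2,
           "movement_onset": 3.5, "object_pull": 4.1}

epochs = {
    "FREE":    (markers["button_press"],   markers["obj_light_on"]),
    "OBJ-VIS": (markers["obj_light_on"],   markers["obj_light_off"]),   # 500 ms
    "DELAY":   (markers["obj_light_off"],  markers["movement_onset"]),  # 1-1.5 s
    "GRASP":   (markers["movement_onset"], markers["object_pull"]),
}

spikes = np.sort(np.random.default_rng(0).uniform(0.0, 4.1, size=60))  # toy spike train
rates = {name: mean_firing_rate(spikes, t0, t1) for name, (t0, t1) in epochs.items()}
print(rates)
```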

We describe below the two types of analyses we performed on the data: population response and neural decoding.

All the analyses, neural information processing, and modeling were done off-line.

Population response.

We sequentially recorded 170 cells from two animals. We performed three-way ANOVA (factor 1, epoch: FREE, OBJ-VIS, GRASP; factor 2, object/grip: five levels; factor 3, visual conditions: light/dark; p < 0.05). In this study, we included the cells with significant main effects of epoch and object/grip in the decoding and population analyses. Among these cells, we considered only cells with 10 trials for each of the five objects, in each visual condition.

Population response was calculated as averaged spike density function (SDF; Fig. 2B). An SDF was calculated (Gaussian kernel, half-width 40 ms) for each neuron included in the analysis, and averaged across all the trials for each tested grip. The neuron peak discharge found over all grip types during the GRASP epoch and during the OBJ-VIS epoch was used to normalize all SDFs for that neuron. The normalized SDFs were then averaged to obtain population responses (Marzocchi et al., 2008). Each condition was ranked and aligned twice in each plot, one based on the OBJ-VIS discharge (first alignment), and the other on GRASP discharge (second alignment).
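
The SDF and normalization procedure can be sketched as follows, assuming 1 ms bins and treating the 40 ms half-width as the Gaussian kernel sigma for simplicity; function names and data layout are ours, and the normalization here uses the peak over the whole analysis window rather than the exact OBJ-VIS/GRASP peak used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def spike_density(spike_times, t_start, t_end, sigma_ms=40.0, bin_ms=1.0):
    """Spike density function (spikes/s): 1 ms spike counts smoothed with a Gaussian kernel."""
    edges = np.arange(t_start, t_end + bin_ms / 1000.0, bin_ms / 1000.0)
    counts, _ = np.histogram(spike_times, bins=edges)
    rate = counts / (bin_ms / 1000.0)          # instantaneous rate in each bin
    return gaussian_filter1d(rate.astype(float), sigma=sigma_ms / bin_ms)

def normalized_sdfs(trials_by_grip, t_start, t_end):
    """Average single-trial SDFs per grip for one neuron, then normalize by the
    neuron's peak discharge across grips (a simplification of the study's peak
    over the OBJ-VIS and GRASP epochs)."""
    per_grip = {grip: np.mean([spike_density(tr, t_start, t_end) for tr in trials], axis=0)
                for grip, trials in trials_by_grip.items()}
    peak = max(curve.max() for curve in per_grip.values()) or 1.0
    return {grip: curve / peak for grip, curve in per_grip.items()}
```

Averaging the normalized curves across neurons, after ranking each neuron's grips by its OBJ-VIS or GRASP response, would then give population plots of the kind shown in Figure 2B.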

Figure 2.

Object and grip selectivity in V6A. A, An example of a V6A neuron selective for object and for grip type and influenced by the vision of the object and of the action. Left, Objects and types of grips. Right, Activity is illustrated as peristimulus time histograms (PSTHs) and raster displays of impulse activity in the light (left) and in the dark (right). Below each discharge there is a record of horizontal (upper trace) and vertical components (lower trace) of eye movements. Neural activity and eye traces are aligned (long vertical line) twice, on object illumination onset and on movement onset. Long vertical ticks in raster displays are behavioral markers, as indicated in Figure 1B. Rectangles under the PSTH of the first object represent the duration of epochs (G, GRASP). The cell displays selectivity for the task conditions during the times of object presentation, delay, and the execution of grasp action. Vertical scale on histogram: 76 spikes/s; time scale: 1 tick = 200 ms. Eye traces: 60°/division. B, Population data. Activity of 79 grip-selective V6A neurons used for the decoding procedure expressed as averaged normalized SDFs (thick lines) with variability bands (light lines), constructed by ranking the response of each neuron for each individual object according to the intensity of the response elicited in the OBJ-VIS epoch (left, activities aligned with the onset of the object illumination) and according to the intensity of the response elicited in the GRASP epoch (right, activities aligned with the onset of the reach-to-grasp movement) in descending order (from magenta to blue). In other words, each condition was ranked and aligned twice in each plot, one based on the OBJ-VIS discharge (first alignment), and the other on the GRASP discharge (second alignment). The SDFs of each alignment were calculated on the same population of cells. Each cell of the population was taken into account five times, once for each object/grip. Scale on abscissa, 200 ms/division (tick); vertical scale, 80% of normalized activity.

Neural decoding.

Feature extraction and selection are crucial and challenging processes in machine learning. The goal is to select features that constitute a compact but informative representation of the phenomenon under analysis, in this study the neural code. For the purpose of our analysis, we assumed that neural information is coded as spike trains of firing neurons belonging to the same neural network. For each neuron of the population (79 neurons), we computed the mean firing rate (mFR; number of spikes per time unit) over a selected timespan using a trial-by-trial approach. The resulting feature vector thus consisted of the 79 mFRs of the entire neural population. Every trial was evaluated as a sample for the decoding algorithm. Thus, each trial, represented as a feature vector of 79 elements, was vertically concatenated with the other trials to build the feature space. Since there were 10 trials for each of the five objects, the feature space was made up of 50 samples. The decoder outputs were the five objects or grip types. Fivefold cross-validation was performed using 40 samples (eight for each condition) for training and 10 (two for each condition) for testing, so as to ensure that the classifier was trained and tested on different data.
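
The construction of the 50 × 79 feature space and the fivefold cross-validation can be sketched as follows. The firing rates here are random toy data, and a Gaussian naive Bayes classifier is used as a stand-in (the Poisson-likelihood variant discussed below is not part of released scikit-learn).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB

# Hypothetical feature space: one mean firing rate per neuron per trial for one epoch.
rng = np.random.default_rng(0)
firing_rates = rng.poisson(lam=10.0, size=(50, 79)).astype(float)  # 50 trials x 79 neurons
labels = np.repeat(np.arange(5), 10)                               # 5 objects/grips x 10 trials

# Fivefold cross-validation: 40 training trials (8 per object) and 10 test trials (2 per object).
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in cv.split(firing_rates, labels):
    clf = GaussianNB().fit(firing_rates[train_idx], labels[train_idx])
    accuracies.append(clf.score(firing_rates[test_idx], labels[test_idx]))
print(f"mean accuracy {np.mean(accuracies):.2f} ± {np.std(accuracies):.2f}")
```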

With the purpose of computing more robust and precise means of the classifier performance, we decided to computationally increase the number of test samples. Since neurons were recorded in separate sessions, and thus activity correlations between single neurons were already lost, we were able to expand the number of samples by shuffling the feature contributions of single neurons between trials, potentially obtaining 10^79 different vectors. We chose to randomly extend our dataset 10 times, thus performing our experiments on 400 training and 100 test samples (100 samples in total for each of the five conditions), instead of the original 40 training and 10 test samples. This procedure produced the mean and SD of object/grip classification accuracy based on firing rates. It is worth clarifying that artificially extending the dataset is not expected to improve classification accuracy, since no new information is added to the system, but it enables the computation of a more precise mean given the few initial trials available. Non-normalized data were used for the decoding procedure.
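
A sketch of this shuffling-based extension is given below. Because the neurons were recorded in separate sessions, each neuron's contribution to a surrogate trial can be drawn from any real trial of the same condition; to avoid leakage, such an extension should be applied separately to the training and test trials of each cross-validation fold. Function and variable names are ours.

```python
import numpy as np

def extend_by_shuffling(X, y, factor=10, seed=None):
    """Create surrogate trials: for each condition, every neuron's firing rate is drawn
    from a randomly chosen real trial of that condition. No new information is added;
    the goal is only a more precise estimate of mean decoding accuracy."""
    rng = np.random.default_rng(seed)
    X_new, y_new = [], []
    for c in np.unique(y):
        Xc = X[y == c]                              # (n_trials_c, n_neurons)
        n_trials, n_neurons = Xc.shape
        for _ in range(factor * n_trials):
            picks = rng.integers(0, n_trials, size=n_neurons)
            X_new.append(Xc[picks, np.arange(n_neurons)])   # one rate per neuron
            y_new.append(c)
    return np.vstack(X_new), np.array(y_new)
```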

We used a naive Bayesian classifier as a neurodecoder. Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the “naive” assumption of independence between every pair of features. This technique has been shown to achieve performance closer to optimal compared with other classifiers when analyzing this kind of neural data (Scherberger et al., 2005; Townsend et al., 2011; Lehmann and Scherberger, 2013; Schaffelhofer et al., 2015). In our Python custom scripts we implemented the module of naive Bayes classifiers proposed by scikit-learn libraries (the statistical formulation can be found at http://scikit-learn.org/stable/modules/naive_bayes.html; Zhang, 2004). Under the assumption of Poisson distribution of features, we reinforced the model as suggested at the following site: github.com/scikit-learn/scikit-learn/pull/3708/files (Ma et al., 2006). To calculate the running time of the decoding algorithm, we used the time module embedded in Python.
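
Released scikit-learn versions do not ship a Poisson naive Bayes class (the text above points to an unmerged pull request), so a minimal self-contained version is sketched here under the assumption that each feature is a spike count (or a rate treated as a count) that is Poisson distributed and conditionally independent given the class.

```python
import numpy as np
from scipy.special import gammaln

class PoissonNB:
    """Minimal naive Bayes classifier with Poisson likelihoods: each feature
    (the activity of one neuron) has a class-specific rate parameter."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Rate estimate per class and neuron; a small constant avoids log(0).
        self.lambda_ = np.array([X[y == c].mean(axis=0) + 1e-9 for c in self.classes_])
        self.log_prior_ = np.log(np.array([np.mean(y == c) for c in self.classes_]))
        return self

    def predict(self, X):
        # log P(x | c) = sum_i [ x_i * log(lambda_ci) - lambda_ci - log(x_i!) ]
        log_lik = (X @ np.log(self.lambda_).T
                   - self.lambda_.sum(axis=1)
                   - gammaln(X + 1).sum(axis=1, keepdims=True))
        return self.classes_[np.argmax(log_lik + self.log_prior_, axis=1)]
```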

We performed three types of analysis, computing the following feature vectors over different epochs and timespans: whole epoch, sliding window, and generalization analysis.

For the whole epoch analysis, mFR was computed over the whole OBJ-VIS, DELAY, and GRASP epochs. Neurodecoder predictions against real class, for each object or type of grip, are plotted as confusion matrices in Figure 3.
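
A hedged sketch of how such confusion matrices can be produced with scikit-learn utilities is shown below; the data are random placeholders and GaussianNB again stands in for the Poisson-based decoder.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.poisson(8.0, size=(50, 79)).astype(float)   # mFRs of 79 neurons for one epoch
y = np.repeat(np.arange(5), 10)                     # 5 objects/grips

# Out-of-fold predictions, then a row-normalized confusion matrix (observed x predicted).
y_pred = cross_val_predict(GaussianNB(), X, y, cv=5)
cm = confusion_matrix(y, y_pred, normalize="true")
print(np.round(cm, 2))
```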

Figure 3.

Confusion matrices describing the pattern of errors made by the naive Bayes classifier in the recognition of tested objects or grip types. A–F, mFRs were calculated for different epochs (A, B, OBJ-VIS; C, D, DELAY; E, F, GRASP) and conditions (left, DARK; right, LIGHT). The matrices summarize the results of cross-validation iterations plotted as real class (Observation) against predicted class (Prediction). Contributions of 79 neurons from area V6A were included in the dataset for the decoding analysis. Blue color scale indicates the accuracy yielded by the algorithm as reported in the side indices. Mean recognition rates are reported together with SDs below the indices.

For the sliding window analysis, mFR was computed over a window of 300 ms, which progressively slides over the reference period with a moving step of 10 ms (similar to analysis by Carpaneto et al., 2011). As in the previous case, training and testing sets were computed over the same time interval. This approach (Fig. 4) was used to see how the recognition rate changed dynamically over time.
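
The sliding-window feature extraction can be sketched as follows; the nested-list data layout is hypothetical, and decoding each window with the classifier of choice would yield the accuracy-versus-time curves of Figure 4.

```python
import numpy as np

def sliding_window_rates(spike_trains, t_start, t_end, win=0.300, step=0.010):
    """Firing rates in 300 ms windows sliding by 10 ms.
    spike_trains: list over trials, each a list over neurons of spike-time arrays,
    with times aligned to a reference event (e.g., illumination or movement onset)."""
    starts = np.arange(t_start, t_end - win + 1e-9, step)
    n_trials, n_neurons = len(spike_trains), len(spike_trains[0])
    X = np.empty((len(starts), n_trials, n_neurons))
    for w, t0 in enumerate(starts):
        for tr in range(n_trials):
            for nrn in range(n_neurons):
                s = np.asarray(spike_trains[tr][nrn])
                X[w, tr, nrn] = np.sum((s >= t0) & (s < t0 + win)) / win
    return starts, X   # decode each X[w] separately to obtain accuracy over time
```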

Figure 4.

Sliding window analysis. Time course of the decoding accuracy (recognition rates) in the dark (A) and in the light (B) based on the firing rates extracted during the period starting 500 ms before the light onset, through 1 s after the movement onset. Due to the variable duration of the delay (1–1.5 s), double alignment result plots are shown. The first alignment coincides with the object illumination onset, the second one with the movement onset. Firing rates were calculated for a 300 ms sliding window, moving forward with a 10 ms step. Each dot on the graphs was plotted at the beginning of each 300 ms window. The mean line (black) was calculated as the arithmetic mean between recognition rates of individual objects (colored lines). For each object, variability bands are shown, representing SDs based on fivefold cross-validation.

For the generalization analysis, mFR was computed over different intervals for training and testing sets: the system was trained over the whole OBJ-VIS and GRASP epochs and over four portions of the DELAY epoch; after the system had been trained on one epoch, it was tested over all the epochs. This was done to verify whether the same code is used from object vision to movement execution, or to discover how the code changes during the delay epoch, before the movement, and during movement execution. As the DELAY epoch varied in length from trial to trial, we performed the generalization analysis on 25% fractions of DELAY rather than on fixed-size intervals.
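
The generalization analysis can be sketched as a matrix of train-epoch by test-epoch accuracies, as below. This simplified version trains and tests on the same trials when the two epochs coincide; in practice, held-out trials (cross-validation, as in the rest of the study) should be used. Epoch names and the classifier factory are placeholders.

```python
import numpy as np

def generalization_matrix(features_by_epoch, labels, make_classifier):
    """Train a decoder on the firing rates of one epoch (or DELAY fraction)
    and test it on every epoch; features_by_epoch maps epoch name to an
    (n_trials, n_neurons) array with trials ordered consistently with labels."""
    names = list(features_by_epoch)
    acc = np.zeros((len(names), len(names)))
    for i, train_ep in enumerate(names):
        clf = make_classifier().fit(features_by_epoch[train_ep], labels)
        for j, test_ep in enumerate(names):
            acc[i, j] = np.mean(clf.predict(features_by_epoch[test_ep]) == labels)
    return names, acc

# Example usage (hypothetical arrays):
# epochs = {"OBJ-VIS": X_vis, "D1": X_d1, "D2": X_d2, "D3": X_d3, "D4": X_d4, "GRASP": X_grasp}
# names, acc = generalization_matrix(epochs, labels, PoissonNB)   # or GaussianNB
```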

In all experiments, classification performance was assessed by the rate of correct recognitions, and confusion matrices. These representations helped reveal the most common error patterns of the classifier.

Results

Area V6A is known to contain grasp-related neurons (Fattori et al., 2004, 2009, 2010, 2012, 2017; Breveglieri et al., 2016). These cells are modulated by the different grip types required to grasp different objects and/or by the vision of the objects to be grasped. An example of one of these cells is shown in Figure 2A. This cell fires when the monkey sees the object to be grasped and when the monkey plans and performs the reach-to-grasp action. These discharges are also different if the grasping was planned and executed in different visual conditions, the discharge being stronger in the light than in the dark (compare left with right columns). The visual discharge to object presentation (OBJ-VIS epoch) is tuned to the different objects, being strong for the ball and the plate, and maximal for the handle. Moreover, the motor-related discharges (GRASP epoch, G) are tuned to grasps occurring with different grips, from a maximum for grasping the handle to an almost null response for grasping the stick-in-groove.

Of 170 V6A neurons recorded from two monkeys, 79 cells (47 from Case 1; 32 from Case 2) satisfied all the inclusion criteria (see Materials and Methods). The population discharge of the 79 grasp-related cells (three-way ANOVA, p < 0.05; see Materials and Methods) used for the decoding analysis is shown in Figure 2B, where the activity of each neuron for each of the five tested objects was ranked in descending order to obtain the population response for the best (object or grip), the second best, and so on, up to the fifth, worst, grip. Each condition was ranked and aligned twice in each plot, one based on the OBJ-VIS discharge (first alignment), and the other on GRASP discharge (second alignment) for each individual background condition. The plot shows a clear distinction among the activations during the vision of the object, the preparation and the execution of reach-to-grasp actions. Moreover, Figure 2B shows that the V6A neural population starts discriminating between different objects/grips as soon as the object becomes visible to the animal (OBJ-VIS). The discrimination power of the population remains constant when the monkey is preparing the action (DELAY), and has a second peak when the action is executed (GRASP), as the huge difference between best (red line) and worst (blue line) responses shows. This trend is common to population activity in the dark and in the light.

Decoding results

The neural activity of 79 grasp-related V6A neurons was analyzed off-line in three main epochs: OBJ-VIS, DELAY, and GRASP, corresponding to the period of visual stimulation provided by the object, the planning phase of the subsequent reach-to-grasp action, and the execution phase, respectively. It is worth remembering that, in the dark condition, the animal was in darkness during DELAY and GRASP (except for the fixation LED), whereas in the light condition the animal prepared and executed the grasping action in the light, thus with the availability of visual information on the object and its hand/arm approaching and interacting with the object. The results, obtained from two cases, were similar for individual animals. Thus, the results of the two cases are presented jointly.

Although we performed decoding off-line, having in mind a possible future application of this methodology in a real-time loop, we calculated the running time of the decoding algorithm. Since in this setting only the prediction phase is relevant, we measured the time required to run that phase only, given the already trained classifier. We found that the running time was extremely short, with a mean required time of 0.26 ms (SD, 0.04 ms), calculated on 100 iterations.
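
A minimal sketch of this timing measurement, using Python's time module on a toy trained classifier, is given below; the resulting numbers depend on hardware and will not reproduce the 0.26 ms reported here.

```python
import time
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X_train = rng.poisson(8.0, size=(400, 79)).astype(float)
y_train = np.repeat(np.arange(5), 80)
X_test = rng.poisson(8.0, size=(100, 79)).astype(float)

clf = GaussianNB().fit(X_train, y_train)

times_ms = []
for _ in range(100):                       # time the prediction phase only
    t0 = time.perf_counter()
    clf.predict(X_test)
    times_ms.append((time.perf_counter() - t0) * 1000.0)
print(f"prediction time: {np.mean(times_ms):.3f} ms (SD {np.std(times_ms):.3f} ms)")
```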

Object recognition within the object presentation epoch

The decoding results of the time span in which the object was illuminated in both visual conditions (OBJ-VIS epoch) are presented in Figure 3A,B. Using a naive Bayes classifier as neurodecoder (see Materials and Methods), we found a high correlation between the actual conditions and the decoded conditions, as illustrated in the confusion matrices. The mean accuracy, obtained using leave-p-out cross-validation testing on 20% of trials, was lower in the dark than in the light condition: in the dark, the mean accuracy was 81.6%, whereas in the light it was 91.8%. However, the decoding performance in the dark is highly variable (SD, 12%), whereas in the light the variability is almost null (SD, 0.8%). The apparently odd difference in performance in OBJ-VIS, where the visual conditions are identical, and the high variability in the dark can be explained by the presence of other factors influencing the discharge during OBJ-VIS. We suggest that the attention level of the monkeys is higher in the dark than in the light (where the monkeys know that the visual information of the object will be available until the end of the trial), and this can add noise to the system, causing a decrease and a higher variability in decoding performance.

Considering each animal separately, the performance slightly decreases in the light as well as in the dark, although in both individual cases the level remained well above chance (Table 1).

Table 1.

Performance, expressed as mean accuracy ± SD, of the classifier in the two cases (together and separated)

               OBJ-VIS                       DELAY                         GRASP
               Dark           Light          Dark           Light          Dark           Light
Cases 1 + 2    81.6 ± 12%     91.8 ± 0.8%    97.2 ± 2.9%    100 ± 0.0%     98.4 ± 2.1%    100 ± 0.0%
Case 1         67.6 ± 10.2%   78.6 ± 10.4%   81.6 ± 11%     98.8 ± 0.9%    91.4 ± 2.7%    98 ± 0.4%
Case 2         74.4 ± 12.7%   68.6 ± 10.5%   86.8 ± 3.7%    93.6 ± 5%      84.6 ± 4.3%    96.2 ± 3.7%

Decoding the neural information within the planning epoch

The decoding accuracy during the delay before the movement was very high, both in the dark (97.2 ± 2.9%; Fig. 3C) and in the light (100 ± 0.0%; Fig. 3D). Considering each animal separately, the performance was consistent (Table 1). In both dark and light conditions, there is a clear and strong improvement of the recognition from object vision (Fig. 3A,B) to action planning (Fig. 3C,D): the explanation for this increase might be related to action preparation required when the movement execution is approaching. Alternatively, a different neural code, likely more related to grip type, may be used during the delay and this can improve the recognition rate. This aspect will be investigated later in the paper, using generalization analysis. However, a complex interplay between action planning and visual information processing is suggested by these results.

Decoding the neural information within the grasping execution epoch

As shown in the confusion matrices in Figure 3E,F, the decoding accuracy during movement execution was extremely high in both visual conditions. Although the mean performance was similar in the dark and in the light (98.4 ± 2.1% in the dark vs 100 ± 0.0% in the light), there were small differences in the performance between the two visual conditions. In the dark, all grips were almost always identified, and only a few errors were observed. In the light, the performance was maximal for each grip, suggesting that visual information about the hand–object interaction adds significant information to the neural code, which slightly improved the discrimination power of the grip-type decoding algorithm. Considering each animal separately, the results were well above chance level and the importance of visual information for the decoding results was even more evident (Table 1).

Time course of the decoding performance

Although confusion matrices are very informative about the decoding performance, they do not provide any insight into the temporal dimension. To fill this gap, we estimated the time course of the classifier performance by computing firing rates in time intervals around light and movement onset. Figure 4A,B shows the classification performance in the dark and in the light, respectively, when the features were extracted from a time window of 300 ms, which progressively slides over the trial timespan from 500 ms before illumination onset to 1 s after the movement onset, with a moving step of 10 ms. We used a double alignment because of the variability in the delay duration.

In the dark and in the light, the time course of the recognition rates was slightly different. In the dark (Fig. 4A), there was a quick increase of the decoding performance, up to 80%, after the illumination onset, related to vision of the object. Approximately 600 ms after the illumination onset (which corresponds to 100 ms after the switch-off of the object illumination), the recognition rate decreased to ∼75%; this performance remained constant in the subsequent delay and slightly increased at the end of the delay. In the light (Fig. 4B), the accuracy was higher than in the dark during object observation, whereas in the delay it was similar to the dark condition. However, the increase of the recognition rate was more pronounced during the last part of the delay (Fig. 4B, curve on the right before the second alignment). During grasp execution, the recognition rate was particularly high, especially in the light, and remained high till the end of grasp execution. To summarize, we found a ramp-up trend of the decoding performance in both conditions. After object illumination, the accuracy increased with time as movement onset approached, reaching maximum values at the end of the delay period, particularly in the light. We can reliably say that the accuracy reaches its maximum when the hand is approaching the object, and it is higher when the animal is able to see the action.

Generalization analysis

To evaluate whether the neural code used during object observation was retained or changed during the subsequent delay before the grasping onset, we performed a generalization analysis by training classifiers either in OBJ-VIS or in GRASP, and we applied both codes to portions of the DELAY epoch. Figure 5 shows the results of this analysis in the dark (Fig. 5A,C) and in the light (Fig. 5B,D). The performance of the decoding algorithm trained using the neural activity during OBJ-VIS is indicated in blue (Fig. 5A,B). The performance using GRASP activity is shown in red (Fig. 5A,B). The performance using DELAY portions is shown in grayscale (Fig. 5C,D). In the dark, the code learned during OBJ-VIS and generalized during DELAY gave much lower accuracy than during object presentation (Fig. 5A, blue line). The accuracy subsequently dropped to much lower values (∼40%) during movement execution. This suggests that the neural code used during object observation quickly became weaker as soon as the animal began to prepare the movement. In the light, the accuracy obtained by training the algorithm on the OBJ-VIS epoch and testing it on the DELAY fractions (Fig. 5B, blue line) was almost as high as during the vision of the object, so the same code was maintained during the DELAY in the light. This is likely because the visual information regarding the object was still available in the delay of the light condition. Again, as seen for the dark, the decoding performance dropped to ∼40% during grasp execution.

Figure 5.

Generalization analysis. A–D, Generalization of codes derived from different epochs for dark (A, C) and light (B, D) conditions. The neurodecoder trained with the firing rates extracted from one epoch was used to decode all epochs. The trend of mean recognition rates together with the SD bars through different epochs are plotted as colored lines. In A and B, blue line shows classifier trained on OBJ-VIS, red line shows classifier trained on GRASP; in C, D, grayscale shows classifier trained on fractions of the DELAY epoch. The DELAY epoch was split in portions due to variable time duration between the trials: D1, 0–25% of the DELAY epoch; D2, 25–50%; D3, 50–75%; D4, 75–100%. The accuracy obtained from the activity of each time interval is shown under each plot.

In the dark, the time course of the accuracy obtained by training the algorithm with the GRASP neural activity (Fig. 5A, red curve) and testing it in the DELAY demonstrated that the neural code used during action execution was partially present also during the last fraction of the delay, but dropped abruptly immediately before it. Thus, the same code does not seem applicable during object observation (OBJ-VIS) or during the first parts of the DELAY. In the light (Fig. 5B, red line), on the other hand, the accuracy of the code obtained from GRASP dropped gradually during the DELAY: a decreasing trend is apparent throughout the DELAY.

Analyzing the accuracy of the classifier trained on the different fractions of the DELAY (Fig. 5C,D, gray lines) highlights similarities between codes. In the dark, a noticeable difference between the first part of the delay (lighter gray) and the subsequent fractions (darker grays) is evident: the late codes share similarities, whereas the initial code is quite different. This indicates that, after object disappearance, there is a gradual transformation of the code from object observation to motor execution. In the light, on the other hand, the initial code, presumably related to visual information, was maintained longer, probably because visual information remained available. Overall, in the light, code differences were minimized, conceivably because the information collected was more similar across portions of the DELAY.

To summarize, different codes were present from object observation to movement execution, but their relative influence on the overall neural activity varied over time. In both visual conditions there was a switch between the codes during the last parts of the delay. Moreover, this analysis shows that the neural population switched its preferential coding feature during the DELAY epoch, suggesting that a transformation from visual information into a motor representation was performed at that time and encoded by these neurons. In this study, in the five task conditions, each of the different objects was grasped with a clearly distinct grip. Therefore, selectivity for object and for grip type is necessarily strongly correlated and cannot be distinguished in our task. Thus, the change of coding observed in the generalization analysis does not necessarily imply a change of representation, i.e., from a code representing objects to one representing grip type. However, a possible explanation is that the decoded discharge from V6A reflects the visuo-to-motor transformations occurring in the DELAY period in which the visual information regarding the object (visual/object coding) is transformed into motor commands (motor/grip coding).

Discussion

The above experimental results show that the posterior parietal area V6A of the dorsomedial visual stream represents a reliable site for decoding information for grasping, both in the presence and in the absence of visual information about the object and the hand–object interaction during action preparation and execution. This opens new perspectives and possibilities about the source of grasp-related signals that may be used to implement BCIs.

In our experiment, each tested object was grasped with a clearly distinct grip. In these conditions, selectivity for object shape and for grip type cannot be distinguished, unlike other studies (Schaffelhofer and Scherberger, 2016) where more objects and a larger variability of grip types were tested. Although an inherent decoding ambiguity cannot be avoided in our study, good decoding results have been achieved from a restricted number of grasp-related neurons from V6A, in accordance with what was found in ventral premotor cortex by Carpaneto and colleagues (Carpaneto et al., 2011), and the posterior parietal cortex (PPC) itself, for decoding reach trajectories (Aflalo et al., 2015). In addition, the number of trials, 10 in our case, is low for decoding; despite this, we still obtained an extremely high classification accuracy.

We found high recognition rates in different time epochs: the visual presentation of the object (OBJ-VIS), the delay before the movement (DELAY), and the period of reach-to-grasp execution (GRASP). In addition, the different visual conditions used show that combining visual and motor information could slightly modulate the power of the classification.

A very good recognition rate was obtained during the vision of the object well before grasping execution. This could indicate the presence in V6A of covert motor commands for the upcoming grasp, because animals were overtrained to grasp the objects used in this task. However, we are more inclined to suggest that the encoding occurring during the vision of the object reflects object recognition for action, as already shown for V6A in a work where visual responses to objects with different shapes evoking different grips were demonstrated to reflect object affordance (Breveglieri et al., 2015). The slightly higher accuracy obtained during movement execution in the light compared with movement execution in the dark is suggestive of a weak effect of the vision of hand–object interaction in V6A.

The delay period between object presentation and grasp execution proved to be a good source of decoding in V6A (Fig. 3C,D). Generalization analysis showed that in the first part of the delay, spanning some hundreds of milliseconds after the end of object illumination, well beyond transient visual responses (Thorpe et al., 1996; Schmolesky et al., 1998), the decoding was mostly effective if performed through an OBJ-VIS epoch-derived code, likely representing a visual/object code (Fig. 5). This phase is followed by an intermediate visuomotor transformation stage, in which the brain likely converts the visual information into motor commands. Here we illustrated that decoding from V6A is still possible, but with a lower accuracy. Then, in the third part of the delay, we obtained a higher decoding accuracy than in the first two intervals. In this last phase, the decoding is most successful when using a GRASP-derived code, possibly representing a motor/grip code. This last period, close to motor execution, but well in advance of possible afferent feedback signals (known to be present in V6A; Breveglieri et al., 2002; Fattori et al., 2005, 2017), could reflect an efferent command or an action plan where planned grasp coding information is present. These results from the performance of the neurodecoder parallel those found simply by analyzing mean frequencies of discharge in this same area: in V6A there is an encoding of the visual attributes of objects at the beginning of the DELAY period that switches to a grip-type encoding during the DELAY period, when the prehension action is planned, and later during movement execution (Fattori et al., 2012, their Fig. 8). For the purpose of decoding, at first glance, the coexistence of different coding schemes can be seen as a disadvantage, due to the lack of a clear distinction between the codes used and the resulting increase in data complexity. Potentially, however, properly trained multiple decoders can efficiently recover visual and motor attributes from the same dataset. Conceivably, with the aid of a postprocessing algorithm, the decoder results can be integrated to obtain more accuracy and/or additional data for a visuomotor-guided robotic prosthetic arm.

This anticipated decoding ability seems to be typical of the parietal cortex (Andersen et al., 2010), where reaching goals and trajectories were decoded 190 ms after target presentation (Aflalo et al., 2015), thus comparable to V6A for grasping decoding (Fig. 4A). Early decoding from the PPC would allow signals to be sent to the computer interfaces well before the movement needs to be initiated. Together with the short time required to run the classifier algorithm (a few tenths of a millisecond for the prediction phase, in our work), this fits well with a real-time decoding implementation.

Off-line decoding from single cells in dorsomedial frontoparietal areas: perspectives on BCIs

In this study, as in some others in the dorsolateral visual stream (Townsend et al., 2011; Carpaneto et al., 2011), the neural decoding with a high accuracy for grasping was performed off-line from single cells, thus confirming that this kind of signal is adequate to be exploited for successful decoding. In addition, this work adds a novel area in the panorama of the brain areas useful for BCIs. So far, all the studies aimed at decoding grasps used signals from the primary motor cortex (Carmena et al., 2003; Hochberg et al., 2006, 2012; Kim et al., 2006; Ben Hamed et al., 2007; Velliste et al., 2008; Vargas-Irwin et al., 2010) or the dorsolateral frontoparietal network, specifically the lateral premotor area F5 (Carpaneto et al., 2011, 2012; Townsend et al., 2011; Schaffelhofer et al., 2015) and the lateral posterior parietal area, AIP (anterior intraparietal area; Townsend et al., 2011; Klaes et al., 2015; Schaffelhofer et al., 2015).

In area AIP, the best performance was achieved during the reach-to-grasp task in the Cue epoch (Schaffelhofer et al., 2015). Conversely, in V6A, the best performance occurs in the GRASP epoch. This feature is similar to area F5, where the best performance was obtained during grasping execution (Carpaneto et al., 2011; Schaffelhofer et al., 2015), especially in the light. These results suggest that, although areas V6A and AIP are both grasp-related parietal areas that share many functional properties (Breveglieri et al., 2016), the AIP seems to be more involved during the vision of the object and V6A during movement execution.

Recently, Andersen's laboratory decoded visual and motor aspects of complex hand shaping from human area AIP (Klaes et al., 2015). Decoding of grasp information from monkey AIP is well supported (Townsend et al., 2011; Schaffelhofer et al., 2015), and these very recent data on human AIP suggest a good functional affinity between monkeys and human PPCs. The present data on decoding of objects and grasps from this other parietal site promises a future for decoding grasps from the human dorsomedial parietal cortex.

Indeed, so far, decoding neural signals from dorsomedial areas has been done in the context of reconstructing hand position in space (Hatsopoulos et al., 2004), or of finger flexion/extension movements (Aggarwal et al., 2009) and reach trajectories (Musallam et al., 2004; Mulliken et al., 2008; Hwang and Andersen, 2013; Aflalo et al., 2015). This is the first work in which an area of the dorsomedial visual stream is used successfully to decode grasps. It encourages researchers to look at other dorsomedial stream areas involved in grasping, such as the dorsal premotor area (Raos et al., 2004; Stark et al., 2007), as possible targets of decoding for prehensile actions.

Future directions

Since the first demonstrations of monkey medial PPC as a site encoding intentions for reaches (Snyder et al., 1997), attention has been given to this region as a site useful for translating basic research on monkey neural recordings into applications useful for BCIs (Musallam et al., 2004; Mulliken et al., 2008). Recent evidence shows that nonhuman primate and human PPCs share a similar sensorimotor function (Aflalo et al., 2015; Klaes et al., 2015). In fact, by recording from the PPC of tetraplegic subjects, Andersen and coworkers showed that neural signals from the human medial PPC may be used for BCIs to guide reaching movements to appropriate goals with appropriate trajectories (Aflalo et al., 2015) and from the lateral PPC to control hand shaping (Klaes et al., 2015). The present results indicate that the monkey medial PPC hosts neural signals that could be used to implement BCIs to guide prehensile actions to grasp objects of different shapes with different grips. Future studies might obtain similar advantages by applying the decoding algorithms to neural signals from the human medial PPC to generate control signals for assistive devices for impaired patients (tetraplegics or subjects affected by neurodegenerative diseases that impair hand functions). This might be useful in recovering full control of a hand.

Footnotes

This work was supported by the Ministero dell'Università e della Ricerca (Italy; Firb 2013 N. RBFR132BKP) and by the Fondazione del Monte di Bologna e Ravenna (Italy). We thank N. Marzocchi and G. Placenti for setting up the experimental apparatus.

The authors declare no competing financial interests.

References

  1. Aflalo T, Kellis S, Klaes C, Lee B, Shi Y, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, Liu C, Andersen RA (2015) Neurophysiology. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348:906–910. doi:10.1126/science.aaa5417
  2. Aggarwal V, Tenore F, Acharya S, Schieber MH, Thakor NV (2009) Cortical decoding of individual finger and wrist kinematics for an upper-limb neuroprosthesis. Conf Proc IEEE Eng Med Biol Soc 2009:4535–4538. doi:10.1109/IEMBS.2009.5334129
  3. Andersen RA, Hwang EJ, Mulliken GH (2010) Cognitive neural prosthetics. Annu Rev Psychol 61:169–190, C1–C3. doi:10.1146/annurev.psych.093008.100503
  4. Ben Hamed S, Schieber MH, Pouget A (2007) Decoding M1 neurons during multiple finger movements. J Neurophysiol 98:327–333. doi:10.1152/jn.00760.2006
  5. Bosco A, Breveglieri R, Chinellato E, Galletti C, Fattori P (2010) Reaching activity in the medial posterior parietal cortex of monkeys is modulated by visual feedback. J Neurosci 30:14773–14785. doi:10.1523/JNEUROSCI.2313-10.2010
  6. Breveglieri R, Kutz DF, Fattori P, Gamberini M, Galletti C (2002) Somatosensory cells in the parieto-occipital area V6A of the macaque. Neuroreport 13:2113–2116. doi:10.1097/00001756-200211150-00024
  7. Breveglieri R, Galletti C, Bosco A, Gamberini M, Fattori P (2015) Object affordance modulates visual responses in the macaque medial posterior parietal cortex. J Cogn Neurosci 27:1447–1455. doi:10.1162/jocn_a_00793
  8. Breveglieri R, Bosco A, Galletti C, Passarelli L, Fattori P (2016) Neural activity in the medial parietal area V6A while grasping with or without visual feedback. Sci Rep 6:28893. doi:10.1038/srep28893
  9. Caminiti R, Ferraina S, Johnson PB (1996) The sources of visual information to the primate frontal lobe: a novel role for the superior parietal lobule. Cereb Cortex 6:319–328. doi:10.1093/cercor/6.3.319
  10. Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA (2003) Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol 1:E42. doi:10.1371/journal.pbio.0000042
  11. Carpaneto J, Umiltà MA, Fogassi L, Murata A, Gallese V, Micera S, Raos V (2011) Decoding the activity of grasping neurons recorded from the ventral premotor area F5 of the macaque monkey. Neuroscience 188:80–94. doi:10.1016/j.neuroscience.2011.04.062
  12. Carpaneto J, Raos V, Umiltà MA, Fogassi L, Murata A, Gallese V, Micera S (2012) Continuous decoding of grasping tasks for a prospective implantable cortical neuroprosthesis. J Neuroeng Rehabil 9:84. doi:10.1186/1743-0003-9-84
  13. Chinellato E, Grzyb BJ, Marzocchi N, Bosco A, Fattori P, del Pobil AP (2011) The dorso-medial visual stream: from neural activation to sensorimotor interaction. Neurocomputing 74:1203–1212. doi:10.1016/j.neucom.2010.07.029
  14. Chinellato E, del Pobil AP (2016) The visual neuroscience of robotic grasping, cognitive systems monographs. Heidelberg: Springer International Publishing.
  15. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, McMorland AJ, Velliste M, Boninger ML, Schwartz AB (2013) High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 381:557–564. doi:10.1016/S0140-6736(12)61816-9
  16. Fattori P, Breveglieri R, Amoroso K, Galletti C (2004) Evidence for both reaching and grasping activity in the medial parieto-occipital cortex of the macaque. Eur J Neurosci 20:2457–2466.
  17. Fattori P, Kutz DF, Breveglieri R, Marzocchi N, Galletti C (2005) Spatial tuning of reaching activity in the medial parieto-occipital cortex (area V6A) of macaque monkey. Eur J Neurosci 22:956–972.
  18. Fattori P, Breveglieri R, Marzocchi N, Filippini D, Bosco A, Galletti C (2009) Hand orientation during reach-to-grasp movements modulates neuronal activity in the medial posterior parietal area V6A. J Neurosci 29:1928–1936. doi:10.1523/JNEUROSCI.4998-08.2009
  19. Fattori P, Raos V, Breveglieri R, Bosco A, Marzocchi N, Galletti C (2010) The dorsomedial pathway is not just for reaching: grasping neurons in the medial parieto-occipital cortex of the macaque monkey. J Neurosci 30:342–349. doi:10.1523/JNEUROSCI.3800-09.2010
  20. Fattori P, Breveglieri R, Raos V, Bosco A, Galletti C (2012) Vision for action in the macaque medial posterior parietal cortex. J Neurosci 32:3221–3234. doi:10.1523/JNEUROSCI.5358-11.2012
  21. Fattori P, Breveglieri R, Bosco A, Gamberini M, Galletti C (2017) Vision for prehension in the medial parietal cortex. Cereb Cortex 27:1149–1163. doi:10.1093/cercor/bhv302
  22. Fetz EE (2007) Volitional control of neural activity: implications for brain-computer interfaces. J Physiol 579:571–579. doi:10.1113/jphysiol.2006.127142
  23. Galletti C, Battaglini PP, Fattori P (1995) Eye position influence on the parieto-occipital area PO (V6) of the macaque monkey. Eur J Neurosci 7:2486–2501.
  24. Galletti C, Fattori P, Kutz DF, Gamberini M (1999) Brain location and visual topography of cortical area V6A in the macaque monkey. Eur J Neurosci 11:575–582.
  25. Galletti C, Kutz DF, Gamberini M, Breveglieri R, Fattori P (2003) Role of the medial parieto-occipital cortex in the control of reaching and grasping movements. Exp Brain Res 153:158–170. doi:10.1007/s00221-003-1589-z
  26. Gamberini M, Galletti C, Bosco A, Breveglieri R, Fattori P (2011) Is the medial posterior parietal area V6A a single functional area? J Neurosci 31:5145–5157. doi:10.1523/JNEUROSCI.5489-10.2011
  27. Hatsopoulos NG, Donoghue JP (2009) The science of neural interface systems. Annu Rev Neurosci 32:249–266. doi:10.1146/annurev.neuro.051508.135241
  28. Hatsopoulos N, Joshi J, O'Leary JG (2004) Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J Neurophysiol 92:1165–1174. doi:10.1152/jn.01245.2003
  29. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164–171. doi:10.1038/nature04970
  30. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, Haddadin S, Liu J, Cash SS, van der Smagt P, Donoghue JP (2012) Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485:372–375. doi:10.1038/nature11076
  31. Hwang EJ, Andersen RA (2013) The utility of multichannel local field potentials for brain-machine interfaces. J Neural Eng 10:046005. doi:10.1088/1741-2560/10/4/046005
  32. Jeannerod M (1997) The cognitive neuroscience of action. Oxford: Blackwell.
  33. Kim HK, Biggs SJ, Schloerb DW, Carmena JM, Lebedev MA, Nicolelis MA, Srinivasan MA (2006) Continuous shared control for stabilizing reaching and grasping with brain-machine interfaces. IEEE Trans Biomed Eng 53:1164–1173. doi:10.1109/TBME.2006.870235
  34. Klaes C, Kellis S, Aflalo T, Lee B, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, Liu C, Andersen RA (2015) Hand shape representations in the human posterior parietal cortex. J Neurosci 35:15466–15476. doi:10.1523/JNEUROSCI.2747-15.2015
  35. Kutz DF, Marzocchi N, Fattori P, Cavalcanti S, Galletti C (2005) Real-time supervisor system based on trinary logic to control experiments with behaving animals and humans. J Neurophysiol 93:3674–3686. doi:10.1152/jn.01292.2004
  36. Lehmann SJ, Scherberger H (2013) Reach and gaze representations in macaque parietal and premotor grasp areas. J Neurosci 33:7038–7049. doi:10.1523/JNEUROSCI.5568-12.2013
  37. Luppino G, Ben Hamed S, Gamberini M, Matelli M, Galletti C (2005) Occipital (V6) and parietal (V6A) areas in the anterior wall of the parieto-occipital sulcus of the macaque: a cytoarchitectonic study. Eur J Neurosci 21:3056–3076. doi:10.1111/j.1460-9568.2005.04149.x
  38. Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9:1432–1438. 10.1038/nn1790 [DOI] [PubMed] [Google Scholar]
  39. Marzocchi N, Breveglieri R, Galletti C, Fattori P (2008) Reaching activity in parietal area V6A of macaque: eye influence on arm activity or retinocentric coding of reaching movements? Eur J Neurosci 27:775–789. 10.1111/j.1460-9568.2008.06021.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Mattar E. (2013) A survey of bio-inspired robotics hands implementation: new directions in dexterous manipulation. Rob Auton Syst 61:517–544. 10.1016/j.robot.2012.12.005 [DOI] [Google Scholar]
  41. Milekovic T, Truccolo W, Grün S, Riehle A, Brochier T (2015) Local field potentials in primate motor cortex encode grasp kinetic parameters. Neuroimage 114:338–355. 10.1016/j.neuroimage.2015.04.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Mulliken GH, Musallam S, Andersen RA (2008) Decoding trajectories from posterior parietal cortex ensembles. J Neurosci 28:12913–12926. 10.1523/JNEUROSCI.1463-08.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Musallam S, Corneil BD, Greger B, Scherberger H, Andersen RA (2004) Cognitive control signals for neural prosthetics. Science 305:258–262. 10.1126/science.1097938 [DOI] [PubMed] [Google Scholar]
  44. Nicolelis MA, Lebedev MA (2009) Principles of neural ensemble physiology underlying the operation of brain-machine interfaces. Nat Rev Neurosci 10:530–540. 10.1038/nrn2653 [DOI] [PubMed] [Google Scholar]
  45. Raos V, Umiltá MA, Gallese V, Fogassi L (2004) Functional properties of grasping-related neurons in the dorsal premotor area F2 of the macaque monkey. J Neurophysiol 92:1990–2002. 10.1152/jn.00154.2004 [DOI] [PubMed] [Google Scholar]
  46. Sandberg K, Andersen LM, Overgaard M (2014) Using multivariate decoding to go beyond contrastive analyses in consciousness research. Front Psychol 5:1250. 10.3389/fpsyg.2014.01250 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV (2006) A high-performance brain-computer interface. Nature 442:195–198. 10.1038/nature04968 [DOI] [PubMed] [Google Scholar]
  48. Schaffelhofer S, Scherberger H (2016) Object vision to hand action in macaque parietal, premotor, and motor cortices. Elife 5:pii:e15278. 10.7554/eLife.15278 [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Schaffelhofer S, Agudelo-Toro A, Scherberger H (2015) Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices. J Neurosci 35:1068–1081. 10.1523/JNEUROSCI.3594-14.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Scherberger H. (2009) Neural control of motor prostheses. Curr Opin Neurobiol 19:629–633. 10.1016/j.conb.2009.10.008 [DOI] [PubMed] [Google Scholar]
  51. Scherberger H, Jarvis MR, Andersen RA (2005) Cortical local field potential encodes movement intentions in the posterior parietal cortex. Neuron 46:347–354. 10.1016/j.neuron.2005.03.004 [DOI] [PubMed] [Google Scholar]
  52. Schmolesky MT, Wang Y, Hanes DP, Thompson KG, Leutgeb S, Schall JD, Leventhal AG (1998) Signal timing across the macaque visual system. J Neurophysiol 79:3272–3278. [DOI] [PubMed] [Google Scholar]
  53. Schwartz AB. (2016) Movement: how the brain communicates with the world. Cell 164:1122–1135. 10.1016/j.cell.2016.02.038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Schwartz AB, Cui XT, Weber DJ, Moran DW (2006) Brain-controlled interfaces: movement restoration with neural prosthetics. Neuron 52:205–220. 10.1016/j.neuron.2006.09.019 [DOI] [PubMed] [Google Scholar]
  55. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP (2002) Instant neural control of a movement signal. Nature 416:141–142. 10.1038/416141a [DOI] [PubMed] [Google Scholar]
  56. Shenoy KV, Kaufman MT, Sahani M, Churchland MM (2011) A dynamical systems view of motor preparation. Implications for neural prosthetic system design. Prog Brain Res 192:33–58. 10.1016/B978-0-444-53355-5.00003-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Snyder LH, Batista AP, Andersen RA (1997) Coding of intention in the posterior parietal cortex. Nature 386:167–170. 10.1038/386167a0 [DOI] [PubMed] [Google Scholar]
  58. Stark E, Asher I, Abeles M (2007) Encoding of reach and grasp by single neurons in premotor cortex is independent of recording site. J Neurophysiol 97:3351–3364. 10.1152/jn.01328.2006 [DOI] [PubMed] [Google Scholar]
  59. Taylor DM, Tillery SI, Schwartz AB (2002) Direct cortical control of 3D neuroprosthetic devices. Science 296:1829–1832. 10.1126/science.1070291 [DOI] [PubMed] [Google Scholar]
  60. Thorpe S, Fize D, Marlot C (1996) Speed of processing in the human visual system. Nature 381:520–522. 10.1038/381520a0 [DOI] [PubMed] [Google Scholar]
  61. Townsend BR, Subasi E, Scherberger H (2011) Grasp movement decoding from premotor and parietal cortex. J Neurosci 31:14386–14398. 10.1523/JNEUROSCI.2451-11.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Vargas-Irwin CE, Shakhnarovich G, Yadollahpour P, Mislow JM, Black MJ, Donoghue JP (2010) Decoding complete reach and grasp actions from local primary motor cortex populations. J Neurosci 30:9659–9669. 10.1523/JNEUROSCI.5443-09.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB (2008) Cortical control of a robotic arm for self-feeding. Nature 453:1098–1101. 10.1038/nature06996 [DOI] [PubMed] [Google Scholar]
  64. Wessberg J, Stambaugh CR, Kralik JD, Beck PD, Laubach M, Chapin JK, Kim J, Biggs SJ, Srinivasan MA, Nicolelis MA (2000) Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature 408:361–365. 10.1038/35042582 [DOI] [PubMed] [Google Scholar]
  65. Wise SP, Boussaoud D, Johnson PB, Caminiti R (1997) Premotor and parietal cortex: corticocortical connectivity and combinatorial computations. Annu Rev Neurosci 20:25–42. 10.1146/annurev.neuro.20.1.25 [DOI] [PubMed] [Google Scholar]
  66. Zhang H. (2004) The optimality of naive Bayes. Proceeding of the Seventeenth International Florida Artificial Intelligence Research Society Conference, Miami Beach, FL, January. [Google Scholar]
