Published in final edited form as: J Neural Eng. 2011 Mar 24;8(2):025016. doi: 10.1088/1741-2560/8/2/025016

Decoding position, velocity, or goal: Does it matter for Brain-Machine Interfaces?

A R Marathe 1,2, D M Taylor 1,2,3
PMCID: PMC3140465  NIHMSID: NIHMS308380  PMID: 21436529

Abstract

Arm end-point position, end-point velocity, and the intended final location or ‘goal’ of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device.

People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. This study also evaluates how different amounts of decoding error impact device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.

1. Introduction

Researchers have been able to correlate a wide variety of limb movement parameters with cortical activity (e.g., position [1–5], velocity [1, 2, 4–7], acceleration [4], speed [4, 7], direction [4, 7–10], force [11–16], muscle activations [17–20], and reach goal [21, 22]). By understanding how different aspects of movement are encoded in or correlated with cortical activity, we can ‘decode’ these aspects of intended movement and use that movement information to control external devices in real time. This rapidly-growing field of brain-machine interfacing (BMI) has the potential to enable people with severe paralysis or limb loss to use neural activity associated with attempted movements to control the motion of a computer cursor, assistive robot, prosthetic limb, or even one’s own paralyzed limb activated via electrical stimulation.

Many decades of research have revealed how motor commands are processed through different cortical areas. For example, the abstract ‘goal’ of a reach can be decoded from the posterior parietal cortex early on in the motor processing system [21], whereas the biomechanical details needed for physical movement execution (e.g., muscle activation [20] and movement direction in local joint coordinates [23, 24]) are more readily decoded from the primary motor cortex at the output end of the processing stream.

While there is a progression from abstract to physical along the motor processing system, there is also much overlap in what movement parameters are encoded across different areas. For example, reach goal can be decoded from both posterior parietal cortex and premotor cortex [21, 25]; movement direction and hand position can be decoded from both premotor and motor cortex [8, 10, 26, 27]. Conversely, multiple movement parameters can be decoded from any one region of cortex [1, 7, 28].

When deciding which brain areas to implant, it is important to consider what movement parameters are most appropriate for the device the person needs to control. If the device is a stimulator to activate paralyzed muscles, the primary motor cortex would be a likely target for decoding muscle activations. If the device selects between letters at fixed positions on a computer screen, then decoding goal location from parietal or premotor cortex may be an effective option.

In practice, most BMI users will eventually want to control multiple types of devices using whatever neurons can be recorded from their electrodes. The population of available neurons also may change or decline over time, potentially leaving a person with a set of neurons that encode different movement parameters from the ones their current assistive device was set up to use. The question remains: how important is it to match the movement parameters that the remaining neurons naturally encode with devices that use only those specific movement parameters?

In everyday life, people easily learn to use one movement parameter to control another. For example, we control the velocity of our car with the position of our foot within the range of motion of the gas pedal. We also learn to control the position of the gas pedal by adjusting the force we apply with our foot. We can learn these transformations because we receive visual and somatosensory feedback of the car’s velocity in response to how we activate the muscles in our leg. Similar transformations may be possible when using motor activity in the brain to control various device functions. Understanding what types of movement transformations a person can perform will help us understand what decoded movement parameters can be applied to the control of different device actions.

In this study, we assess how effectively people can make the transformation between three movement parameters commonly decoded for BMI applications—position, velocity, and reach goal. We also assess how decoding accuracy impacts one’s ability to make these transformations. The results presented here may help guide the choice of implant locations as well as the movement parameters one chooses to decode when multiple parameter options are available from the same set of neurons.

2. Methods

2.1. Overview

In this study, arm end-point position, velocity, and reach goal were extracted in real time from actual arm movements of able-bodied individuals as a proxy for decoding these parameters from cortical signals. Random errors were added to simulate decoding neural signals with different levels of accuracy. Each one of these three ‘decoded’ movement parameters was then mapped in real time to control the position, velocity, or goal of a spherical cursor in a center-out, target-acquisition task. These pairings between three different decoded movement parameters and three different applied device commands resulted in nine total combinations to test.

Some transformations between the decoded neural signal and the applied device command can be performed in the device control software without the user making the transformation or even being aware that any transformation is taking place. For example, as a person or animal makes an arm movement, end-point velocity is commonly decoded in real time and used to generate a matching movement of a cursor on a screen. In order to update the screen image, the decoded velocity must be converted to a cursor position. The BMI software automatically integrates the velocity over time to get the appropriate cursor position for display. The person does not have to make any transformation because the decoded velocity of the limb matches the displayed velocity of the cursor.

In contrast, this study specifically tests how well the user can make different transformations; a real-life example that requires a person to make such a transformation would be using a joystick in a video game where the position of the joystick controls the velocity of an object on the screen. In this case, there is an intentional mismatch between the movement the person is making and the effect it has on the movement of the object on screen.

Figure 1 illustrates the different transformations made by the participants in this study as well as some additional transformations made in software (transparent to the user, but needed for this experiment). Arm end-point position, velocity, and reach goal were extracted in software from actual measured movements and used in real time as a proxy for decoding these movement parameters from neural signals. Errors were added to the ‘decoded’ parameters (not to the hand movement itself) to simulate specific levels of error in decoding that particular movement parameter. Each of these three noisy, decoded movement parameters was then remapped to control either the position, velocity, or goal of the cursor in a center-out, target-acquisition task. Thus, in order to get the cursor to the targets, the participant had to make the appropriate transformations (e.g., use the 2D position of his/her wrist to control the velocity of the cursor on the screen). Finally, the cursor position, velocity, or goal command resulting from the person’s remapped movements was converted in software to the cursor position information needed to update the computer display.

Figure 1. Flow chart showing the nine possible combinations of the three decoded movement parameters and three applied device commands tested in this study. The participants had to make the transformation between the decoded movement parameter and the applied device command. The other transformations necessary for conducting this experiment were made by the software.

Six able-bodied individuals performed a two-dimensional center-out movement task to eight evenly-spaced radial targets 25 cm from the center of the workspace. Each movement trial started with the participant’s hand stationary in the center of the workspace. Participants had ten seconds to reach the target, and the cursor then had to stay within the target continuously for one second for the trial to be counted as a successful ‘hit’. All nine combinations of three decoded movement parameters and three applied device commands were tested in random order. Three complete blocks of eight movements each were collected for each of the nine combinations at each of the three levels of added decoding error (decoding-error details are described below). To allow time for participants to develop an appropriate movement strategy for each combination, participants were required to practice each combination for at least one block containing all eight targets before actual data collection for that combination began. More practice blocks were allowed, if needed, to ensure participants were comfortable with each task; however, none were used. Participants usually figured out a strategy for each movement combination within the first few movements of the required practice block and did not exhibit significant learning over subsequent data-collection blocks. All study activities were approved by the Louis Stokes Cleveland VA Medical Center’s Institutional Review Board.

2.2. Simulating decoded movement parameters from arm movements

The actual arm end-point position was used as a proxy to generate the three different ‘decoded’ movement parameters in software. The arm end-point position was tracked via an Optotrak motion-capture system (Northern Digital Inc.) using a sensor on the wrist. Position data were sent to Matlab every 100 ms for further processing. Wrist position, P(t), was used directly as one of the three decoded movement parameters. Velocity was calculated as the change in wrist position over the last time step divided by the time elapsed (i.e., V(t) = (P(t) − P(t−1))/Δt). Reach goal, G(t), was inferred from the wrist location in the workspace. Only four of the eight radial targets were defined as possible goals in order to explore whether remapping the extracted goals to different movement parameters would enable participants to hit targets that were not part of the predefined goal set. Therefore, the decoded goal, G(t), could take on only five possible discrete target locations: the center target location and four of the eight radial target locations (the four targets lying on the diagonal axes). Figure 2 illustrates how the five goal locations were extracted in real time from the continuous wrist position data. One of the five goal options (indicated by stars in figure 2) was assigned at each time step based on which of the five regions of the workspace the wrist was currently located in (color coded in figure 2).
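As a concrete illustration, the following sketch reconstructs this extraction step (the study itself used Matlab; this is a minimal Python rendering with illustrative names). The 100 ms time step and 25 cm target radius come from the text; the nearest-goal rule is an assumption standing in for the exact region boundaries drawn in figure 2.

```python
import numpy as np

DT = 0.1   # 100 ms update interval (from the text)
R = 0.25   # radial target distance: 25 cm

# Center goal plus the four diagonal radial targets (the starred goals in figure 2).
GOALS = np.array([[0.0, 0.0]] +
                 [[R * np.cos(a), R * np.sin(a)]
                  for a in np.deg2rad([45.0, 135.0, 225.0, 315.0])])

def decode_parameters(p_now, p_prev):
    """Derive the three 'decoded' movement parameters from the tracked
    2D wrist position at one time step."""
    position = p_now
    velocity = (p_now - p_prev) / DT   # V(t) = (P(t) - P(t-1)) / dt
    # Goal: snap to the nearest of the five predefined goal locations.
    # Nearest-goal assignment is one plausible reading of the colored
    # regions in figure 2; the paper does not give exact boundaries.
    goal = GOALS[np.argmin(np.linalg.norm(GOALS - p_now, axis=1))]
    return position, velocity, goal
```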

Figure 2. Deriving goal location from the actual wrist position at each time step. The diagram on the left shows where targets were located in the workspace. Small, lighter circles indicate the eight radial targets and the center target. One of the five starred target locations (four radial targets or the center target) was assigned as the decoded goal at each time step based on the actual wrist position. Colored regions indicate which one of the five possible goal options was assigned when the wrist was in different parts of the workspace. Also shown on this diagram is a typical center-out movement trajectory where each black dot indicates the wrist position at 100 ms intervals. Plots to the right show the corresponding X and Y components of position, velocity, and goal as a function of time. Note that the X and Y goal positions can only take on the discrete values of the five possible goal locations indicated with stars, whereas position and velocity can take on a continuous range of values at each time step. Color coding by region is shown here only to illustrate goal decoding and was not displayed to participants during the study.

Three levels of decoding accuracy were evaluated by adding random error directly to each extracted movement parameter. Random error was added at each time step to make the coefficient of determination (R²) equal to 0.5, 0.75, or 1.0 when calculated between the original movement parameter and the parameter with errors added (1.0 represents perfect decoding, i.e., zero error added). R² was chosen because it is a commonly-used measure of decoding accuracy and is often used to rank how well different movement parameters are encoded by the same or different sets of neurons.
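The paper does not spell out the exact formula used; the sketches below assume the standard coefficient-of-determination form, computed between the clean parameter and the parameter with errors added:

```python
import numpy as np

def r_squared(actual, noisy):
    """Coefficient of determination between one dimension of the
    original movement parameter and the same parameter with decoding
    errors added. 1.0 means perfect decoding (zero error added)."""
    ss_err = np.sum((actual - noisy) ** 2)            # residual error
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)  # signal variance
    return 1.0 - ss_err / ss_tot
```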

For position and velocity error, Gaussian noise low-pass filtered at 2.5 Hz was added to each dimension. A Gaussian distribution was used to model decoding error because a Gaussian function was a good fit to the decoding errors we have consistently seen when decoding wrist position or velocity from intracortical recordings in our non-human primate studies or when decoding wrist position or velocity from proximal arm/torso electromyograms (EMGs) in our human studies.

When decoding actual neural signals, the temporal characteristics or frequency content of the decoding errors is highly dependent on the type of decoding function used. For example, a linear decoder that uses up to a second of past data to predict the current movement will produce a smoother output (i.e., errors with a lower frequency content) than a linear decoder that uses only the current neural data to predict the current desired movement. In this study, we low-pass filtered the Gaussian noise at 2.5 Hz, which mimicked a moderate amount of smoothing similar to what is obtained with linear decoders that use up to half a second of past neural data to predict the current movement. The low-pass-filtered Gaussian noise was then scaled as needed to achieve the 0.5 or 0.75 R² values between the original movement parameter and the parameter with the Gaussian noise added. These R² values represent moderate-to-good accuracy values from the range of intracortical studies reported in the literature [29–34].
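A sketch of this noise-generation step, under stated assumptions: a 10 Hz update rate (one sample per 100 ms time step), a second-order Butterworth low-pass filter (the paper does not give the filter type or order), offline zero-phase filtering, and the R² convention above. With this construction the scaled noise accounts for exactly (1 − R²) of the signal variance, so the target R² is met exactly.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # update rate in Hz (one sample per 100 ms time step)

def add_gaussian_decoding_error(x, r2_target, cutoff_hz=2.5):
    """Add low-pass-filtered Gaussian noise to one dimension of a
    position or velocity trace, scaled so that r_squared(x, x_noisy),
    as defined above, equals r2_target. Zero-phase filtering is an
    offline convenience; a causal filter would be used in real time."""
    noise = np.random.randn(len(x))
    b, a = butter(2, cutoff_hz / (FS / 2.0))  # 2.5 Hz low-pass (order assumed)
    noise = filtfilt(b, a, noise)
    ss_tot = np.sum((x - np.mean(x)) ** 2)
    # Choose scale s so that sum((s * noise)^2) = (1 - r2_target) * ss_tot.
    s = np.sqrt((1.0 - r2_target) * ss_tot / np.sum(noise ** 2))
    return x + s * noise
```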

To generate controlled levels of error in the decoded goal, which could take on only five values, a predetermined percentage of data samples were ‘misclassified’ by assigning one of the four alternative goal options at random. The percentage of randomly-misclassified time points was adjusted to achieve R² values of 0.5 or 0.75 when calculated between the goal positions in space before and after the misclassifications were added.
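A sketch of the corruption step itself; the tuning loop that sweeps the misclassification fraction until the target R² is reached is omitted, and the function name is illustrative.

```python
import numpy as np

def misclassify_goals(goal_idx, p, n_goals=5, rng=None):
    """Replace a fraction p of decoded-goal samples (integer indices
    0-4) with one of the four alternative goal options at random.
    In the study, p was tuned until the R^2 between the goal positions
    before and after corruption reached 0.5 or 0.75."""
    if rng is None:
        rng = np.random.default_rng()
    goal_idx = np.array(goal_idx, copy=True)
    for t in np.flatnonzero(rng.random(goal_idx.size) < p):
        options = [g for g in range(n_goals) if g != goal_idx[t]]
        goal_idx[t] = rng.choice(options)
    return goal_idx
```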

2.3. Transformations required of the user

Each of the three decoded movement parameters (with errors added) was directly assigned to control either the instantaneous position, velocity, or ‘goal’ of the cursor (i.e., the transformations across the center vertical line in figure 1). Three of the nine pairings did not actually require the user to make any transformation and represent how decoded signals are normally used (i.e., the position-to-position, velocity-to-velocity, and goal-to-goal pairings). The remaining pairings required participants to use one aspect of their arm movement to control a different aspect of the cursor movement (see footnote 1).

For example, if velocity was the decoded movement parameter and position was the assigned device command, holding the hand still in any location (V(t) = (0, 0)) would cause the cursor to go to a position in the center of the workspace (P(t) = (0, 0)). Moving the hand at a constant velocity anywhere in space would place the cursor at a stationary non-central position in the workspace (e.g., V(t) = (1, −1)·C would be assigned to P(t) = (1, −1)). Note that some dissimilar pairings required the use of a scale factor, C, to account for differences in units. These scale factors can be arbitrarily set for any device, even when a one-to-one mapping is used (e.g., a small decoded position change in the wrist can be mapped to a large position change of a robot arm). In this study, appropriate gains were first identified for each dissimilar combination during practice sessions performed at each level of decoding accuracy. A single gain factor was then chosen for each pairing that maximized the movement performance measures over all error conditions (i.e., the same gain factor was applied under all three error conditions).

In all cases where goal was the applied device command, the cursor would be placed directly at the current goal location at each time step (i.e., one of the five starred locations shown in figure 2). The process of remapping the decoded position or velocity to an applied goal command was similar to the method used to originally infer a decoded goal from the actual wrist position in software. Again, figure 2 can be used to illustrate this process: at each time step, one of the five possible goal options (starred targets) was assigned as the applied device command based on which of the five regions of 2D Cartesian space the position or scaled velocity value fell in.
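A sketch of this decoder-to-device remapping step, reusing the five goal locations from the earlier extraction sketch; the gain argument stands in for the scale factor C (whose tuned per-pairing values are not reported), and the nearest-goal assignment is again an assumption mirroring figure 2.

```python
import numpy as np

# The five predefined goal locations, as in the earlier sketch.
R = 0.25
GOALS = np.array([[0.0, 0.0]] +
                 [[R * np.cos(a), R * np.sin(a)]
                  for a in np.deg2rad([45.0, 135.0, 225.0, 315.0])])

def apply_device_command(decoded, command_type, gain=1.0):
    """Map a (noisy) decoded 2D value onto the applied device command.
    'gain' plays the role of the scale factor C for dissimilar pairings."""
    v = gain * np.asarray(decoded, dtype=float)
    if command_type in ("position", "velocity"):
        return v  # used directly as the position or velocity command
    if command_type == "goal":
        # Assign whichever of the five goals' regions the value falls in
        # (nearest-goal assumption, mirroring figure 2).
        return GOALS[np.argmin(np.linalg.norm(GOALS - v, axis=1))]
    raise ValueError(f"unknown command type: {command_type}")
```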

2.4. Software conversion for display

Since the ‘applied device’ in this study was a cursor on a computer screen, any applied device command had to be converted to a cursor position in order to update the computer display. Position and goal commands could be used directly to update the location of the cursor in the display. Velocity was integrated over each trial starting at an initial center position to get an updated cursor position at each time step. This transformation was handled in software and was transparent to the user (similar to how most BMI experiments are performed when decoding velocity for control of a virtual object on a screen).
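The display-update step then reduces to a few lines (a sketch; commands are assumed to be 2D numpy arrays, and DT is the 100 ms update interval from the earlier sketch):

```python
DT = 0.1   # 100 ms update interval

def update_display(cursor_pos, command, command_type):
    """Convert the applied device command into the cursor position drawn
    on screen; this step is handled in software, invisibly to the user."""
    if command_type in ("position", "goal"):
        return command                     # drawn at that location directly
    if command_type == "velocity":
        return cursor_pos + DT * command   # integrate velocity over the step
```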

3. Results

Figure 3 shows the effects of decoding error on all four pairings between position and velocity. Under perfect decoding conditions (R² = 1), position-to-position and velocity-to-velocity are virtually identical and felt no different to the study participants because neither required the person to make a transformation (in the velocity-to-velocity case, the initial transformation from wrist position to velocity and the final transformation back to cursor position are made automatically in software). However, as decoding accuracy declines, the two are no longer equivalent. Position-to-position performance degraded so much that no targets could be hit at either R² = 0.5 or 0.75. However, velocity-to-velocity performance degraded only slightly because the decoding errors were averaged out over time during the integration process. These results suggest that, if position and velocity can be extracted equally well from a neural ensemble (i.e., the same R² values for each), using decoded velocity for device control would be much more advantageous and robust to neural noise.

Figure 3. Performance measures and example trajectories for the four possible mappings involving position and/or velocity. Each plot of trajectories includes one randomly-selected block of movements from each participant. Trajectories are color coded by the presented target. Black dots indicate when the cursor was in the correct target. Movement time represents the average time needed to successfully acquire each target (excluding the one-second required hold time), or the maximum allotted time if no target was acquired. Error bars represent the standard deviation of the measures across participants.

The transformation from position-to-velocity was easily learned and resulted in much better performance than directly using position to control position when any errors were present. Position-to-velocity was almost as good as velocity-to-velocity at each error level with only a slight decrease in overall performance measures. Here again, applying velocity commands to a device allowed position decoding errors to be integrated out over time, thus minimizing position errors in the device itself.

The transformation from velocity-to-position suffered from two problems. Just as in the position-to-position case, errors in the decoded command did not get integrated out over time, causing the cursor to jump around at each time step. Also, the person could only maintain the desired velocity for short amounts of time: each time the arm hit its full excursion limit in any given direction, the ‘decoded’ velocity would drop to zero. Velocity-to-position trajectories in figure 3 under the perfect decoding case (R² = 1) show participants were often able to initially reach the targets, but they had difficulty keeping the cursor in the target because they could not maintain their hand at the constant non-zero velocity that mapped to the desired target position. It is unclear whether this second problem would still be a factor in paralyzed individuals who are imagining or visualizing moving their arm at a given velocity. However, the inherent jitter from the imperfect position command (which does not get averaged out through integration) makes this option problematic regardless.

Figure 4(a) shows performance measures for all transformations involving goal. Recall that only the center target and four of the eight radial targets were defined as possible goals to be decoded or applied at each time step. A limited set of goals was used in this study to determine whether remapping one movement parameter to another can enable people to generate movements to locations that were not part of the finite set of goals produced by a discrete goal-based decoding function. Performance measures are reported separately for the four radial targets that were possible goal locations versus the four targets that were not goal options.

Figure 4. Performance measures when remapping to and from reach goal. (A) Performance measures were calculated separately for the four starred targets shown in figure 2 (‘on-goal’ targets) versus the four targets along the horizontal and vertical axes that were not part of the set of possible decoded goal locations (‘off-goal’ targets). Movement time represents the time needed to successfully acquire a target (excluding the one-second required hold time), or the maximum allotted time if no target was acquired. Error bars represent the standard deviation of the measures across participants. (B) Trajectories resulting from the goal-to-velocity transformation. Trajectories are color coded by target. Black dots indicate when the cursor was in the correct target.

When no decoding errors were added and goal was used as the applied device command (i.e., the goal-to-goal, position-to-goal, or velocity-to-goal transformations), all participants performed well as long as the targets they were trying to hit were part of the predefined set of possible goal locations (i.e., the starred targets in figure 2, listed as ‘on-goal’ targets in figure 4). As decoding accuracy was reduced to R² = 0.5, participants were unable to acquire on-goal targets in over 25% of the trials in the goal-to-goal condition, while they could still hit all the targets in the position-to-goal condition. In the goal-to-goal condition, all errors in discrete goal decoding that lower the R² accuracy measure are, by definition, incorrect target choices because the goal decoder only outputs the five possible goal options at each time step. However, errors in position that lower an R² accuracy measure by the same amount can take on a continuous range of values. Often these position errors were still closer to the correct goal than to any of the other goal options. Therefore, many position errors still got assigned to the correct goal during the remapping process, resulting in fewer missed targets for the same R² decoding accuracy.

In both the position-to-goal and velocity-to-goal conditions, participants took longer on average to successfully acquire the targets as the decoding errors increased. The occasional larger decoding errors would cause the cursor to intermittently be assigned to an incorrect target location, often restarting the clock on the one-second required hold time. In the position-to-goal condition, the participants could still successfully hit virtually all of the targets within the time limit in spite of the occasional decoding errors: they simply waited with their hand in the correct position until enough consecutive time points went by for a successful hit without a position error large enough to cause the cursor to jump to a wrong target. However, with the velocity-to-goal transformation, participants were unable to maintain their wrist at the needed velocity for the longer amounts of time necessary to consistently achieve hits in the presence of decoding errors. Again, this may or may not be an issue for paralyzed individuals, who may be able to imagine or attempt moving at a constant velocity for extended amounts of time.

The four off-goal target locations (non-starred targets in figure 2), of course, could not be reached under any transformation where goal was the applied device command because the cursor could not take on any location other than the five predefined goal locations indicated by the starred targets in figure 2. Therefore, all off-goal targets were missed when people were required to use goal directly (goal-to-goal) or make position-to-goal or velocity-to-goal transformations.

However, when goal was the decoded movement parameter, one transformation enabled participants to easily acquire both on- and off-goal targets. The goal-to-velocity transformation resulted in a nearly perfect target-acquisition record for both on- and off-goal target locations at all levels of decoding accuracy (trajectories shown in figure 4(b)). Trajectories to on-goal targets were relatively straight, even at R² = 0.5, whereas the analogous position-to-velocity or velocity-to-velocity trajectories (shown in figure 3) were much more jagged. This regularity in the on-goal movement paths with goal-to-velocity remapping arises because the cursor could only move in one of four possible directions at each time step (or not move at all if the center target was decoded).

Off-goal targets were also easily acquired with the goal-to-velocity remapping, although the trajectories to off-goal targets were not smooth. Participants used the limited set of four possible movement directions to get to intermediate target locations by alternating back and forth between the two surrounding decoded goal options. Switching the decoded goal (and therefore the cursor velocity) between two options 90 degrees apart resulted in trajectories that zigzagged along an intermediate path, allowing participants to reach off-goal target locations as shown in the bottom row of figure 4(b).

Tables 1 and 2 summarize the mean and standard deviation across participants of the average movement times under each transformation and level of decoding error.

Table 1.

Mean and standard deviation of each participant’s average movement times (in seconds) for all movements involving transformations of position and velocity.

Transformation R²=1.0 R²=0.75 R²=0.5
Position-to-Position 0.96 ± 0.16 10 ± 0 10 ± 0
Position-to-Velocity 2.12 ± 0.36 3.28 ± 1.01 5.97 ± 2.37
Velocity-to-Position 9.85 ± 0.25 10 ± 0 10 ± 0
Velocity-to-Velocity 0.85 ± 0.14 2.25 ± 1.22 5.24 ± 2.46

Table 2.

Mean and standard deviation of each participant’s average movement times (in seconds) for all movements with transformations involving goal.

Transformation Targets R²=1.0 R²=0.75 R²=0.5
Position-to-Goal On Goal 0.58 ± 0.24 1.11 ± 0.96 2.37 ± 2.21
Off Goal 10 ± 0 10 ± 0 10 ± 0
Velocity-to-Goal On Goal 1.00 ± 1.43 3.60 ± 3.24 5.66 ± 3.34
Off Goal 10 ± 0 10 ± 0 10 ± 0
Goal-to-Position/Goal-to-Goal On Goal 0.81 ± 0.48 2.00 ± 1.34 5.19 ± 3.50
Off Goal 10 ± 0 10 ± 0 10 ± 0
Goal-to-Velocity On Goal 2.56 ± 0.55 3.18 ± 1.20 3.85 ± 1.76
Off Goal 3.78 ± 0.96 4.08 ± 1.07 4.92 ± 1.35

4. Discussion

In studies with able-bodied animals and humans, researchers regularly use the coefficient of determination (R²) to quantify how accurately the decoded neural signals match the actual recorded movements. Even with paralyzed individuals, R² can still be used to quantify how related the neural signals are to different aspects of movement by having a person attempt or imagine making specific movements shown to them on a computer screen [35, 36]. This objective R² measure between the actual and predicted movement parameter is often used to rank or compare how strongly the different aspects of movement are encoded.

In brain-machine interface applications, one might be tempted to use the movement parameter that is most accurately decoded from the recorded neural signals and let the software transform that movement parameter into the needed device command (for example, by integrating the decoded velocity at each time step to get a new cursor position). However, this study revealed that decoding position, velocity, or reach goal can result in substantially different movement performance, even when the decoding accuracy levels are equivalent and the user is not required to make any transformation (i.e., compare the performance measures for position-to-position, velocity-to-velocity, and goal-to-goal, where the participants made no transformation). Specifically, position decoding errors were impossible to overcome, resulting in no target hits at either added-error level. However, participants easily acquired all targets when the same amount of decoding error was in the velocity command, and acquired nearly all on-goal targets when an equivalent error level was present in the goal command.

This study also revealed that remapping the decoded movement parameter to control a different aspect of movement may be an option to improve performance in BMI applications. Participants easily made the transformation of using position or goal to control cursor velocity. This transformation virtually eliminated the control problems seen when position was used directly and also allowed the decoded goal to be used to acquire non-goal targets. The inherent integration of the velocity command naturally averages out decoding errors whether those errors are from inaccurately decoded position or velocity or whether those errors are because the limited possible goal values do not include the actual desired velocity.

Participants also easily learned to use decoded position or velocity to control discrete goal locations. This transformation may be advantageous if the application requires only movement to a fixed number of locations and if goal decoding accuracy is low. While goal decoding errors would normally result in the cursor jumping between correct and incorrect targets, equally-low measures of decoding accuracy in position often produce errors that still get mapped to the correct goal.

The results of this study provide a practical framework in which to compare the expected performance when decoding and using position, velocity, or goal without user transformations. For example, if a particular set of neural signals conveyed mostly position information (high R²) and much less velocity information, better device control may still be obtained by using the poorly-decoded velocity signal instead of the more-accurately-decoded position command (i.e., velocity-to-velocity at R² = 0.5 performed much better than position-to-position at R² = 0.75; p < 0.001, Wilcoxon signed-rank test).

This study also provides guidance for utilizing decoder-to-device transformations to maximize control with the signals that are available. Using the same example neural signals as above, remapping the better position signal to the velocity of the device lets one take advantage of the higher-quality position information while using the more stable velocity device command (i.e., position-to-velocity at R² = 0.75 performed better than velocity-to-velocity at R² = 0.5; p < 0.001, Wilcoxon signed-rank test).

The best movement parameter for device control may not always be decodable from the available signals. Trying to use a poorly-decoded signal for device control could lead to frustration. However, by starting a person’s training using the most strongly-encoded movement parameter remapped to a more-easily controlled device action, participants may be able to avoid the frustration of inadequate initial control and learn to use their BMI system more rapidly.

The participants reported that many of the transformations were easy to learn and required no conscious thought after very little practice. For a paralyzed person who makes no physical movements, it is possible that any initial discrepancy between the decoded and applied command will quickly fade as controlling the device becomes second nature. In the same position-to-velocity example used above, a paralyzed person may initially have to think about using their arm like a joystick to control the velocity of the cursor or other assistive device. However, over time, as the transformation moves to a subconscious level, those neurons could simply appear to encode the needed velocity command for the device.

Acknowledgments

This work was supported by the National Institutes of Health (NINDS 1R01NS058871), the Department of Veterans Affairs (#B4195R), the Cleveland Clinic, and Case Western Reserve University. We are very grateful to Holle Carey and Jonathan Carey for their assistance in collecting these data.

Footnotes

1. Goal-to-position mapping is an exception because it is functionally identical to goal-to-goal, which required no transformation. This is because the applied position command can only take on one of the five decoded goal locations, just as in the goal-to-goal mapping.

References

1. Paninski L, Fellows MR, Hatsopoulos NG, Donoghue JP. Spatiotemporal tuning of motor cortical neurons for hand position and velocity. J Neurophysiol. 2004;91:515–532. doi: 10.1152/jn.00587.2002.
2. Kettner RE, Schwartz AB, Georgopoulos AP. Primate motor cortex and free arm movements to visual targets in three-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins. J Neurosci. 1988;8:2938–2947. doi: 10.1523/JNEUROSCI.08-08-02938.1988.
3. Georgopoulos AP, Caminiti R, Kalaska JF. Static spatial effects in motor cortex and area 5: quantitative relations in a two-dimensional space. Exp Brain Res. 1984;54:446–454. doi: 10.1007/BF00235470.
4. Ashe J, Georgopoulos AP. Movement parameters and neural activity in motor cortex and area 5. Cereb Cortex. 1994;4:590–600. doi: 10.1093/cercor/4.6.590.
5. Pistohl T, Ball T, Schulze-Bonhage A, Aertsen A, Mehring C. Prediction of arm movement trajectories from ECoG-recordings in humans. J Neurosci Methods. 2008;167:105–114. doi: 10.1016/j.jneumeth.2007.10.001.
6. Bradberry TJ, Gentili RJ, Contreras-Vidal JL. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J Neurosci. 2010;30:3432–3437. doi: 10.1523/JNEUROSCI.6107-09.2010.
7. Moran DW, Schwartz AB. Motor cortical representation of speed and direction during reaching. J Neurophysiol. 1999;82:2676–2692. doi: 10.1152/jn.1999.82.5.2676.
8. Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci. 1982;2:1527–1537. doi: 10.1523/JNEUROSCI.02-11-01527.1982.
9. Schwartz AB, Kettner RE, Georgopoulos AP. Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement. J Neurosci. 1988;8:2913–2927. doi: 10.1523/JNEUROSCI.08-08-02913.1988.
10. Fu QG, Flament D, Coltz JD, Ebner TJ. Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons. J Neurophysiol. 1995;73:836–854. doi: 10.1152/jn.1995.73.2.836.
11. Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. J Neurophysiol. 1968;31:14–27. doi: 10.1152/jn.1968.31.1.14.
12. Ashe J. Force and the motor cortex. Behav Brain Res. 1997;86:1–15. doi: 10.1016/s0166-4328(96)00145-3.
13. Kalaska JF, Cohen DA, Hyde ML, Prud’homme M. A comparison of movement direction-related versus load direction-related activity in primate motor cortex, using a two-dimensional reaching task. J Neurosci. 1989;9:2080–2102. doi: 10.1523/JNEUROSCI.09-06-02080.1989.
14. Sergio LE, Kalaska JF. Changes in the temporal pattern of primary motor cortex activity in a directional isometric force versus limb movement task. J Neurophysiol. 1998;80:1577–1583. doi: 10.1152/jn.1998.80.3.1577.
15. Georgopoulos AP, Ashe J, Smyrnis N, Taira M. The motor cortex and the coding of force. Science. 1992;256:1692–1695. doi: 10.1126/science.256.5064.1692.
16. Taira M, Boline J, Smyrnis N, Georgopoulos AP, Ashe J. On the relations between single cell activity in the motor cortex and the direction and magnitude of three-dimensional static isometric force. Exp Brain Res. 1996;109:367–376. doi: 10.1007/BF00229620.
17. Mussa-Ivaldi FA. Do neurons in the motor cortex encode movement direction? An alternative hypothesis. Neurosci Lett. 1988;91:106–111. doi: 10.1016/0304-3940(88)90257-1.
18. Fetz EE, Cheney PD, Mewes K, Palmer S. Control of forelimb muscle activity by populations of corticomotoneuronal and rubromotoneuronal cells. Prog Brain Res. 1989;80:437–449. doi: 10.1016/s0079-6123(08)62241-4.
19. Todorov E. Direct cortical control of muscle activation in voluntary arm movements: a model. Nat Neurosci. 2000;3:391–398. doi: 10.1038/73964.
20. Morrow MM, Miller LE. Prediction of muscle activity by populations of sequentially recorded primary motor cortex neurons. J Neurophysiol. 2003;89:2279–2288. doi: 10.1152/jn.00632.2002.
21. Snyder LH, Batista AP, Andersen RA. Coding of intention in the posterior parietal cortex. Nature. 1997;386:167–170. doi: 10.1038/386167a0.
22. Alexander GE, Crutcher MD. Neural representations of the target (goal) of visually guided arm movements in three motor areas of the monkey. J Neurophysiol. 1990;64:164–178. doi: 10.1152/jn.1990.64.1.164.
23. Kakei S, Hoffman DS, Strick PL. Muscle and movement representations in the primary motor cortex. Science. 1999;285:2136–2139. doi: 10.1126/science.285.5436.2136.
24. Vargas-Irwin CE, Shakhnarovich G, Yadollahpour P, Mislow JMK, Black MJ, Donoghue JP. Decoding complete reach and grasp actions from local primary motor cortex populations. J Neurosci. 2010;30:9659–9669. doi: 10.1523/JNEUROSCI.5443-09.2010.
25. Pesaran B, Nelson MJ, Andersen RA. Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron. 2006;51:125–134. doi: 10.1016/j.neuron.2006.05.025.
26. Caminiti R, Johnson PB, Galli C, Ferraina S, Burnod Y. Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J Neurosci. 1991;11:1182–1197. doi: 10.1523/JNEUROSCI.11-05-01182.1991.
27. Hatsopoulos N, Joshi J, O’Leary JG. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J Neurophysiol. 2004;92:1165–1174. doi: 10.1152/jn.01245.2003.
28. Crutcher MD, Alexander GE. Movement-related neuronal activity selectively coding either direction or muscle pattern in three motor areas of the monkey. J Neurophysiol. 1990;64:151–163. doi: 10.1152/jn.1990.64.1.151.
29. Taylor DM, Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science. 2002;296:1829–1832. doi: 10.1126/science.1070291.
30. Wessberg J, Nicolelis MAL. Optimizing a linear algorithm for real-time robotic control using chronic cortical ensemble recordings in monkeys. J Cogn Neurosci. 2004;16:1022–1035. doi: 10.1162/0898929041502652.
31. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP. Instant neural control of a movement signal. Nature. 2002;416:141–142. doi: 10.1038/416141a.
32. Carmena JM, Lebedev MA, Crist RE, O’Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MAL. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol. 2003;1:e42. doi: 10.1371/journal.pbio.0000042.
33. Wu W, Black MJ, Gao Y, Bienenstock E, Serruya M, Shaikhouni A, Donoghue JP. Neural decoding of cursor motion using a Kalman filter. Adv Neural Inf Process Syst. 2003;15:133–140.
34. Wu W, Shaikhouni A, Donoghue JP, Black MJ. Closed-loop neural control of cursor motion using a Kalman filter. Conf Proc IEEE Eng Med Biol Soc. 2004;6:4126–4129. doi: 10.1109/IEMBS.2004.1404151.
35. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442:164–171. doi: 10.1038/nature04970.
36. Kim S-P, Simeral JD, Hochberg LR, Donoghue JP, Black MJ. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. J Neural Eng. 2008;5:455–476. doi: 10.1088/1741-2560/5/4/010.
