Abstract
The brain processes sensory and motor information in a wide range of coordinate systems, from retinal coordinates in vision to body-centered coordinates in areas that control musculature. Here we focus on the coordinate system used in the motor cortex to guide actions and examine physiological and psychophysical evidence for an allocentric reference frame based on spatial coordinates. When the equations of motion governing reaching dynamics are expressed as spatial vectors, each term is a vector cross product between a limb-segment position and a velocity or acceleration. We extend this computational framework to motor adaptation, in which the cross-product terms form adaptive bases for canceling imposed perturbations. Coefficients of the velocity- and acceleration-dependent cross products are assumed to undergo plastic changes to compensate for force-field or visuomotor perturbations. Consistent with experimental findings, each of the cross products had a distinct reference frame, which predicted how an acquired remapping generalized to untrained locations in the workspace. In response to a force field or a visual rotation, mainly the coefficients of the velocity- or acceleration-dependent cross products adapted, leading to transfer in an intrinsic or extrinsic reference frame, respectively. The model further predicted that remapping of visuomotor rotation should under- or overgeneralize in a distal or proximal workspace, respectively. The cross-product bases can explain the distinct patterns of generalization in visuomotor and force-field adaptation in a unified way, showing that kinematic and dynamic motor adaptation need not arise through separate neural substrates.
Keywords: motor control, motor cortex, computational model, generalization, force-field adaptation, visuomotor rotation, reference frames, proprioception
The activities of neurons in the primary motor cortex (M1) reflect a broad mixture of movement-related variables, but how these activities convert a desired movement into joint torques is not clear. Two broad classes of theories have emerged to explain the data: one based on intrinsic joint coordinates, such as joint angles and torques (Evarts 1968; Fetz and Cheney 1980), and another based on extrinsic coordinates, such as limb positions and velocities in space (Georgopoulos et al. 1982, 1986). Another approach to resolving this issue is to look for computational constraints that would distinguish between the two classes of reference frames. This is the approach taken here, in which we examine the complexity of computing the kinematic elements of a reaching movement and the dynamic torques needed to achieve the movement. The predictions of this computational approach are compared with existing psychophysical data on kinematic and dynamic adaptation of reaching movements.
The equations of motion (EOMs) governing reaching dynamics simplify when expressed in spatial vectors rather than joint angles (Tanaka and Sejnowski 2013). Each term in the EOMs is a vector cross product between a spatial position and a time derivative (velocity or acceleration). These cross products have properties similar to those of neurons in the motor cortex and are consistent with a wide range of experimental findings: directional cosine tuning (Georgopoulos et al. 1982), a nonuniform distribution of preferred directions (Scott et al. 2001), workspace dependence of preferred directions (Caminiti et al. 1990), coexistence of multiple reference frames, and spatiotemporal properties of the population vector (Georgopoulos 1988; Scott et al. 2001). The cross products of vectors in a spatial reference frame can be computed by a conventional feedforward neural network and could form an intermediate representation in the primary motor cortex between visual trajectory planning and motor outputs. By keeping all of the sensory and motor information in spatial coordinates, the muscle tensions for reaching can be approximated by a linear combination of these cross products. Computing the reaching dynamics with these spatial vectors is much simpler than using joint angles, which requires computationally expensive solutions to inverse kinematics and inverse dynamics problems.
The ways in which a learned remapping generalizes provide insights into the representations of motor control and motor adaptation in humans (Shadmehr 2004). For example, the remapping acquired within a fixed workspace for one movement direction can be examined for other movement directions starting from the same posture (Donchin et al. 2003; Imamizu et al. 1995; Mattar and Ostry 2007; Tanaka et al. 2009; Thoroughman and Shadmehr 2000; Wu and Smith 2013). Another example is intermanual transfer, in which a remapping learned with one arm is examined for the other arm (Criscimagna-Hemminger et al. 2003; Imamizu and Shimojo 1995; Sainburg and Wang 2002; Taylor et al. 2011; Wang and Sainburg 2003, 2004). Workspace generalization can also be assessed by testing a remapping in a new posture (Baraduc and Wolpert 2002; Ghilardi et al. 1995; Malfait et al. 2002, 2005; Wang and Sainburg 2005). More generally, tests for context-dependent generalization include unimanual/bimanual use (Nozaki et al. 2006), task variation (Braun et al. 2009), movement speed differences (Kitazawa et al. 1997), comparison across limb parts (Krakauer et al. 2006), and target arrangements (Taylor and Ivry 2013).
Our previous work suggested that the cross products represent intermediate dynamic variables in converting a trajectory in workspace coordinates into dynamic variables such as joint torques and muscle tensions (Tanaka and Sejnowski 2013). Within this model, an imposed perturbation such as an external force field or rotated visual feedback can be compensated by adjusting the relevant coefficients of cross products in a feedforward network, so the acquired remapping as a result of motor adaptation is stored in relatively few coefficients. This computational hypothesis makes an explicit prediction: motor adaptation and its generalization should have the geometric properties implicit in these cross-product terms.
Experimental paradigms developed in motor adaptation can be dynamic or kinematic, and modeling studies have hitherto treated them separately (Krakauer et al. 1999). In dynamic adaptation, an external force field is imposed onto the hand or limb segments, or dynamic parameters such as mass or inertial moments are modified. In kinematic adaptation, the perceived kinematics of the hand is perturbed by, for example, an optical prism or computer-generated transformations. These two types of adaptation have been thought to derive from different neural mechanisms because they generalize in different reference frames and, consequently, they have been modeled with different computational frameworks. Here we present an alternative to the hypothesis that dynamic and kinematic adaptation occur through independent mechanisms and consider viscous force-field adaptation and visuomotor rotation adaptation as representative examples of dynamic and kinematic adaptation, respectively. Specifically, we address whether the classic results of motor adaptation, the transfer of force-field and visuomotor rotation adaptation in intrinsic- and extrinsic-based coordinates, respectively, can be reproduced in a single computational model. In the present computational framework, dynamic and kinematic adaptation affect different terms in the EOMs governing reaching, thereby inheriting different reference frames when generalized to an untrained workspace. Thus geometric properties of cross products determine how an acquired remapping generalizes to an untrained workspace. This unified framework for kinematic and dynamic adaptation clarifies other aspects of motor learning. We have used Cartesian coordinates here for convenience, but other spatial coordinate systems could also be used; what is important is that the reference frame is not moving with respect to the world.
METHODS
Spatial representation for computing inverse dynamics in reaching movements.
Joint-angle coordinates are popular in robotics (Fig. 1A), but require the solution of nonlinear inverse kinematics and inverse dynamics equations, which are ill-posed and may not have unique solutions (Atkeson 1989). Even when the joint angles are uniquely determined for a multijointed limb, the equations based on joint angles are prohibitively complicated because the reference frames are moving, as seen even in the case of the simplest two-link model:
τ1 = (I1 + I2 + m1r1² + m2r2² + m2l1² + 2m2l1r2 cos θ2)θ̈1 + (I2 + m2r2² + m2l1r2 cos θ2)θ̈2 − m2l1r2 sin θ2 (2θ̇1θ̇2 + θ̇2²) + B1θ̇1   (1)

τ2 = (I2 + m2r2² + m2l1r2 cos θ2)θ̈1 + (I2 + m2r2²)θ̈2 + m2l1r2 sin θ2 θ̇1² + B2θ̇2   (2)
Here θ1 and θ2 are the shoulder and elbow angles, respectively (Fig. 1A), mi, Ii, li, and ri are the mass, the moment of inertia, the full length, and the length to the center of mass (COM) of the ith limb segment, respectively, and Bi is the mechanical viscous coefficient of the ith joint. Although sensorimotor areas in the parietal and frontal lobes have been thought to perform these inverse kinematics and inverse dynamics computations, the transformations that neurons in these areas carry out are still not understood. The EOMs in the joint-angle representation have been used for modeling dynamic motor adaptation in a number of studies (Berniker and Kording 2008; Shadmehr and Mussa-Ivaldi 1994), but not for modeling kinematic motor adaptation (Cheng and Sabes 2007; Ghahramani and Wolpert 1997; Kitazawa et al. 1995; Tanaka et al. 2009).
Fig. 1.
Motor control in joint-angle and spatial coordinates. A link model of the arm represented in joint-angle space (A) and in spatial coordinates (B) is shown. Joint angles (θ1, θ2) represent a configuration of the two-link model in joint-angle space, and spatial vectors (X10, X20, X21) are used for the spatial representation. C: visuomotor transformation proposed by the model. Task-oriented endpoint position and movement vectors are first determined, and those vectors in turn are decomposed into limb segment vectors. Vector cross products are then computed as intermediate variables, and joint torques or muscle tensions are finally generated as weighted sums of cross products. The readout coefficients in the computation of joint torques from cross products are determined in nonperturbed conditions according to Newtonian dynamics. This study posits that plasticity in the readout coefficients (i.e., weights from vector cross products to joint torques) provides adaptive changes for both dynamic and kinematic motor adaptation. See text for definition of terms.
Several invariant characteristics of arm movements are expressed in spatial variables (Morasso 1981). Therefore, we reformulated reaching using the spatial positions of limbs in spatial coordinates (Fig. 1B), which led to equations that are considerably more concise with physically intuitive interpretations (Hinton 1984; Tanaka and Sejnowski 2013). For an n-link system in the horizontal plane the EOMs become:
| (3) |
where Xj,i and Xj,0 are the location vectors of the jth segment measured with respect to the ith segment endpoint and to the shoulder, respectively, and V and A are their velocities and accelerations. Bold fonts denote vectors and matrices (X, V and A), which hereafter are referred to collectively as limb segment vectors. The operator […]z extracts the z-component of a vector. Only the z components of cross products were considered because, in our study, the position, velocity and acceleration vectors were confined to the same horizontal plane, as assumed in many adaptation experiments. Note that the joint torque at the ith joint (τi) consists of cross products between position and velocity/acceleration vectors of limb segments.
Equation 3 relates kinematic variables (spatial positions and derivatives on the right-hand side) directly to dynamic variables (joint torques on the left-hand side), thereby bridging the dynamic and kinematic views of the motor cortex. The first term on the right-hand side represents the inertial dynamics, and the second represents the mechanical viscosity of the limb. Given X, V and A, Eq. 3 computes the inverse-dynamics of joint torques without having to compute or represent the joint angles and has several computational advantages compared with the EOMs for a joint-angle representation.
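For planar vectors the bracketed z-component has a simple closed form: [X × V]z = XxVy − XyVx = ||X|| ||V|| sin∠(X, V), so each basis term is the product of the lengths of a limb-segment position vector and a movement vector, modulated by the sine of the angle between them.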
Two-link model.
A simple planar two-link model based on Eq. 3 was used for simulating human psychophysical experiments in the horizontal plane:
τ1 = (m1 + I1/r1²)[X10 × A10]z + m2[X20 × A20]z + (I2/r2²)[X21 × A21]z + B1θ̇1   (4)

τ2 = m2[X21 × A20]z + (I2/r2²)[X21 × A21]z + B2θ̇2   (5)
Although Eqs. 4 and 5 are based on spatial vectors and are mathematically equivalent to Eqs. 1 and 2 based on joint angles, they suggest different computational schemes for computing reaching dynamics. The EOMs based on spatial vectors require four acceleration-dependent cross-product terms from three limb segment vectors:
[X10 × A10]z, [X20 × A20]z, [X21 × A20]z, [X21 × A21]z   (6)
Here X10, X20 and X21 are spatial vectors connecting the shoulder and the COM of the arm, the shoulder and the COM of the forearm, and the elbow and the COM of the forearm, respectively (Fig. 1B), and A10, A20 and A21 are the corresponding accelerations of these spatial vectors. In addition to the acceleration-dependent terms, the motor cortex also has to compensate for the viscous forces generated in lengthening muscles, which are represented by two velocity-dependent cross products:
[X10 × V10]z, [X21 × V21]z   (7)
which together with the four acceleration-dependent cross products in Eq. 6 form a basis set for the inverse dynamics model. We assumed that the coefficients of these cross-product terms undergo adaptive changes to counteract imposed perturbations. The eight biomechanical parameters in the coefficients were adopted from Shadmehr and Mussa-Ivaldi (1994) and our previous paper (Table 1).
Table 1.
Model parameters for the two-link model
| | Mass (mi) | Inertial Moment (Ii) | Total Length (li) | COM Length (ri) |
|---|---|---|---|---|
| Segment 1 | 1.93 kg | 0.0141 kg m2 | 0.33 m | 0.165 m |
| Segment 2 | 1.52 kg | 0.0188 kg m2 | 0.34 m | 0.19 m |
COM, center of mass.
To summarize, we proposed a computation of reaching dynamics with spatial cross products rather than joint angles that are conventionally used (Fig. 1C). First, the endpoint position and movement vectors are computed in the workspace coordinate with a given goal (i.e., target location); these endpoint vectors are then decomposed into limb-segment position and movement vectors and the cross products of limb-segment vectors are computed; and finally joint torques and/or muscle tensions are reconstructed by taking weighted sums of cross-product terms.
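The following is a minimal numerical sketch of this pipeline for the planar two-link model, written in Python. It is illustrative only: the elbow-up inverse-kinematics solution, the numerical differentiation, and the particular set and ordering of the six cross-product terms (corresponding to Eqs. 6 and 7) are our assumptions, not the implementation used in the paper.

```python
import numpy as np

L1, L2 = 0.33, 0.34          # segment lengths (Table 1)
R1, R2 = 0.165, 0.19         # lengths to the segment COMs (Table 1)

def two_link_ik(hand):
    """Planar two-link inverse kinematics (elbow-up branch) for a hand position in shoulder coordinates."""
    x, y = hand
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    th2 = np.arccos(np.clip(c2, -1.0, 1.0))
    th1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(th2), L1 + L2 * np.cos(th2))
    return th1, th2

def segment_vectors(th1, th2):
    """Limb-segment vectors X10 (shoulder->arm COM), X20 (shoulder->forearm COM), X21 (elbow->forearm COM)."""
    e1 = np.array([np.cos(th1), np.sin(th1)])
    e2 = np.array([np.cos(th1 + th2), np.sin(th1 + th2)])
    X10, X21 = R1 * e1, R2 * e2
    X20 = L1 * e1 + X21
    return np.stack([X10, X20, X21])

def cross_z(a, b):
    """z-component of the cross product of two planar vectors."""
    return a[..., 0] * b[..., 1] - a[..., 1] * b[..., 0]

def cross_product_basis(hand_traj, dt):
    """Six cross-product basis terms along a hand trajectory (T x 2 array), returned as a T x 6 array."""
    X = np.array([segment_vectors(*two_link_ik(h)) for h in hand_traj])   # (T, 3, 2)
    V = np.gradient(X, dt, axis=0)                                        # segment velocities
    A = np.gradient(V, dt, axis=0)                                        # segment accelerations
    X10, X20, X21 = X[:, 0], X[:, 1], X[:, 2]
    A10, A20, A21 = A[:, 0], A[:, 1], A[:, 2]
    V10, V21 = V[:, 0], V[:, 2]
    return np.stack([cross_z(X10, A10), cross_z(X20, A20),    # acceleration-dependent terms (Eq. 6)
                     cross_z(X21, A20), cross_z(X21, A21),
                     cross_z(X10, V10), cross_z(X21, V21)],   # velocity-dependent terms (Eq. 7)
                    axis=1)

# Joint torques as a linear read-out: tau(t) = W @ c(t), with W a 2 x 6 coefficient matrix (Eq. 9).
```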
Vector cross products as neuronal firing rates.
The computation of reaching dynamics based on spatial vectors requires the computation of cross-product terms as an intermediate variable in converting kinematic variables (limb segment vectors) into dynamic variables (joint torques and muscle tensions). If the motor cortex is involved in this visuomotor transformation, then the firing rates of individual neurons in the motor cortex should represent individually or collectively all of the vector cross products in Eq. 3 and Eqs. 4 and 5 (Tanaka and Sejnowski 2013):
rA ∝ [X × A]z,  rV ∝ [X × V]z   (8)
Each term is a multiplicative response to the spatial position of a limb segment and its velocity or acceleration, modulated by the sine of the angle between the two vectors; we propose that these combinations are represented by neurons in the motor cortex. The model neurons are either “acceleration cells” (rA) or “velocity cells” (rV). We have previously shown that this hypothesis can explain a wide range of experimental findings reported in the motor cortex as mathematical properties of vector cross products, such as directional cosine tuning, nonuniform distribution of preferred directions, postural dependence of preferred directions, coexistence of multiple reference frames, spatiotemporal properties of the population vector, and (rectified) linear computation of muscle tensions (Tanaka and Sejnowski 2013). Vector cross products can be computed in a feedforward neural network with the known properties of motoneurons. The joint torques can then be computed from these neuronal responses through a single layer of weights in a feedforward neural network model, as in Eq. 3 or Eqs. 4 and 5. In this framework, the motor cortex implements an internal inverse model of the arm that converts desired movements of the limbs in spatial coordinates into joint torques and muscle forces.
The cross products X × A and X × V remain invariant if the vectors X, V and A are all rotated by the same amount in the horizontal plane; in other words, the model neurons assumed to represent the cross products (Eq. 8) should exhibit the same activities if the shoulder angle and the movement direction are rotated by the same amount. This property is clear when movements are expressed in terms of cross products but not when joint angles are used, and it leads to a testable prediction for how motor adaptation learned at one workspace generalizes to untrained workspaces in psychophysical experiments. Although muscle dynamics are more complicated than the dynamics of the cross products, they have a similar rotational dependence on posture. Muscle actions (forces and moments generated by electrically stimulating muscles) rotate systematically in a posture-dependent manner (Buneo et al. 1997), consistent with our previous model of muscle activities as a rectified linear sum of cross products. Therefore, we expect our argument based on the geometric properties of cross products to hold even when muscle actions are taken into consideration.
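This invariance can be verified in a few lines; the snippet below is a sketch for illustration and is not part of the original simulations.

```python
import numpy as np

def cross_z(a, b):
    return a[0] * b[1] - a[1] * b[0]

def rot(phi):
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

rng = np.random.default_rng(0)
X, V = rng.normal(size=2), rng.normal(size=2)             # arbitrary planar position and velocity vectors
R = rot(np.deg2rad(50.0))                                 # rotate both vectors by the same angle
assert np.isclose(cross_z(X, V), cross_z(R @ X, R @ V))   # [RX x RV]_z equals [X x V]_z
```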
Adaptive changes in linear read-out computation.
In Eqs. 4 and 5, the coefficients in front of the cross products for computing the joint torques are determined so that a visually planned trajectory is recovered in the nonperturbed setting (i.e., with no viscous force field and no visuomotor rotation). Since the cross products form basis functions, a simple way to cancel an imposed force or visuomotor transformation is to adapt the appropriate coefficients:
τ(W) = Wc,  c = ([X10 × A10]z, [X20 × A20]z, [X21 × A20]z, [X21 × A21]z, [X10 × V10]z, [X21 × V21]z)T   (9)
where the 12 coefficients {wij} can be regarded as linear read-out weights from the cross products to the joint torques. The first four columns in the coefficient matrix W are coefficients of the acceleration-dependent bases, and the last two columns are coefficients of the velocity-dependent bases. This linear read-out computation from cross products to joint torques is similar to those proposed by Churchland and Shenoy (Churchland et al. 2012; Shenoy et al. 2011, 2013) and independently by Sussillo and Abbott (2009); one critical difference is that in our framework the input neurons compute the cross products that are explicitly related to reaching dynamics through Newtonian dynamics, whereas in their frameworks the activities of input neurons emerge through interactions with other input neurons that are connected recurrently and randomly.
We assumed that the coefficients {wij} were constant over the entire workspace, and therefore independent of position, so the joint torques depended on position only through the cross products. In previous studies, generalization of motor adaptation depended on gain-field-like modulation according to the proximity of extrinsic and intrinsic variables (Baraduc and Wolpert 2002; Brayanov et al. 2012; Hwang et al. 2003), which was not modeled in this study.
The model predicted that patterns of postadaptation generalization should reflect the properties of input neurons representing the specific cross products. When there are no external perturbations, the coefficient matrix is:
| (10) |
where all the coefficients are nonnegative. The last two columns, the coefficients of cross products between position and velocity, represent angular viscosity [a mechanical property of the joints (Hogan 1984)] and are set to zero because they have negligible effects on the limb under unperturbed conditions (Hollerbach 1982). Let τ(adapt) denote the adapted joint torque vector that recovers smooth, straight trajectories toward the target; the coefficients in Eq. 9 are optimized so as to minimize the squared error between τ(adapt) and τ(W),
E(W) = Σt ‖τ(adapt)(t) − τ(W)(t)‖²   (11)
τ(adapt) will be found for the two best studied motor adaptation paradigms: viscous force-field perturbations and visuomotor rotations. Two questions arise. First, can the linear read-out (Eq. 9) cancel imposed perturbations sufficiently to recover a straight trajectory toward the target? Second, can a postadaptation remapping reproduce the experimentally observed patterns of generalization to an untrained workspace? We examined these issues by numerically simulating the adaptive changes.
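Because the error in Eq. 11 is quadratic in W, the optimization reduces to ordinary least squares. A minimal sketch in Python (assuming the basis values and the adapted torques have been stacked over time and over the eight trained movement directions):

```python
import numpy as np

def fit_readout(c, tau_adapt):
    """Least-squares fit of the 2 x 6 read-out matrix W so that W @ c(t) approximates tau_adapt(t).

    c         : (T, 6) cross-product basis values along the desired trajectories
    tau_adapt : (T, 2) adapted joint torques to be reproduced (Eq. 11)
    """
    W_T, *_ = np.linalg.lstsq(c, tau_adapt, rcond=None)   # solves c @ W^T ~ tau_adapt
    return W_T.T                                          # W has shape (2, 6)
```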
Desired trajectory of minimum-jerk criterion.
An endpoint trajectory can be computed in two ways: either with feedback control as a function of the current state of the arm and a target position, or with open-loop feedforward control as a function of time and target position. Feedforward control of the endpoint trajectory was chosen for computational simplicity with a point-to-point, minimum jerk trajectory (Flash and Hogan 1985):
Xhand(t) = Xinitial + (Xtarget − Xinitial)[10(t/tf)³ − 15(t/tf)⁴ + 6(t/tf)⁵]   (12)
where an initial position Xinitial and target position Xtarget were given in the horizontal plane, and Xhand was the distal end of the forearm for the two-link model. The cross products X × A and X × V were first computed as a function of time by using Eq. 12 as the desired trajectory, and then the joint torques were computed from the cross products by using Eq. 9 as the internal dynamics model. For a single initial position, eight targets uniformly distributed on a circle of 10-cm radius separated by 45° were considered both for training and generalization.
In simulating human psychophysical experiments, the movement amplitude was 10 cm and the movement duration tf was 800 ms for force-field and 500 ms for visuomotor rotation adaptations. Joint torques were computed in a feedforward manner as described above using Eqs. 9 and 12. For simulations of force-field adaptation, a feedback controller that modeled a reflex response was included (see Trajectory simulation) in addition to the feedforward controller. An additional 200 ms of stationary endpoint [i.e., Xhand(t) = Xtarget for t > tf] was appended to the minimum-jerk trajectory of 800 ms, allowing the feedback controller to bring the hand to the target. The shoulder and elbow angles (θ1 and θ2, respectively) were first uniquely determined from Xhand; the segment-position vectors (X10, X20 and X21 for the two-link model) were then computed accordingly, from which the corresponding cross products were obtained.
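A short Python sketch of the minimum-jerk profile in Eq. 12 (the function name and sampling are ours, for illustration):

```python
import numpy as np

def minimum_jerk(x_initial, x_target, t_f, n_steps=101):
    """Point-to-point minimum-jerk endpoint trajectory (Flash and Hogan 1985; Eq. 12)."""
    t = np.linspace(0.0, t_f, n_steps)
    s = (t / t_f)[:, None]                                  # normalized time
    profile = 10 * s**3 - 15 * s**4 + 6 * s**5              # fifth-order minimum-jerk time course
    x = np.asarray(x_initial) + profile * (np.asarray(x_target) - np.asarray(x_initial))
    return t, x

# Example: a 10-cm reach straight ahead of the shoulder with an 800-ms duration, as in the force-field simulations.
t, hand_traj = minimum_jerk([0.0, 0.40], [0.0, 0.50], t_f=0.8)
```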
Trajectory simulation.
The imposed dynamic force-field and kinematic visuomotor rotation perturbations were applied in the joint-torque space of Eq. 9, but the results of the psychophysical experiments were reported in terms of endpoint trajectories. We therefore simulated the two-link dynamics with the joint torques supplied by the model to compare the model with the experiments. For the viscous force-field adaptation, a velocity-dependent external force was imposed, and for the visuomotor rotation the spatial coordinates were rotated.
For simulations that evaluated the degree of generalization in force-field adaptation, two types of perturbation were imposed on the hand. One was an extrinsic force field, Fextrinsic = BVhand, that depended on the endpoint velocity Vhand in spatial coordinates, with the corresponding joint torques τextrinsic = JT(θ)Fextrinsic, where J(θ) is the Jacobian matrix of the transformation between joint-angle and spatial coordinates (JT denotes its transpose). The B matrix was (N·s/m). A second type of perturbation was an intrinsic force field, τintrinsic = Wθ̇, which depended on the joint angular velocity θ̇ = (θ̇1, θ̇2)T, where the matrix W was given by JRTBJR, with JR denoting the Jacobian matrix at the trained workspace (defined below).
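A sketch of the two perturbations in Python (the function names are ours; the numerical B matrix is not reproduced here because it is not given in the text):

```python
import numpy as np

L1, L2 = 0.33, 0.34   # segment lengths from Table 1

def jacobian(theta1, theta2):
    """Endpoint Jacobian of the planar two-link arm, d(hand position)/d(joint angles)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def extrinsic_field_torque(theta, theta_dot, B):
    """Joint torques of the extrinsic field F = B V_hand: tau = J(theta)^T B J(theta) theta_dot."""
    J = jacobian(*theta)
    return J.T @ (B @ (J @ theta_dot))

def intrinsic_field_torque(theta_dot, B, theta_trained):
    """Joint torques of the same field frozen in joint coordinates at the trained posture: tau = J_R^T B J_R theta_dot."""
    J_R = jacobian(*theta_trained)
    return (J_R.T @ B @ J_R) @ theta_dot
```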
For Fig. 2, the trained and test workspaces were (θ1, θ2) = (15°, 85°) and (θ1, θ2) = (65°, 85°), respectively, to reproduce the previous results in Shadmehr and Mussa-Ivaldi (1994). For Figs. 3 and 4, the trained workspace was (θ1, θ2) = (36°, 107°) [or (x, y) = (0 cm, 40 cm)], and the test workspace was varied systematically to examine generalization patterns to various initial postures.
Fig. 2.
Behavioral generalization after adapting to a viscous force field. Hand trajectories before adaptation (A), after adaptation (B), and after-effects to the viscous force field at the right workspace (C) are shown. Simulated model trajectories and desired minimum-jerk trajectories are solid and dashed lines, respectively (left) and are compared with the corresponding experimental trajectories (right). Eight directions of movement are color-coded. Generalization to hand trajectories in the left workspace when an intrinsic (D) or an extrinsic force field (E) was imposed after the linear readout model was trained at the right workspace. [The experimental figures are adapted from Figs. 9A, 9D, 13D, 15B, and 15A, respectively, from Shadmehr and Mussa-Ivaldi (1994), with permission.]
Fig. 3.
Generalization of intrinsic viscous force field to multiple workspace locations. The model learned the force field at the central location [(x, y) = (0, 40)], and 14 peripheral locations were tested with the intrinsic-based force field for generalization; the locations were positioned at one of three distances from the shoulder (30, 40 or 50 cm) and in one of five directions (−30°, −15°, 0°, 15° or 30°). A: polar plots of directional errors at the training workspace (center, red) and the test workspaces (peripheral, black). Solid lines at each workspace represent directional errors for eight target directions, and dashed lines represent null directional errors for reference. They overlapped almost completely, indicating that there was little directional error at any location. The scale ticks along the radial axes indicate 15° directional errors (positive and negative values for clockwise and counterclockwise deviations, respectively). Hand trajectories distal [(x, y) = (0 cm, 50 cm); B] and proximal [(x, y) = (0 cm, 30 cm); C] to the body, and left [(x, y) = (−25 cm, 35 cm); D] and right [(x, y) = (25 cm, 35 cm); E] of the center workspace are shown. The corresponding locations are indicated by the letters in A.
Fig. 4.
Generalization of extrinsic viscous force field to multiple workspace locations. A: polar plots of directional errors at the training workspace (center, red) and the test workspaces with the extrinsic-based force field (peripheral, black). Hand trajectories distal [(x, y) = (0 cm, 50 cm); B] and proximal [(x, y) = (0 cm, 30 cm); C] to the body, and left [(x, y) = (−25 cm, 35 cm); D] and right [(x, y) = (25 cm, 35 cm); E] of the center workspace are shown. The same format was used as in Fig. 3.
For visuomotor rotation adaptation (Fig. 5), we simulated the experiment in Krakauer et al. (2000). A counterclockwise (CCW) rotation of 60° was imposed onto the visual feedback of the hand and adapted in the trained workspace [(θ1, θ2) = (45°, 90°)] by optimizing the coefficients to cancel the imposed rotation. The optimized coefficients were used to determine the degree of generalization at test postures [(θ1, θ2) = (0°, 90°) and (θ1, θ2) = (90°, 90°)] obtained by changing the shoulder angle with the elbow angle unchanged. For Fig. 6, the degree of generalization to multiple starting postures was tested by training the linear model on a clockwise (CW) rotation of 60° at the central workspace (θ1, θ2) = (36°, 107°) [or (x, y) = (0 cm, 40 cm)] and then testing the learned remapping for transfer to other workspaces in the reachable region.
Fig. 5.

Directional generalization of visuomotor rotation remapping. A: cursor trajectories from adapted movements (solid) and desired trajectories (dashed) at the trained workspace [(θ1, θ2) = (45°, 90°)]. The cursor trajectories were obtained by rotating the simulated hand trajectories by the imposed rotation so that the comparison with the desired trajectories was straightforward. Cursor trajectories from movements adapted in the trained workspace (solid) and desired trajectories (dashed) in the left workspace [(θ1, θ2) = (90°, 90°); B] and in the right workspace [(θ1, θ2) = (0°, 90°); C] are shown.
Fig. 6.
Under- and over-generalization of rotated remapping for visuomotor rotation. A: cursor (solid) and desired (dashed) trajectories at the trained workspace [(x, y) = (0 cm, 40 cm)]. Under-generalization at a distal posture [(x, y) = (0 cm, 45 cm); B], and over-generalization at a proximal posture [(x, y) = (0 cm, 35 cm); C] are shown. D: the degree of generalization over the entire workspace. Color at each point in the workspace indicates the degree of counterrotation averaged over eight movement directions. The gray dashed line indicates iso-distance locations 40 cm from the shoulder. Values larger than 60° represent over-generalization, and values smaller than 60° represent under-generalization. A, B, and C indicate the locations of the starting hand position for A, B, and C, respectively.
For viscous force-field adaptation simulations, we included a feedback controller that compensated for deviations from the trajectory using joint-angle coordinates. This modeled a stretch reflex caused by the imposed force field, corresponding to an experimental setting in which subjects were allowed to make movement corrections based on online visual feedback (Shadmehr and Mussa-Ivaldi 1994). The position- and velocity-dependent feedback torques were computed from the joint angles:
τ(fb) = Kp(θdesired − θ) + Kv(θ̇desired − θ̇)   (13)
where Kp and Kv stand for the elastic and viscous feedback matrices, respectively (the parameter values used in the simulations were adopted from Shadmehr and Mussa-Ivaldi 1994 and are summarized in Table 2). The joint torques were the sum of the feedforward (Eq. 9) and feedback (Eq. 13) controllers. The trajectories were computed by solving the EOMs with the viscous force field (F = BVhand), the linear read-out (Eq. 9) and the feedback torques (Eq. 13). For visuomotor rotation adaptation, subjects in several studies were instructed to make rapid, point-to-point or out-back reaching movements without online correction (Krakauer et al. 2000, 2004; Tanaka et al. 2009) [but see Taylor et al. (2013)], so feedback control was not included in the simulation.
Table 2.
Gain matrices for position and velocity feedback control
| Position Gain Matrix | Velocity Gain Matrix |
|---|---|
Kp and Kv, elastic and viscous feedback matrices, respectively.
Summary of simulation steps.
The steps of the simulations are summarized here for clarity. First, the coefficient matrix was optimized as follows. For given initial and target points (Xinitial and Xtarget) and a movement duration (tf), a desired endpoint trajectory was computed from the minimum-jerk formula of Eq. 12. For visuomotor rotation adaptation, this minimum-jerk trajectory was counterrotated to cancel the imposed rotation so that the adapted torque computed below recovered a trajectory toward the target. Then, from the trajectories of the limb segment vectors (X10, X20 and X21), their temporal derivatives were computed (V10, V20 and V21 for velocity and A10, A20 and A21 for acceleration), from which the cross products in Eqs. 6 and 7 were obtained. With these cross products, the adapted torque τ(adapt) in Eq. 11 was computed as τ(adapt) = τ(Wnull) for visuomotor rotation and τ(adapt) = τ(Wnull) − τ(force field) for viscous force-field adaptation, where the coefficient matrix Wnull was defined in Eq. 10 and τ(force field) was either τextrinsic or τintrinsic, depending on the type of imposed force field. Finally, the coefficients of the cross products were optimized to minimize the squared error between the adapted torque and the linear approximation τ(W) in Eq. 11. The squared error was averaged over all movement directions (in our case the eight cardinal directions) for a given initial starting posture. This optimization problem had a unique solution because the squared error is a convex quadratic function of W.
Using the optimized coefficient matrix, the feedforward torque τ(W) was computed from Eq. 9. For visuomotor adaptation, the trajectory was simulated with only the feedforward torque as
I(θ)θ̈ + G(θ, θ̇) = τ(W)   (14)
For viscous force-field adaptation, the feedback torque in Eq. 13 was included with the imposed force field torque and the feedforward torque:
I(θ)θ̈ + G(θ, θ̇) = τ(W) + τ(fb) + τ(force field)   (15)
Here I(θ) and G(θ, θ̇) denote the inertia matrix and the centripetal and Coriolis force terms, respectively. Note that the joint angles were used here only for the purpose of the simulations.
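A minimal Python sketch of this plant, using the standard planar two-link inertia and Coriolis terms with the parameters in Table 1 (joint viscosity is omitted for brevity, and the torque function, integration scheme, and step size are our illustrative choices):

```python
import numpy as np

M1, M2 = 1.93, 1.52        # segment masses (Table 1)
I1, I2 = 0.0141, 0.0188    # moments of inertia about the COMs (Table 1)
L1 = 0.33                  # upper-arm length (Table 1)
R1, R2 = 0.165, 0.19       # lengths to the segment COMs (Table 1)

def inertia(theta2):
    """Inertia matrix I(theta) of the planar two-link arm."""
    a = I1 + I2 + M1 * R1**2 + M2 * (L1**2 + R2**2)
    b = M2 * L1 * R2
    d = I2 + M2 * R2**2
    return np.array([[a + 2 * b * np.cos(theta2), d + b * np.cos(theta2)],
                     [d + b * np.cos(theta2),     d]])

def coriolis(theta2, theta_dot):
    """Centripetal and Coriolis torques G(theta, theta_dot)."""
    b = M2 * L1 * R2
    d1, d2 = theta_dot
    return np.array([-b * np.sin(theta2) * (2 * d1 * d2 + d2**2),
                      b * np.sin(theta2) * d1**2])

def simulate(torque_fn, theta0, theta_dot0, t_f, dt=1e-3):
    """Integrate I(theta) theta_ddot + G(theta, theta_dot) = tau by forward Euler (cf. Eqs. 14 and 15).

    torque_fn(t, theta, theta_dot) should return the total joint torque, e.g., the feedforward
    read-out plus, for force-field simulations, the feedback and imposed force-field torques.
    """
    theta = np.asarray(theta0, float).copy()
    theta_dot = np.asarray(theta_dot0, float).copy()
    trajectory = [theta.copy()]
    for step in range(int(round(t_f / dt))):
        tau = torque_fn(step * dt, theta, theta_dot)
        theta_ddot = np.linalg.solve(inertia(theta[1]), tau - coriolis(theta[1], theta_dot))
        theta_dot = theta_dot + dt * theta_ddot
        theta = theta + dt * theta_dot
        trajectory.append(theta.copy())
    return np.array(trajectory)
```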
RESULTS
This study focuses on workspace generalization to understand the coordinate system or systems used for motor adaptation. Pioneering studies examined the degree to which motor adaptation generalizes to an untrained workspace by changing only the shoulder angle with the elbow angle unchanged in the horizontal plane (Krakauer et al. 2000; Shadmehr and Mussa-Ivaldi 1994). In this case, all of the limb segment vectors (X, V and A) undergo the same rotation in the horizontal plane, transforming into R(φ)X, R(φ)V and R(φ)A, respectively, where R(φ) is the rotation matrix around the z-axis by angle φ. In contrast, the cross products are invariant, [R(φ)X] × [R(φ)V] = X × V and [R(φ)X] × [R(φ)A] = X × A, owing to the geometry of cross products. Therefore, for the cross-product terms, the movement direction vectors V and A with a given posture X are equivalent to the rotated direction vectors R(φ)V and R(φ)A with the new posture R(φ)X (Fig. 2). This shoulder-based pattern of generalization is obvious in the cross-product representation but not in the joint-angle representation and is critical for reproducing the experimentally observed patterns of motor generalization.
Another, less investigated form of motor generalization is to vary the elbow and shoulder angles while keeping the shoulder-to-hand direction unchanged (Brayanov et al. 2012; Hwang et al. 2003). In contrast to experiments in which only the shoulder rotates, the limb segment vectors then undergo rotations and dilations that differ from vector to vector. This type of posture change also occurs when a motor remapping generalizes to a workspace that is proximal or distal to the trained workspace. In related electrophysiological and modeling studies, the preferred directions of motor cortical neurons changed as the hand position was systematically altered (Ajemian et al. 2000; Caminiti et al. 1990; Wu and Hatsopoulos 2006, 2007). Numerical simulations were performed to test the predictions of the model for how motor remapping generalized to untrained workspaces.
Generalization in force-field adaptation.
We first investigated how a learned remapping in one workspace generalized to another workspace, a procedure that has been used to uncover the neural representation of movement primitives responsible for motor adaptation in human psychophysical experiments (Shadmehr 2004; Thoroughman and Shadmehr 2000). Shadmehr and Mussa-Ivaldi (1994) imposed a viscous force field on the hand that was proportional to hand velocity, Fextrinsic = BVhand. A learned viscous force field maximally transferred from one location to another location within the same limb when the force field was represented in an intrinsic, shoulder-based reference frame but not in the extrinsic, workspace reference frame (Shadmehr and Mussa-Ivaldi 1994). They modeled and reproduced the results of this generalization experiment using bases of joint angular velocities rather than extrinsic endpoint vectors, but stopped short of uncovering the underlying computational explanation. We here show that the cross products and the linear readout can reproduce the experimental findings parsimoniously.
The exact inverse model of arm dynamics under the influence of the external viscous force field included inertial and viscous force-field terms:
τ(adapt) = τ(inertia) + τ(viscous)   (16)
where the inertial torques for a desired trajectory and its derivatives are:
| (17) |
and the torques required to cancel the viscous force-field terms are:
τ(viscous) = −JT(θ)BVhand   (18)
where the negative sign was needed to cancel the imposed force field. When the torques in Eq. 18 are added to the inverse model, the force field is completely canceled, and straight trajectories to targets are recovered.
The first step in canceling the force field was to adjust the linear readout coefficients of the cross-product bases to minimize the squared error between τ(adapt) in Eq. 16 and τ(W) in Eq. 9. Because the cost function is quadratic and the read-out is linear, the coefficients were computed by solving a pseudoinverse problem. Note that the torques in Eq. 18 are proportional to Vhand, which is a product of the Jacobian matrix and joint angular velocities, and that the joint angular velocities are expressed as cross products between limb positions and velocity vectors as
θ̇1 = [X10 × V10]z/r1²,  θ̇2 = [X21 × V21]z/r2² − [X10 × V10]z/r1²   (19)
therefore, to approximate the torques in Eq. 18 with cross-product bases, the coefficients of the velocity-dependent cross products were expected to change.
The joint angles for the initial posture were (θ1, θ2) = (15°, 85°) in the right workspace (trained workspace) and (θ1, θ2) = (65°, 85°) in the left workspace (the untrained test workspace) as in the experiment. The coefficients before the training in the force field were (the numerical values for Eq. 10):
| (20) |
Using these coefficients, the endpoint trajectories in the force field were “hooked”: the imposed force field initially deviated the trajectories away from the target before the feedback control brought them back to the target (Fig. 2A). The approximate inverse model in Eq. 9 was then obtained by optimizing the coefficients in the force field (Eq. 11) at the right workspace to cancel the imposed force field. The values of the optimized coefficients were
| (21) |
With these optimized coefficients, the endpoint trajectories became straight, indicating that the linear read-out approximately canceled the imposed force field (Fig. 2B). The largest changes in these coefficients were in the velocity-dependent cross products (the last two columns), whereas the coefficients for the acceleration-dependent cross products (the first four columns) were less affected, as expected because the imposed force field was velocity dependent. To simulate the after-effects of adaptation, hand trajectories were simulated with the optimized coefficients but without the force field. The trajectories mirrored the initial errors (the hooks were opposite to those before adaptation) (Fig. 2C), indicating that an internal model of the force field had been acquired. These simulation results reproduced the experimental findings (Shadmehr and Mussa-Ivaldi 1994).
We then examined workspace generalization using the adapted coefficients in Eq. 21 for two types of force field imposed in the left workspace: one based on the spatial or extrinsic coordinates, Fextrinsic = BVhand, and another based on the joint-angle coordinates, Fintrinsic = JL−TJRTBJRJL−1Vhand, where JL and JR are the Jacobian matrices at the centers of the left and right workspaces, respectively. The force field learned at the right workspace did not transfer to the left workspace if the force field remained invariant in the external spatial coordinates (Fig. 2D), but fully transferred if the force field in the left workspace was invariant in the joint-angle coordinates (Fig. 2E). Again, these patterns of generalization reproduced those reported in the psychophysical experiments (Shadmehr and Mussa-Ivaldi 1994). This shoulder-based generalization occurred because the adaptive terms in the inverse model have the form X × A and X × V, which are invariant when the posture and movement vectors are rotated by the same angle.
In human psychophysical experiments, a limited number of workspaces outside the adaptation location, typically one and at most two, have been used to assess generalization of motor adaptation to a new workspace (Krakauer et al. 2000; Malfait et al. 2002; Shadmehr and Mussa-Ivaldi 1994; Wang and Sainburg 2005). To understand the coordinate frame in which generalization occurs, however, the entire workspace should be examined systematically. We, therefore, simulated the patterns of generalization to a number of workspace locations, for both extrinsic and intrinsic force fields.
Generalization of the intrinsic force field learned at the central position, (x, y) = (0 cm, 40 cm), was tested at 14 peripheral locations (Fig. 3A). The directional error between the actual initial trajectory (300 ms) and the desired trajectory was used as a measure of generalization. For the intrinsic force field, the remapping transferred almost completely to all of the untrained workspaces (Fig. 3A). In fact, the trajectories distal, proximal, left and right of the trained workspace appeared indistinguishable from those at the trained workspace (Fig. 3, B–E). This complete generalization was expected for the intrinsic force field because the remapping was acquired mostly through the velocity-dependent cross products, which are linear sums of joint angular velocities (see Eq. 19).
An extrinsic force field (Fextrinsic = BVhand) was next learned at the central position and tested at the 14 peripheral positions. The values of optimized coefficients were:
| (22) |
and once again most of the changes were in the coefficients of the velocity-dependent cross products. In contrast to generalization of the intrinsic force field, transfer of the motor adaptation to the extrinsic force field to untrained workspaces was more complex (Fig. 4A): when the shoulder-to-hand direction coincided with that of the training, straight trajectories were conserved, indicating almost complete transfer of motor adaptation (Fig. 4, B and C). For workspaces to the left or right of the trained workspace, however, the distributions of directional errors were nonuniform over the movement directions and skewed in specific ways (Fig. 4, D and E). Experiments could be performed to test these predictions.
Generalization in visuomotor rotation adaptation.
Adapting to a visually rotated environment, known as visuomotor rotation, is a well-studied kinematic adaptation paradigm. Initially, rotated visual feedback causes errors in the initial movement direction and endpoint position. With repetition, the direction errors decrease following a learning curve as the hand movement directions counterrotate to compensate, eventually recovering trajectories that head straight toward the targets. Although force fields and visuomotor rotations both disturb the endpoints of movements, they are fundamentally different with respect to whether movement kinematics or dynamics are perturbed.
There are two approaches to modeling visuomotor adaptation. One is to counterrotate the input trajectory so that a perceived hand endpoint moves straight to a target (the vectorial planning hypothesis) (Gordon et al. 1994). Alternatively, the mapping from a desired trajectory to joint torques can be adjusted to make a straight movement to a counterrotated direction, without counterrotating the input trajectory, as expressed in Eq. 9.
In vectorial remapping, visuomotor adaptation can be modeled by converting the intended endpoint vectors (position X and velocity V) into the perceived endpoint vectors (position X̃ and velocity Ṽ):
X̃ = RX   (23)

Ṽ = RV   (24)
where R is a rotation matrix. Conversely, to make a straight movement to a target in the visual space, the intended vectors must be counterrotated or remapped:
X → R−1X   (25)

V → R−1V   (26)
Vectorial planning by input remapping predicts that a movement to a learned direction should generalize to a new starting point (Krakauer et al. 2000; Wang and Sainburg 2005). Although the remapping is computationally simple, this hypothesis also requires that the movements of other limb segments be remapped, which is not as simple, since there is no simple linear transformation for remapping the other spatial vectors (X10, X20, and X21).
Alternatively, rotated visual feedback can be approximately compensated by reweighting the connections from the cross products to the joint torques as in Eq. 9, without adjusting the desired trajectory. As in the case of force-field adaptation, these coefficients were optimized to cancel an imposed visual rotation of CCW 60° in the center workspace, with joint angles (θ1, θ2) = (45°, 90°) as in a psychophysical experiment (Krakauer et al. 2000). Coefficients were optimized for a counterrotated trajectory by minimizing the squared error between τ(CW) and τ(W) in Eq. 11. The optimized coefficients were:
| (27) |
which have both positive and negative values, implying that some of the terms changed the sign of their impact after adaptation. This approximation worked almost perfectly in the center workspace where the rotation remapping was trained (Fig. 5A). To investigate how the learned remapping generalized to other, untrained workspaces, we simulated reaching in two other workspaces by changing the shoulder angle while the elbow angle remained unchanged [left workspace with joint angles (θ1, θ2) = (90°, 90°) and right workspace with joint angles (θ1, θ2) = (0°, 90°)]. The rotated visual feedback was compensated in both test workspaces in the extrinsic reference frame almost as completely as in the trained workspace (Fig. 5, B and C), consistent with previous studies, which concluded that generalization of a learned visuomotor remapping to untrained workspaces occurs not in an intrinsic joint-based reference frame but in an extrinsic spatial reference frame (Krakauer et al. 2000; Wang and Sainburg 2005). The model successfully reproduced this experimental finding.
To fully determine the reference frame, generalization to the entire workspace should be examined more systematically, so we simulated how remapping transferred from one workspace to multiple untrained workspaces. The linear readout model was trained for CW 60° rotation at the central workspace [40 cm in front of the shoulder or (x, y)=(0 cm,40 cm)]. The optimized coefficients were
| (28) |
At this workspace location, the imposed rotation was almost completely canceled by counterrotating the hand trajectories (Fig. 6A). When tested at a workspace distal from the body with the shoulder-to-hand direction fixed [(x, y) = (0 cm, 45 cm)], however, the cursor movements were CW rotated from the straight trajectories toward the targets, indicating that the imposed rotation was not completely compensated and the visuomotor remapping under-generalized (Fig. 6B). When tested at a workspace proximal to the body [(x, y) = (0 cm, 35 cm)], the cursor movements were rotated CCW from the straight trajectories toward the targets, indicating that the visuomotor rotation over-generalized (Fig. 6C). From Fig. 6, B and C, it seemed appropriate to evaluate the degree of transfer by the average rotation angles over eight targets, so the angles between the model's simulated hand trajectories and the corresponding minimum-jerk trajectories were computed at peak-velocity locations. A coherent pattern of generalization emerged with rotational symmetry (Fig. 6D): the learned remapping generalized almost completely to workspaces where the shoulder angle in the initial posture was rotated with the elbow angle fixed, whereas the learned remapping under- or over-generalized when tested at distal or proximal workspaces, respectively.
Integration of visual and proprioceptive representations through cross products.
Expressing the EOMs in a spatial representation has a number of computational advantages compared with a joint-angle representation: shorter and more concise EOMs, an intuitive physical interpretation, no explicit computation of inverse kinematics, and fast adaptive learning because there are fewer adaptive coefficients to learn. However, the spatial representation relies on limb segment vectors that are linearly dependent on each other and thus redundant, and not all combinations of them yield a valid limb configuration, whereas the joint angles always yield a valid limb configuration. Although vector cross products of spatial position and velocity or acceleration can be computed by a feedforward network from visually selective model neurons, this does not ensure consistency among the redundant vectors.
Here we show how the consistency among the redundant basis of spatial vectors can be maintained by proprioceptive feedback. One way to monitor the consistency among the spatial vectors is to compare the hand endpoint and limb-segment positions computed from the visual and proprioceptive inputs in spatial coordinates. The hand endpoint, (xhand, yhand), and joint angles, (θ1, θ2), are related by forward kinematics:
xhand = l1 cos θ1 + l2 cos(θ1 + θ2)   (29)

yhand = l1 sin θ1 + l2 sin(θ1 + θ2)   (30)
and similar equations hold for the proximal limb segments.
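One way such a consistency check could be implemented, assuming visual and proprioceptive estimates are both available, is sketched below (the function names are ours):

```python
import numpy as np

L1, L2 = 0.33, 0.34   # segment lengths from Table 1

def forward_kinematics(theta1, theta2):
    """Hand position in shoulder-centered spatial coordinates from joint angles (Eqs. 29 and 30)."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return np.array([x, y])

def consistency_error(hand_visual, theta_proprioceptive):
    """Discrepancy between the visually estimated hand position and the position implied by
    proprioceptive joint angles; a large value signals an inconsistent set of limb segment vectors."""
    return np.linalg.norm(np.asarray(hand_visual) - forward_kinematics(*theta_proprioceptive))
```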
Alternatively, the visual and proprioceptive information from limb positions can be integrated in a common cross-product representation. The cross products in the EOMs can be computed in a feedforward network of neurons that represent visual positions and velocities (Tanaka and Sejnowski 2013):
| (31) |
| (32) |
where (||X||, θ) and (||V||, φ) are polar coordinate representations of the hand position and velocity vectors, and θi and φj are the preferred position and velocity angles. [X × A]z can be computed similarly from neurons that represent visual positions and accelerations.
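As a worked identity (our illustration, not the exact form of Eqs. 31 and 32, which are not reproduced here): [X × V]z = ||X|| ||V|| sin(φ − θ) = (||X|| cos θ)(||V|| sin φ) − (||X|| sin θ)(||V|| cos φ), so the cross product is exactly a sum of products of a position-tuned factor and a velocity-tuned factor, the kind of multiplicative combination that a gain-field feedforward network can implement.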
The vector cross products can also be computed from proprioceptive feedback, which provides information about muscle lengths, joint angles and their temporal derivatives. Joint angles and their derivatives can be converted into cross products in a spatial reference frame:
| (33) |
| (34) |
for the two-link model. Similar but much more complicated expressions can be derived for a general n-link arm. These expressions suffer from the same curse of dimensionality that led us to abandon a joint-based reference frame for motor control.
More generally, [X × A]Z and [X × V]Z can be computed with multiplicative, gain-field responses of joint angles and their derivatives as
| (35) |
where fi and gj are basis functions that can be used to compute the cross products in a single-layered feedforward network. For the two-link model, the basis functions are {fi(θ1, θ2)} = {1, cos θ2, sin θ2} and {gj(θ̇1, θ̇2, θ̈1, θ̈2)} = {θ̇1θ̇2, θ̇2², θ̈1, θ̈2}. The computation in Eq. 35 requires that the information about position, velocity and acceleration be accessible to the motor cortex; extant electrophysiological studies have reported motor cortical activities related to hand position (Georgopoulos et al. 1984; Paninski et al. 2004), velocity (Moran and Schwartz 1999; Paninski et al. 2004) and acceleration (Flament and Hore 1988). Other studies systematically fitted the activities of motor cortical neurons with hand position, velocity and acceleration and reported different proportions: 16.5%, 26.6%, and 1.7% (Ashe and Georgopoulos 1994) and 9%, 80%, and 11% (Stark et al. 2007) for position, velocity and acceleration, respectively.
DISCUSSION
Cross products are a computationally efficient basis that can serve as a set of motor primitives for motor adaptation. Their geometric structure makes explicit predictions for a range of motor adaptation experiments. We have shown that the cross-product basis naturally reproduces the previously reported joint-angle-based transfer in viscous force-field adaptation and space-based transfer in adaptation to visuomotor rotation when the shoulder angle is rotated. Our model further predicts that a CW rotation learned at the central workspace over-generalizes to a proximal workspace and under-generalizes to a distal workspace, a prediction that differentiates our model from the vectorial planning hypothesis. To our knowledge, this is the first study to propose a computational model explaining both force-field and visuomotor rotation adaptation in a unified way. Furthermore, proprioceptive signals can maintain consistent limb positions in space through the cross-product basis, which integrates visual and proprioceptive information in a common representation.
A unified view of dynamic and kinematic motor adaptation.
Dynamic motor adaptation and kinematic motor adaptation have traditionally been considered to involve two distinct adaptive mechanisms in different neural circuits. We have shown that both types of adaptation can be modeled within a single model in which vector cross products form basis functions for motor adaptation, with adaptive coefficients that cancel the imposed perturbation. Coefficients of the velocity-dependent, viscous terms were mostly altered in force-field adaptation, and the adapted internal model generalized in a shoulder-based reference frame. Coefficients of the acceleration-dependent, inertial terms were primarily adapted in visuomotor rotation, and the remapping generalized in an extrinsic reference frame. Thus the cross-product representation explains why multiple reference frames for behavioral generalization have been observed under different conditions. Our results demonstrate an alternative to the prevailing consensus in the field of motor control that kinematic and dynamic adaptation have independent neural substrates.
In the simulations, the cross-product terms were chosen for adaptation by optimization. The adaptation could be implemented in the brain by using an error signal to modify the coefficients of the cross-product terms by gradient descent, which could be implemented by Hebbian plasticity since only a single layer of weights needs to be adjusted.
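A minimal sketch of such an update (our illustration; the learning rate and the definition of the error signal are assumptions):

```python
import numpy as np

def delta_rule_update(W, c, tau_error, learning_rate=1e-3):
    """One gradient-descent step on the squared torque error for the linear read-out tau = W @ c.

    W         : (2, 6) read-out coefficients from cross products to joint torques
    c         : (6,)   cross-product basis activity on the current movement
    tau_error : (2,)   torque error signal (adapted torque minus produced torque)
    """
    # For E = 0.5 * ||tau_error||^2, the gradient with respect to W is -outer(tau_error, c), so the
    # update is Hebbian-like: a product of presynaptic activity (c) and a postsynaptic error signal.
    return W + learning_rate * np.outer(tau_error, c)
```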
Generalization in motor adaptation predicted by cross-product representation.
Experimental studies of generalization after motor adaptation typically test only one (Krakauer et al. 2000; Shadmehr and Mussa-Ivaldi 1994) or two (Wang and Sainburg 2005) workspace locations because of limited resources. For force-field adaptation, a shoulder-based reference frame was reported after testing the generalization pattern at a workspace location with the same elbow angle as the trained workspace; this leaves open the question of how changing the elbow angle affects the pattern of generalization. The model can easily be tested over a wide range of test workspaces, making systematic predictions for motor generalization. In particular, the model predicted that the shoulder angle should have a dominant effect on generalization patterns for force-field adaptation, while the influence of the elbow angle should be negligible. Wang and Sainburg (2005) reported that visuomotor remapping generalized to other movement vectors but did not report under- or over-generalization. There are two possible reasons: one is that the deviations predicted by the model were not large enough to notice (see Fig. 6, A–C), and the other is that they examined only two movement directions. Detecting the model's prediction of under- or over-generalization would require dense sampling of movement directions and workspaces.
Some electrophysiological and modeling studies have addressed the reference frame(s) by examining changes in the preferred directions of motor cortical neurons (Ajemian et al. 2001; Caminiti et al. 1990; Wu and Hatsopoulos 2006, 2007). Caminiti et al. (1990) concluded that the representation was in a shoulder-based reference frame based on three workspaces, but with a limited number of workspaces it is difficult to dissociate shoulder-based from joint-based reference frames. Later, Wu and Hatsopoulos (2006) examined nine workspaces and reported that different cortical motor neurons had joint-based, shoulder-based or extrinsic reference frames, consistent with the hypothesis of a cross-product representation. In the future, the design of psychophysical and electrophysiological experiments should include multiple workspaces in a systematic way.
In our computational model, an imposed visuomotor rotation was compensated by adapting the mapping from a desired trajectory to joint torques without changing the desired trajectory. The cross-product model reproduced the experimental findings of directional generalization when the elbow angle was unchanged, as in the experimental condition. It showed further that the remapping under-generalized when the starting posture was more distal and over-generalized when it was more proximal, in contrast to the vectorial planning hypothesis, which predicts that a learned remapping of visuomotor rotation should generalize directionally and uniformly to any untrained workspace. Thus the workspace generalization of visuomotor remapping predicted by our model is not based on the extrinsic, spatial reference frame but depends on the distance between the shoulder and the hand. By studying each of the vector cross products, it should be possible to design new experiments to adapt each of the terms and thereby further test the predictions of the model.
Proprioceptive encoding of joint state.
Proprioceptive signals from muscle spindles convey muscle length and its rate of change, which in turn can be used to compute joint angles and angular velocities (Proske and Gandevia 2012). Although muscle spindles do not directly encode angular acceleration, the cortex could in principle compute acceleration by comparing previous and current proprioceptive inputs, much as retinal circuitry computes the velocity of a visual target (Kim et al. 2014). It is therefore conceivable that the proprioceptive kinematic information required to compute Eq. 35 is available in the parietal and motor cortices. Because the joint angles always represent a valid limb configuration, consistency among the cross products computed from visual inputs in Eq. 31 can be maintained by comparing them with those computed from proprioceptive inputs in Eq. 35 as a reference. Vector cross products thus provide a common “language” between visual and proprioceptive information.
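The sketch below illustrates how such an acceleration estimate could in principle be obtained by differencing successive proprioceptively derived joint angles. It is only an illustration of the finite-difference idea; the sampling rate, the smooth flexion profile, and the function names are assumptions, not a claim about the actual neural computation.

```python
import numpy as np

def angular_state_from_samples(theta, dt):
    """Estimate angular velocity and acceleration from a sequence of joint-angle
    samples (e.g., derived from muscle-spindle signals) by finite differences."""
    omega = np.gradient(theta, dt)   # first difference: angular velocity (rad/s)
    alpha = np.gradient(omega, dt)   # second difference: angular acceleration (rad/s^2)
    return omega, alpha

# Toy usage: a smooth 0.5-s elbow flexion sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 0.5, dt)
theta = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / 0.5))   # joint angle in radians
omega, alpha = angular_state_from_samples(theta, dt)
```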
The posterior parietal lobe, including the parietal reach region (PRR) and dorsal area 5 (area 5d), is specialized for the multisensory integration and coordinate transformations required for visuomotor transformation during reaching movements. Neurons in these areas combine retinal, eye-, and hand-related signals in a spatially congruent manner (Battaglia-Mayer et al. 2000, 2001). PRR and area 5d represent a target in eye-centered and hand-centered reference frames, respectively (Batista et al. 1999; Bremner and Andersen 2012; Pesaran et al. 2006). Moreover, whereas PRR maintains both potential and selected reach plans, area 5d encodes only selected reach plans (Cui and Andersen 2007, 2011). The tuning curves of area 5d neurons are sinusoidal, as in other motor areas, and activity onsets in area 5d lag those in other motor areas (Kalaska and Crammond 1992). These lines of evidence suggest that area 5d is downstream of PRR and might monitor ongoing movements and compute a forward estimate of the body state for online sensorimotor control (Mulliken et al. 2008). Given these physiological findings, area 5d is a candidate area for integrating proprioceptive and visual information to maintain a consistent body image. Loss of the superior parietal lobule in humans does not cause specific motor deficits but instead disrupts the body self-image, which could result from impaired integration of spatial and proprioceptive information about the positions of limb segments and their spatial relationships.
Multiplicative gain fields for eye position have been reported in the lateral intraparietal area of the parietal cortex, consistent with Eq. 35. Interestingly, gain fields for eye position have also been found in area 3a of the somatosensory cortex, where they are based on proprioceptive inputs (Xu et al. 2011). The proprioceptive signals consistently lagged the actual eye position in the orbit by ∼60 ms, which led the authors of that study to conclude that these signals were unlikely to be used for online control of actions. These findings are consistent with the hypothesis that proprioceptive signals may be used to calibrate the visual representation of body configuration.
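For readers unfamiliar with the gain-field idea, a multiplicative gain-field unit can be sketched as a cosine tuning curve whose amplitude is scaled by a planar function of eye position. The parameter values and function names below are assumptions chosen only to show the multiplicative structure, not a model of any particular neuron.

```python
import numpy as np

def gain_field_rate(movement_dir, preferred_dir, eye_pos, gain_slope, baseline=10.0):
    """Illustrative gain-field unit: a cosine directional tuning curve whose
    amplitude is multiplied by a planar (linear) function of eye position."""
    directional = 1.0 + np.cos(movement_dir - preferred_dir)    # cosine tuning, range 0..2
    gain = max(1.0 + float(np.dot(gain_slope, eye_pos)), 0.0)   # planar eye-position gain
    return baseline * directional * gain

# Toy usage: the same movement direction evokes different rates at two eye positions.
slope = np.array([0.8, 0.0])                                   # gain grows with rightward gaze
rate_left = gain_field_rate(np.pi / 4, 0.0, np.array([-0.2, 0.0]), slope)
rate_right = gain_field_rate(np.pi / 4, 0.0, np.array([+0.2, 0.0]), slope)
```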
Whereas this study has focused on feedforward torques to model fast out-back reaching movements, there is experimental evidence that proprioceptive feedback signals may contribute to adaptation of discrete movements that require stabilizing control of the final posture. Ghez et al. (2007) demonstrated that adaptation in out-back reaching (“slicing”) generalized in terms of movement directions, whereas adaptation in discrete reaching generalized in terms of final positions. They suggested that the intended trajectory and the final position are represented in different reference frames. Their findings could also be explained within our computational framework: adaptive feedforward control of out-back reaching would adjust the torques, whereas adaptive feedback control of final positioning would adjust the mapping between cross products computed from vision and those computed from proprioception.
Related work.
A number of computational studies have modeled electrophysiological recordings in the motor cortex and psychophysical results from human experiments. In most of these models, the functions of motor cortical neurons have been described on the basis of joint angles. Todorov (2002) assumed that the motor cortex optimizes a tradeoff between force-production error and effort in the presence of multiplicative, signal-dependent noise and thereby explained broad (truncated cosine) tuning with respect to force direction. Lillicrap and Scott (2013) demonstrated that, when the geometry of the limb and the biomechanics of muscles are taken into consideration, the nonuniform distribution of preferred directions in the motor cortex can be explained as the result of optimization in reaching and isometric force-production tasks. Similarly, Trainin et al. (2007) explained properties of motor cortical neurons, such as broad directional tuning, nonuniform distributions of preferred directions, and some temporal changes in firing rates, by taking into account the biomechanics of the arm and the dynamics of muscles. In the model of Ajemian et al. (2008), the posture dependence of preferred directions at the single-cell level was reproduced by positing that motor neurons are tuned to preferred torque directions; however, cosine tuning to torque direction was assumed rather than derived.
The above models do not describe how visual information is transformed into intrinsic variables, such as joint angles and muscle tensions. In contrast, our hypothesis of a cross-product representation in the motor cortex explains the properties of the motor cortex, such as directional cosine tuning and the nonuniform distribution of preferred directions, without explicitly using joint angles (Tanaka and Sejnowski 2013). Moreover, our hypothesis explains the posture dependence of preferred directions, the coexistence of multiple reference frames, the spatiotemporal properties of the population vector, and the linear computation of muscle tensions. The cross-product representation thus accounts for a range of electrophysiological findings at least as well as previous models do, and it remains an open question whether and how our model can be reconciled with them.
There have been previous attempts to explain the psychophysical results of viscous force-field adaptation. In the study of Malfait et al. (2005), for example, the intrinsic pattern of generalization in force-field adaptation was confirmed by transfer at a center location after training at two lateral locations. This pattern of generalization was explained in a model based on the λ version of the equilibrium-point hypothesis, in which the imposed field was adapted by adjusting individual muscle λ values to reduce the error between the desired kinematics and the actual joint displacement. The model's success stems from the postulate that the force field was learned in terms of equilibrium muscle lengths and activations, which are intrinsic variables. Our model explains the intrinsic pattern of generalization in force-field adaptation in a distinctly different way: the feedforward torque was learned on bases of cross products, which are composed of extrinsic variables. The novel insight from this model is that an intrinsic pattern of generalization does not necessarily indicate an internal model represented in intrinsic variables, such as joint angles or muscle tensions, but can equally be explained in terms of a representation of cross products of limb positions and movements.
In the computation of the feedforward torque, independence between velocity-dependent and acceleration-dependent cross products was implicitly assumed (Eq. 9). At first glance, this separate coding of velocity and acceleration appears inconsistent with the findings of Hwang et al. (2006), who had subjects adapt to an acceleration-dependent force field in one movement direction and tested for generalization in various movement directions. The pattern of directional generalization indicated that the basis elements forming the internal model cannot be linearly separated into elements encoding either angular velocity or angular acceleration (Eq. 3 in their paper). This result does not contradict our assumption of separate coding of velocity- and acceleration-dependent cross products, because the acceleration-dependent cross products combine angular velocities and angular accelerations multiplicatively (Eq. 35). It would nevertheless be worthwhile to determine quantitatively whether the cross-product hypothesis explains these results (Hwang et al. 2006).
Relation to neurophysiological findings.
Several electrophysiological studies have reported adaptive changes in the activities of neurons in the motor cortex of monkeys before and after adaptation to a viscous force field (Gandolfo et al. 2000; Li et al. 2001) and to visuomotor transformations, including rotation (Wise et al. 1998). Gandolfo et al. (2000) and Li et al. (2001), in a force-field adaptation task, examined tuning curves of single neurons in baseline, force-field, and washout conditions and found neurons whose tuning functions remained invariant (“kinematic” or “extrinsic” neurons), neurons whose tuning functions changed in response to the force field (“dynamic” neurons), and neurons whose tuning functions changed in response to the force field and retained the change even after washout trials (“memory” neurons). Wise et al. (1998), in various visuomotor transformation tasks, found increased or decreased modulation, shifted tuning profiles, and more complex changes in single-neuron activities; they also reported that about one-half of the sampled neurons showed no significant change during adaptation. These studies indicate that there are, in general, two populations of cortical motor neurons: one with invariant activity profiles and another whose activity changes adaptively.
The primary motor cortex (M1) is not a homogeneous area but is divided into rostral and caudal subareas defined by cytoarchitectural zones (Geyer et al. 1996), descending pathways (Rathelot and Strick 2009), and functional representations (Sergio et al. 2005). In particular, neuronal activity in the rostral M1 correlates with the overall direction and kinematics of endpoint movements, whereas neuronal activity in the caudal M1 correlates with the temporal pattern of force production and motor output. We previously proposed that the transformation from cross products to joint torques or muscle tensions (Eqs. 4 and 5) could be computed within the circuitry of M1 (Tanaka and Sejnowski 2013). Accordingly, the model proposed in this study predicts two subpopulations: one representing the cross products, whose activities remain invariant, and the other representing the dynamic variables, whose activities undergo adaptive changes in response to a perturbation. This prediction could be tested by examining functional connections from the kinematic representation in the rostral M1 to the dynamic representation in the caudal M1 and by determining whether and how their tuning profiles change over the course of motor adaptation.
Limitations of current model.
Although our proposed model is consistent with some of the extant experimental results on motor adaptation and its generalization, several issues remain unexplained. The cross-product basis can explain the patterns of generalization across workspaces observed experimentally in force-field and visuomotor adaptation, but the broad directional tuning of the cross products cannot explain the narrow width of directional generalization within the same workspace observed in visuomotor rotation (Brayanov et al. 2012; Krakauer et al. 2000). In addition, the experimental patterns of directional generalization across two workspaces indicate that the motor memory of visuomotor rotation adaptation is not exclusively intrinsic or extrinsic but rather is encoded as a gain-field combination of intrinsic and extrinsic representations (Brayanov et al. 2012). These findings appear at odds with our cross-product hypothesis, which localizes plasticity to the mapping from cross products to dynamic variables (the bottom two panels in Fig. 1C). They might instead be explained by plasticity at other stages of the visuomotor transformation. Our scheme (Fig. 1C) indicates that other stages of the transformation (from endpoint vectors to limb-segment vectors, or from limb-segment vectors to vector cross products) might also contribute to motor adaptation, and plasticity at these stages may account for the narrow width of generalization and the gain-field combination of extrinsic and intrinsic representations. In a previous modeling study, we suggested that the narrow directional generalization could be attributed to narrow directional tuning in the posterior parietal lobe and to plasticity between posterior parietal and frontal motor cortices (Tanaka et al. 2009), which would correspond to the transformation of endpoint vectors to limb-segment vectors.
The adaptation model proposed here assumed that the locus of plasticity is in the coefficients of the cross-product terms, which should be regarded as one of several possible processes contributing to motor adaptation. Previous studies indicate that motor adaptation is not a single neural process but rather consists of multiple processes with distinct characteristics, such as time scales and patterns of generalization (Kording et al. 2007; Smith et al. 2006). For example, a typical learning curve in motor adaptation is well fit by a double-exponential decay, which can be modeled by a multirate state-space model. The fast and slow processes in a multirate model have different patterns of generalization, which can be assessed experimentally by trial-by-trial and postadaptation generalization, respectively (Tanaka et al. 2012). How these multiple processes of motor adaptation facilitate or compete with each other has received comparatively little study.
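For readers unfamiliar with the multirate formulation, the sketch below simulates the standard two-rate state-space model. The retention factors and learning rates are illustrative values in the range reported by Smith et al. (2006), and the function name is our own.

```python
import numpy as np

def two_rate_adaptation(perturbation, A_f=0.59, B_f=0.21, A_s=0.992, B_s=0.02):
    """Two-rate state-space model of trial-by-trial adaptation. On each trial the
    motor output is the sum of a fast and a slow state; each state retains a
    fraction A of its value and learns from the error with learning rate B."""
    x_f, x_s, output = 0.0, 0.0, []
    for p in perturbation:
        x = x_f + x_s              # net adaptation expressed on this trial
        e = p - x                  # movement error driving learning
        x_f = A_f * x_f + B_f * e  # fast process: quick learning, quick forgetting
        x_s = A_s * x_s + B_s * e  # slow process: slow learning, good retention
        output.append(x)
    return np.array(output)

# Toy usage: 200 trials of a constant perturbation yield a double-exponential-like curve.
learning_curve = two_rate_adaptation(np.ones(200))
```

Summing the fast and slow states reproduces the double-exponential shape of typical learning curves with only two retention and two learning parameters.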
Our model assumes that the adaptive coefficients of the cross products are globally constant. This leads to the prediction that adaptation to one field or rotation at one workspace interferes with adaptation to an opposing field or rotation at another workspace, regardless of the distance between the two workspaces. This is inconsistent with Hwang et al. (2003), who had subjects adapt to force fields of opposite signs at two workspaces and found that the degree of interference was maximal when the two workspaces were just 0.5 cm apart and decreased monotonically as a function of the distance between workspaces. They suggested that an internal model of the force field encodes not only velocity but also position, and that the internal model is modulated by the distance between the posture in which the force field is learned and the posture being tested. Similarly, in visual-displacement adaptation, Baraduc and Wolpert (2002) showed that aftereffects of a visual displacement were maximal when the initial postures of training and testing were the same and decreased with the distance between the two postures. These results indicate that our postulate of globally constant coefficients is an approximation; the coefficients would need to include position dependence to account for these experimental findings.
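One simple way to relax the assumption of globally constant coefficients, suggested by these gain-field findings, is to let each adapted coefficient decay with the distance from the trained posture. The sketch below is a hypothetical extension of our model, not part of it; the Gaussian form and the width parameter sigma are assumptions.

```python
import numpy as np

def position_dependent_coefficient(w_trained, posture, trained_posture, sigma=0.05):
    """Scale an adapted coefficient by a Gaussian of the distance between the
    current posture and the posture at which the perturbation was learned, so
    that transfer (and interference) falls off with workspace separation."""
    d2 = float(np.sum((np.asarray(posture) - np.asarray(trained_posture)) ** 2))
    return w_trained * np.exp(-d2 / (2.0 * sigma ** 2))

# Toy usage: a coefficient learned at one hand position transfers weakly 10 cm away.
w_near = position_dependent_coefficient(1.0, [0.00, 0.30], [0.00, 0.30])  # -> 1.0
w_far = position_dependent_coefficient(1.0, [0.10, 0.30], [0.00, 0.30])   # -> ~0.14
```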
Our model of motor adaptation is based on computing cross products of rigid-body dynamics, but other adaptive mechanisms exist that do not involve rigid-body dynamics. For example, human subjects can flexibly learn arbitrary mappings from coordinated finger postures to cursor locations (Liu et al. 2011; Liu and Scheidt 2008; Mosier et al. 2005). Also, in studies of brain-computer interfaces, neural signals are translated into cursor motion that is physically decoupled from the musculoskeletal system (Serruya et al. 2002; Taylor et al. 2002; Wolpaw and McFarland 2004).
Previous studies have shown that motor adaptation consists of multiple, interacting processes with different time scales (Smith et al. 2006; Tanaka et al. 2012) or of distinct types of memory (Herzfeld et al. 2014; Mazzoni and Krakauer 2006). The model presented here could correspond to one of those processes, and it remains an open question whether and how the cross-product model can be integrated with the other processes of motor adaptation.
The cross-product basis used here is a minimal set that is sufficient to represent the joint torques, but other basis functions could serve the same purpose, provided that they are complete and fixed with respect to the external world. Of particular interest is the recent proposal of recurrent neural networks of randomly connected units that exhibit chaotic dynamics, an approach known as reservoir computing. Outputs of such recurrent networks can reproduce complex temporal patterns of neural firing rates as well as muscle activities (Hennequin et al. 2014; Shenoy et al. 2013; Sussillo et al. 2013).
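The sketch below shows the basic structure of such a reservoir: a fixed, randomly connected recurrent network driven by an input, with only a linear readout fit to a target output. The network size, gain, time constant, and the plain least-squares readout are illustrative choices, not the training procedures used in the cited studies (which use, e.g., FORCE learning or optimized connectivity).

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(inputs, n_units=300, g=1.5, dt=0.1, tau=1.0):
    """Drive a fixed, randomly connected recurrent network with an input time
    series and return the unit activity. With gain g > 1 the network exhibits
    rich, near-chaotic dynamics; only a linear readout of this activity is fit."""
    J = g * rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)  # recurrent weights
    W_in = rng.standard_normal((n_units, inputs.shape[1]))              # input weights
    x = np.zeros(n_units)
    states = []
    for u in inputs:
        x = x + (dt / tau) * (-x + J @ np.tanh(x) + W_in @ u)           # leaky rate dynamics
        states.append(np.tanh(x))
    return np.array(states)

# Toy usage: fit a linear readout to a muscle-like target by least squares.
t = np.linspace(0.0, 2.0 * np.pi, 200)
inputs = np.column_stack([np.sin(t)])
target = np.column_stack([np.sin(2.0 * t) * np.exp(-t / 3.0)])
R = run_reservoir(inputs)
W_out, *_ = np.linalg.lstsq(R, target, rcond=None)
prediction = R @ W_out
```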
Conclusions.
Motor adaptation and its generalization provide a sensitive probe of the underlying motor representation that guides skilled movements. Although the distinct patterns of generalization in viscous force-field and visuomotor rotation adaptation have been believed to arise from independent neural substrates, the vector cross-product representation explored here can account for both the intrinsic and extrinsic patterns of generalization, providing a unifying computational framework for motor adaptation. These results, together with the close match between the terms in the cross-product representation and the properties of neurons in the motor cortex and other motor structures (Tanaka and Sejnowski 2013), provide strong converging evidence for a common spatial framework for the motor system. The cross-product model makes testable predictions for many other motor adaptation and generalization experiments.
Identifying a neural basis of motor primitives is, however, only a first step toward understanding how the motor system plans actions and controls muscles. In optimal feedback control models there are no preplanned trajectories, unlike the feedforward planning assumed here. Models of feedback motor control remain to be formulated within the cross-product representation.
GRANTS
This work was supported in part by Ministry of Education, Culture, Sports, Science and Technology KAKENHI Grants 25430007 and 261200xx (H. Tanaka) and the Howard Hughes Medical Institute (T. J. Sejnowski).
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the author(s).
AUTHOR CONTRIBUTIONS
Author contributions: H.T. and T.J.S. conception and design of research; H.T. performed experiments; H.T. analyzed data; H.T. and T.J.S. interpreted results of experiments; H.T. prepared figures; H.T. and T.J.S. drafted manuscript; H.T. and T.J.S. edited and revised manuscript; H.T. and T.J.S. approved final version of manuscript.
ACKNOWLEDGMENTS
We thank Reza Shadmehr for figure permission.
REFERENCES
- Ajemian R, Bullock D, Grossberg S. Kinematic coordinates in which motor cortical cells encode movement direction. J Neurophysiol 84: 2191–2203, 2000.
- Ajemian R, Bullock D, Grossberg S. A model of movement coordinates in the motor cortex: posture-dependent changes in the gain and direction of single cell tuning curves. Cereb Cortex 11: 1124–1135, 2001.
- Ajemian R, Green A, Bullock D, Sergio L, Kalaska J, Grossberg S. Assessing the function of motor cortex: single-neuron models of how neural response is modulated by limb biomechanics. Neuron 58: 414–428, 2008.
- Ashe J, Georgopoulos AP. Movement parameters and neural activity in motor cortex and area 5. Cereb Cortex 4: 590–600, 1994.
- Atkeson CG. Learning arm kinematics and dynamics. Annu Rev Neurosci 12: 157–183, 1989.
- Baraduc P, Wolpert DM. Adaptation to a visuomotor shift depends on the starting posture. J Neurophysiol 88: 973–981, 2002.
- Batista AP, Buneo CA, Snyder LH, Andersen RA. Reach plans in eye-centered coordinates. Science 285: 257–260, 1999.
- Battaglia-Mayer A, Ferraina S, Genovesio A, Marconi B, Squatrito S, Molinari M, Lacquaniti F, Caminiti R. Eye-hand coordination during reaching. II. An analysis of the relationships between visuomanual signals in parietal cortex and parieto-frontal association projections. Cereb Cortex 11: 528–544, 2001.
- Battaglia-Mayer A, Ferraina S, Mitsuda T, Marconi B, Genovesio A, Onorati P, Lacquaniti F, Caminiti R. Early coding of reaching in the parietooccipital cortex. J Neurophysiol 83: 2374–2391, 2000.
- Berniker M, Kording K. Estimating the sources of motor errors for adaptation and generalization. Nat Neurosci 11: 1454–1461, 2008.
- Braun DA, Aertsen A, Wolpert DM, Mehring C. Motor task variation induces structural learning. Curr Biol 19: 352–357, 2009.
- Brayanov JB, Press DZ, Smith MA. Motor memory is encoded as a gain-field combination of intrinsic and extrinsic action representations. J Neurosci 32: 14951–14965, 2012.
- Bremner LR, Andersen RA. Coding of the reach vector in parietal area 5d. Neuron 75: 342–351, 2012.
- Buneo CA, Soechting JF, Flanders M. Postural dependence of muscle actions: implications for neural control. J Neurosci 17: 2128–2142, 1997.
- Caminiti R, Johnson PB, Urbano A. Making arm movements within different parts of space: dynamic aspects in the primate motor cortex. J Neurosci 10: 2039–2058, 1990.
- Cheng S, Sabes PN. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics. J Neurophysiol 97: 3057–3069, 2007.
- Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature 487: 51–56, 2012.
- Criscimagna-Hemminger SE, Donchin O, Gazzaniga MS, Shadmehr R. Learned dynamics of reaching movements generalize from dominant to nondominant arm. J Neurophysiol 89: 168–176, 2003.
- Cui H, Andersen RA. Posterior parietal cortex encodes autonomously selected motor plans. Neuron 56: 552–559, 2007.
- Cui H, Andersen RA. Different representations of potential and selected motor plans by distinct parietal areas. J Neurosci 31: 18130–18136, 2011.
- Donchin O, Francis JT, Shadmehr R. Quantifying generalization from trial-by-trial behavior of adaptive systems that learn with basis functions: theory and experiments in human motor control. J Neurosci 23: 9032–9045, 2003.
- Evarts EV. Relation of pyramidal tract activity to force exerted during voluntary movement. J Neurophysiol 31: 14–27, 1968.
- Fetz EE, Cheney PD. Postspike facilitation of forelimb muscle activity by primate corticomotoneuronal cells. J Neurophysiol 44: 751–772, 1980.
- Flament D, Hore J. Relations of motor cortex neural discharge to kinematics of passive and active elbow movements in the monkey. J Neurophysiol 60: 1268–1284, 1988.
- Flash T, Hogan N. The coordination of arm movements: an experimentally confirmed mathematical model. J Neurosci 5: 1688–1703, 1985.
- Gandolfo F, Li C, Benda BJ, Schioppa CP, Bizzi E. Cortical correlates of learning in monkeys adapting to a new dynamical environment. Proc Natl Acad Sci U S A 97: 2259–2263, 2000.
- Georgopoulos AP. Spatial coding of visually guided arm movements in primate motor cortex. Can J Physiol Pharmacol 66: 518–526, 1988.
- Georgopoulos AP, Caminiti R, Kalaska JF. Static spatial effects in motor cortex and area 5: quantitative relations in a two-dimensional space. Exp Brain Res 54: 446–454, 1984.
- Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci 2: 1527–1537, 1982.
- Georgopoulos AP, Schwartz AB, Kettner RE. Neuronal population coding of movement direction. Science 233: 1416–1419, 1986.
- Geyer S, Ledberg A, Schleicher A, Kinomura S, Schormann T, Bürgel U, Klingberg T, Larsson J, Zilles K, Roland PE. Two different areas within the primary motor cortex of man. Nature 382: 805–807, 1996.
- Ghahramani Z, Wolpert DM. Modular decomposition in visuomotor learning. Nature 386: 392–395, 1997.
- Ghez C, Scheidt R, Heijink H. Different learned coordinate frames for planning trajectories and final positions in reaching. J Neurophysiol 98: 3614–3626, 2007.
- Ghilardi MF, Gordon J, Ghez C. Learning a visuomotor transformation in a local area of work space produces directional biases in other areas. J Neurophysiol 73: 2535–2539, 1995.
- Gordon J, Ghilardi MF, Ghez C. Accuracy of planar reaching movements. I. Independence of direction and extent variability. Exp Brain Res 99: 97–111, 1994.
- Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 82: 1394–1406, 2014.
- Herzfeld DJ, Vaswani PA, Marko MK, Shadmehr R. A memory of errors in sensorimotor learning. Science 345: 1349–1353, 2014.
- Hinton G. Parallel computations for controlling an arm. J Mot Behav 16: 171–194, 1984.
- Hogan N. Adaptive control of mechanical impedance by coactivation of antagonist muscles. IEEE Trans Automat Contr 29: 681–690, 1984.
- Hollerbach JM. Computers, brains and the control of movement. Trends Neurosci 5: 189–192, 1982.
- Hwang EJ, Donchin O, Smith MA, Shadmehr R. A gain-field encoding of limb position and velocity in the internal model of arm dynamics. PLoS Biol 1: E25, 2003.
- Hwang EJ, Smith MA, Shadmehr R. Adaptation and generalization in acceleration-dependent force fields. Exp Brain Res 169: 496–506, 2006.
- Imamizu H, Shimojo S. The locus of visual-motor learning at the task or manipulator level: implications from intermanual transfer. J Exp Psychol 21: 719–733, 1995.
- Imamizu H, Uno Y, Kawato M. Internal representations of the motor apparatus: implications from generalization in visuomotor learning. J Exp Psychol 21: 1174–1198, 1995.
- Kalaska JF, Crammond DJ. Cerebral cortical mechanisms of reaching movements. Science 255: 1517–1523, 1992.
- Kim JS, Greene MJ, Zlateski A, Lee K, Richardson M, Turaga SC, Purcaro M, Balkam M, Robinson A, Behabadi BF, Campos M, Denk W, Seung HS; EyeWirers. Space-time wiring specificity supports direction selectivity in the retina. Nature 509: 331–336, 2014.
- Kitazawa S, Kimura T, Uka T. Prism adaptation of reaching movements: specificity for the velocity of reaching. J Neurosci 17: 1481–1492, 1997.
- Kitazawa S, Kohno T, Uka T. Effects of delayed visual information on the rate and amount of prism adaptation in the human. J Neurosci 15: 7644–7652, 1995.
- Kording KP, Tenenbaum JB, Shadmehr R. The dynamics of memory as a consequence of optimal adaptation to a changing body. Nat Neurosci 10: 779–786, 2007.
- Krakauer JW, Ghilardi MF, Ghez C. Independent learning of internal models for kinematic and dynamic control of reaching. Nat Neurosci 2: 1026–1031, 1999.
- Krakauer JW, Ghilardi MF, Mentis M, Barnes A, Veytsman M, Eidelberg D, Ghez C. Differential cortical and subcortical activations in learning rotations and gains for reaching: a PET study. J Neurophysiol 91: 924–933, 2004.
- Krakauer JW, Mazzoni P, Ghazizadeh A, Ravindran R, Shadmehr R. Generalization of motor learning depends on the history of prior action. PLoS Biol 4: e316, 2006.
- Krakauer JW, Pine ZM, Ghilardi MF, Ghez C. Learning of visuomotor transformations for vectorial planning of reaching trajectories. J Neurosci 20: 8916–8924, 2000.
- Li CSR, Padoa-Schioppa C, Bizzi E. Neuronal correlates of motor performance and motor learning in the primary motor cortex of monkeys adapting to an external force field. Neuron 30: 593–607, 2001.
- Lillicrap TP, Scott SH. Preference distributions of primary motor cortex neurons reflect control solutions optimized for limb biomechanics. Neuron 77: 168–179, 2013.
- Liu X, Mosier KM, Mussa-Ivaldi FA, Casadio M, Scheidt RA. Reorganization of finger coordination patterns during adaptation to rotation and scaling of a newly learned sensorimotor transformation. J Neurophysiol 105: 454–473, 2011.
- Liu XL, Scheidt RA. Contributions of online visual feedback to the learning and generalization of novel finger coordination patterns. J Neurophysiol 99: 2546–2557, 2008.
- Malfait N, Gribble PL, Ostry DJ. Generalization of motor learning based on multiple field exposures and local adaptation. J Neurophysiol 93: 3327–3338, 2005.
- Malfait N, Shiller DM, Ostry DJ. Transfer of motor learning across arm configurations. J Neurosci 22: 9656–9660, 2002.
- Mattar AA, Ostry DJ. Neural averaging in motor learning. J Neurophysiol 97: 220–228, 2007.
- Mazzoni P, Krakauer JW. An implicit plan overrides an explicit strategy during visuomotor adaptation. J Neurosci 26: 3642–3645, 2006.
- Moran DW, Schwartz AB. Motor cortical representation of speed and direction during reaching. J Neurophysiol 82: 2676–2692, 1999.
- Morasso P. Spatial control of arm movements. Exp Brain Res 42: 223–227, 1981.
- Mosier KM, Scheidt RA, Acosta S, Mussa-Ivaldi FA. Remapping hand movements in a novel geometrical environment. J Neurophysiol 94: 4362–4372, 2005.
- Mulliken GH, Musallam S, Andersen RA. Forward estimation of movement state in posterior parietal cortex. Proc Natl Acad Sci U S A 105: 8170–8177, 2008.
- Nozaki D, Kurtzer I, Scott SH. Limited transfer of learning between unimanual and bimanual skills within the same limb. Nat Neurosci 9: 1364–1366, 2006.
- Paninski L, Shoham S, Fellows MR, Hatsopoulos NG, Donoghue JP. Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J Neurosci 24: 8551–8561, 2004.
- Pesaran B, Nelson MJ, Andersen RA. Dorsal premotor neurons encode the relative position of the hand, eye, and goal during reach planning. Neuron 51: 125–134, 2006.
- Proske U, Gandevia SC. The proprioceptive senses: their roles in signaling body shape, body position and movement, and muscle force. Physiol Rev 92: 1651–1697, 2012.
- Rathelot JA, Strick PL. Subdivisions of primary motor cortex based on cortico-motoneuronal cells. Proc Natl Acad Sci U S A 106: 918–923, 2009.
- Sainburg RL, Wang J. Interlimb transfer of visuomotor rotations: independence of direction and final position information. Exp Brain Res 145: 437–447, 2002.
- Scott SH, Gribble PL, Graham KM, Cabel DW. Dissociation between hand motion and population vectors from neural activity in motor cortex. Nature 413: 161–165, 2001.
- Sergio LE, Hamel-Paquet C, Kalaska JF. Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks. J Neurophysiol 94: 2353–2378, 2005.
- Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP. Instant neural control of a movement signal. Nature 416: 141–142, 2002.
- Shadmehr R. Generalization as a behavioral window to the neural mechanisms of learning internal models. Hum Mov Sci 23: 543–568, 2004.
- Shadmehr R, Mussa-Ivaldi FA. Adaptive representation of dynamics during learning of a motor task. J Neurosci 14: 3208–3224, 1994.
- Shenoy KV, Kaufman MT, Sahani M, Churchland MM. A dynamical systems view of motor preparation: implications for neural prosthetic system design. Prog Brain Res 192: 33–58, 2011.
- Shenoy KV, Sahani M, Churchland MM. Cortical control of arm movements: a dynamical systems perspective. Annu Rev Neurosci 36: 337–359, 2013.
- Smith MA, Ghazizadeh A, Shadmehr R. Interacting adaptive processes with different timescales underlie short-term motor learning. PLoS Biol 4: e179, 2006.
- Stark E, Drori R, Asher I, Ben-Shaul Y, Abeles M. Distinct movement parameters are represented by different neurons in the motor cortex. Eur J Neurosci 26: 1055–1066, 2007.
- Sussillo D, Abbott LF. Generating coherent patterns of activity from chaotic neural networks. Neuron 63: 544–557, 2009.
- Sussillo D, Churchland M, Kaufman M, Shenoy K. A recurrent neural network that produces EMG from rhythmic dynamics. Front Neurosci Comput Systems Neurosci, Salt Lake City, UT, 2013.
- Tanaka H, Krakauer JW, Sejnowski TJ. Generalization and multirate models of motor adaptation. Neural Comput 24: 939–966, 2012.
- Tanaka H, Sejnowski TJ. Computing reaching dynamics in motor cortex with Cartesian spatial coordinates. J Neurophysiol 109: 1182–1201, 2013.
- Tanaka H, Sejnowski TJ, Krakauer JW. Adaptation to visuomotor rotation through interaction between posterior parietal and motor cortical areas. J Neurophysiol 102: 2921–2932, 2009.
- Taylor DM, Tillery SI, Schwartz AB. Direct cortical control of 3D neuroprosthetic devices. Science 296: 1829–1832, 2002.
- Taylor JA, Hieber LL, Ivry RB. Feedback-dependent generalization. J Neurophysiol 109: 202–215, 2013.
- Taylor JA, Ivry RB. Context-dependent generalization. Front Hum Neurosci 7: 171, 2013.
- Taylor JA, Wojaczynski GJ, Ivry RB. Trial-by-trial analysis of intermanual transfer during visuomotor adaptation. J Neurophysiol 106: 3157–3172, 2011.
- Thoroughman KA, Shadmehr R. Learning of action through adaptive combination of motor primitives. Nature 407: 742–747, 2000.
- Todorov E. Cosine tuning minimizes motor errors. Neural Comput 14: 1233–1260, 2002.
- Trainin E, Meir R, Karniel A. Explaining patterns of neural activity in the primary motor cortex using spinal cord and limb biomechanics models. J Neurophysiol 97: 3736–3750, 2007.
- Wang J, Sainburg RL. Mechanisms underlying interlimb transfer of visuomotor rotations. Exp Brain Res 149: 520–526, 2003.
- Wang J, Sainburg RL. Interlimb transfer of novel inertial dynamics is asymmetrical. J Neurophysiol 92: 349–360, 2004.
- Wang J, Sainburg RL. Adaptation to visuomotor rotations remaps movement vectors, not final positions. J Neurosci 25: 4024–4030, 2005.
- Wise SP, Moody SL, Blomstrom KJ, Mitz AR. Changes in motor cortical activity during visuomotor adaptation. Exp Brain Res 121: 285–299, 1998.
- Wolpaw JR, McFarland DJ. Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc Natl Acad Sci U S A 101: 17849–17854, 2004.
- Wu HG, Smith MA. The generalization of visuomotor learning to untrained movements and movement sequences based on movement vector and goal location remapping. J Neurosci 33: 10772–10789, 2013.
- Wu W, Hatsopoulos N. Evidence against a single coordinate system representation in the motor cortex. Exp Brain Res 175: 197–210, 2006.
- Wu W, Hatsopoulos NG. Coordinate system representations of movement direction in the premotor cortex. Exp Brain Res 176: 652–657, 2007.
- Xu Y, Wang X, Peck C, Goldberg ME. The time course of the tonic oculomotor proprioceptive signal in area 3a of somatosensory cortex. J Neurophysiol 106: 71–77, 2011.





