Author manuscript; available in PMC: 2022 Jan 19.
Published in final edited form as: Annu Int Conf IEEE Eng Med Biol Soc. 2018 Jul;2018:4714–4719. doi: 10.1109/EMBC.2018.8513182

Models of Motor Learning Generalization

Pritesh N Parmar 1, James L Patton 1
PMCID: PMC8767419  NIHMSID: NIHMS1770137  PMID: 30441402

Abstract

This study used evidence from trial-by-trial errors to understand how humans generalize what they learn across different movement directions while reaching. We trained 15 healthy subjects to reach in six directions in the presence of challenging visuomotor distortions. We then tested a number of candidate models, suggested by the literature, of how the brain might use error to improve performance. Our cross-validated results point to a discrete affine model whose generalization, the influence of practice in one direction on neighboring directions, falls nearly to zero by 60 degrees away; subjects learned 6.25 times more from the error observed at a given movement direction than from errors at neighboring directions.

I. Introduction

Error feedback plays an important role in the acquisition of motor skills for goal-directed movements [1–3] by facilitating the learning of internal models, which allow the brain to optimize performance by anticipating and predicting the sensory consequences of motor commands. It is recognized that the brain incrementally adjusts its internal model in proportion to the current error (the learning signal). However, when errors are perceived consecutively across various contexts (e.g., different directions of reach or different arm configurations) during training, it is unclear whether the brain learns an independent internal model for each context using only the context-specific error, or a generalized internal model using the current error generalized across contexts.

In order to comprehensively learn a motor skill, exposure to multiple contexts is sometimes required (e.g., when learning anisotropic distortions). Generalization of learned skills has been observed across arms and across arm configurations [4–9]; these studies tested for generalization in novel contexts only after extensive training in one context. Researchers have also demonstrated that an internal model generalizes only locally [10–16]. It is not well understood how errors experienced across different contexts inform the update of the internal model. One can hypothesize that error perceived in context A updates only the internal model local to A: if the generalizability region of each internal model is narrow rather than broad across contexts, the brain must learn an independent internal model for each context.

Our study evaluated how errors perceived across different movement directions facilitate learning. We trained human subjects to reach in six directions under nonlinear visuomotor distortions. We tested their learning data against the three most plausible models: a movement-direction-specific model, in which errors are not shared across directions; a generalizing model, in which the current error is shared across all directions; and a mixed model, a weighted sum of the two. The outcomes favored the mixed model. Our results indicate that subjects learned 6.25 times more from the error observed at a given movement direction than from errors at neighboring directions. Moreover, the generalization of learning from one movement direction to its neighbors fell nearly to zero by 60 degrees away.

II. Procedure

A. Subjects

Fifteen right-handed, healthy human subjects (6 M, 9 F), aged 21–40 years, were recruited for this study after giving written informed consent approved by the local ethics committee. The subjects had no history of neurological, shoulder, or elbow disorders, and we excluded ambidextrous individuals. The experimental procedures involving human subjects described in this paper were approved by the Institutional Review Board.

B. Experimental Setup

Subjects sat in front of a manipulandum robot (Figure 1A), and we strapped their right wrist to the end effector (the handle) using a wrist brace. The manipulandum was a lightweight, low-friction robot with two rotational degrees of freedom [17]. Handle position was measured at 400 Hz using two optical encoders (model 25/054-NB17-TA-PPA-QARIS, Teledyne Gurley, Troy, NY). We supported each subject's elbow with a multi-link arm support (Basic Mobile Arm Support (MAS) Kit, North Coast) so that arm movements were planar. The arm support had three rotational degrees of freedom in the plane and low inertia.

Figure 1.

Experimental apparatus and protocol. (A) Participants reached for visual targets by moving the handle of the manipulandum, a planar robot (the horizontal screen and visual display are drawn semi-transparent for illustration purposes only). The learning tasks (C) were to adapt to various visuomotor distortions through practiced target reaching, with the visual error from the ideal straight-line path augmented as shown in (B). (K1 = gain level for current error feedback; K2 = gain level for bias; bias = error the subject produced during initial exposure to the learning task)

An opaque, rectangular white screen was positioned horizontally above the robot to block the subjects' view of their arm while interacting with the robot. Subjects were seated such that the horizontal screen prevented them from leaning forward, and they were instructed not to lean sideways. A 40-inch display showed the position of the handle (as a cursor) and the visual targets for reaching. The display was mounted directly above the robot, centered at eye level, and calibrated to represent the absolute spatial workspace of the handle (from −44.5 cm to +44.5 cm in x and from 22 cm to 72 cm in y of the robot coordinate frame).

C. Experimental Procedure

We seated subjects such that their right shoulder was directly in front of the robot's shoulder. Each subject was instructed to move the handle of the robot so as to bring the cursor to the center of a target with a single, quick, straight-line reach. The cursor was a 2.5 mm diameter white circle, and the targets were 4.5 cm yellow "+" signs. The reaching task consisted of moving the cursor from one target to the next (target-to-target reaching), with only the destination target for a trial shown on the display at any time. Targets were placed at the vertices of a 15 cm equilateral triangle. The visual locations of these targets on the display were fixed for all phases and for all of the nonlinear visuomotor distortions used in this experiment. The center of the triangle was placed at 0 cm in x and 47 cm in y of the robot coordinate frame and was oriented 75.243 degrees from the x-axis. We confirmed that this target set was within all subjects' reachable workspaces under all visuomotor distortions.

For each target reach, the onset of the initial launch of movement was detected from distance and speed thresholds (more than 1 cm from the start position and faster than 20 cm/s), and the end of the initial launch was detected from a speed threshold (slower than 5 cm/s). All thresholds were computed in cursor space. Once the end of the initial launch was detected, the target's "+" sign changed to a 4.5 cm "x" sign. At this point the trial was marked complete, and if the subject had missed the target (by more than 0.5 cm), they were asked to navigate the cursor to the target; throughout this navigation phase, the cursor trace from the initial launch remained displayed. An average initial launch speed between 24 and 36 cm/s was marked satisfactory by turning the target and the speed-feedback bar green; blue indicated slower speeds and red indicated faster speeds.
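Assuming a 400 Hz position stream (the encoder rate from Section II-B), the launch-detection logic can be sketched as follows; the thresholds come from the text, but the function and constant names are ours:

```python
import numpy as np

FS = 400.0  # Hz, assumed sampling rate of the cursor stream

def detect_launch(pos, start):
    """Return (onset_idx, offset_idx) of the initial launch.

    pos   : (N, 2) cursor positions in cm
    start : (2,) start position in cm
    """
    speed = np.linalg.norm(np.gradient(pos, 1.0 / FS, axis=0), axis=1)
    dist = np.linalg.norm(pos - start, axis=1)
    # Onset: more than 1 cm from start AND faster than 20 cm/s
    onset = np.argmax((dist > 1.0) & (speed > 20.0))
    # Offset: first sample after onset slower than 5 cm/s
    offset = onset + np.argmax(speed[onset:] < 5.0)
    return onset, offset
```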

For some trials (no-vision trials), the cursor was removed during the entire initial launch of movement, and the cursor trace for the initial launch was not displayed afterward.

During the learning phases, the cursor represented the subject's shoulder angle versus elbow angle instead of hand position; similar adaptation to a nonlinear visuomotor transformation was studied previously [18]. We could map the shoulder angle along the horizontal dimension (+x) of the display and the elbow angle along the vertical dimension (+y), or vice versa, and we could multiply either angle by −1 to show its mirror transformation on the display. As listed in TABLE I, varying how the shoulder and elbow angles mapped onto the display produced eight distinct nonlinear visuomotor transformations (learning tasks); a sketch of these mappings follows the table.

TABLE I.

Learning Tasks

Learning Task # | Along horizontal (+x) dimension of display | Along vertical (+y) dimension of display
1 | θS | θE
2 | θS | −θE
3 | −θS | −θE
4 | −θS | θE
5 | θE | θS
6 | θE | −θS
7 | −θE | −θS
8 | −θE | θS
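A minimal sketch of the eight task mappings in TABLE I. The minus signs in the table were reconstructed from the extraction, so treat the exact sign assignments as an assumption; the scaling constants come from the calibration given with equations (1)–(3) below (11 degrees of joint angle per 5 cm of display, with θS = 24°, θE = 111° at the display center), and the function names are ours:

```python
import numpy as np

SCALE = 5.0 / 11.0                       # cm of display per degree of joint angle
CENTER_ANGLES = np.array([24.0, 111.0])  # (theta_S, theta_E) fixed at display center
CENTER_XY = np.array([0.0, 47.0])        # display center in robot coordinates, cm

# Per task: (index of angle shown on x, sign on x, index on y, sign on y)
TASKS = [(0, +1, 1, +1), (0, +1, 1, -1), (0, -1, 1, -1), (0, -1, 1, +1),
         (1, +1, 0, +1), (1, +1, 0, -1), (1, -1, 0, -1), (1, -1, 0, +1)]

def joint_to_display(theta, task):
    """Map joint angles (deg, [theta_S, theta_E]) to display coordinates (cm)."""
    d = theta - CENTER_ANGLES
    ix, sx, iy, sy = TASKS[task - 1]
    return CENTER_XY + SCALE * np.array([sx * d[ix], sy * d[iy]])
```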

Shoulder and elbow angles were calculated using inverse kinematics [19]:

$$D \triangleq \frac{x^2 + y^2 - L_1^2 - L_2^2}{2 L_1 L_2}, \tag{1}$$

$$\theta_E = \tan^{-1}\!\left(\frac{\sqrt{1 - D^2}}{D}\right), \tag{2}$$

$$\theta_S = \tan^{-1}\!\left(\frac{y}{x}\right) - \tan^{-1}\!\left(\frac{L_2 \sin\theta_E}{L_1 + L_2 \cos\theta_E}\right), \tag{3}$$

where x and y are the subject's right wrist position in a shoulder-centered coordinate system, L1 and L2 are the subject's upper-arm and forearm lengths in meters, and θS and θE are the shoulder and elbow joint angles, respectively. Eleven degrees of joint angle represented 5 cm on the display, and θS = 24° and θE = 111° were fixed at the center of the display (0 cm in x, 47 cm in y of the robot coordinate frame).
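Equations (1)–(3) translate directly into code. This sketch uses atan2 for numerical robustness (a mild departure from the tan⁻¹ notation above); the function name is ours:

```python
import numpy as np

def inverse_kinematics(x, y, L1, L2):
    """Two-link planar inverse kinematics, eqs. (1)-(3) [19].

    x, y : wrist position in the shoulder-centered frame (m)
    L1, L2 : upper-arm and forearm lengths (m)
    Returns (theta_S, theta_E) in radians.
    """
    D = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)          # eq. (1)
    theta_E = np.arctan2(np.sqrt(1 - D**2), D)                 # eq. (2)
    theta_S = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta_E),
                                            L1 + L2 * np.cos(theta_E))  # eq. (3)
    return theta_S, theta_E
```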

The experiment consisted of several distinct phases, as illustrated in Figure 1C. After a brief familiarization (30 vision trials in the null visual environment, in which 1 cm of handle movement produced 1 cm of cursor movement), we assessed baseline performance with 30 no-vision trials in the null environment. Next, all subjects experienced eight different learning tasks (250 trials each), tasks 1 through 8 in order, with each learning phase followed by a 30-trial washout phase. All subjects experienced the same random sequence of reaching targets drawn from the set of six movement directions (three targets), and the numbers of trials in the different movement directions within a phase were nearly equal. No-vision trials occurred intermittently (one in four; never two in succession; randomly distributed across the trials).

The first six trials of each learning phase were special trials that assessed initial exposure to the new visuomotor distortion. Each of these six trials probed a new direction of reach and was a no-vision trial. To record a proper initial exposure, the cursor had to be at a specific start position for a particular reach direction when the visual distortion was applied. We therefore precalculated the corresponding start positions in movement space for all reach directions and all eight visual distortions; these start positions were the locations of the visual target set in movement space that the subjects would have reached had they fully learned the visuomotor distortions. At the beginning of these trials, subjects navigated the cursor in the null visual environment to a visual target presented at the location of a start position in movement space. Once they were at the start, the cursor jumped to its corresponding location in the joint coordinate system. Next, the target at the start position was blanked, the target at the end position for that trial was presented, and the subject was asked to reach. Once the end of the initial launch was detected for these initial-exposure trials, we switched back to the null visual environment so that the subject could navigate to the start position of the next trial. Because these initial exposures were no-vision trials, they provided no feedback about the visuomotor distortions, so subjects had no incentive to update their reaching strategy.

We administered two types of augmented error feedback (EA; Figure 1B): augmentation of the current error (gain levels K1 = 0, 1, 2, 3) and augmentation of the error bias (gain levels K2 = 0, 1, 2), where the error bias was the error a subject produced during initial exposure to the learning task. The error-feedback gain space thus contained 12 possible combinations of K1 and K2, which we denote as the set of EA coordinates. Five control subjects received normal error feedback, EA coordinate {1,0}, for all eight learning tasks. Each of the ten remaining subjects received an EA coordinate randomly chosen from the set for each learning task (an EA coordinate was never repeated within a subject, and {1,0} was excluded); the same random order of EA coordinates per task was repeated for every two subjects. The EA was applied only during the initial launch of movement. Once the end of the initial launch was detected, the augmented error feedback transitioned smoothly to the normal error feedback {1,0} within 50 ms using a sigmoid. Note that the order of the learning tasks was the same for all subjects; only the order in which the error augmentation was applied was randomized.
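A sketch of how the displayed error might be computed under this scheme, including the 50 ms sigmoid hand-off back to normal feedback; the sigmoid's steepness constant is our assumption:

```python
import numpy as np

def displayed_error(e, e0, K1, K2, t_since_end=None):
    """Augmented error feedback with a smooth return to normal feedback.

    e  : current lateral error; e0 : initial-exposure bias
    K1, K2 : augmentation gain levels
    t_since_end : seconds since the end of the initial launch (None = still launching)
    """
    e_aug = K1 * e + K2 * e0           # augmented feedback, per eq. (5)'s gains
    if t_since_end is None:            # during the initial launch: full EA
        return e_aug
    # Sigmoid blend from augmented {K1,K2} back to normal {1,0} over ~50 ms;
    # the 25 ms midpoint and 5 ms slope are illustrative choices.
    s = 1.0 / (1.0 + np.exp(-(t_since_end - 0.025) / 0.005))
    return (1 - s) * e_aug + s * e
```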

D. Data Analysis

The handle positions were transformed into the objective space (the visual display space) by first converting them to joint angles and then to their corresponding positions on the display (the joint-space trajectory; using the calibration described above). For the data analysis, we did not further transform these joint-space trajectories by the error-feedback gains that the subjects saw during the experiment.

Here, we define the feedforward error as the root-mean-square error between the joint-space trajectory over the first 150 ms of the initial launch of movement and the straight-line path between the start and goal positions, indexed by distance travelled. Because the initial launch could deviate to the left or to the right of the ideal straight line, we assigned the feedforward error a positive sign if the initial launch direction (calculated at 150 ms) was toward the same side as the initial-exposure movement, and a negative sign otherwise.
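A sketch of the signed feedforward-error computation under these definitions. Sampling over the first 150 ms rather than re-indexing by distance travelled is a simplification, and all names are ours:

```python
import numpy as np

def feedforward_error(traj, start, goal, exposure_side):
    """Signed RMSE of the launch relative to the ideal straight line.

    traj : (N, 2) display-space samples covering the first 150 ms of the launch
    start, goal : (2,) endpoints of the ideal straight-line path
    exposure_side : +1 or -1, side of the initial-exposure deviation
    """
    line = (goal - start) / np.linalg.norm(goal - start)
    rel = traj - start
    # Signed lateral (perpendicular) deviation via the 2-D cross product
    lateral = rel[:, 0] * line[1] - rel[:, 1] * line[0]
    rmse = np.sqrt(np.mean(lateral**2))
    # Positive if the launch (evaluated at 150 ms) deviates to the same
    # side as the initial-exposure movement
    side = np.sign(lateral[-1])
    return rmse if side == exposure_side else -rmse
```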

E. Estimation of Generalization

We tested three schemes (Figure 2) that model the generalization of learning across movement directions: direction-specific, generalizing, and mixed. In the direction-specific model, learning is independent for each movement direction. Here, we assessed how the perceived error at trial n ($\hat{e}_n$) affects the feedforward error at trial n + 1 ($e_{n+1}$):

$$e_{n+1} = e_n + f(\hat{e}_n). \tag{4}$$

Here, the perceived error was estimated as

$$\hat{e}_n = K_1 e_n + K_2 e_0, \tag{5}$$

where $K_1$ and $K_2$ were the augmentation levels the subject experienced, and $e_0$ was the error bias, the feedforward error during initial exposure. We tested the following structures:

$$\text{Linear: } f(\hat{e}_n) = B\,\hat{e}_n, \tag{6}$$

$$\text{Affine: } f(\hat{e}_n) = A + B\,\hat{e}_n. \tag{7}$$
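The direction-specific update, equations (4)–(7), is compact enough to state in code; the parameter values below are purely illustrative:

```python
def perceived_error(e_n, e0, K1, K2):
    return K1 * e_n + K2 * e0            # eq. (5)

def next_error(e_n, e_hat, A=0.0, B=-0.5):
    return e_n + A + B * e_hat           # eqs. (4), (7); A = 0 recovers eq. (6)

# Example: a 50-trial learning curve under normal feedback {1,0},
# starting from an illustrative 3 cm initial-exposure error
e, e0 = 3.0, 3.0
curve = []
for _ in range(50):
    curve.append(e)
    e = next_error(e, perceived_error(e, e0, K1=1, K2=0))
```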

Figure 2.

Three candidate models for the generalization of learning across six movement directions. Solid lines represent feedback using observed states (in these examples, for direction 3), and dashed lines represent feedback using hidden states.

In the generalizing model, learning in every movement direction is driven by the error currently perceived in the practiced direction. We define the generalized perceived error as:

$$\hat{g}_n = W(\theta)\,\hat{e}_n, \tag{8}$$

where $W(\theta)$ is the generalization weight between two movement directions separated by θ degrees, restricted to between −1 and 1. In this experiment there were six movement directions, so the possible angular separations were θ = {−120, −60, 0, 60, 120, 180} degrees. W(0) was fixed at 1; the others were free parameters in the optimization. Here, we used equations (4) to (7) with $f(\hat{g}_n)$ in place of $f(\hat{e}_n)$, relabeling the coefficient B as Z to indicate learning from the generalized error.

Finally, the mixed model is a weighted sum of the direction-specific and generalizing models. We assessed how the perceived error ($\hat{e}_n$) and the generalized perceived error ($\hat{g}_n$) at trial n affect the feedforward error at trial n + 1 ($e_{n+1}$) across movement directions:

$$e_{n+1} = e_n + f(\hat{e}_n) + Z\,\hat{g}_n. \tag{9}$$

Note that as subjects marched through the training phase, they perceived error for only one movement direction per trial. We therefore modeled the movement directions they did not observe as hidden states. The effect of EA on the perceived error, equation (5), was applied to the observed movement direction; for movement directions with hidden states, we used unaugmented error feedback.
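Putting equations (5), (8), and (9) together with this hidden-state bookkeeping gives the following sketch of one trial of the mixed model; the weights and rates shown are illustrative values in the spirit of the Results, not the fitted parameters:

```python
import numpy as np

# Illustrative generalization weights and learning rates (cf. Figure 4C, D)
W = {0: 1.0, 60: 0.11, -60: 0.11, 120: 0.0, -120: 0.11, 180: 0.0}
A, B, Z = 0.0, -0.59, -0.27
DIRS = np.arange(0, 360, 60)             # the six movement directions, deg

def mixed_step(e, d_obs, e0, K1, K2):
    """One trial of the mixed model, eq. (9), over six directions.

    e : length-6 array of feedforward errors (one per direction)
    d_obs : index of the practiced (observed) direction
    e0 : length-6 array of initial-exposure biases
    """
    e_hat = K1 * e[d_obs] + K2 * e0[d_obs]   # eq. (5): EA only on the observed dir
    e_new = e.copy()
    for d in range(6):
        sep = (DIRS[d] - DIRS[d_obs]) % 360
        if sep > 180:
            sep -= 360                        # wrap to {0, +/-60, +/-120, 180}
        # Hidden directions are driven by the unaugmented observed error
        drive = e_hat if d == d_obs else e[d_obs]
        g_hat = W[sep] * drive                # eq. (8)
        f = (A + B * e_hat) if d == d_obs else 0.0
        e_new[d] = e[d] + f + Z * g_hat       # eq. (9)
    return e_new
```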

Model parameters were estimated using nonlinear quasi-Newton least-squares regression within five-fold simulated annealing. This required an initial condition, e0, which was fixed at the subject's initial-exposure error. We used the feedforward errors from both vision and no-vision trials of the learning phases, and we removed observations where the EA level on the current error was zero (K1 = 0), because that condition allowed no variation in the perceived error with which to identify its relationship to the inter-trial change in error.
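The optimizer could be approximated with a multi-start nonlinear least-squares fit; this sketch substitutes scipy's least_squares with random restarts for the paper's quasi-Newton-within-simulated-annealing scheme, so it is a stand-in, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def fit(residual_fn, n_params, n_starts=5, seed=0):
    """Multi-start nonlinear least squares.

    residual_fn(params) should return the vector of one-step prediction
    errors (observed e_{n+1} minus the model's e_{n+1} from eqs. (4)-(9),
    rolled forward from the initial-exposure error e0).
    """
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(-1, 1, n_params)     # random restart
        sol = least_squares(residual_fn, x0)
        if best is None or sol.cost < best.cost:
            best = sol
    return best
```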

F. Statistical Analysis

The above models were fitted per movement direction, per learning task, and per subject. We calculated the coefficient of determination ($R^2$ and $R^2_{adj}$) and the Bayesian Information Criterion (BIC) for each regression fit, yielding 720 estimates of each measure per model (6 directions × 8 learning tasks × 15 subjects). We then calculated maximum-likelihood estimates (MLEs) of these measures from their kernel density estimates (Gaussian kernels with a 0.8 bandwidth for the kernel smoothing window). We also performed pairwise comparisons of $R^2_{adj}$ among the models using the right-tailed Wilcoxon signed-rank test at the 0.05 alpha level, corrected with the Bonferroni-Holm procedure.
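A sketch of these two statistical steps, assuming scipy. Note that gaussian_kde's bandwidth semantics differ from the smoothing-window bandwidth quoted above, so bw_method=0.8 is only an approximation, and the Holm step-down over the pairwise p-values is omitted here:

```python
import numpy as np
from scipy.stats import gaussian_kde, wilcoxon

def kde_mle(values, bw=0.8):
    """MLE of a measure (R^2, Radj^2, or BIC): peak of its Gaussian KDE."""
    kde = gaussian_kde(values, bw_method=bw)
    grid = np.linspace(min(values), max(values), 1000)
    return grid[np.argmax(kde(grid))]

def compare(radj_a, radj_b):
    """Right-tailed Wilcoxon signed-rank test: is model A's Radj^2 greater?"""
    return wilcoxon(radj_a, radj_b, alternative='greater').pvalue
```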

III. Results

During the baseline phase, all subjects reached for the targets in straight lines with nearly symmetric, smooth velocity profiles. All subjects' baseline motions were comparable, with a feedforward error of −0.05 ± 0.09 cm (mean ± 95% confidence interval). By the end of the washout phases (last six trials), subjects again reached straight toward the targets, with a feedforward error of 0.005 ± 0.08 cm. During initial exposure to the learning tasks, subjects produced substantial errors (absolute feedforward error of 3.11 ± 0.32 cm) that were characteristically distinct across learning tasks.

Across the learning phases, the initial error diminished as subjects practiced under the various EA conditions (Figure 3). We tested our three hypotheses of how learning from error generalizes across movement directions by regressing the models described earlier. The mixed model with affine structure performed best among the six models (p < 0.05; Figure 4A), and both generalizing models performed worst (p < 0.05). The MLEs of $R^2$ and $R^2_{adj}$ for the mixed model with affine structure were 0.16 and 0.12, respectively; for comparison, they were 0.09 and 0.05 for the direction-specific model and 0.05 and 0.02 for the generalizing model. The mixed model also had the lowest MLE of the BIC estimates (Figure 4B).

Figure 3.

Examples of the model fits to the learning data from one subject. Each subplot represents a different movement direction of one learning task. Circles are feedforward errors produced by the subject. (blue = direction-specific model; green = generalizing model; red = mixed model)

Figure 4.

Summary of regression model performance. A, B: Distributions of the adjusted R-squared ($R^2_{adj}$) and Bayesian Information Criterion (BIC). Each small dot represents $R^2_{adj}$ or BIC calculated for a learning curve from a particular direction, learning task, and subject. Thick horizontal bars show the median and interquartile range; white circles are maximum-likelihood estimates. Models indicated by arrowheads (A) have medians of $R^2_{adj}$ significantly greater than their pairs (right-tailed Wilcoxon signed-rank test; p < 0.05). C: Generalization weights between movement directions separated by θ degrees, equation (8), for all models with affine structure. Bar height in the radial direction represents the 95% confidence interval, and * marks values significantly greater than zero (p < 0.05; t-test). D: Learning rates associated with direction-specific error versus generalized error (B vs. Z) for each movement direction (indicated by number). Error bars are 95% confidence intervals. (blue = direction-specific model; green = generalizing model; red = mixed model)

For the generalizing model with affine structure, the generalization weight distributions across all movement-direction separations were symmetric and comparable (0.16 ± 0.02, excluding the fixed weight at zero degrees; Figure 4C) and greater than zero (p < 0.05; t-test). The learning rates across the different directions were also essentially identical (average Z value of −0.17 ± 0.03 per trial; Figure 4D).

For the mixed model with affine structure, the generalization weight distributions across movement-direction separations were asymmetric (Figure 4C). The generalization weights for +60 and −60 degrees of separation were comparable and greater than zero (average of 0.11; p < 0.05). The weight for +120 degrees of separation was not different from zero (p > 0.05), while that for −120 degrees was greater than zero (average of 0.11; p < 0.05). The average learning rates associated with the direction-specific error and the generalized error were −0.59 ± 0.10 (B) and −0.27 ± 0.07 (Z), respectively (Figure 4D).

IV. Discussion

This study investigated how errors experienced across different directions of reach contribute to learning internal models for nonlinear visuomotor distortions. We tested plausible candidate models against the subjects' learning data, and the results reveal that perceived errors chiefly train independent internal models for each movement direction. We did observe significant generalization of learning across movement directions, but the effects were modest at best.

The direction-specific error contributed more to learning than the generalized error: the learning rates of the mixed model for all movement directions lie to the left of the unity line between Z and B (Figure 4D). Furthermore, learning from the direction-specific error was about 6.25 times greater than learning from the generalized error. We estimated this ratio from the learning rates of the generalizing model, using the fact that the generalization weight was fixed at 1 at 0 degrees.
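Concretely (our reading of this computation), with the generalizing model's weight at zero separation fixed at 1 and its fitted neighbor weight of about 0.16 (Figure 4C), the ratio is

$$\frac{W(0^\circ)}{W(\pm 60^\circ)} \approx \frac{1}{0.16} = 6.25.$$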

Although a positive transfer of learning was detected as far as 120 degrees away from where the error was perceived (for the mixed model; Figure 4D), the generalization of learning decayed rapidly with angular distance across movement directions, already by 60 degrees. This is consistent with previous results showing that learning is restricted to the spatial locations where errors are perceived [10–16].

Donchin and colleagues estimated the shape of the generalization function and found it to be bimodal [11]: generalization was larger at 0 and 180 degrees than at intermediate angular separations between movement directions. We did not find larger generalization at 180 degrees than at intermediate angles. The primary difference between our study and theirs is that our learning task was visuomotor, whereas theirs was force based.

This narrowly generalizing, direction-specific learning provides evidence of a modular structure in the motor learning system [3, 20–22]. These investigators further suggested that a modular structure allows an interpolated response from multiple "expert" modules. In this study, however, we did not explore interpolation effects between our direction-specific models. The number of expert modules required, and their distribution across directions of reach, are still not well understood.

Many confounding factors might explain why only about 16% of the variance was accounted for (the MLE of $R^2$ for the mixed model). A separate analysis indicated that inter-trial noise grew with increasing levels of augmented error feedback (EA). Furthermore, our models did not account for any learning that might have taken place while subjects navigated to the target after the initial launch was over, with the visuomotor distortion still present.

Transfer of skill, the generalization of an internal model, is an important element of learning, in which motor commands reinforced in one context are exhibited in another. However, we found only a modest amount of generalization across movement directions at angular separations of ±60, ±120, and 180 degrees. This narrowly generalizing, direction-specific learning may be a characteristic of the motor system or a consequence of learning a challenging nonlinear task. It remains to be seen whether the region of generalization can be broadened using a simpler, linear task.

Acknowledgments

Research reported in this publication was supported by National Institutes of Health Award Numbers F31-NS100520 and 2R01-NS053606.

References

[1] Jordan MI and Rumelhart DE, "Internal world models and supervised learning," Machine Learning, pp. 70–74, 1991.
[2] Kawato M, "Feedback-error-learning neural network for supervised motor learning," Advanced Neural Computers, vol. 6, no. 3, pp. 365–372, 1990.
[3] Kawato M, "Internal models for motor control and trajectory planning," Curr Opin Neurobiol, vol. 9, no. 6, pp. 718–727, Dec 1999.
[4] Criscimagna-Hemminger SE, Donchin O, Gazzaniga MS, and Shadmehr R, "Learned dynamics of reaching movements generalize from dominant to nondominant arm," J Neurophysiol, vol. 89, no. 1, pp. 168–176, Jan 2003.
[5] Dizio P and Lackner JR, "Motor adaptation to Coriolis force perturbations of reaching movements: endpoint but not trajectory adaptation transfers to the nonexposed arm," J Neurophysiol, vol. 74, no. 4, pp. 1787–1792, Oct 1995.
[6] Malfait N, Shiller DM, and Ostry DJ, "Transfer of motor learning across arm configurations," J Neurosci, vol. 22, no. 22, pp. 9656–9660, Nov 2002.
[7] Parmar PN, Huang FC, and Patton JL, "Evidence of multiple coordinate representations during generalization of motor learning," Experimental Brain Research, vol. 233, no. 1, pp. 1–13, Jan 2015.
[8] Shadmehr R and Moussavi ZM, "Spatial generalization from learning dynamics of reaching movements," J Neurosci, vol. 20, no. 20, pp. 7807–7815, Oct 2000.
[9] Shadmehr R and Mussa-Ivaldi FA, "Adaptive representation of dynamics during learning of a motor task," J Neurosci, vol. 14, no. 5 Pt 2, pp. 3208–3224, May 1994.
[10] Berniker M, Franklin DW, Flanagan JR, Wolpert DM, and Kording K, "Motor learning of novel dynamics is not represented in a single global coordinate system: evaluation of mixed coordinate representations and local learning," J Neurophysiol, vol. 111, no. 6, pp. 1165–1182, 2014.
[11] Donchin O, Francis JT, and Shadmehr R, "Quantifying generalization from trial-by-trial behavior of adaptive systems that learn with basis functions: theory and experiments in human motor control," J Neurosci, vol. 23, no. 27, pp. 9032–9045, Oct 2003.
[12] Gandolfo F, Mussa-Ivaldi FA, and Bizzi E, "Motor learning by field approximation," Proc Natl Acad Sci U S A, vol. 93, no. 9, pp. 3843–3846, Apr 1996.
[13] Malfait N, Gribble PL, and Ostry DJ, "Generalization of motor learning based on multiple field exposures and local adaptation," J Neurophysiol, vol. 93, no. 6, pp. 3327–3338, Jun 2005.
[14] Sainburg RL, Ghez C, and Kalakanis D, "Intersegmental dynamics are controlled by sequential anticipatory, error correction, and postural mechanisms," J Neurophysiol, vol. 81, no. 3, pp. 1045–1056, Mar 1999.
[15] Thoroughman KA and Shadmehr R, "Learning of action through adaptive combination of motor primitives," Nature, vol. 407, no. 6805, pp. 742–747, Oct 2000.
[16] Witney AG and Wolpert DM, "Spatial representation of predictive motor learning," J Neurophysiol, vol. 89, no. 4, pp. 1837–1843, Apr 2003.
[17] Fayé IC, "An impedance controlled manipulandum for human movement studies," S.M. thesis, Department of Mechanical Engineering, Massachusetts Institute of Technology, 1986.
[18] Flanagan JR and Rao AK, "Trajectory adaptation to a nonlinear visuomotor transformation: evidence of motion planning in visually perceived space," J Neurophysiol, vol. 74, no. 5, pp. 2174–2178, Nov 1995.
[19] Spong MW, Hutchinson S, and Vidyasagar M, Robot Modeling and Control. Hoboken, NJ: John Wiley & Sons, 2006.
[20] Flanagan JR, Nakano E, Imamizu H, Osu R, Yoshioka T, and Kawato M, "Composition and decomposition of internal models in motor learning under altered kinematic and dynamic environments," J Neurosci, vol. 19, no. 20, p. RC34, Oct 1999.
[21] Ghahramani Z and Wolpert DM, "Modular decomposition in visuomotor learning," Nature, vol. 386, no. 6623, pp. 392–395, Mar 1997.
[22] Wolpert DM and Kawato M, "Multiple paired forward and inverse models for motor control," Neural Netw, vol. 11, no. 7–8, pp. 1317–1329, Oct 1998.
