Author manuscript; available in PMC: 2011 Oct 28.
Published in final edited form as: Exp Brain Res. 2008 Dec 2;193(2):315–321. doi: 10.1007/s00221-008-1630-3

Adaptation of sound localization induced by rotated visual feedback in reaching movements

Florian A Kagerer 1,, Jose L Contreras-Vidal 2
PMCID: PMC3203351  NIHMSID: NIHMS236584  PMID: 19048242

Abstract

A visuo-motor adaptation task was used to investigate the effects of such adaptation on the auditory-motor representation during reaching movements. We show that, following exposure to a rotated screen cursor-hand relationship, movement paths to auditory targets exhibited a pattern of aftereffects similar to that observed during movements to visual targets, indicating that the newly formed model of visuo-motor transformations for hand movement was available to the auditory-motor network for planning hand movements. This plasticity in human sound localization does not require active cross-modal experience, and retention tests indicated that the newly formed internal model does not reside primarily within the central auditory system, as has been suggested in past studies examining the plasticity of sound localization under distorted spatial vision.

Keywords: Sensorimotor adaptation, Internal model, Visuo-motor, Auditory, Human, Sensory integration

Introduction

Current theoretical concepts hold that visually guided reaching movements involve a transformation of gaze- or eye-centered coordinates into limb-centered coordinates. How the central nervous system (CNS) processes these transformations is still a matter of debate. Shadmehr and Wise (2005) convincingly argue that the CNS likely bases its computations during visually guided movements on a fixation-centered coordinate frame, which in turn is based on retinotopic and extraretinal signals and allows computations about target locations even when the target has left the visual field. In visually guided movements, the transformation of visual signals relating to hand position and target position into motor commands can be conceptualized as a ‘mapping’ describing the relationship between ‘visual space’ and ‘motor space’. This mapping can also be viewed as a set of ‘internal models’ or ‘neural representations’ of kinematics and dynamics that the CNS has learned over time (Abeele and Bock 2001; Imamizu et al. 2000). In this context, the visuo-motor map is necessarily adaptive: it needs to be updated upon intrinsic changes, which occur, for example, in relation to growth, and upon extrinsic changes, which occur when the environment changes. Previous studies have investigated this adaptive capacity by requiring participants to adapt to a novel screen cursor-hand relationship (Bock 2003; Kagerer et al. 1997; Krakauer et al. 2000).

The physiological substrates involved in this operation appear to include area 5 of the posterior parietal cortex (PPC), which represents target position in both eye and limb coordinates; the parietal reach region (PRR), which represents target position in eye coordinates (Batista et al. 1999); and the lateral intraparietal area (LIP), where cells have been shown to also code target locations in eye-centered coordinates, particularly during eye movements (Andersen and Buneo 2003; Connolly et al. 2003). The involvement of the PPC in the visuo-motor transformation process has been confirmed using neuroimaging (Diedrichsen et al. 2005; Girgenrath et al. 2008) and high-density EEG during adaptation to rotated visual feedback, the latter showing fronto-parietal shifts during adaptation (Contreras-Vidal and Kerick 2004).

Although neurons in area PRR and area LIP are considered to code primarily visual stimuli, some cells exhibit polysensory characteristics and respond to acoustic stimuli (Cohen and Andersen 2000), and code them also in a common, eye-centered reference frame. Spatial modification of vision, e.g., by means of spatial compression and prism adaptation, has been shown to induce adaptive changes in sound localization (Knudsen and Knudsen 1985; Zwiers et al. 2003). What is not known is whether and how formation of a new visuo-motor map would affect the existing mapping between auditory space and motor space during reaching in the absence of active, cross-modal experience.

Materials and methods

Participants

Nine adult participants (mean age 26.1 ± 7.2 years, right-handed) performed a center-out task on a digitizing board, with visual feedback in the form of starting position, targets, and movement paths provided on a computer monitor. A second group of students (n = 6, mean age 23.2 ± 3.6 years, right-handed) was included for a control experiment described below. All participants gave informed consent prior to performing the task; the study was approved by the Institutional Review Board at the University of Maryland.

Apparatus and procedure

Participants were seated in front of a table, with their heads stabilized by a chin-rest. They looked down onto a 14″ LCD screen placed flat on top of a wooden stand covering a digitizing tablet (Wacom Intuos®); the display thus lay directly above the actual pen position on the tablet, and the stand occluded vision of the moving arm and hand.

The LCD screen displayed a red dot (diameter 0.5 cm) in the center (‘home position’). Participants were required to position the pen, represented by a cursor on the screen, inside this home position. As soon as this criterion was met, a blue dot (diameter 0.5 cm) appeared at 24°, 90°, or 156° relative to the home position and 9 cm away from it. Participants were then required to move from the home position to the target position “as fast and as straight as possible”, during which the pen trace, the home position, and the target were displayed on the LCD screen. During the visual baseline condition, 30 trials (10 per target) were administered. During the exposure condition, the visual feedback of the pen was rotated clockwise by 60°, requiring participants to compensate by rotating the direction of their movements by the same amount counterclockwise; 126 trials (42 per target) were administered. The last condition (‘post-exposure’) consisted of 12 trials (4 per target) under normal, i.e., un-rotated, visual feedback in order to test for aftereffects.
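To make the feedback manipulation concrete, the following minimal sketch (our illustration, not the authors' software; function and variable names are assumptions) shows how a 60° clockwise rotation of the displayed cursor about the home position can be implemented:

```python
import numpy as np

def rotate_cursor(pen_xy, home_xy, angle_deg=-60.0):
    """Return the displayed cursor position for a given pen position.

    The pen displacement from the home position is rotated by angle_deg;
    with x to the right and y up, a negative angle is a clockwise rotation,
    mimicking the 60 deg clockwise feedback rotation of the exposure phase.
    """
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(home_xy) + rot @ (np.asarray(pen_xy) - np.asarray(home_xy))

# A pen movement of 9 cm straight 'up' from home is displayed rotated
# 60 deg clockwise, i.e., towards the upper right.
print(rotate_cursor([0.0, 9.0], [0.0, 0.0]))  # approx [7.79, 4.5]
```

Under such a mapping, participants must aim roughly 60° counterclockwise of a target for the displayed cursor to travel straight to it, which is the compensation the exposure phase requires.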

In addition to, and interspersed with, the visual conditions, an auditory condition was administered, during which participants were presented with tones from two small piezoelectric mini buzzers (4.1 ± 0.5 kHz, 67.5 dB at 30 cm and 9 V DC) placed at the corners of the monitor, at azimuths of 45° and 135° relative to the visually presented home position and 19 cm away from it. The auditory baseline condition was performed immediately after the visual baseline. Participants wore opaque goggles preventing them from seeing the sound sources. To start each trial, the hand holding the pen was guided by the experimenter to the same home position used during the visual condition. Participants were then presented with intermittent beeps from one of the two speakers and instructed to move the pen, again as fast and as straight as possible, towards the perceived sound source, with a movement amplitude similar to that during the visual condition; 30 baseline trials (15 per target) were administered. This condition was administered again immediately after the visual exposure condition; at that stage, 12 trials (6 per target) were given to test for aftereffects. The sequence of conditions was therefore: visual baseline, auditory baseline, visual exposure, auditory post-exposure, visual post-exposure. The stimulus sequence within each phase was pseudo-randomized, but the same for all participants. The second group of participants performed the same experiment, with the one difference that during visual post-exposure the pen trace was not visible. It is important to note that at no stage of the experiment was the auditory-motor relationship itself manipulated.

The position data recorded on the digitizer were sampled at 100 Hz and stored on a PC for later off-line analysis. The experiment, comprising a total of 210 trials, lasted about 25–30 min. A sub-sample of the first group (n = 6) and the second group were re-assessed 1 week later with just the auditory and visual post-exposure conditions.

Data analysis

The time series of each trial were subjected to a dual-pass, 8th-order Butterworth filter with a cutoff frequency of 5 Hz, and movement onset was determined using an algorithm that searched for the first zero crossing preceding the first point in the time series that exceeded 20% of the peak velocity. From these time series, the initial directional error (IDE, in degrees) was calculated, defined as the angular difference between the vector from home position to target and the actual movement vector at 80 ms after movement onset. This interval was chosen in order to capture the ‘planned’ movement direction, before visual feedback could induce corrections of the movement. A positive IDE indicated a counterclockwise deviation, and a negative IDE a clockwise deviation, from the home-position-to-target vector. To assess the visuo-motor adaptation, the root mean square error (RMSE, in mm) was additionally calculated, defined as the root mean square of the perpendicular distances, computed at each sample, between the actual movement path and the straight line between home position and target. For statistical analysis, the 210 data points per variable were reduced to 35 blocks, each representing the mean of six consecutive trials; group means were based on these individual means. For the baseline means, the last three blocks of each of the visual and auditory pre-exposure phases (18 trials) were used; post-exposure and retention means were based on the first block of trials.
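As a concrete illustration of this pipeline, the following minimal Python sketch operates on a 100 Hz N×2 array of x/y pen positions; the reading of the ‘dual-pass 8th-order’ filter as a 4th-order filter applied forward and backward, the onset heuristic, and all names are our assumptions rather than the authors' code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0  # digitizer sampling rate (Hz)

def low_pass(xy):
    """Dual-pass Butterworth low-pass filter with a 5 Hz cutoff.

    filtfilt applies the 4th-order filter forward and backward, yielding a
    zero-lag, effectively 8th-order dual-pass filter (our reading of the
    filter described in the text).
    """
    b, a = butter(4, 5.0 / (FS / 2.0))
    return filtfilt(b, a, xy, axis=0)

def tangential_speed(xy):
    """Tangential (path) speed from the x/y position time series."""
    vel = np.gradient(xy, 1.0 / FS, axis=0)
    return np.linalg.norm(vel, axis=1)

def movement_onset(speed):
    """Onset index: the near-zero local minimum of speed immediately
    preceding the first sample above 20% of peak speed (one plausible
    implementation of the 'first zero crossing' rule)."""
    i = int(np.argmax(speed > 0.2 * speed.max()))
    while i > 0 and speed[i - 1] < speed[i]:
        i -= 1
    return i

def initial_directional_error(xy, home, target, t_ms=80.0):
    """IDE in degrees: signed angle between the home->target vector and
    the movement vector 80 ms after onset; positive = counterclockwise."""
    i0 = movement_onset(tangential_speed(xy))
    i1 = i0 + int(round(t_ms / 1000.0 * FS))
    v_mov = xy[i1] - xy[i0]
    v_tgt = np.asarray(target, float) - np.asarray(home, float)
    ang = np.arctan2(v_mov[1], v_mov[0]) - np.arctan2(v_tgt[1], v_tgt[0])
    return float(np.degrees((ang + np.pi) % (2.0 * np.pi) - np.pi))

def path_rmse(xy, home, target):
    """RMSE: root mean square of the perpendicular distances between the
    movement path and the straight home->target line (mm if xy is in mm)."""
    d = np.asarray(target, float) - np.asarray(home, float)
    u = d / np.linalg.norm(d)
    rel = xy - np.asarray(home, float)
    perp = rel[:, 1] * u[0] - rel[:, 0] * u[1]  # signed perpendicular offset
    return float(np.sqrt(np.mean(perp ** 2)))
```

Block means for the statistical analysis would then be obtained by averaging each measure over consecutive groups of six trials.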

Results

During the visual baseline condition, movements were characterized by smooth and straight trajectories from the initial hand position to the targets, as shown in Fig. 1, a1. The rotation of the visual feedback by 60° clockwise induced the curved trajectories shown in Fig. 1, a3, which straightened out towards the end of the exposure phase (Fig. 1, a4). During exposure, the rotation-induced, initially high IDE decreased significantly across blocks of trials [F(1,20) = 7.25, P < 0.001], as did RMSE [F(1,20) = 4.60, P < 0.001]. The post-exposure phase, which re-established baseline conditions, was characterized by so-called aftereffects—curvilinear trajectories mirroring those observed during early exposure (Fig. 1, a6). Comparisons of the means of the first six post-exposure trials for IDE and RMSE with the respective baseline means showed that both were substantially higher post-exposure, indicating strong aftereffects as a result of the adaptation to the visual feedback rotation. When the robustness of the aftereffects was re-assessed after 1 week to measure retention of the previously updated internal model, IDE still exceeded the baseline mean significantly, as shown in Fig. 1, a8, and Fig. 2, whereas RMSE just failed to reach statistical significance. The descriptive statistics are shown in Table 1; all P-values are Bonferroni corrected.

Fig. 1.

Group data for visual and auditory conditions. Throughout panels a and b, visuo-motor data are displayed in purple and auditory-motor data in orange. Panel a shows the group-averaged movement paths: a1 visual baseline; a2 auditory baseline (the two dots above the movement paths represent the locations of the two sound sources); a3 early (visual) exposure, showing curved trajectories as a result of the visual feedback rotation; a4 late (visual) exposure, with the straightened trajectories indicating adaptation to the rotation; a5 auditory post-exposure, showing a directional bias of the trajectories in the same direction as the movements during the (visual) exposure; a6 visual post-exposure, with trajectories mirroring the curvature of those during early exposure, indicating aftereffects; a7 auditory retention; and a8 visual retention after 1 week. The scale denotes the movement size in cm and is the same for all plots. Panel b shows the IDE group means for each experimental phase. Note the large positive IDE values for both the auditory and visual conditions during post-exposure, compared to the baseline values, indicating aftereffects, and the negative values during exposure, indicating undershooting of the required movement angle. The color tiles in panel c show the mean peak velocity for each block of trials. Note the slower movement velocities during exposure and the return to baseline levels during post-exposure and retention

Fig. 2.

Individual IDE values and group means. a IDE values for each participant for pre- and post-exposure, as well as retention, for both visual and auditory conditions [n = 9 (6 for retention: participants 1, 2, 6, 7, 8, 9)]. b IDE values for each participant of the second group (n = 6). In this group, visual feedback of the pen trace was given during (visual) pre-exposure, but not during the visual post-exposure phase; auditory pre- and post-exposure were identical to those of the original group. For both groups, positive IDE values indicate that the initial movement was aimed in a direction more counterclockwise than required; negative values indicate a direction more clockwise than required

Table 1.

Statistical comparisons

                      Visual                                     Auditory
                Mean ± SD       t     CI            P        Mean ± SD       t     CI           P
IDE (degree)
 Pre-exposure   3.78 ± 4.21                                  6.63 ± 12.60
 Post-exposure  27.77 ± 9.30*   6.54  15.54–32.46  0.001     19.86 ± 11.21*  4.00  5.60–20.86   0.012
 Retention      13.59 ± 6.01*   4.27  3.95–15.92   0.024     1.35 ± 5.43     0.38  −6.77–9.13   0.2
RMSE (mm)
 Pre-exposure   0.37 ± 0.05
 Post-exposure  1.24 ± 0.41*    6.1   0.54–1.21    0.001
 Retention      0.69 ± 0.25     3.26  0.10–0.55    0.06
Vmax (cm/s)
 Pre-exposure   10.11 ± 2.10                                 15.10 ± 8.57
 Post-exposure  13.52 ± 4.95    2.18  0.20–7.01    >0.2      17.02 ± 9.14    0.88  3.10–6.95    >0.2
 Retention      11.04 ± 3.66    0.45  3.32–4.72    >0.2      16.94 ± 5.71    0.42  5.05–7.01    >0.2

Group means and standard deviations for IDE, RMSE, and Vmax for pre-exposure, post-exposure, and retention during the visual and auditory conditions, for the primary group. The retention values are based on six participants. Asterisks indicate a statistically significant difference with respect to the pre-exposure (baseline) values. Also shown are the t-values, confidence intervals (CI), and Bonferroni-corrected P-values

During the auditory condition, the only criterion for the blindfolded participants was the directional accuracy of the movement vector with respect to the required target vector, which is well captured by the IDE scores. The low mean IDE during the auditory baseline indicated that participants were able to locate the azimuth position of each sound source quite accurately, as shown in Fig. 1, a2, and Table 1. When participants performed under this condition again immediately after the visual exposure condition (Fig. 1, a5), the mean deviation from the target vector had increased significantly, in the same direction as the visually guided movements during the preceding adaptation to the visual feedback rotation. While this cross-modal aftereffect was clearly present immediately after the visuo-motor exposure, it was not retained 1 week later (see Fig. 1, a7, and Table 1). The individual IDE values for visual and auditory baseline, post-exposure, and retention for the first group are shown in Fig. 2a. In both conditions, all individual post-exposure values are higher than the respective baseline values, with the exception of participant four in the auditory condition.

Movement speed during auditory baseline was slightly, but non-significantly, higher than during visual baseline. For the visual condition, pairwise t-tests showed that peak velocity dropped substantially during exposure, and returned to baseline levels during post-exposure; similarly, peak velocity during auditory post-exposure was similar to auditory baseline. See Table 1 for peak velocity values.

Since the visuo-motor and auditory-motor conditions differed not only in the sensory stimuli but also in the availability of visual feedback of the pen movement, we additionally tested a second group of six participants who did not receive visual feedback of the pen trace during visual post-exposure and visual retention (the respective target was visible during the trial). This group performed essentially in the same way as the primary group, showing significant visual and auditory aftereffects post-exposure, with visual, but not auditory, aftereffects still present in the retention phase (IDE visual baseline/post-exposure: mean difference = 28.05°, t = 8.59, 95% CI = 19.65–36.44, P = 0.001; visual baseline/retention: mean difference = 7.23°, t = 4.85, 95% CI = 3.40–11.10, P = 0.009; IDE auditory baseline/post-exposure: mean difference = 24.46°, t = 5.46, 95% CI = 12.94–35.98, P = 0.006; auditory baseline/retention: mean difference = 0.37°, t = 0.14, 95% CI = −6.32–7.06, P = 1.00; all P-values Bonferroni adjusted; see also Fig. 2b for the individual performance). The only difference was that the movement paths during the visual post-exposure and visual retention phases did not exhibit the curvature present in these phases in the group that had received visual feedback, confirming that the straight movement paths during the auditory aftereffects were a result of the absence of visual feedback via the pen trace (see Fig. 3).

Fig. 3.

Mean movement paths for the second group. The numbering indicates the phases described in Fig. 1. This group performed without visual feedback of the pen trace during visual post-exposure and retention. Note the relatively straight movement paths during the post-exposure phase (6). The gray areas indicate the standard deviation from the mean movement path

Discussion

The present findings—exposure to a rotated screen cursor-hand relationship results in an immediate transfer of the visuo-motor adaptation effects to acoustically guided hand movements—are consistent with studies demonstrating adaptation of sound localization to distorted spatial vision in the barn owl (Knudsen and Knudsen 1985) and in humans (Zwiers et al. 2003), and indicate that the internal model formed during exposure to the visuo-motor distortion is immediately available to auditory-motor networks. In modeling terms, the transformation of the difference vector between visual target and hand position into the desired hand/joint kinematics and dynamics is used by the system when the task suddenly becomes auditory-motor in nature. It is unlikely that these results reflect mere ‘copying’, at a proprioceptive level, of the movement paths performed during the visual condition, because the movements towards the visual and auditory targets required different path angles. Moreover, the acoustically guided movements remained straight and did not exhibit the curvilinear characteristics of the movements performed during the visuo-motor adaptation period in the main experiment.

Several brain structures involved in polysensory convergence have been identified over the past decades, and it is known that this convergence happens both at early stages of sensory processing (Fu et al. 2003; Schroeder et al. 2003) and at higher levels (Duhamel 2002; Hyvarinen and Shelepin 1979). Among the higher structures, the PPC, with its position between sensory and motor areas, appears to have the capacity to link converging sensory input to motor output, thus providing a possible stage for interactions across different modalities and for the learning of internal models with cross-modal capabilities. It is very likely that the potential for polysensory convergence also exists in other cortical areas, particularly those that are part of the parieto-frontal networks (Burnod et al. 1999). A recent study in monkeys, using a visuo-motor task that also dissociated the visual feedback from movement execution, showed that visually perceived movements were represented in the ventral premotor cortex (Schwartz et al. 2004); a separate study, also in monkeys, identified a sub-area of the ventral premotor cortex as a polysensory zone in which neurons coded visual, tactile, and auditory responses (Graziano and Gandhi 2000).

Although our study did not explicitly address this, the results are consistent with recent findings in owls, emphasizing the importance of attention in adult animals (Keuroghlian and Knudsen 2007); by its very nature, the visual adaptation paradigm forced participants to closely attend to the movement path in order to hit the target, while performing the hand movements as fast and as straight as they could.

Studies using reaching to visual (Henriques et al. 1998) and acoustic (Pouget et al. 2002) stimuli have argued that sensory modalities use fixation-centered coordinates for re-mapping, based on findings of an overshoot of visual targets as a function of retinal eccentricity. These findings lend support to the view that the CNS performs direct transformational operations for reaching that may be computed in eye- (or rather fixation-) centered coordinates and that can be read out directly in hand or head coordinates (Buneo et al. 2002). The finding in our study that during retention only visual, but no auditory, aftereffects were present suggests that the acoustic-to-fixation-centered transform failed to consolidate, resulting in a less stable representation. Our findings do not lend direct support to the view that the internal model resides within the central auditory system (Zwiers et al. 2003), as aftereffects were absent in auditory retention trials, whereas aftereffects remained high during visual retention trials. An alternative, but less likely, explanation is that humans are less experienced with auditory-motor transformations. In the absence of continuing practice under the distorted environment, the internal model therefore becomes more specific to the input modality experienced during training, and thus to the context under which it was formed. In the light of recent findings on memory consolidation (Fenn et al. 2003; Smith et al. 2006), our data support the view that sleep contributes to the consolidation of procedural memory specific to the modality in which the training occurred—which in our study was the visuo-motor domain.

Interestingly, active cross-modal experience was not necessary to evoke adapted sound localization during hand movement, as aftereffects were observed even though subjects did not experience any auditory-visual discordance (e.g., during open-loop performance). Although our experiment was not designed to determine the hypothetical networks—sensory input, transformational, or motor output—underlying the observed effects, we suggest that the findings speak to the involvement of input-related networks. At the same time, potential contributions from output-related networks to the directional bias of the acoustically guided movements cannot be ruled out. In a recent study on visual-shift adaptation (Simani et al. 2007), the authors suggest that in tasks like this the aftereffects are composed of sensory recalibration and effector-specific (task-dependent) effects, with the two factors contributing approximately two-thirds and one-third, respectively, to the total aftereffect. Effector-specific effects have also been shown previously in a reaching task without intersensory conflict (Magescas and Prablanc 2006). In the context of the present study, this would mean that a portion of the aftereffect could be attributed to an effector-specific component common to both the visuo-motor and the auditory-motor transformations. Since the aftereffects found during retention were present only for the visual condition, but not for the auditory one, this would also suggest that the proposed effector-specific contribution did not last independently of the sensory recalibration contribution.

Since movement velocity during the post-exposure acoustically guided movements was comparable to the velocity shown during pre-exposure, we suggest that the influence across modalities affects the feed-forward (planning) component of the internal representation, whereas processes pertaining to movement execution do not seem to be affected.

Conclusions

Our findings indicate that a single session of exposure to a rotated screen cursor-hand relationship results in an immediate, albeit short-lasting, transfer of the visuo-motor training effects to acoustically guided hand movements. Specifically, we show that the acquisition of a new internal model of a visuo-motor distortion, induced through a rotation of the visual feedback, affects the representation of auditory-motor space for limb/hand movement. Without being actively manipulated, the auditory-motor representation was ‘tilted’ in the direction of the visually adapted movements after the adaptation period.

Acknowledgments

Supported in part by NIH R01HD42527 and NIH R03HD050372.

Contributor Information

Florian A. Kagerer, Email: fkagerer@umd.edu, Cognitive Motor Neuroscience Laboratory, Department of Kinesiology and Graduate Program in Neuroscience and Cognitive Science, University of Maryland School of Public Health, College Park, MD 20742, USA. Department of Physical Therapy and Rehabilitation Science, University of Maryland School of Medicine, Baltimore, MD 21201, USA

Jose L. Contreras-Vidal, Cognitive Motor Neuroscience Laboratory, Department of Kinesiology and Graduate Program in Neuroscience and Cognitive Science, University of Maryland School of Public Health, College Park, MD 20742, USA. Graduate Program in Bioengineering, University of Maryland School of Public Health, College Park, MD 20742, USA

References

  1. Abeele S, Bock O. Mechanisms for sensorimotor adaptation to rotated visual input. Exp Brain Res. 2001;139:248–253. doi: 10.1007/s002210100768.
  2. Andersen RA, Buneo CA. Sensorimotor integration in posterior parietal cortex. Adv Neurol. 2003;93:159–177.
  3. Batista AP, Buneo CA, Snyder LH, Andersen RA. Reach plans in eye-centered coordinates. Science. 1999;285:257–260. doi: 10.1126/science.285.5425.257.
  4. Bock O. Sensorimotor adaptation to visual distortions with different kinematic coupling. Exp Brain Res. 2003;151:557–560. doi: 10.1007/s00221-003-1553-y.
  5. Buneo CA, Jarvis MR, Batista AP, Andersen RA. Direct visuo-motor transformations for reaching. Nature. 2002;416:632–636. doi: 10.1038/416632a.
  6. Burnod Y, Baraduc P, Battaglia-Mayer A, Guigon E, Koechlin E, Ferraina S, Lacquaniti F, Caminiti R. Parieto-frontal coding of reaching: an integrated framework. Exp Brain Res. 1999;129:325–346. doi: 10.1007/s002210050902.
  7. Cohen YE, Andersen RA. Reaches to sounds encoded in an eye-centered reference frame. Neuron. 2000;27:647–652. doi: 10.1016/s0896-6273(00)00073-8.
  8. Connolly JD, Andersen RA, Goodale MA. FMRI evidence for a ‘parietal reach region’ in the human brain. Exp Brain Res. 2003;153:140–145. doi: 10.1007/s00221-003-1587-1.
  9. Contreras-Vidal JL, Kerick SE. Independent component analysis of dynamic brain responses during visuomotor adaptation. Neuroimage. 2004;21:936–945. doi: 10.1016/j.neuroimage.2003.10.037.
  10. Diedrichsen J, Hashambhoy Y, Rane T, Shadmehr R. Neural correlates of reach errors. J Neurosci. 2005;25:9919–9931. doi: 10.1523/JNEUROSCI.1874-05.2005.
  11. Duhamel JR. Multisensory integration in cortex: shedding light on prickly issues. Neuron. 2002;34:493–495. doi: 10.1016/s0896-6273(02)00709-2.
  12. Fenn KM, Nusbaum HC, Margoliash D. Consolidation during sleep of perceptual learning of spoken language. Nature. 2003;425:614–616. doi: 10.1038/nature01951.
  13. Fu KM, Johnston TA, Shah AS, Arnold L, Smiley J, Hackett TA, Garraghty PE, Schroeder CE. Auditory cortical neurons respond to somatosensory stimulation. J Neurosci. 2003;23:7510–7515. doi: 10.1523/JNEUROSCI.23-20-07510.2003.
  14. Girgenrath M, Bock O, Seitz RJ. An fMRI study of brain activation in a visual adaptation task: activation limited to sensory guidance. Exp Brain Res. 2008;184:561–569. doi: 10.1007/s00221-007-1124-8.
  15. Graziano MS, Gandhi S. Location of the polysensory zone in the precentral gyrus of anesthetized monkeys. Exp Brain Res. 2000;135:259–266. doi: 10.1007/s002210000518.
  16. Henriques DY, Klier EM, Smith MA, Lowy D, Crawford JD. Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci. 1998;18:1583–1594. doi: 10.1523/JNEUROSCI.18-04-01583.1998.
  17. Hyvarinen J, Shelepin Y. Distribution of visual and somatic functions in the parietal associative area 7 of the monkey. Brain Res. 1979;169:561–564. doi: 10.1016/0006-8993(79)90404-9.
  18. Imamizu H, Miyauchi S, Tamada T, Sasaki Y, Takino R, Putz B, Yoshioka T, Kawato M. Human cerebellar activity reflecting an acquired internal model of a new tool. Nature. 2000;403:192–195. doi: 10.1038/35003194.
  19. Kagerer FA, Contreras-Vidal JL, Stelmach GE. Adaptation to gradual as compared with sudden visuo-motor distortions. Exp Brain Res. 1997;115:557–561. doi: 10.1007/pl00005727.
  20. Keuroghlian AS, Knudsen EI. Adaptive auditory plasticity in developing and adult animals. Prog Neurobiol. 2007;82:109–121. doi: 10.1016/j.pneurobio.2007.03.005.
  21. Knudsen EI, Knudsen PF. Vision guides the adjustment of auditory localization in young barn owls. Science. 1985;230:545–548. doi: 10.1126/science.4048948.
  22. Krakauer JW, Pine ZM, Ghilardi MF, Ghez C. Learning of visuomotor transformations for vectorial planning of reaching trajectories. J Neurosci. 2000;20:8916–8924. doi: 10.1523/JNEUROSCI.20-23-08916.2000.
  23. Magescas F, Prablanc C. Automatic drive of limb motor plasticity. J Cogn Neurosci. 2006;18:75–83. doi: 10.1162/089892906775250058.
  24. Pouget A, Ducom JC, Torri J, Bavelier D. Multisensory spatial representations in eye-centered coordinates for reaching. Cognition. 2002;83:1–11. doi: 10.1016/s0010-0277(01)00163-9.
  25. Schroeder CE, Smiley J, Fu KG, McGinnis T, O’Connell MN, Hackett TA. Anatomical mechanisms and functional implications of multisensory convergence in early cortical processing. Int J Psychophysiol. 2003;50:5–17. doi: 10.1016/s0167-8760(03)00120-x.
  26. Schwartz AB, Moran DW, Reina GA. Differential representation of perception and action in the frontal cortex. Science. 2004;303:380–383. doi: 10.1126/science.1087788.
  27. Shadmehr R, Wise SP. The computational neurobiology of reaching and pointing. MIT Press; Cambridge: 2005.
  28. Simani MC, McGuire LM, Sabes PN. Visual-shift adaptation is composed of separable sensory and task-dependent effects. J Neurophysiol. 2007;98:2827–2841. doi: 10.1152/jn.00290.2007.
  29. Smith MA, Ghazizadeh A, Shadmehr R. Interacting adaptive processes with different timescales underlie short-term motor learning. PLoS Biol. 2006;4:e179. doi: 10.1371/journal.pbio.0040179.
  30. Zwiers MP, Van Opstal AJ, Paige GD. Plasticity in human sound localization induced by compressed spatial vision. Nat Neurosci. 2003;6:175–181. doi: 10.1038/nn999.
