Author manuscript; available in PMC: 2008 Nov 1.
Published in final edited form as: J Neurophysiol. 2007 Aug 29;98(5):2827–2841. doi: 10.1152/jn.00290.2007

Visual-shift adaptation is composed of separable sensory and task-dependent effects

MC Simani, LMM McGuire, PN Sabes
PMCID: PMC2536598  NIHMSID: NIHMS58702  PMID: 17728389

Abstract

Visuomotor coordination requires both the accurate alignment of spatial information from different sensory streams and the ability to convert these sensory signals into accurate motor commands. Both of these processes are highly plastic, as illustrated by the rapid adaptation of goal-directed movements following exposure to shifted visual feedback. Although visual-shift adaptation is a widely used model of sensorimotor learning, the multi-faceted adaptive response is typically poorly quantified. We present an approach to quantitatively characterizing both sensory and task-dependent components of adaptation. Sensory after-effects are quantified with “alignment tests” that provide a localized, two-dimensional measure of sensory recalibration. These sensory effects obey a precise form of “additivity”, in which the shift in sensory alignment between vision and the right hand is equal to the vector sum of the shifts between vision and the left hand and between the right and left hands. This additivity holds at the exposure location and at a second generalization location. These results support a component transformation model of sensory coordination, in which eye-hand and hand-hand alignment relies on a sequence of shared sensory transformations. We also ask how these sensory effects compare to the after-effects measured in target reaching and tracking tasks. We find that the after-effect depends on both the task performed during feedback-shift exposure and on the testing task. The results suggest the presence of both a general sensory recalibration and a task-dependent sensorimotor effect. The task-dependent effect is observed in highly stereotyped reaching movements, but not in the more variable tracking task.

Keywords: human psychophysics, reaching, motor learning, vision, proprioception, alignment

Introduction

Artificial shifts in the visual feedback of the arm, either due to optical prisms or to virtual feedback systems, result in a rapid adaptation of visually guided reaching (von Helmholtz, 1925; Held and Gottlieb, 1958; Ghahramani et al., 1996). For convenience, we will refer to both of these experimental paradigms as visual-shift adaptation. While visual-shift adaptation is one of the best-studied examples of sensorimotor learning, there remains substantial disagreement over some of its basic behavioral properties, and there is almost nothing known about the underlying physiological mechanisms. A major reason for these difficulties is that the adaptive response to shifted visual feedback is multifaceted, and there have not been adequate tools for characterizing and quantifying the component adaptive responses. Here, we argue that this form of adaptation is made up of at least three quantifiable component responses: two forms of sensory recalibration and a task-dependent sensorimotor effect.

Since the pioneering work of Held and colleagues (Held and Gottlieb, 1958; Held and Freedman, 1963), the principal measure of adaptation has been the reach after-effect, i.e. the difference in endpoint errors between pre- and post-exposure reaches. The after-effect is the only measure of adaptation used in most recent studies of adaptation to shifted virtual visual feedback (Held and Durlach, 1993; Ghahramani et al., 1996; Kitazawa et al., 1997; Vetter et al., 1999; Krakauer et al. 2000), and it has been the exclusive measure used in physiological studies of this effect (Baizer and Glickstein, 1974; Weiner et al., 1983; Martin et al., 1996; Baizer et al., 1999; Kurata and Hoshi, 1999, although see Lee and van Donkelaar, 2006, for a counter example). As we will argue here, however, the reach after-effect is unlikely to be an accurate measure of any individual component of adaptation, and it is therefore of limited use for comparison with the specific physiological changes that accompany visual-shift adaptation. Better measures of adaptation are required in order to attain the goals of identifying the neural circuits that are responsible for the individual components of visual-shift adaptation and determining how sensorimotor feedback drives learning within and across these components.

Here we present a psychophysical approach to measuring and analyzing the components of visual-shift adaptation. The main methodological tool is a set of alignment tasks that are designed to measure the sensory calibration between the hand and eye or between the right and left hands (van Beers et al., 1996, 2002). Subjects are asked to align the fingertip of their unseen hand either with a visual target or with the other unseen hand. Since no time constraints are imposed and subjects are free to adjust their hand position until satisfied, errors in this task reflect mis-calibration of the relevant sensory alignment, rather than any movement-specific errors. Subjects performed these alignment tasks both before and after an exposure period, in which they performed visually guided arm movements with their right hand while viewing shifted visual feedback of that hand. The change in performance between the pre- and post-exposure tests is our measure of sensory recalibration, i.e. the component of the overall visual-shift adaptation attributable to changes in sensory localization or inter-sensory calibration.

We perform two experiments using this tool. In the first experiment, we study the nature of sensory recalibration. We demonstrate that sensory recalibration can be decomposed into shifts in two component transformations, one between eye and body (visual localization) and one between body and arm (proprioceptive localization). The vector sum of these two effects is shown to be equal to the overall shift in alignment measured between the eye and arm. This additivity holds at both the location in the workspace where subjects were exposed to the shifted feedback and at a generalization location. In the second experiment, we perform a similar analysis of the traditional reach after-effect. Reaching requires sensory localization of the target as well as other downstream computations, many of which are task-specific, e.g. trajectory planning. By comparing the reach after-effect to our alignment measure of sensory recalibration, we attempt to identify which of these computations are affected by the visual shift. We demonstrate that the reach and alignment after-effects agree when exposure to the visual shift occurs during a non-reaching task. In contrast, when exposure occurs during reaching, the reach after-effect is larger and the two effects are not significantly correlated. These results suggest that the reach after-effect can be decomposed into a general sensory recalibration effect and other effects that are exposure-task dependent.

Materials and Methods

Subjects

This study was approved by the UCSF Committee on Human Research. Twenty-eight right-handed participants gave written informed consent and were paid for their participation. Subjects were naive to the purpose of the experiment. All subjects were healthy, ranged in age from 20 to 34 years, and had normal or corrected-to-normal vision. Twelve subjects (10 female, 2 male) participated in two experimental sessions for Experiment 1. Eleven subjects (8 female, 3 male) participated in Experiment 2: ten in two experimental sessions, and one only in the tracking exposure session (described below). When subjects participated in two experimental sessions, the sessions were always separated by at least 24 hours.

Experimental Setup

The experiments made use of a virtual reality setup, illustrated in Figure 1. Subjects were seated with their right arm resting on an 8 mm thick, horizontal table. The right wrist and index finger were immobilized in the neutral position by a brace and splint. For Experiment 1, the left arm was located below the table, and when in use was placed with palm facing up and the index finger touching the bottom surface of the table. The left wrist and index finger were not restrained. In Experiment 2, subjects held a mouse with their left hand which they used to signal the start or end of certain trials (see below). In all experiments the torso was lightly restrained by a harness attached to the chair back.

Figure 1.

Experimental setup. A: Side view of the virtual feedback setup. B: Top view of the table with experimental landmarks and a sample reach path. The exposure box was displayed to the subject.

Subjects viewed visual objects that were displayed on a rear projection screen by a 1024 × 768 pixel liquid crystal display projector. The position of a horizontal mirror was calibrated so that images on the projection screen appeared to lie just above the plane of the table. In addition, the room lights were dimmed and subjects' view of their arms and shoulders was blocked by the mirror and a drape. The positions of both index fingers were tracked at 120 Hz with an infrared tracking system (Optotrak, Northern Digital, Waterloo, Ontario). This setup allowed us to compute the hand position and velocity online and to provide real-time visual feedback of the hand. In both experiments, visual feedback of the position of the right hand was given in the form of a 1 cm white disk centered at the tip of the index finger or shifted from this location by a fixed vector (the visual feedback shift).

Experimental Design

Experiments were organized into two or three blocks of trials. Each block consisted of an exposure phase and a test phase. During the exposure phase, subjects performed repetitions of a single exposure task which included (possibly shifted) visual feedback of the right index finger. During the test phase, subjects performed a variety of test tasks in which no visual feedback was available. Periodic exposure trials were also added to the test phase to maintain adaptation. In the first block, feedback during the exposure trials was unshifted, and the test trials measured the pre-shift baseline performance. The second block was the shift block, with an 80 mm leftward or rightward visual feedback shift during exposure trials. Experiment 2 also included a third block with unshifted feedback that provided a second measure of baseline performance. This third block was included in order to remove any shift-independent drift in performance, e.g. due to fatigue, that might differ between the exposure tasks (see below). The after-effects that constitute the main experimental measures in this paper were computed as the difference in test performance between the shift block and the baseline block(s).

In the following sections we describe the different test and exposure tasks that were used in these experiments, followed by the specific trial sequences for each experiment.

Exposure Tasks

Reach Exposure Task

Subjects were required to reach accurately with their right arm to a visual target starting from a fixed unseen start location. For each subject, the start location was chosen by projecting the midpoint between the eyes into the plane of the table and then moving approximately 150 mm in the sagittal (+y) direction. At the beginning of a reach trial, subjects were guided to the start position (“start” in Figure 1) without explicit visual feedback of either the fingertip position or the start position (the “arrow-field” method, Sober and Sabes, 2005). Specifically, an array of 16 arrows appeared at a randomized location in the workspace. The direction and magnitude of the arrows were adjusted online to indicate the direction and relative distance from the right fingertip to the start location. When the fingertip had moved to within 5 mm of the start location, the arrows disappeared.
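
As an illustration of how such a guidance cue could be generated, the sketch below (Python/NumPy) gives every arrow in a fixed array of anchor points the direction and a scaled length of the vector from the current fingertip position to the start location. The anchor layout, the scaling gain, and the function name are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def arrow_field(fingertip, start, anchors, gain=0.25, stop_radius=5.0):
        """Sketch of the arrow-field guidance cue.

        Every arrow shares the direction of the fingertip-to-start vector,
        with length scaled to the remaining distance (gain is an assumed
        value).  Returns an (n, 2, 2) array of (base, tip) pairs, or None
        once the fingertip is within 5 mm of the start (arrows disappear).
        """
        anchors = np.asarray(anchors, float)
        vec = np.asarray(start, float) - np.asarray(fingertip, float)
        if np.linalg.norm(vec) < stop_radius:
            return None                      # within 5 mm: hide the arrows
        tips = anchors + gain * vec          # identical arrows at each anchor
        return np.stack([anchors, tips], axis=1)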

As soon as the arrow field disappeared, a reach target appeared in the form of an open circle, 20 mm radius, at a random location within a 100 mm × 60 mm box centered 300 mm sagittally from the start location (“exposure box” in Figure 1). For two subjects the exposure box could not be reached comfortably, and so a reach distance of 250 mm or 270 mm was used. After a variable delay of 500-1500 msec, the target flashed and a tone sounded, indicating that the movement should begin. Subjects were instructed to then move rapidly and accurately to the target. During the movement, the feedback disk was illuminated when the instantaneous tangential velocity of the fingertip dropped to either 95% (early visual feedback) or 15% (late visual feedback) of the peak velocity value for that movement. In order to complete the movement, the center of the feedback disk had to be within 5 mm of the center of the reach target and the fingertip had to stop moving. Typically, this required corrective movements after the end of the primary reach. Finally, subjects were required to hold the final position, with visual feedback, for 500 msec after completion of the movement. The target and the feedback were extinguished at the end of the trial.
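
The velocity-contingent feedback onset can be stated compactly in code. The sketch below (an assumption about one possible implementation, not the authors' code) finds the first sample after the velocity peak at which the tangential speed falls to the early (95% of peak) or late (15% of peak) criterion.

    import numpy as np

    def feedback_onset_index(speed, condition="late"):
        """Return the sample index at which visual feedback is switched on.

        `speed` is a 1-D array of tangential fingertip speeds for one reach.
        Early feedback: first drop to 95% of peak speed after the peak;
        late feedback: first drop to 15% of peak speed.  Sketch only.
        """
        frac = 0.95 if condition == "early" else 0.15
        peak = int(np.argmax(speed))
        after = np.where(speed[peak:] <= frac * speed[peak])[0]
        return peak + int(after[0]) if after.size else None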

Tracking Exposure Task

Subjects were required to track a moving target with the tip of their right index finger with continuous visual feedback. Subjects first moved their right hand to the “neutral position” to the right of the experimental workspace. This triggered the appearance of a filled, green 20 mm diameter circle at the start location for the trial, which was chosen to be the point that the target reached 667 msec into its trajectory (see below). Subjects then moved their right index finger to this start location (without visual feedback), and clicked the mouse with their left hand when they were ready to begin tracking. At this point, visual feedback of the fingertip was illuminated, and the target appeared in the form of an open green 20 mm diameter circle. The target then began moving along the target trajectory. By design, the target circle intersected the start circle 667 msec into its trajectory, and the two circles merged into a single filled target circle that continued along the target trajectory. This arrangement allowed subjects to estimate the initial position and velocity of the target before it intersected the start position, facilitating accurate tracking at the beginning of the trial. Subjects were instructed to try to keep their index finger on the filled target circle as it moved along its trajectory. Targets followed a randomly generated Lissajous trajectory: sinusoidal x- and y-components with random phases and frequencies chosen uniformly between 0.2 and 0.3 Hz. Trajectories were centered in the exposure box and scaled to fit within the box. The total duration of the trajectory, starting from the mouse click, was six seconds.
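
The target trajectories can be reproduced from the parameters given above. The following minimal sketch generates one such Lissajous path; the sampling rate and the exact scaling rule (half the box dimensions as amplitude) are assumptions.

    import numpy as np

    def lissajous_trajectory(box_center, box_size=(100.0, 60.0),
                             duration=6.0, rate=120.0, rng=None):
        """Random Lissajous tracking target (positions in mm).

        Sinusoidal x and y components with independent random phases and
        frequencies drawn uniformly from 0.2-0.3 Hz, centered on the
        exposure box and scaled to stay inside it.  The amplitude rule is
        an assumption about how the scaling was done.
        """
        rng = np.random.default_rng() if rng is None else rng
        t = np.arange(0.0, duration, 1.0 / rate)
        freqs = rng.uniform(0.2, 0.3, size=2)         # Hz, one per axis
        phases = rng.uniform(0.0, 2 * np.pi, size=2)
        x = 0.5 * box_size[0] * np.sin(2 * np.pi * freqs[0] * t + phases[0])
        y = 0.5 * box_size[1] * np.sin(2 * np.pi * freqs[1] * t + phases[1])
        return np.column_stack([x, y]) + np.asarray(box_center, float)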

Test Tasks

No visual feedback was given during any of the test tasks described below. All test tasks were performed at two locations: the exposure target, at the center of the exposure box, and a second generalization target, also located 300 mm from the reach start location but 45° to the right of mid-line (see Figure 1). There are two motivations for including a generalization target. First, since the exposure tasks are also used as test tasks, the after-effects measured at the exposure location could include effects other than those typically considered to be forms of sensorimotor adaptation, e.g. the use of remembered visual or proprioceptive positions from exposure trials. The generalization target trials control for such effects. Second, since we expect that the after-effect vectors will differ between the two target locations (Ghahramani et al., 1996; Vetter et al., 1999; Krakauer et al. 2000), the inclusion of the second target provides a more stringent test of our hypotheses.

Reach Test

Reach test trials were identical to reach exposure trials except for two differences. First, no visual feedback was given at any point in the trial. Second, test reaches were considered completed when the fingertip stopped moving, as long as the finger had moved at least 150 mm from the start location.

Tracking Test

Tracking test trials were identical to tracking exposure trials except that visual feedback was not given at any point during the trial.

Right-to-Visual Alignment Test

For the right-to-visual alignment test, subjects were asked to accurately align their unseen right fingertip with a visual target. The goal is to measure changes in the calibration between visual localization and the felt position of the right hand, i.e. eye-arm recalibration. This and the following two alignment tasks were based on the methods of van Beers et al. (1998). We want these tests to measure changes in sensory calibration, independent of stereotyped behaviors such as reaching. This was done in part by allowing subjects as much time as needed to complete each trial.

At the beginning of each trial, the right fingertip was guided by the arrow-field method to a start position located near the target (red arrows indicated right hand movement). When the fingertip reached the start location, a 20 mm diameter red disk appeared at the target location. After a variable delay (250 - 750 msec), a tone instructed subjects to place their right index fingertip on the visual target as accurately as possible. Trials were completed when subjects either verbally indicated to the experimenter that they were satisfied with the alignment (Experiment 1) or clicked a mouse button with their left hand (Experiment 2).

The selection of the start position is important in this test, as it could bias the alignment endpoint. In order to avoid such a bias, we chose each start point uniformly from an annulus centered on the endpoint of the previous right-to-visual alignment trial (and similarly for the two alignment trials described below). For the very first such trial, the distribution was centered on the actual target location. In Experiment 1, the annulus had a 30 mm inner and 50 mm outer radius. In Experiment 2 radii of 40 mm and 60 mm were used. These values represent a compromise between starting too close to the target, in which case subjects might be reluctant to move at all, and starting too far, in which case the motor response might be too similar to the stereotyped reaching task.
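
For concreteness, the start-point rule can be written as follows. Whether “uniformly from an annulus” means uniform in area (as assumed here) or uniform in radius is not specified in the text.

    import numpy as np

    def sample_start_point(prev_endpoint, r_inner=30.0, r_outer=50.0, rng=None):
        """Draw a start position uniformly (in area) from an annulus
        centered on the previous alignment endpoint.  Radii in mm are the
        Experiment 1 values; Experiment 2 used 40 and 60 mm.
        """
        rng = np.random.default_rng() if rng is None else rng
        r = np.sqrt(rng.uniform(r_inner ** 2, r_outer ** 2))
        theta = rng.uniform(0.0, 2 * np.pi)
        offset = r * np.array([np.cos(theta), np.sin(theta)])
        return np.asarray(prev_endpoint, float) + offset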

Left-to-Visual Alignment Test

The second type of alignment trial measures adaptive changes in the calibration between visual localization and the felt position of the (unexposed) left hand. The protocol for left-to-visual trials was identical to right-to-visual trials except that green arrows and a green target indicated that the left hand was to be used instead of the right hand.

Right-to-Left Alignment Test

The third type of alignment trial measures adaptive changes in the calibration between proprioception of right and left hands. At the beginning of each trial, the left hand was guided to the target location on the underside of the table with a green arrow field. The right hand was then guided with red arrows to the start location on the top surface of the table. After a variable delay, an audible “go” tone was played. Subjects were instructed to then adjust the position of their right arm, without moving the left arm, until the right and left fingertips were felt to be aligned. Trials were completed when subjects verbally indicated to the experimenter that they were satisfied with the alignment. No visual target was provided during these trials. Furthermore, subjects were instructed to close their eyes throughout the alignment period, and compliance was monitored on every trial by the experimenter.

Experiment 1

The goal of this experiment was to analyze the sensory recalibration that follows exposure to shifted visual feedback. In particular, we wanted to test various model predictions (see below) for the relationship between the adaptive recalibrations seen in the right-to-visual, left-to-visual, and right-to-left alignment tests. Experiment 1 consisted of two blocks: the pre-exposure baseline block and the shift block. A brief practice period was also conducted at the beginning of each session.

Exposure phase

The exposure phase of Experiment 1 consisted entirely of reach exposure trials at the exposure target location. In Block 1, the exposure phase had 15 trials with unshifted feedback. In Block 2 the exposure phase had 50 shifted-feedback trials, with the shift ramped up to its full 80 mm magnitude over the first 15 trials. The timing of the visual feedback (early or late) and the direction of the feedback shift (80 mm, leftward or rightward) were varied across sessions. This 2 × 2 design yielded a total of 4 possible exposure conditions. The 12 subjects participating in Experiment 1 were equally distributed across these 4 exposure conditions during the first experimental session. For the second experimental session, each subject was exposed to conditions of feedback timing and shift direction that were opposite to those in their first session.

Test phase

The test phase in Experiment 1 consisted of four trial types, the reach test and the three alignment tests. A single test sequence consisted of eight trials in randomized order, with one repetition of each of the four tests at both the exposure target and the generalization target. In order to maintain adaptation, each test sequence was followed by two exposure trials of the same type used in the exposure phase. The test phase of each block consisted of 12 test sequences, for a total of 120 trials (96 test trials and 24 refresh exposure trials).

Experiment 2

Experiment 1 focuses on the sensory effects of visual-shift adaptation. In Experiment 2 we ask whether these sensory effects can be dissociated from other components of the adaptive response, such as changes in motor planning or execution. Since these latter components are likely to be task-dependent, we hypothesized that the relative magnitudes of these two types of responses would depend on the task performed during the exposure period.

Exposure phase

In Experiment 2 there were two types of sessions, differing only in the exposure task: in reach exposure sessions all exposure trials were reach trials with late visual feedback, and in tracking exposure sessions all exposure trials were tracking trials. In both cases, the exposure phase in Block 1 had 15 trials with unshifted feedback, and the exposure phase in Blocks 2 and 3 had 50 trials. In the latter cases, the feedback shift was ramped up (Block 2) or down (Block 3) over the first 15 trials of the exposure phase.

Test phase

Three test tasks were used in this experiment: the reach, tracking, and right-to-visual alignment tests. During each test sequence, subjects performed six trials in randomized order, with one repetition of each of the three tests at both exposure and generalization targets. As in Experiment 1, two additional refresh exposure trials were added following each test sequence. The test phase of each block consisted of 10 test sequences, for a total of 80 trials (60 test trials and 20 refresh exposure trials). A brief practice period was also conducted at the beginning of each session.

Data Analysis

Movement endpoint

For alignment trials, the endpoint was defined as the last time the tangential velocity fell below 2 mm/sec. For the reach task, we wanted to focus on the primary reaching movement and to discard the secondary corrective movements that were sometimes seen. Therefore, the movement end was defined as the first time after the velocity peak that the velocity fell below 2 mm/sec. We also performed an offline visual inspection of every reach trajectory. In cases where a clear corrective sub-movement began before the velocity fell to this criterion, the endpoint was set to the time of the minimum in the velocity profile that separated the two sub-movements.
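
The two endpoint rules can be summarized in code. The sketch below assumes 120 Hz position samples in mm and omits the velocity smoothing and the manual inspection of corrective sub-movements described above.

    import numpy as np

    def tangential_speed(position, dt=1.0 / 120.0):
        """Tangential speed (mm/sec) from an (n, 2) array of positions."""
        return np.linalg.norm(np.diff(position, axis=0), axis=1) / dt

    def reach_endpoint(position, dt=1.0 / 120.0, thresh=2.0):
        """First sample after the velocity peak where speed < 2 mm/sec."""
        speed = tangential_speed(position, dt)
        peak = int(np.argmax(speed))
        below = np.where(speed[peak:] < thresh)[0]
        idx = peak + int(below[0]) if below.size else len(position) - 1
        return position[idx]

    def alignment_endpoint(position, dt=1.0 / 120.0, thresh=2.0):
        """Last sample at which the speed falls below 2 mm/sec."""
        speed = tangential_speed(position, dt)
        below = np.where(speed < thresh)[0]
        idx = int(below[-1]) if below.size else len(position) - 1
        return position[idx]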

Adaptation after-effects

For each test and target, we computed an adaptation after-effect by subtracting the average positional error for the baseline block(s) from the average error for the shift block. For the alignment and reach tests, the positional error was defined as the vector difference between the movement endpoint (as defined above) and the target location. For the tracking task, the positional error was measured as the average vector difference between the fingertip and target positions. This average was computed over the final four seconds of each trial, after correcting for any lag in the fingertip position. (A non-negative lag value was estimated by minimizing the sum-square-error between the fingertip position and the lagged target position during the last four seconds of the trial).
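
One way to implement the lag correction for the tracking error is sketched below; the grid search over candidate lags and the maximum lag considered are assumptions, since the paper only states that a non-negative lag was found by minimizing the sum-squared error.

    import numpy as np

    def tracking_error(finger, target, dt=1.0 / 120.0, max_lag_s=0.5):
        """Mean vector tracking error after correcting for a non-negative lag.

        finger, target: (n, 2) arrays over the analysis window (e.g. the
        final four seconds of a trial).  The lag is chosen to minimize the
        sum-squared error between the fingertip and the lagged target.
        """
        best_lag, best_sse = 0, np.inf
        for lag in range(int(max_lag_s / dt) + 1):
            f = finger[lag:]
            t = target[:len(target) - lag]
            sse = float(np.sum((f - t) ** 2))
            if sse < best_sse:
                best_lag, best_sse = lag, sse
        f = finger[best_lag:]
        t = target[:len(target) - best_lag]
        return best_lag * dt, (f - t).mean(axis=0)   # lag (sec), mean error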

For each test and target, we computed the standard error of the reach endpoint vectors in each block. The standard errors for the adaptation after-effects are then given by the sum of the endpoint standard errors for the two blocks. In addition, a permutation test (Good, 2000) was used to test for significant adaptation (after-effect greater than zero, p < 0.05) of each test at each target.
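
A minimal version of such a permutation test is sketched below. The choice of test statistic (the length of the mean after-effect vector) and the number of permutations are assumptions, since the paper cites Good (2000) without further detail.

    import numpy as np

    def permutation_test(errors_shift, errors_base, n_perm=10000, rng=None):
        """Permutation test for a nonzero 2-D after-effect.

        errors_shift, errors_base: (n, 2) arrays of x,y positional errors
        for the shift and baseline blocks of one test and target.  Block
        labels are shuffled, and the length of the mean difference vector
        is used as the test statistic.
        """
        rng = np.random.default_rng() if rng is None else rng
        observed = errors_shift.mean(axis=0) - errors_base.mean(axis=0)
        pooled = np.vstack([errors_shift, errors_base])
        n_shift, count = len(errors_shift), 0
        for _ in range(n_perm):
            idx = rng.permutation(len(pooled))
            diff = (pooled[idx[:n_shift]].mean(axis=0)
                    - pooled[idx[n_shift:]].mean(axis=0))
            count += np.linalg.norm(diff) >= np.linalg.norm(observed)
        return observed, (count + 1) / (n_perm + 1)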

In order to facilitate direct comparison of after-effects following leftward and rightward visual shifts, the sign of the after-effects was inverted (reflected about the origin) for cases where the visual shift was in the rightward direction. Since rightward shifts should yield leftward (-x) after-effects and vice versa, this convention means that an adaptive after-effect will always have a positive component along the x-axis. This convention is used in all figures and analyses. Finally, we will use the symbols REACH, TRACK, RV, LV, and RL to refer to the after-effects measured with the reach, tracking, right-to-visual, left-to-visual, and right-to-left test tasks, respectively.

Models of inter-sensory calibration

Consider the computations required to perform the right-to-visual alignment task. Subjects must compare the visually perceived location of the target to the felt position of the arm, with the latter based on proprioception, efference-copy, and other non-visual sensory cues. Some form of sensory transformation must be performed in order to make this comparison.

One model for this process is a series of component sensory transformations between eye, body, and arm-based representations, the “component transformation model” (Figure 2A). In this model, the visual representation of the target Xvis might first be converted into a body-centered representation Xbody by integrating information such as the position and orientation of the eye and head. Next, Xbody would be transformed into an arm-based representation of the target location Xright, which could then be compared directly with the felt position of the right arm. Alternatively, the comparison could be made in the body-centered or visual representations. In any of these cases, however, comparing the visual target with the felt position of the arm requires both an eye-body and a body-arm transformation.

Figure 2.

Two models of sensory coordination. Top panels show the sensory transformations required for performing the alignment test tasks. A spatial variable X, i.e. the location of the fingertip or target, can have multiple neural representations including Xvis, a visual- or eye-based representation, Xbody, a body-centered representation, and the arm-based representations Xleft or Xright. A,B: Schematic diagrams of the sensory transformations used in the alignment test tasks under each model. The transformations outlined in red are those involved in visual coordination of the right arm, and are thus the most likely sites of adaptive changes during the experimental exposure blocks. C,D: Predicted effects of adaptation to shifted feedback on the three alignment tasks. Boxes represent the additive effects of adaptive recalibration in the corresponding sensory transformations: EB, eye-body recalibration; BA, body-arm (right) recalibration; and EA, direct eye-arm recalibration. Colored arrows represent the comparisons required for each alignment task, along with the value of the predicted after-effects: LV for the left-to-visual task, RV for right-to-visual, and RL for right-to-left.

We next consider how this comparison changes following exposure to shifted visual feedback. As part of our model, we assume that adaptive recalibration will only occur in transformations that are used during the exposure period. In our experiments, exposure trials consist entirely of visually guided movements with the right hand, and so adaptation would only occur in the eye-body and body-arm (right) transformations (highlighted in red in Figure 2A). We further assume that recalibration at each sensory transformation has a cumulative, additive effect on the transformed signals. For example, eye-body recalibration (EB) would result in the transformation Xbody = Xvis + EB, and body-arm (right) recalibration (BA) would result in Xright = Xbody + BA (Figure 2C). Since we will allow these additive effects to vary with the target location, this assumption can be viewed as a first order approximation to the true non-linear response and is thus a fairly unrestrictive simplification. Under this model, the after-effect of the right-to-visual task will be the sum of the two component recalibrations: RV = EB + BA. Similarly, if we assume that there is no adaptation in the body-arm (left) transformation, then the after-effects for the left-to-visual and right-to-left tasks should be LV = EB and RL = BA (Figure 2C). These equalities can be combined into a testable prediction involving only experimentally measured quantities:

RV = RL + LV. (1)

In words, the after-effect measured by the right-to-visual task should be the vector sum of the after-effects measured in the other two tasks. We will refer to this as the “additivity prediction” of the component transformation model, and it will be tested in Experiment 1.

A key element of the component transformation model is that representations and transformations are shared across tasks. For example, the same body-centered representation and body-arm (right) transformation are used for the right-to-visual and right-to-left alignment tasks (Figure 2A, C). As an alternative, we consider the “direct transformation model” (Figure 2B, D). In this model, there are separate transformations (or sequences of transformations) that allow for direct comparison of any commonly-coordinated sensory streams. For example, visually guided control of the right arm would make use of a dedicated (or “direct”) eye-arm (right) transformation. Direct transformations might arise from the self-organization of sensory representations (i.e., unsupervised learning) given that eye and arm movements are correlated during the execution of natural movements (Sailer et al., 2000; Neggers and Bekkering, 2001; Land and Hayhoe, 2001; Ariff et al., 2002). In this model, the eye-arm (right) transformation would be the most likely site of recalibration following our exposure trials. Thus, the right-to-visual alignment task would be a direct measure of the recalibration: RV = EA, and the other alignment tests would show no after-effect (Figure 2D). Even if there were transfer of learning to the direct eye-arm (left) and arm-arm transformations, however, this model predicts no particular relationship between the after-effects measured in the three alignment tasks.

Within-subjects tests of the additivity prediction

The additivity prediction states that the after-effects of the three alignment tests should obey a linear equality, Equation 1, which can be rewritten as

0 = RV − RL − LV = (ERV,2 − ERV,1) − (ERL,2 − ERL,1) − (ELV,2 − ELV,1) (2)

where ERV,2 is the mean positional error in the right-to-visual test in Block 2 (shift block), ERV,1 is the mean error in Block 1 (baseline block), and similarly for the other terms. Note that Equation 2 has a special form: it is a linear combination of sample means, with the coefficients summing to zero. This means that we can use a contrast analysis for 2-dimensional data (Stevens, 1996) to test the hypothesis that the additivity prediction does not hold (i.e. that the vector sum in Equation 2 is significantly different from zero). We performed this analysis separately for each subject, experimental session, and test target. We used the same approach to perform two control comparisons in which the right-to-visual after-effect is compared directly to the reach and right-to-left after-effects.
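
As an illustration, a Wald-type approximation to this two-dimensional contrast test is sketched below; it treats the six test-by-block samples as independent and is offered as an assumption about one reasonable implementation, not as the exact procedure of Stevens (1996).

    import numpy as np
    from scipy import stats

    def additivity_contrast_test(rv2, rv1, rl2, rl1, lv2, lv1):
        """Test whether RV - RL - LV differs from zero (Equation 2).

        Each argument is an (n, 2) array of x,y endpoint errors for one
        alignment test and block.  Returns the 2-D contrast vector, a
        Wald statistic, and an approximate p-value (chi-square, 2 df).
        """
        groups = [rv2, rv1, rl2, rl1, lv2, lv1]
        coefs = [1, -1, -1, 1, -1, 1]          # signs from Equation 2
        contrast = sum(w * g.mean(axis=0) for w, g in zip(coefs, groups))
        # covariance of the contrast, assuming independent samples
        cov = sum(w ** 2 * np.cov(g.T) / len(g) for w, g in zip(coefs, groups))
        wald = float(contrast @ np.linalg.solve(cov, contrast))
        p = float(stats.chi2.sf(wald, df=2))
        return contrast, wald, p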

Subject exclusion criteria

Datasets were excluded from analysis based on either of two criteria. First, if a subject did not exhibit significant adaptation at the exposure location in the test version of the exposure task, data from that session were rejected. Second, after completing the experimental session, all subjects were asked if they ever felt that the visual feedback was not aligned with their finger. If subjects reported noticing the feedback shift, data from that session were rejected.

In Experiment 1, one session was excluded by the second criterion. In addition, we found post hoc that in three sessions, subjects made exceptionally large corrective sub-movements (about 10 cm) following the end of each reach trajectory, making it difficult to quantify the after-effect. For these three sessions, only the reach test trials were removed from subsequent analysis. In total, 12 late visual feedback and 11 early visual feedback sessions were included in analyses of the alignment tests in Experiment 1; 10 late visual feedback and 10 early visual feedback sessions were included in the reach test analyses.

In Experiment 2, from a total of 21 sessions, 3 sessions were excluded by the first criterion, one reach exposure session and two tracking exposure sessions. As a result, 9 sessions each of reach exposure and tracking exposure were included in the analyses of Experiment 2.

Results

Experiment 1

The goal of Experiment 1 was to determine whether there is a quantitative relationship between the visual-shift induced after-effects measured with the three sensory alignment tasks.

Adaptation after-effects

The performance of a sample subject on all test conditions in the early visual feedback condition is shown in Figure 3. The upper eight panels show the raw positional endpoint errors, separated by test and target location. The difference in the mean errors between Block 2 and Block 1 (black arrows) is the measured adaptation after-effect. All tests showed highly significant after-effects at both targets (Permutation test, p<0.001).

Figure 3.

Positional errors and after-effects for all test trials from a sample experimental session with early visual feedback. Top four rows: Open circles are raw positional errors for each trial in Block 1 (baseline); filled circles are data for Block 2 (post-exposure). Each panel has data from a single test and target. Black arrows represent adaptation after-effects. Bottom row: After-effect vectors for the four tests at each target are grouped together into a single plot. LV and RL are plotted “head-to-tail” in order to provide a visual test for the additivity hypothesis, RV=LV+RL. Ellipses are the 95% confidence limits for the after-effect or after-effect sum.

This sample subject is typical of the entire dataset, as shown in Figure 4. With only a few exceptions (open symbols in Figure 4), significant adaptation was observed at both the exposure and generalization targets for the right-to-visual and right-to-left alignment tests and the reach test. Furthermore, the group mean of these three after-effects is significantly different from zero for both targets. In contrast, the adaptation after-effects observed in the left-to-visual test (LV, second row of Figure 4) are smaller and are only significant in approximately half of the experimental sessions. When LV is significant, it tends to point more along the y-axis, either towards or away from the subject, despite the fact that the visual shift was always along the x-axis. The group mean for this after-effect is also not significant at either target. We note that the lack of significance in the LV effect is not simply due to more variable performance in this task. In fact, the endpoint variability does not differ greatly across the four tests used in this experiment (supplementary Figure S1).

Figure 4.

After-effects for all subjects. Each data point represents the after-effect vector for a single subject, with separate panels for each test and target. Significant after-effects (permutation test, p < 0.05) are marked with filled symbols. Ellipses represent the covariance across subjects of the after-effect for a given test and target (drawn at the 95th percentile). Solid ellipses signify group means that are significantly different from zero (MANOVA, p < 0.05).

Test comparisons and the additivity prediction

As described in Methods and illustrated in Figure 2, if sensory coordination between the eyes and the arms relies on a series of shared component transformations, then the alignment test after-effects should obey the simple linear relationship, RV = RL + LV, which we have called the “additivity prediction”. The bottom two panels of Figure 3 show the after-effect vectors for each test type for the sample dataset discussed above, plotted together by target.

The RL and LV vectors have been placed head-to-tail to represent the vector sum of these effects. At both the exposure and generalization targets, we see that RV is approximately equal to that sum, i.e. the additivity prediction of the component transformation model appears to hold. At both targets, a contrast analysis was unable to reject the additivity prediction (p = 0.93, exposure target; p = 0.70, generalization target; see Methods for details). As a comparison, the right-to-visual and reach after-effects were significantly different at both targets (contrast analysis, p < 0.005).

These sample data are representative of the entire dataset. In order to quantify these effects, we plotted RV against the sum RL + LV, separately for each target location and each spatial dimension (Figure 5, top row). The additivity prediction is supported by the high correlation coefficients and near-unity regression slopes. The correlations between RV and RL + LV are highly significant for both spatial dimensions at both targets (p < 10^-7). This prediction holds even at the subject-by-subject level: in all but two cases (filled symbols in the upper-right panel of Figure 5) RV was not statistically distinguishable from the sum of the other two tests (contrast analysis, p < 0.05).

Figure 5.

Tests of the additivity prediction and control comparisons for all subjects. Top Row: Each data point represents after-effects measured in a single session at the exposure (right column) or generalization (left column) target (N=23). Blue circles, x-component of the after-effects; red squares, y-component. Filled symbols represent cases where the RV and RL + LV vectors differed significantly (contrast analysis, p < 0.05). Colored lines are the best linear fits to the data; solid lines are for significant regressions (p < 0.05). Middle Row: Same as top row, but RV is compared to RL (N=23). Bottom Row: Same as top row, but RV is compared to REACH. The dashed blue line represents a non-significant regression (N=20).

One potential concern regarding these results is that the LV after-effect is generally smaller in magnitude than those measured from the other tests. The apparent additivity might thus reflect the simple equality RV = RL, which could be due to some form of acquired response bias for the right arm. In order to address this possibility, we consider how well either the RL or REACH after-effects alone predict RV (Figure 5, middle and bottom rows). Across subjects the correlations between these tests are much weaker than for the additivity prediction. Within sessions, contrast analyses revealed significant differences (p < 0.05) between the compared after-effects in roughly half of all cases (21/46 target-by-session comparisons for RL; 24/40 for REACH).

In order to further test the additivity prediction, we fit the linear regression model RV = a RL + b LV for each of the two targets and spatial dimensions. Here we describe the key results of this analysis (complete results are given in Supplementary Table S1). In all four cases the values of a and b are both significantly different from zero (p < 5 × 10^-4, with one exception: p < 0.05 for b, y-dimension, generalization target). Furthermore, the values of these weights are not significantly different from unity (which is the additivity prediction), except for the case of the y dimension at the exposure target, where the regression weight for RL was less than unity (a = 0.65, p = 0.03 for the test against unity). This shows that both the left-to-visual and right-to-left tasks play an important role in the additivity and provides additional support for the prediction. Furthermore, these findings rule out a simple response bias as an explanation for the additivity.
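
A sketch of this regression, with tests of each coefficient against zero and against unity, is given below. Ordinary least squares without an intercept is an assumption about the exact model used.

    import numpy as np
    from scipy import stats

    def additivity_regression(rv, rl, lv):
        """Fit RV = a*RL + b*LV for one spatial dimension and target.

        rv, rl, lv: 1-D arrays of after-effects across sessions.  Returns
        the coefficients, their standard errors, and p-values for the
        tests a,b = 0 and a,b = 1 (the additivity prediction).
        """
        X = np.column_stack([rl, lv])
        coef, _, _, _ = np.linalg.lstsq(X, rv, rcond=None)
        resid = rv - X @ coef
        dof = len(rv) - X.shape[1]
        sigma2 = float(resid @ resid) / dof
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        p_vs_zero = 2 * stats.t.sf(np.abs(coef / se), dof)
        p_vs_one = 2 * stats.t.sf(np.abs((coef - 1.0) / se), dof)
        return coef, se, p_vs_zero, p_vs_one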

Non-collinear effects

The after-effects measured with the left-to-visual task are generally not parallel to the direction of the visual shift (see Figure 4). In fact, most of the significant LV effects (8/10) contain a greater component in the y-dimension than in the x. Given this marked non-collinearity with the visual shift, one might ask whether this change really reflects an adaptive response to the visual shift.

We first note that the effects appear to reflect a true change in sensory alignment, and do not arise from sampling or measurement noise. This claim is based on two observations. First, in 10 of 24 sessions we observed a statistically significant effect at the 95% confidence level, many more than would be expected by chance due to sampling noise. Second, the directions of the LV after-effects measured at the two targets are highly correlated across sessions, and these directions do not correlate with the axis of greatest measurement variability (see supplementary Figure S3 for more details).

We next ask why we might see adaptive effects that are not collinear with the visual shift. One idea is that adaptation of a sensory modality should proceed more easily or more rapidly along the directions in which that modality is less precise (Ghahramani et al. 1997; van Beers et al., 2002). We can test this hypothesis by estimating the two-dimensional covariance ellipses of the visual and proprioceptive sensory modalities from our task performance variabilities at the exposure target (van Beers et al., 2002). As predicted by the argument above, there is a correlation between the direction of greatest visual uncertainty and the direction of the LV effect (circular association ρT = 0.269, p = 0.032; see Fisher, 1993, and supplementary Figure S3 for more details). Furthermore, a comparable degree of correlation was observed between the axis of greatest right-hand proprioceptive uncertainty and the angle of the LV effect (ρT = 0.168, p = 0.021). These correlations suggest that the non-collinearity between the visual shift and the sensory after-effects may reflect an optimal learning strategy, and could indeed be adaptive.

Effect of feedback timing and target location

Since feedback timing has previously been reported to affect the relative magnitudes of visual and proprioceptive adaptation (Uhlarik and Canon, 1971; Redding and Wallace 1990, 1992), we included both an early feedback condition (feedback turned on just after peak velocity) and a late feedback condition (feedback turned on late in the deceleration phase). A MANOVA analysis was performed to determine the effect of the visual feedback timing (early vs. late) on each of the four after-effect measures.

We found that the timing of the feedback had a significant and consistent effect on all three alignment after-effects (Figure 6A). However, the effect appears to be primarily a rotation of the after-effects: a separate ANOVA of the magnitude of the after-effects showed no significant effect of feedback timing.

Figure 6.

Effect of feedback timing and target location on adaptation after-effects. (A) Each arrow represents the mean after-effect across sessions for a given test and feedback timing: solid lines, early feedback; dashed lines, late feedback. All feedback-timing effects were significant except the REACH effect (MANOVA: RV, p < 0.005; LV, p < 0.05; RL, p < 0.001; REACH, p = 0.057). (B) Effect of target location: solid lines, early feedback; dashed lines, late feedback. Only the reach effect was significant (MANOVA, p < 0.001).

We also tested whether the effects were different at the exposure and generalization targets (Figure 6B). There was a highly significant effect of target on the reach after-effect (2D MANOVA and ANOVA on effect magnitude, p < 0.001). The three alignment tests, on the other hand, showed no significant differences. On average, the alignment tests showed a slight rotation across targets, comparable to that observed for the reach after-effect, but an ANOVA on alignment effect angle was not significant.

Experiment 2

Our predictions for Experiment 1 were based on a model in which visual-shift adaptation drives sensory recalibration. Furthermore, we have interpreted our alignment tests to be measures of those sensory changes. However, all of our tests are necessarily sensorimotor in nature. In Experiment 2, we ask how the alignment after-effect compares with the after-effects measured in two other sensorimotor tasks (target reaching and target tracking) following exposure to shifted visual feedback during these same two sensorimotor tasks.

We begin by showing that the movement kinematics are quite different across the three tasks. We then show that the measured after-effect depends on both the exposure task and the test task. Finally, we argue that the results point to two separable components of the adaptive response: a general sensory recalibration and a task-dependent sensorimotor effect.

Task Kinematics

Three test tasks were used in Experiment 2: the right-to-visual alignment test, the reach test, and the tracking test. These tasks were designed to require very different movement kinematics, and these differences are evident in the sample trajectories shown in Figure 7. Reaching movements were highly stereotyped, with nearly straight paths and stereotypically bell-shaped velocity profiles (Hollerbach and Atkeson, 1987).

Figure 7.

Sample trajectories from one subject for each of the test measures in Experiment 2. Three sample movement paths for each of the reaching (A), alignment (C), and tracking (E) tests. In all plots, the black circle is the 20 mm diameter, visually displayed target, drawn to scale. In E, the target trajectory is shown in dotted lines, and the target circle is drawn at the end of its trajectory. Velocity profiles are shown for all reaching (B) and alignment (D) tests at the exposure target in Block 1 for the sample subject. Profiles are aligned at the first time-step in each trial where velocity exceeded 20 mm/sec. Colored velocity profiles correspond to the sample paths shown on the left. Note the different scales across panels.

In contrast, the much shorter movements in the right-to-visual alignment test were also much more variable. The paths were typically not straight, and the velocity profiles were multi-peaked, suggestive of the presence of multiple sub-movements. Movements in the tracking test were highly variable by design, as the target trajectory was different on each trial (three example paths are shown in Figure 7E). A more detailed kinematic analysis supports the conclusion that the sensorimotor output is quite different across the three test tasks (Supplementary Material, Figure S4).

Adaptation after-effects and test comparisons

For each subject, we computed the adaptation after-effect for each combination of test, target, and exposure condition. The averages across subjects are shown in Figure 8. Following reach exposure, the after-effect measured with the reach test appears to be larger, on average, than that measured with the other two tests, especially at the exposure target (Figure 8, top panels). In contrast, there is little difference between the tests following tracking exposure (Figure 8, bottom panels).

Figure 8.

Mean adaptation after-effects in Experiment 2. Each data point is the mean after-effect vector across subjects for a particular test; ellipses represent the standard error of the mean, plotted at the 95% confidence limit (N=9). Each panel contains data for a single target and exposure condition. Note that in this and the following plots, after-effects for rightward visual shifts are inverted (reflected about the origin).

These impressions were quantified with pairwise comparisons of the three tests, using the x-component of the adaptation after-effects as the dependent measure (Figure 9). Following reach exposure, the reach after-effect was significantly greater than the tracking or alignment effects (paired t-test: p < 0.01 at the exposure target, p < 0.05 at the generalization target), yet these latter two effects were indistinguishable (Figure 9, top row). The average reach after-effect was 50% larger than the alignment after-effect at the exposure target, and 43% larger at the generalization target. On the other hand, following tracking exposure the after-effects measured with the three tests were generally indistinguishable. The one exception is that the tracking test had a significantly larger after-effect than the reach test (p = 0.03) at the exposure target. While these results were computed using two baseline blocks (Blocks 1 and 3, see Methods), no qualitative difference was seen when Block 1 alone was used in the analyses.

Figure 9.

Test differences in Experiment 2 quantified with pairwise comparisons across subjects. Each bar represents the mean difference (± standard error) in the x-component of the adaptation after-effect for the pair of tests noted at the left. Each panel presents data from a single target and exposure condition. Paired t-tests determined significance of test differences (*, p < 0.05; **, p < 0.01; N=9).

The increased magnitude of the reach after-effect following reach exposure suggests that this test measures an additional exposure-dependent component of adaptation beyond what is measured with the other two tests. This interpretation is further supported by an analysis of the correlations across subjects between the x-components of the various after-effects. Following tracking exposure, the reach after-effect is highly correlated with both other measures, and the regression line between the tests is not significantly different from identity (Figure 10, bottom panels). In contrast, following reach exposure the reach after-effect is larger (as described above) and not significantly correlated with the other tests (Figure 10, top panels). These results suggest that sensory recalibration is the primary source of the reach after-effect following tracking exposure, and that a separate task-dependent effect contributes to the reach after-effect following reach exposure.

Figure 10.

Scatter plots comparing the x-component of the reach after-effect to those of the alignment and tracking after-effects. Each data point represents the after-effects for a single subject at a particular target for the given exposure condition. The lines represent the orthogonal regression between the two after-effects: solid lines are for significant correlations (p ≤ 0.05); dashed lines are for correlations that are not significant.

Discussion

Additive components of sensory recalibration

In Experiment 1, we focused on the sensory effects of visual-shift adaptation, measured with the alignment tests. The main result is that the measured shift in sensory calibration between vision and the felt position of the right hand is precisely predicted by the vector sum of the shifts between vision and the left hand and between the left and right hands. Furthermore, this additivity holds at the “generalization” target where no visual feedback was received. These results imply that the sensory alignment between vision and the right arm relies on two separable components. As described in more detail below, we interpret these components as transformations between visual, proprioceptive, and body-centered spatial representations.

Previous attempts to identify and measure separate components of the adaptive response focused on the distinction between “visual” and “proprioceptive” adaptation. Almost all of these studies relied on tests that compare visual and proprioceptive localization to an internal sense of “straight ahead” (Hamilton and Bossom, 1964; Harris, 1965; Hay and Pick, 1966; see Redding and Wallace, 1997 for a more recent review). It has often been noted that these measures of adaptation are additive, i.e. the sum of the visual and proprioceptive after-effects matches the magnitude of the reach after-effect (Harris, 1963; Hay and Pick, 1966; Welch et al., 1974; Wilkinson, 1971; Redding and Wallace, 1978). This traditional approach to decomposing visual-shift adaptation has three major limitations, which we discuss in the following paragraphs.

First, while these measures are often thought of as direct assays of visual or proprioceptive localization, they rely on a comparison between sensory-derived signals and an internal reference, the sense of “straight-ahead”. Unfortunately, this subjective reference is ambiguous, malleable and context-dependent (Harris, 1974; Welch, 1986). Indeed, the sum of these two sensory measures sometimes exceeds the magnitude of the reach after-effect (“over-additivity”), and this difference is likely due to shifts in subjects' sense of straight-ahead (Templeton et al., 1974; Redding and Wallace, 1978). In contrast, we have measured the sensory effects of visual-shift adaptation using the left hand as the non-adapting reference, an approach first suggested by Harris (1965) and employed by Templeton et al. (1974) and van Beers et al. (2002). Of course this approach is also not a pure measure of visual or proprioceptive localization, but rather of the alignment between these sensory modalities. Nonetheless, an explicit sensory reference is likely to be less susceptible to context-dependent effects or subject misinterpretation than subjective or remembered locations.

Second, previous reports of additivity have compared the sum of the “visual” and “proprioceptive” effects to the reach after-effect. We would expect significant subject-by-subject departures from this form of additivity, given the weak level of correlation that we and others (Redding and Wallace, 2006) have observed between the reach and alignment after-effects following reach exposure. We note, in fact, that almost all previous reports of additivity have based their argument on mean after-effects across subjects. Here, we designed our alignment tests in order to minimize the effect of any non-sensory components of adaptation. As a result, we have observed very close agreement on a per-subject basis between the shift in right-to-visual alignment and the sum of the right-to-left and left-to-visual shifts.

Lastly, comparisons to “straight-ahead” can only measure recalibration in the azimuth, and thus only provide a single, scalar measure of sensory recalibration. Using the location of the left hand as a reference, we are able to measure two-dimensional shifts anywhere in the planar workspace. This allowed us, for example, to measure the sensory effects of visual-shift adaptation at a generalization target where visual feedback was never available. The results at this generalization target highlight the potential power of our approach. We found that the alignment after-effect had approximately the same magnitude at the exposure and generalization targets, while the reach effect was significantly larger at the exposure target following reach exposure. Previous studies on the generalization of reach adaptation have given quite mixed results, with some studies showing only local adaptation and others showing robust generalization (Bedford, 1989; Ghahramani et al., 1996; Vetter et al., 1999; Krakauer et al. 2000). We think these differences are likely due to the fact that the reach after-effect confounds several different underlying effects, and the relative contributions of these effects may have varied greatly in the different experimental paradigms. Resolving this issue requires the ability to measure component effects across the workspace.

Models of sensory coordination

The additivity that we observed in Experiment 1 supports the component transformation model of sensory coordination (Figure 2A, C). In particular, it suggests that intersensory coordination relies on sequences of shared transformations. These results are not consistent with a model in which sensory coordination is due to direct, dedicated transformations between coupled sensory streams, e.g., between vision and the right hand (Figure 2B, D).

It is important to note, however, that the distinct predictions we have drawn from the component and direct transformation models are based on our interpretation of the alignment after-effects as a recalibration of sensory transformations. If the shifts in alignment that we observed were actually due to adaptive changes in the sensory inputs to these transformations, then both models would be consistent with the data. It is not implausible that sensory recalibration could be due to peripheral or early sensory effects. For example, we have described the left-to-visual after-effect as a recalibration in the transformation from an eye-centered to a body-centered coordinate frame (eye-body shift). This change is likely due to shifts in felt eye and head position (Lackner, 1973; Crawshaw and Craske, 1974). In traditional prism-based studies of adaptation, these shifts resulted in part from prolonged deviation of the eyes or head from midline during the exposure period (Ebenholtz, 1974; Paap and Ebenholtz, 1976; Mars et al., 1998; Guerraz et al., 2006). While the virtual feedback setup eliminates this particular source of adaptation, it is likely that other peripheral or early sensory factors play some role in the sensory recalibration we have observed. Nonetheless, the primary involvement of high-level central processes in visual-shift adaptation is well documented by lesion studies (Baizer and Glickstein, 1974; Weiner et al., 1983; Martin et al., 1996; Baizer et al., 1999; Kurata and Hoshi, 1999; Newport et al., 2006), functional imaging (Clower et al., 1996), and the ability of prism adaptation to ameliorate deficits with a clear central origin (Rossetti et al., 1998; Maravita et al., 2003). As long as a significant central component exists, the distinction between the models holds.

The model predictions described above rely on another critical assumption, namely that feedback acquired during the use of one set of transformations does not drive adaptation in other, unused transformations. In the case of the component transformation model, this means that the body-to-arm (left) transformation does not change during exposure to shifted visual feedback of the right arm (Figure 2C). In fact, the presence of significant inter-manual transfer in the reach after-effect is well documented (e.g., Taub and Goldberg, 1973; Choe and Welch, 1974; Wallace and Redding, 1979). However, these effects are most likely primarily due to shifts in felt eye and head position, i.e., to eye-to-body recalibration (Harris, 1965; Craske and Gregg, 1966; Craske, 1966; Cohen, 1967; Choe and Welch, 1974; Welch, 1986). Indeed, our results suggest that if there is any shift in the body-to-arm (left) transformation, it is minimal. To see why, consider what the component transformation model in Figure 2A,C would predict if there were a significant generalization of the body-arm recalibration to the left arm. In that case, LV and RV would be of similar magnitude, and RL would be near zero, since the effects at the two arms would cancel. However, we observed the opposite pattern: RL and RV are of comparable magnitude, and LV is typically small. This observation is consistent with little or no recalibration of the body to left-arm transformation.
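
The distinction between these two generalization scenarios can be made explicit with a small numerical sketch. Under the component transformation model, each alignment after-effect is the sum of the recalibrations along the chain of transformations linking the two signals being compared. The code below is our own illustration, not a fitted model; the shift vectors and variable names (eb for the eye-to-body shift, br and bl for the body-to-right-arm and body-to-left-arm shifts) are hypothetical.

```python
import numpy as np

# Minimal sketch of the component transformation model's predictions under two
# hypothetical generalization assumptions. The variable names and shift values
# are ours, for illustration only (2-D vectors, arbitrary units).
def alignment_aftereffects(eb, br, bl):
    # Under the component model, each alignment after-effect sums the
    # recalibrations along the chain of transformations linking the two signals.
    lv = eb + bl   # left-to-visual
    rv = eb + br   # right-to-visual
    rl = br - bl   # right-to-left
    return lv, rv, rl

eb, br = np.array([0.3, 0.1]), np.array([1.0, 0.4])

# Full generalization to the left arm (bl = br): RL is near zero and LV matches RV.
print(alignment_aftereffects(eb, br, bl=br))
# No generalization (bl = 0): RL is comparable to RV and LV is small -- the observed pattern.
print(alignment_aftereffects(eb, br, bl=np.zeros(2)))
```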

Adaptation and sensory uncertainty

We observed a novel property of sensory recalibration during visual-shift adaptation: the individual component effects need not be co-linear with the visual shift. Most notably, the after-effect vectors measured in the left-to-visual alignment task were typically oriented more than 45° from the direction of the visual shift (Figure 4). A detailed analysis of this effect is the topic of a future study. However, we showed here that such non-collinearity may reflect a sensible learning strategy for dealing with non-isotropic variability in sensory signals: more adaptation occurs along the axis of greatest sensory uncertainty (see van Beers et al., 2002, for a similar argument).
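
This intuition can be illustrated with a reliability-weighted update rule. In the sketch below (our illustration, using hypothetical noise covariances rather than values estimated in this study), the proprioceptive estimate absorbs the intersensory conflict in proportion to its own uncertainty; because its noise axes are not aligned with the shift, the resulting adaptation vector is rotated away from the shift direction.

```python
import numpy as np

# Reliability-weighted recalibration sketch with hypothetical covariances.
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

shift = np.array([2.0, 0.0])                 # hypothetical visual shift (cm), along x
cov_vis = 0.2 * np.eye(2)                    # hypothetical isotropic visual noise
R = rot(np.deg2rad(60))                      # proprioceptive noise axes tilted away from the shift
cov_prop = R @ np.diag([0.3, 1.5]) @ R.T     # hypothetical anisotropic proprioceptive noise

# Kalman-gain-like weighting: the noisier estimate absorbs the conflict,
# direction by direction, in proportion to its own uncertainty.
gain = cov_prop @ np.linalg.inv(cov_prop + cov_vis)
adaptation = gain @ shift
angle = np.degrees(np.arctan2(adaptation[1], adaptation[0]))
print(adaptation, angle)   # the adaptation vector is rotated away from the shift direction
```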

Another powerful insight into the learning rule underlying visual-shift adaptation can be obtained from the relative magnitudes of the eye-body and body-arm shifts. Redding and Wallace (1990) found an inverse relationship between the amount of time that the hand was visible during exposure reaches and the relative magnitude of the visual after-effect. We saw no such effect in our comparisons of early and late visual feedback (Figure 6). This difference is likely due to the fact that in the earlier study, the timing of visual feedback was confounded with the amount of the hand and arm that was visible. It is plausible that since more of the body was visible during early feedback trials in Redding and Wallace (1990), the uncertainty of visual localization was reduced in those trials. By the argument applied above, we would thus expect less visual adaptation in the early feedback trials performed in that study. In contrast, in our study the nature of the feedback was held constant and only the feedback timing was changed. This timing difference did not affect the relative magnitudes of the adaptive responses.

A task-dependent after-effect

In Experiment 2 we investigated how the adaptation measured with three different test tasks varies with the exposure task. We found that the after-effect is largely independent of the test task, with one notable exception: the reach after-effect is significantly larger following reach exposure. We also found that while the reach and alignment after-effects were highly correlated following tracking exposure, they were only weakly correlated following reach exposure. These results suggest the following conclusions: following tracking exposure, the three tests measure the same underlying effect (since they correlate well across subjects); this effect is likely to be sensory recalibration; and the larger reach after-effect following reach exposure is due to a separate task-dependent effect.

Many previous studies have shown that the reach after-effect can exceed the sum of the visual and proprioceptive after-effects, measured with respect to straight-ahead (Harris, 1965; Uhlarik and Canon, 1971; Welch et al., 1974; Templeton et al., 1974; Choe and Welch, 1974; Redding and Wallace, 1988, 1996). The authors of these studies have typically interpreted this difference as evidence for a separate component of adaptation, as we have here. The nature of this component, however, has not been clear. In previous studies, the difference could have arisen, in part, from shifts in the perception of straight-ahead (Harris, 1974; Welch, 1986) or from cognitive corrective strategies driven by knowledge of the prism shift (Bedford, 1993). However, these factors likely account for at most a portion of the difference observed in previous studies, and the design of the present study eliminates them altogether.

Welch et al. (1974) called this component of the reach after-effect an “assimilated error-corrective response,” highlighting the role that explicit error feedback plays in the magnitude of the difference (Welch and Rhoades, 1969; Welch, 1969). Indeed, Magescas and Prablanc (2006) have shown that a reach after-effect can be obtained by reach error signals alone, without any direct intersensory conflict, providing further evidence for the error-corrective model. The role of explicit reach errors in driving a task-dependent effect is addressed in a supplemental version of Experiment 2, in which the exposure task consisted of a series of “slicing” or “reversal” movements (Sainburg et al., 1993). Specifically, subjects were asked to reach sagittally out and back without any explicit target, eliminating any error signal derived from comparison of target and reach endpoint. The results were qualitatively similar to those obtained in the reach exposure condition of Experiment 2 (see supplementary Figure S5). These results suggest that an explicit, visual error signal is not required in order to obtain a task-dependent effect. Of course an implicit error signal could still be available, for example by comparing the actual location of the movement apex with the intended location.

The presence of error-corrective learning rules (e.g., Scheidt et al., 2001; Donchin et al., 2003; Cheng and Sabes, 2007) could help explain the differences we observed in the reach after-effect following reach versus tracking exposure. While the reach exposure task is highly stereotyped from trial to trial, the tracking task is not. During reach exposure, the incremental effects of the error-corrective rule would accumulate across trials, and a large effect would be obtained. In contrast, since the movements in the tracking task are quite variable, the same error signal (e.g., positional error) could be observed following very different sequences of motor commands. In this case, the effects of error-corrective learning may also be variable and could partially cancel out across trials.
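
A minimal simulation conveys this intuition. The code below is a schematic of an incremental error-corrective update (our illustration; it is not the model of Cheng and Sabes, 2007, and the learning rate, trial count, and noise levels are arbitrary placeholders): when the movement direction is stereotyped, the corrections accumulate consistently, whereas variable movements deliver corrections in variable directions that partially cancel.

```python
import numpy as np

# Schematic simulation of an incremental error-corrective update.
rng = np.random.default_rng(0)
n_trials, rate = 100, 0.05
shift = np.array([2.0, 0.0])             # hypothetical visual feedback shift (cm)

def exposure(movement_noise):
    correction = np.zeros(2)             # accumulated task-dependent correction
    for _ in range(n_trials):
        direction = rng.normal([1.0, 0.0], movement_noise)   # this trial's movement direction
        direction /= np.linalg.norm(direction)
        # The correction driven on each trial is the shift projected onto the
        # current movement direction, so variable movements pull the correction
        # in variable directions.
        error = (shift @ direction) * direction
        correction += rate * (error - correction)
    return correction

print(exposure(movement_noise=0.05))     # stereotyped "reaching": corrections accumulate consistently
print(exposure(movement_noise=1.0))      # variable "tracking": corrections partially cancel
```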

The relationship between the reach after-effect and sensory recalibration

The main conclusion we draw from Experiment 2 is that the reach after-effect reflects two kinds of underlying adaptive changes: a general recalibration of the sensory alignment between eye and arm and the task-dependent effects just discussed. Using the alignment test as a measure of recalibration and the difference between the alignment and reach after-effects as a measure of the task-dependent effects, we inferred that, following reach exposure, approximately two-thirds of the reach after-effect was due to sensory recalibration and one-third to task-dependent effects. This conclusion is based on a comparison across exposure tasks of both the magnitude of the various after-effects and the correlations between them. It is also supported by the substantial evidence that reach adaptation can be dissociated from sensory recalibration. As described above, Magescas and Prablanc (2006) report reach adaptation without intersensory conflict. Furthermore, Taub and Goldberg (1974) found that deafferented monkeys adapt even better than normal controls in a prism adaptation paradigm similar to that used here. In both cases, robust reach adaptation was observed in experimental contexts that would be expected to yield poor sensory recalibration, supporting the existence of two distinct types of adaptation.
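
The arithmetic behind this decomposition is simple and is sketched below with placeholder vectors (these are not measured values from this study): the alignment after-effect stands in for the sensory component, the remainder of the reach after-effect for the task-dependent component, and the ratio of their magnitudes gives the fractions quoted above.

```python
import numpy as np

# Decomposition sketch with hypothetical placeholder vectors (cm).
reach_ae = np.array([2.4, 0.3])       # hypothetical reach after-effect following reach exposure
alignment_ae = np.array([1.6, 0.2])   # hypothetical alignment after-effect

task_dependent = reach_ae - alignment_ae
sensory_fraction = np.linalg.norm(alignment_ae) / np.linalg.norm(reach_ae)
print(task_dependent, round(sensory_fraction, 2))   # roughly two-thirds sensory in this illustration
```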

If reach adaptation can occur in the absence of sensory recalibration, why do we conclude that the reach after-effect we have measured is a composite of sensory and task-dependent effects? Could the alignment and reach after-effects be measuring two entirely different effects? We think this possibility is unlikely. First, the strong correlation between the tests following tracking exposure suggests that they are measuring the same underlying effect in that context. Second, there is substantial empirical and theoretical evidence that reach planning relies on multisensory estimates of hand and target locations (Rossetti et al., 1995; Sober and Sabes, 2003; Saunders and Knill, 2003). There is also evidence that the neural circuits underlying reaching are used for other tasks; for example, Snyder and colleagues have shown substantial overlap in the macaque parietal circuits for reaches and saccades (Snyder et al., 2000; Lawrence and Snyder, 2006). It thus seems likely that the neural circuits required for solving a spatial localization task such as the alignment task are also involved in reach planning, and changes in these circuits would therefore be expected to affect performance in our reach test.

Finally, we return to the deafferentation study of Taub and Goldberg (1974). If roughly two-thirds of the reach after-effect in our study is due to sensory recalibration, why is there no decrement in the reach after-effect when an animal is deafferented? To answer this question, we again appeal to the error-corrective learning model. In these experiments, there are at least two potential error signals that can drive learning: the reach error and the intersensory conflict (Cheng and Sabes, 2007). In an intact subject, both error signals are likely to drive adaptation, and the two learning processes essentially compete. When there is no intersensory conflict, however, the reach error can still drive task-dependent effects (Magescas and Prablanc, 2006). Indeed, the reach error signal is likely to be even larger in the deafferented monkey than in the normal control, since the unperturbed proprioception no longer contributes to the combined sensory estimate of hand position. This difference can account for the increased adaptation observed in the deafferented monkeys, as noted by Taub and Goldberg (1974). It is also worth noting that some sensory recalibration could have taken place even in the deafferented monkey, since visual feedback can be compared to an internal prediction of arm position given efference copy. Finally, a number of experimenters have shown that deafferented humans can adapt normally in other visual perturbation paradigms (Bard et al., 1995; Guedon et al., 1998; Ingram et al., 2000; Pipereit et al., 2006; Bernier et al., 2006). The argument above applies to these studies as well. In addition, we note that most of these experiments used a center-out reaching task and a visual perturbation that consisted of a rotation about the center. In this situation, there is no consistent sensory remapping, and so little sensory recalibration would be expected even in normal subjects.
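
The intuition about error-signal size can be captured with a standard inverse-variance cue-combination sketch (our illustration; the variances are hypothetical): when proprioception contributes to the hand estimate, only a fraction of the imposed shift appears as reach error, whereas without proprioception the full shift does.

```python
# Inverse-variance cue-combination sketch with hypothetical variances. The felt
# hand position combines a shifted visual estimate with an unshifted
# proprioceptive estimate; the perceived reach error is the part of the shift
# that survives in the combined estimate.
visual_shift = 2.0                  # imposed shift of the visual feedback (cm)
var_vis, var_prop = 0.3, 0.6        # hypothetical sensory variances

def perceived_error(use_proprioception):
    if use_proprioception:
        w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)   # inverse-variance weight on vision
    else:
        w_vis = 1.0                                            # deafferented: vision alone
    return w_vis * visual_shift

print(perceived_error(True))    # intact: proprioception pulls the estimate back, smaller error signal
print(perceived_error(False))   # deafferented: the full shift appears as reach error
```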

Applications to physiological studies of adaptation

We have presented an approach to quantitatively analyzing the adaptive response following exposure to shifted visual feedback. These tools could be applied more generally to characterize the adaptive response to various forms of sensorimotor manipulations. We believe that such fine-grained behavioral analyses will be required in order to relate adaptive changes in behavior to changes in the underlying neural circuits. For example, previous attempts to localize the site of prism adaptation in the brain have been, perhaps, too successful, yielding evidence that most of the neural circuits involved in reaching play an important role (Baizer and Glickstein, 1974; Weiner et al., 1983; Martin et al., 1996; Baizer et al., 1999; Kurata and Hoshi, 1999; Newport et al., 2006). More recently, human neurophysiological studies have found evidence that various error signals involved in sensorimotor adaptation are processed by different networks of brain areas (Diedrichsen et al., 2005; Lee and van Donkelaar, 2006). The presence of multiple components of adaptation may explain the participation of many brain areas in visual-shift adaptation and related behavioral phenomena. In this case, progress on understanding the neural basis of adaptation requires the ability to dissociate and quantify these components. Further, by fully characterizing these component effects, for example how they generalize across space, we will have the potential to place strong constraints on both the structure of the neural circuits that underlie accurate sensorimotor coordination and the learning rules that maintain them.

Supplementary Material

Acknowledgments

Grants: This work was supported by the McKnight Endowment Fund for Neuroscience, the Whitehall Foundation (2004-08-81-APL), and the National Eye Institute (R01 EY-015679).

Contributor Information

M.C. Simani, W.M. Keck Center for Integrative Neuroscience, Department of Physiology, and the Neuroscience Graduate Program, University of California, San Francisco, CA 94142-0444, simani@phy.ucsf.edu

L.M.M. McGuire, W.M. Keck Center for Integrative Neuroscience, Department of Physiology, and the Neuroscience Graduate Program, University of California, San Francisco, CA 94142-0444, lmcguire@phy.ucsf.edu

P.N. Sabes, W.M. Keck Center for Integrative Neuroscience, Department of Physiology, and the Neuroscience Graduate Program, University of California, San Francisco, CA 94142-0444, sabes@phy.ucsf.edu

Reference List

1. Ariff G, Donchin O, Nanayakkara T, Shadmehr R. A real-time state predictor in motor control: study of saccadic eye movements during unseen reaching movements. J Neurosci. 2002;22:7721–7729. doi: 10.1523/JNEUROSCI.22-17-07721.2002.
2. Baizer JS, Glickstein M. Proceedings: Role of cerebellum in prism adaptation. J Physiol (Lond). 1974;236:34–35.
3. Baizer JS, Kralj-Hans I, Glickstein M. Cerebellar lesions and prism adaptation in macaque monkeys. J Neurophysiol. 1999;81:1960–1965. doi: 10.1152/jn.1999.81.4.1960.
4. Bard C, Fleury M, Teasdale N, Paillard J, Nougier V. Contribution of proprioception for calibrating and updating the motor space. Can J Physiol Pharmacol. 1995;73(2):246–254. doi: 10.1139/y95-035.
5. Bedford FL. Constraints on learning new mappings between perceptual dimensions. J Exp Psychol Hum Percept Perform. 1989;15:232–248.
6. Bedford FL. Perceptual and cognitive spatial learning. J Exp Psychol Hum Percept Perform. 1993;19:517–530. doi: 10.1037//0096-1523.19.3.517.
7. Bernier PM, Chua R, Bard C, Franks IM. Updating of an internal model without proprioception: a deafferentation study. Neuroreport. 2006;17(13):1421–1425. doi: 10.1097/01.wnr.0000233096.13032.34.
8. Bizzi E, Polit A, Morasso P. Mechanisms underlying achievement of final head position. J Neurophysiol. 1976;39:435–444. doi: 10.1152/jn.1976.39.2.435.
9. Cheng S, Sabes PN. Calibration of visually-guided reaching is driven by error corrective learning and internal dynamics. J Neurophysiol. 2007;97. doi: 10.1152/jn.00897.2006.
10. Choe CS, Welch RB. Variables affecting the intermanual transfer and decay of prism adaptation. J Exp Psychol. 1974;102:1076–1084. doi: 10.1037/h0036325.
11. Clower DM, Hoffman JM, Votaw JR, Faber TL, Woods RP, Alexander GE. Role of posterior parietal cortex in the recalibration of visually guided reaching. Nature. 1996;383:618–621. doi: 10.1038/383618a0.
12. Cohen MM. Continuous versus terminal visual feedback in prism aftereffects. Percept Mot Skills. 1967;24:1295–1302. doi: 10.2466/pms.1967.24.3c.1295.
13. Craske B. Intermodal transfer of adaptation to displacement. Nature. 1966;210:765. doi: 10.1038/210765a0.
14. Craske B, Gregg SJ. Prism after-effects: identical results for visual targets and unexposed limb. Nature. 1966;212:104–105. doi: 10.1038/212104a0.
15. Crawshaw M, Craske B. No retinal component in prism adaptation. Acta Psychol (Amst). 1974;38:421–423. doi: 10.1016/0001-6918(74)90001-8.
16. Diedrichsen J, Hashambhoy Y, Rane T, Shadmehr R. Neural correlates of reach errors. J Neurosci. 2005;25:9919–9931. doi: 10.1523/JNEUROSCI.1874-05.2005.
17. Donchin O, Francis JT, Shadmehr R. Quantifying generalization from trial-by-trial behavior of adaptive systems that learn with basis functions: theory and experiments in human motor control. J Neurosci. 2003;23:9032–9045. doi: 10.1523/JNEUROSCI.23-27-09032.2003.
18. Ebenholtz SM. The possible role of eye-muscle potentiation in several forms of prism adaptation. Perception. 1974;3:477–485. doi: 10.1068/p030477.
19. Feldman AG. Once more on the equilibrium-point hypothesis (lambda model) for motor control. J Mot Behav. 1986;18:17–54. doi: 10.1080/00222895.1986.10735369.
20. Fisher NI. Statistical Analysis of Circular Data. Cambridge: Cambridge University Press; 1993.
21. Ghahramani Z, Wolpert DM, Jordan MI. Generalization to local remappings of the visuomotor coordinate transformation. J Neurosci. 1996;16:7085–7096. doi: 10.1523/JNEUROSCI.16-21-07085.1996.
22. Ghahramani Z, Wolpert DM, Jordan MI. Computational models of sensorimotor integration. In: Morasso PG, Sanguineti V, editors. Self-Organization, Computational Maps and Motor Control. Elsevier; 1997. pp. 117–147.
23. Good PI. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. 2nd ed. New York: Springer; 2000.
24. Guedon O, Gauthier G, Cole J, Vercher JL, Blouin J. Adaptation in visuomanual tracking depends on intact proprioception. J Mot Behav. 1998;30:234–248. doi: 10.1080/00222899809601339.
25. Guerraz M, Navarro J, Ferrero F, Cremieux J, Blouin J. Perceived versus actual head-on-trunk orientation during arm movement control. Exp Brain Res. 2006;172:221–229. doi: 10.1007/s00221-005-0316-3.
26. Hamilton CR, Bossom J. Decay of prism aftereffects. J Exp Psychol. 1964;67:148–150. doi: 10.1037/h0047777.
27. Harris CS. Adaptation to displaced vision: visual, motor, or proprioceptive change? Science. 1963;140:812–813. doi: 10.1126/science.140.3568.812.
28. Harris CS. Perceptual adaptation to inverted, reversed, and displaced vision. Psychol Rev. 1965;72:419–444. doi: 10.1037/h0022616.
29. Harris CS. Beware of the straight-ahead shift? A nonperceptual change in experiments on adaptation to displaced vision. Perception. 1974;3:461–476. doi: 10.1068/p030461.
30. Hay JC, Pick HL. Gaze-contingent prism adaptation: optical and motor factors. J Exp Psychol. 1966;72:640–648. doi: 10.1037/h0023737.
31. Held R, Gottlieb N. Technique for studying adaptation to disarranged hand-eye coordination. Percept Mot Skills. 1958;8:83–86.
32. Held R, Freedman SJ. Plasticity in human sensorimotor control. Science. 1963;142:455–462. doi: 10.1126/science.142.3591.455.
33. Held R, Durlach N. Telepresence, time delay and adaptation. In: Ellis SR, Kaiser MK, Grunwald AJ, editors. Pictorial Communication in Virtual and Real Environments. Taylor and Francis; 1993. pp. 232–246.
34. Hollerbach JM, Atkeson CG. Inferring limb coordination strategies from trajectory kinematics. J Neurosci Methods. 1987;21:181–194. doi: 10.1016/0165-0270(87)90115-4.
35. Ingram HA, van Donkelaar P, Cole J, Vercher JL, Gauthier GM, Miall RC. The role of proprioception and attention in a visuomotor adaptation task. Exp Brain Res. 2000;132(1):114–126. doi: 10.1007/s002219900322.
36. Kitazawa S, Kimura T, Uka T. Prism adaptation of reaching movements: specificity for the velocity of reaching. J Neurosci. 1997;17:1481–1492. doi: 10.1523/JNEUROSCI.17-04-01481.1997.
37. Krakauer JW, Pine ZM, Ghilardi MF, Ghez C. Learning of visuomotor transformations for vectorial planning of reaching trajectories. J Neurosci. 2000;20:8916–8924. doi: 10.1523/JNEUROSCI.20-23-08916.2000.
38. Kurata K, Hoshi E. Reacquisition deficits in prism adaptation after muscimol microinjection into the ventral premotor cortex of monkeys. J Neurophysiol. 1999;81:1927–1938. doi: 10.1152/jn.1999.81.4.1927.
39. Kwakernaak H, Sivan R. Linear Optimal Control Systems. New York: Wiley-Interscience; 1972.
40. Lackner JR. The role of posture in adaptation to visual rearrangement. Neuropsychologia. 1973;11:33–44. doi: 10.1016/0028-3932(73)90062-6.
41. Land MF, Hayhoe M. In what ways do eye movements contribute to everyday activities? Vision Res. 2001;41:3559–3565. doi: 10.1016/s0042-6989(01)00102-x.
42. Lee JH, van Donkelaar P. The human dorsal premotor cortex generates on-line error corrections during sensorimotor adaptation. J Neurosci. 2006;26:3330–3334. doi: 10.1523/JNEUROSCI.3898-05.2006.
43. Magescas F, Prablanc C. Automatic drive of limb motor plasticity. J Cogn Neurosci. 2006;18:75–83. doi: 10.1162/089892906775250058.
44. Maravita A, McNeil J, Malhotra P, Greenwood R, Husain M, Driver J. Prism adaptation can improve contralesional tactile perception in neglect. Neurology. 2003;60:1829–1831. doi: 10.1212/wnl.60.11.1829.
45. Mars F, Honore J, Richard C, Coquery JM. Effects of an illusory orientation of the head on straight-ahead pointing movements. Cahiers de Psychologie Cognitive/Current Psychology of Cognition. 1998;17:749–762.
46. Martin TA, Keating JG, Goodkin HP, Bastian AJ, Thach WT. Throwing while looking through prisms. I. Focal olivocerebellar lesions impair adaptation. Brain. 1996;119:1183–1198. doi: 10.1093/brain/119.4.1183.
47. Neggers SF, Bekkering H. Gaze anchoring to a pointing target is present during the entire pointing movement and is driven by a non-visual signal. J Neurophysiol. 2001;86:961–970. doi: 10.1152/jn.2001.86.2.961.
48. Newport R, Brown L, Husain M, Mort D, Jackson SR. The role of the posterior parietal lobe in prism adaptation: failure to adapt to optical prisms in a patient with bilateral damage to posterior parietal cortex. Cortex. 2006;42:720–729. doi: 10.1016/s0010-9452(08)70410-6.
49. Paap KR, Ebenholtz SM. Perceptual consequences of potentiation in the extraocular muscles: an alternative explanation for adaptation to wedge prisms. J Exp Psychol Hum Percept Perform. 1976;2:457–468. doi: 10.1037//0096-1523.2.4.457.
50. Pipereit K, Bock O, Vercher JL. The contribution of proprioceptive feedback to sensorimotor adaptation. Exp Brain Res. 2006;174(1):45–52. doi: 10.1007/s00221-006-0417-7.
51. Redding GM, Wallace B. Sources of "overadditivity" in prism adaptation. Percept Psychophys. 1978;24:58–62. doi: 10.3758/bf03202974.
52. Redding GM, Wallace B. Components of prism adaptation in terminal and concurrent exposure: organization of the eye-hand coordination loop. Percept Psychophys. 1988;44:59–68. doi: 10.3758/bf03207476.
53. Redding GM, Wallace B. Effects on prism adaptation of duration and timing of visual feedback during pointing. J Mot Behav. 1990;22:209–224. doi: 10.1080/00222895.1990.10735511.
54. Redding GM, Wallace B. Effects of pointing rate and availability of visual feedback on visual and proprioceptive components of prism adaptation. J Mot Behav. 1992;24:226–237. doi: 10.1080/00222895.1992.9941618.
55. Redding GM, Wallace B. Adaptive spatial alignment and strategic perceptual-motor control. J Exp Psychol Hum Percept Perform. 1996;22:379–394. doi: 10.1037//0096-1523.22.2.379.
56. Redding GM, Wallace B. Adaptive Spatial Alignment. Scientific Psychology Series. Mahwah, NJ: Lawrence Erlbaum Associates; 1997.
57. Redding GM, Wallace B. Generalization of prism adaptation. J Exp Psychol Hum Percept Perform. 2006;32:1006–1022. doi: 10.1037/0096-1523.32.4.1006.
58. Rossetti Y, Desmurget M, Prablanc C. Vectorial coding of movement: vision, proprioception, or both? J Neurophysiol. 1995;74:457–463. doi: 10.1152/jn.1995.74.1.457.
59. Rossetti Y, Rode G, Pisella L, Farne A, Li L, Boisson D, Perenin MT. Prismatic displacement of vision induces transient changes in the timing of eye-hand coordination. Nature. 1998;395:166–169.
60. Sailer U, Eggert T, Ditterich J, Straube A. Spatial and temporal aspects of eye-hand coordination across different tasks. Exp Brain Res. 2000;134:163–173. doi: 10.1007/s002210000457.
61. Sainburg RL, Poizner H, Ghez C. Loss of proprioception produces deficits in interjoint coordination. J Neurophysiol. 1993;70(5):2136–2147. doi: 10.1152/jn.1993.70.5.2136.
62. Saunders JA, Knill DC. Humans use continuous visual feedback from the hand to control fast reaching movements. Exp Brain Res. 2003;152:341–352. doi: 10.1007/s00221-003-1525-2.
63. Sober SJ, Sabes PN. Multisensory integration during motor planning. J Neurosci. 2003;23:6982–6992. doi: 10.1523/JNEUROSCI.23-18-06982.2003.
64. Sober SJ, Sabes PN. Flexible strategies for sensory integration during motor planning. Nat Neurosci. 2005;8:490–497. doi: 10.1038/nn1427.
65. Stevens J. Applied Multivariate Statistics for the Social Sciences. Mahwah, NJ: Lawrence Erlbaum Associates; 1996.
66. Taub E, Goldberg LA. Prism adaptation: control of intermanual transfer by distribution of practice. Science. 1973;180:755–757. doi: 10.1126/science.180.4087.755.
67. Templeton WB, Howard IP, Wilkinson DA. Additivity of components of prismatic adaptation. Percept Psychophys. 1974;15:249–257.
68. Todorov E, Li W. A generalized iterative LQG method for locally-optimal feedback control of constrained nonlinear stochastic systems. In: Proceedings of the American Control Conference, 2005. IEEE; 2005. pp. 300–306.
69. Uhlarik JJ, Canon LK. Influence of concurrent and terminal exposure conditions on the nature of perceptual adaptation. J Exp Psychol. 1971;91:233–239. doi: 10.1037/h0031786.
70. van Beers RJ, Sittig AC, Denier van der Gon JJ. How humans combine simultaneous proprioceptive and visual position information. Exp Brain Res. 1996;111:253–261. doi: 10.1007/BF00227302.
71. van Beers RJ, Sittig AC, Denier van der Gon JJ. The precision of proprioceptive position sense. Exp Brain Res. 1998;122:367–377. doi: 10.1007/s002210050525.
72. van Beers RJ, Wolpert DM, Haggard P. When feeling is more important than seeing in sensorimotor adaptation. Curr Biol. 2002;12:834–837. doi: 10.1016/s0960-9822(02)00836-9.
73. van Beers RJ, Haggard P, Wolpert DM. The role of execution noise in movement variability. J Neurophysiol. 2004;91:1050–1063. doi: 10.1152/jn.00652.2003.
74. Vetter P, Goodbody SJ, Wolpert DM. Evidence for an eye-centered spherical representation of the visuomotor map. J Neurophysiol. 1999;81:935–939. doi: 10.1152/jn.1999.81.2.935.
75. von Helmholtz H. Treatise on Physiological Optics. 1925.
76. Wallace B, Redding GM. Additivity in prism adaptation as manifested in intermanual and interocular transfer. Percept Psychophys. 1979;25:133–136. doi: 10.3758/bf03198799.
77. Weiner MJ, Hallett M, Funkenstein HH. Adaptation to lateral displacement of vision in patients with lesions of the central nervous system. Neurology. 1983;33:766–772. doi: 10.1212/wnl.33.6.766.
78. Welch RB, Rhoades RW. The manipulation of informational feedback and its effects upon prism adaptation. Can J Psychol. 1969;23:415–428. doi: 10.1037/h0082827.
79. Welch RB. Adaptation to prism-displaced vision: the importance of target-pointing. Percept Psychophys. 1969;5:305–309.
80. Welch RB, Choe CS, Heinrich DR. Evidence for a three-component model of prism adaptation. J Exp Psychol. 1974;103:700–705. doi: 10.1037/h0037152.
81. Welch RB. Adaptation of space perception. In: Boff KR, Kaufman L, Thomas JP, editors. Handbook of Perception and Human Performance: Sensory Processes and Perception. Wiley; 1986. pp. 1–45.
82. Wilkinson DA. Visual-motor control loop: a linear system? J Exp Psychol. 1971;89:250–257. doi: 10.1037/h0031162.
