Author manuscript; available in PMC: 2013 Jun 30.
Published in final edited form as: J Neurosci Methods. 2012 Apr 24;208(1):48–58. doi: 10.1016/j.jneumeth.2012.04.001

Measuring adaptation with a sinusoidal perturbation function

Todd E Hudson 1, Michael S Landy 1
PMCID: PMC3612424  NIHMSID: NIHMS380458  PMID: 22565135

Abstract

We examine the possibility that sensory and motor adaptation may be induced via a sinusoidally incremented perturbation. This sinewave adaptation method provides better data for fitting a parametric model than the standard step-function method of perturbation, because a decaying exponential is harder to fit reliably than a sinusoid. Using both experimental data and simulations, we demonstrate the difficulty of detecting motor adaptation using a step-function perturbation, compared with detecting motor adaptation using our sinewave perturbation method.

Keywords: motor adaptation, perceptual adaptation, signal detection

1. Introduction

Adaptation studies generally use a step-function perturbation, where the adaptive response is measured by analyzing deviations from baseline measurements during perturbation onset and offset (e.g., Bruggeman et al., 2007; Gaveau et al., 2011; Held and Bossom, 1961; Schor et al., 1984; Wallman et al., 1982; Zwiers et al., 2003). A progression of perturbation onset/offset representative of this method is shown in Fig. 1A, corresponding to the baseline, perturbation and washout phases of an experiment. Note that because data are usually represented in terms of the perturbed response, an exponential learning curve would have the shape of a decay at both perturbation onset and offset. Within this paradigm, large perturbations facilitate detection of the adaptive response because it is typical that only a portion of the perturbation is corrected, and decay to baseline is often rapid. However, large perturbations are evident to experimental subjects, and can lead to cognitive-based corrective responses (e.g., Harris, 1974). In addition, the step-function adaptation paradigm may be inefficient at detecting an adaptive response consistent with the theoretical exponential model, because much of the experimental data are collected in portions of the exponential function where there is little change, i.e., before the perturbation is applied, or during ‘saturation’, when the adaptive response is essentially at its maximal level.

Figure 1.


(A) Typical step-function perturbation (green) and hypothetical (noiseless) data (black) representing an adaptive response. There is an initial set of baseline trials in which no perturbation is introduced. This is followed by a set of perturbation trials in which the response is perturbed and correction to the perturbation may be measured. Finally, there is a second set of unperturbed (‘washout’) trials during which aftereffects due to having adapted to the previous perturbation may be measured. Data in this type of experiment is generally plotted in terms of the perturbed response (X(t) + P(t) for trial t), so that correct responses (i.e., on-target reaches) or veridical responses (accurate perception) would be at zero. (B) Sinewave perturbation (blue) and hypothetical data (black) representing an adaptive response. Responses are perturbed following a sine function. Here, the amplitude of the corrective response is measured as the amplitude of the response sinewave. Data are here plotted in terms of unperturbed responses (X(t)); i.e., on-target or veridical responses occur when the sum of perturbation and response is zero for a particular trial.

Here we provide an alternative method to the usual step-function adaptation paradigm designed to provide superior detectability of the theoretical adaptive response in both motor and perceptual adaptation experiments. Because the method is sensitive, it allows one to use perturbations so small that subjects are not aware of the perturbation. This allows the researcher to measure a low-level adaptation mechanism uncorrupted by conscious, cognitive attempts to respond to the perturbation. This method relies on a perturbation time-series that follows a sinusoidal function. Such a time-series with overall perturbation magnitude equal to that in Fig. 1A is shown in Fig. 1B. The major differences between sinewave and step-function adaptation are that 1) an adaptive response to sinewave perturbation is expected to be constantly changing, as opposed to changing only twice in response to perturbation onset and offset; and 2) perturbations change gradually rather than abruptly, which has been shown to lead to greater adaptive flexibility in some circumstances (Kagerer et al., 1997; Linkenhoker and Knudsen, 2002).

These differences appear to remediate the major disadvantage of the step-function method: its reliance on large, abrupt perturbations to generate sufficient statistical power to detect responses to the discrete perturbation onset and offset. Fig. 1A shows a perturbation onset/offset pair typical of the step-function method, where the size of the theoretical adaptive response is shown relative to the perturbation magnitude at both onset and offset. One can see that the majority of the data is obtained when the adaptive response is not changing, or changing very slowly. This is a necessary component of the step-function adaptation paradigm because one is interested in inferring the correction amplitude, or equivalently amplitude gain (ratio of correction to perturbation amplitude), which can be best measured only after the initial transient following perturbation onset has ended. A second reason for collecting the majority of the perturbation-phase data at or near its steady-state response is to ensure that the adaptive response has reached its maximal value. This is particularly important when one is interested in measuring the correction amplitude from the size of the aftereffect transient during the washout phase of the experiment (see Fig. 1A), because the size of that transient depends on the level of adaptation achieved during the perturbation phase.

The proposed sinewave adaptation paradigm has the advantage that it does not require a separate baseline measurement, nor any data collection for constant perturbation. Thus, for the same number of trials, one can conceivably produce several cycles of sinusoidal perturbation and adaptive correction (Fig. 1B; note that here we show the unperturbed response of the subject, whereas Fig. 1A shows the subject's response plus the perturbation). We show that this method is capable of providing evidence for an adaptive response even with small perturbations, from measurement of the spectral power at the perturbing frequency. We provide both experimental and simulation-based evidence of the advantages of sinewave adaptation.

2. Materials and Methods

2.1 Experiments

We first test whether it is possible to adapt human subjects to a sinusoidal perturbation, and compare this adaptation to that obtained from a comparable step-adaptation experiment. To this end, four subjects performed two sensory-motor reach adaptation experiments, one following the standard step-function perturbation and one using a sinusoidal perturbation. Subjects completed both experiments in a single half-hour session. The order of the two experiments was counterbalanced across subjects.

In both experiments the perturbation was introduced by providing false feedback regarding reach endpoints. Reaches were performed from point to point on a tabletop, with fingertip position continuously monitored via an Optotrak 3020 at 200 Hz. Reach targets and feedback were provided on an upright computer monitor (Fig. 2). For an unperturbed trial, there was a 1:1 correspondence of positions on the tabletop and feedback locations on the display. During perturbation the visual display that indicated reach endpoint was shifted horizontally. Thus, on trial t, if the fingertip landed at horizontal location X(t), the displayed feedback was at location X(t) + P(t), where P(t) was the perturbation on that trial. The overall range of perturbation was 4 mm in both experiments: either a 4 mm step, or a sinewave with a 2 mm amplitude (4 mm peak to peak). We specifically chose a small overall perturbation magnitude to avoid any possibility that subjects might become aware of the perturbation and produce a cognitive-based correction. Note that the width of the tip of a human female index finger is typically greater than 1 cm, so that even without the presence of motor noise this perturbation would be difficult to detect consciously.

Figure 2.


Schematic of the experimental apparatus. Reaches were made from point-to-point on a tabletop to virtual targets that were presented on the display screen. The fingertip was lifted from the tabletop to initiate a reach, and the touchdown position defined the reach endpoint; i.e., reaches, not drawing movements, were required. Enforcing a short movement duration ensured that reaches were composed of a single relatively straight movement.

Each reach began with a start position shown on-screen. When this position was covered with the fingertip indicator (shown as a small dot), the start position and fingertip feedback were extinguished. After a 100 ms delay, a target was shown and a beep indicated that the reach could begin. For each subject and experiment, a mean target location was chosen randomly on a circle with 11 cm radius centered on the start location. On each trial within that experiment, targets were chosen so that reach extent was uniformly distributed between 9 and 13 cm and reach direction was uniformly distributed ±15 deg around the mean target direction. A reach began when the fingertip lifted from the tabletop and moved outside a 3 mm bounding radius. Reaches ended when the fingertip again touched the tabletop. Subjects were allowed 350 ms to complete each reach; timed-out reaches were repeated and feedback was only provided upon successful completion of the reach. Only the target was shown on-screen during a reach. Although the hand could be seen by looking down, directing gaze away from the screen would prevent subjects from observing fingertip feedback relative to the start or target dot; subjects did not typically move their gaze from the display.
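
To make the target-selection procedure concrete, the following is a minimal Python/NumPy sketch of how per-trial targets could be drawn under the stated constraints. The function and variable names are ours, and the random-number generator and seed are illustrative assumptions, not part of the original apparatus software.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_target(mean_direction_deg, start_xy=(0.0, 0.0)):
    """Draw one reach target: extent uniform in [9, 13] cm,
    direction uniform within +/-15 deg of the mean target direction."""
    extent = rng.uniform(9.0, 13.0)                                  # cm
    direction = np.deg2rad(mean_direction_deg + rng.uniform(-15.0, 15.0))
    return (start_xy[0] + extent * np.cos(direction),
            start_xy[1] + extent * np.sin(direction))

# Example: a session whose mean target direction is chosen randomly;
# the mean target then lies on an 11-cm-radius circle around the start.
mean_dir = rng.uniform(0.0, 360.0)
targets = [draw_target(mean_dir) for _ in range(303)]
```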

At reach completion a fingertip feedback dot was shown to indicate the end position of the reach relative to the target. The x-coordinate of this endpoint feedback was perturbed (see below) to induce adaptation in both experiments. To help maintain motivation, visual and sound effects were played if the feedback dot landed within the target (5.9 mm radius).

2.1.1 Step-function perturbation

The timecourse of perturbation in the step-function experiment is described by:

P(t) = \begin{cases} 0 & \text{baseline}, & 1 \le t \le T_b \\ P & \text{perturbation}, & T_b < t \le T_b + T_p \\ 0 & \text{washout}, & T_b + T_p < t \le T_b + T_p + T_w, \end{cases}    (1)

where t is the trial number, the perturbation amplitude P = 4 mm, and Tb = 75, Tp = 128, and Tw = 100 were the number of baseline, perturbation and washout trials, respectively.

2.1.2 Sinewave perturbation

The timecourse of perturbation in the sinewave experiment is described by:

P(t) = -P \sin\!\left[\frac{2\pi f_{sine}}{T}(t-1)\right],    (2)

where P = 2 mm, the perturbation frequency fsine = 4 cycle/experiment, and there were T = 303 total reaches. The negative sign of the perturbation function simplifies data analysis because the predicted response to perturbation follows a positive sine function. Note that t − 1 occurs in Eq. 2 because the first trial of the experiment was unperturbed.
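
For concreteness, here is a minimal Python/NumPy sketch of the two perturbation schedules (Eqs. 1 and 2) using the parameter values reported above; the variable names are ours, and the sign convention for Eq. 2 follows the text (negative perturbation, so the predicted response follows a positive sine).

```python
import numpy as np

Tb, Tp, Tw = 75, 128, 100          # baseline, perturbation, washout trials
T = Tb + Tp + Tw                   # 303 trials in total
t = np.arange(1, T + 1)            # trial index, 1-based as in Eqs. 1 and 2

# Eq. 1: step perturbation, P = 4 mm during the perturbation phase only.
P_step = np.where((t > Tb) & (t <= Tb + Tp), 4.0, 0.0)      # mm

# Eq. 2: sinewave perturbation, 2 mm amplitude, 4 cycles per experiment,
# with the first trial (t = 1) unperturbed.
P_sine_amp, f_sine = 2.0, 4.0                               # mm, cycles/expt
P_sine = -P_sine_amp * np.sin(2 * np.pi * f_sine / T * (t - 1))
```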

2.2 Simulations

Simulation data were generated using parameters similar to those derived from the experimental data (see below). In the step-function experiment, the predicted adaptive response is described by two exponential functions: the first arises during the perturbation phase as a corrective response to perturbation onset, and the second from perturbation offset during the washout phase (no adaptive response is predicted during baseline trials). In this experiment, we represent the data as deviations of the perturbed endpoint from the target (d(t) = X(t) + P(t) – Xtarget (t)). Therefore, both exponentials have the shape of a decay. We refer to the predicted adaptive response to step perturbation as Sstep:

S_{step}(t) = \begin{cases} 0 & \text{baseline}, & 1 \le t \le T_b \\ A e^{-\alpha_1 (t - T_b - 1)} + o & \text{perturbation}, & T_b < t \le T_b + T_p \\ -A e^{-\alpha_2 (t - T_b - T_p - 1)} & \text{washout}, & T_b + T_p < t \le T_b + T_p + T_w, \end{cases}    (3a)

where A is the amplitude of asymptotic correction to the perturbing step, o is the offset at asymptote (so that A + o is the perturbation amplitude, P) that represents any unadapted portion of the perturbation, and the α terms are the decay rates. The corresponding data model for the step experiment is:

d(t) = \beta + S_{step}(t) + \varepsilon(t), \quad 1 \le t \le T_b + T_p + T_w,    (3b)

where β is a constant response bias that is specific to each subject, and ε represents unpredictable errors (which we assume are Gaussian with standard deviation that can vary across subjects). We will use the shorthand of referring to the data model whose functional form is described by Eq. 3b as Mstep, and the data model describing a lack of adaptive response (i.e., A = 0 and hence o = P in Eq. 3a) as M′step. Note that Mstep always describes an initial (perturbation) exponential that is deviated in the direction of the perturbing step, and an aftereffect observed during the washout phase that is deviated from baseline in the opposite direction. Also, two separate decay constants, α1 and α2, are modeled because there is no theoretical constraint requiring equal decays, and it is a typical finding in studies of motor adaptation (Davidson and Wolpert, 2004) that decay during washout is faster than during initial adaptation (i.e., α1 < α2).

In the sinewave experiment, we model the unperturbed response errors, so that the data to be modeled are d(t) = X(t) – Xtarget (t). The theoretical adaptive response to sinewave perturbation is modeled as:

S_{sine}(t) = A \sin\!\left[\frac{2\pi f}{T}(t-1) - \varphi\right], \quad 1 \le t \le T,    (4a)

where data conforming to this model will always produce deviations from baseline in the opposite direction of the perturbation (assuming positive sinusoidal amplitudes P and A), have a phase lag of φ, and a frequency of f cycles over the course of the experiment. The corresponding data model for the sinewave experiment is:

d(t) = \beta + S_{sine}(t) + \varepsilon(t), \quad 1 \le t \le T,    (4b)

where ε represents unpredictable errors (again drawn from a Gaussian distribution with a different standard deviation for each subject), and β is again a constant response bias that is specific to each subject. We refer to the data model with the sinusoidal functional form described by Eq. 4a as Msine. This is contrasted with a data model that does not contain an adaptive response (i.e., A = 0 in Eq. 4a), which we refer to as M′sine.

To simulate noisy data consistent with an adaptive response in the two experiments, Gaussian noise with varying σ was used in Eqs. 3b and 4b; that is, ε(t) ~ N(0,σ). Simulations of the two experiments were run in parallel such that a single simulated noise vector was generated in each repetition of the simulation, and used in both Eqs. 3b and 4b. Simulated noise ranged from σ = 1.5 to 14.5 mm, in increments of 1 mm. The parameters used to simulate Eq. 3a were: P = 4 mm, o = 0.8 mm (i.e., A = 3.2 mm), α1 = 0.125 trial−1, and α2 = 0.4 trial−1. The parameters used to simulate Eq. 4a were: P = 2 mm, A = 1.6 mm, f = 4 cycle/experiment, and φ = 24 deg (i.e., approximately a 5 trial lag). In both simulated experiments, β values were drawn randomly and separately for each simulated subject from a range of −5 to 5 mm. The number of trials of each type was Tb = 75, Tp = 128, and Tw = 100, so that T = 303.
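
The following Python/NumPy sketch illustrates how one simulated subject could be generated from Eqs. 3 and 4 with the parameter values listed above. The seed, the single noise level shown, the reuse of one bias for both experiments, and the variable names are our own illustrative choices, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
Tb, Tp, Tw = 75, 128, 100
T = Tb + Tp + Tw
t = np.arange(1, T + 1)

# Step-experiment signal, Eq. 3a (data are perturbed-endpoint errors).
P, o, a1, a2 = 4.0, 0.8, 0.125, 0.4        # mm, mm, 1/trial, 1/trial
A_step = P - o                              # 3.2 mm of asymptotic correction
S_step = np.zeros(T)
pert = (t > Tb) & (t <= Tb + Tp)
wash = t > Tb + Tp
S_step[pert] = A_step * np.exp(-a1 * (t[pert] - Tb - 1)) + o
S_step[wash] = -A_step * np.exp(-a2 * (t[wash] - Tb - Tp - 1))

# Sine-experiment signal, Eq. 4a (data are unperturbed-endpoint errors).
A_sine, f, phi = 1.6, 4.0, np.deg2rad(24.0)  # mm, cycles/expt, rad (~5 trials)
S_sine = A_sine * np.sin(2 * np.pi * f / T * (t - 1) - phi)

# Eqs. 3b and 4b: add a subject-specific bias and Gaussian motor noise;
# the same noise vector is used in both experiments, as described above.
sigma = 7.5                                  # mm, one of the simulated levels
beta = rng.uniform(-5.0, 5.0)                # mm (reused for both, for simplicity)
noise = rng.normal(0.0, sigma, T)
d_step = beta + S_step + noise
d_sine = beta + S_sine + noise
```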

2.3 Data analysis

The first step in analyzing an adaptation experiment is to detect an adaptive response; that is, to establish whether adaptation occurs under a given set of experimental conditions. As described below, the mathematical objective of detecting an adaptive response in experimental data is identical to the objective of a human observer in a perceptual signal-detection experiment. In a signal-detection experiment, the task for an observer is to determine whether or not a signal was present in noisy sensory input (data). Similarly, our objective in analyzing the experimental and simulation data from the above adaptation experiments is to determine whether a noisy signal – the adaptive response – was present in the data. For this reason, we will refer to the data analytic goal of detecting an adaptive response as signal detection, and to the theoretical adaptive responses predicted for the step-function (Sstep) and sinewave (Ssine) experiments (i.e., the responses described by Eqs. 3a and 4a) as the adaptive signals, or simply the signals. The two experimental paradigms each posit data with an underlying functional form described by three signal parameters as well as two nuisance parameters per subject (bias β and noise magnitude σ). These functional forms are given in Eqs. 3b and 4b, respectively.

2.3.1 Signal detection

To establish the presence of an adaptive response in either the step or the sinewave experiment, we must detect the presence of the adaptive signal in the noisy dataset. This requires comparing the probability that the data were produced by a signal-present model with the probability that they were produced by a signal-absent model of the data (i.e., M vs. M′). To detect the adaptive signals defined by Eqs. 3a or 4a, we compute the evidence (Jaynes, 2003) in favor of the data containing the relevant (double-exponential or sinusoidal) signal, where evidence is defined as the decibel measure:

\lambda = 10 \log_{10} O(M_i),    (5)

where O(M_i) = p(M_i | D) / p(M′_i | D) is the odds ratio favoring the hypothesis that a signal is present in the data, i indicates the model under consideration (step or sine), D = {d_1, ···, d_N} represents the data of the N subjects, and M_i and M′_i refer to the signal-present and signal-absent models of the data. The probability of a model (signal-present or signal-absent) is defined as:

p(M | D \cdot \iota) \propto p(M | \iota)\, p(D | M \cdot \iota) = p(M | \iota) \int d\Theta\, p(\Theta | M \cdot \iota)\, p(D | \Theta \cdot M \cdot \iota),    (6)

where Θ represents all model parameters (both theoretically relevant and nuisance; these differ across the models under consideration), and ι is any other background information relevant to the problem (such as information about the range of possible motor variance values that occur in a normal human population, etc.). Note that the model likelihood, p(D | M · ι) = ∫ dΘ p(Θ | M · ι) p(D | Θ · M · ι), is the normalization term that is usually ignored in deriving posteriors over parameter values (e.g., in Eq. 7 below). Explicit expressions for the evidence favoring the presence of an adaptive response in the data obtained from the step and sinewave experiments are derived in Appendix A.
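
As a small worked example of Eq. 5, the conversion between an odds ratio and decibels of evidence (and back) can be computed directly; the function names below are ours.

```python
import numpy as np

def evidence_db(odds):
    """Eq. 5: evidence in decibels favoring the signal-present model."""
    return 10.0 * np.log10(odds)

def odds_from_db(evidence):
    """Invert Eq. 5 to recover the odds ratio from decibel evidence."""
    return 10.0 ** (evidence / 10.0)

# The evidence values reported in Section 3.1.1 correspond to these odds:
print(odds_from_db(2.3))    # ~1.7:1  (step-function experiment)
print(odds_from_db(21.2))   # ~130:1  (sinewave experiment)
```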

2.3.2 Parameter estimation

Once evidence for a particular model of the data has been obtained, probability distributions can be derived for each of the unknown signal parameters of that model. Expressions for the posterior probabilities over the signal parameters of both signal-present models are derived in Appendix B. As an example, the joint probability of the amplitude, frequency, and phase of the sinewave model defined by Eq. 4b is:

p(f \cdot A \cdot \varphi \,|\, D \cdot M_{sine}) \propto p(f \cdot A \cdot \varphi \,|\, M_{sine}) \prod_{i=1}^{N} \left( \overline{d_i^2} - \overline{d_i}^2 + A^2\left(\overline{\sin^2\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]} - \overline{\sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]}^{\,2}\right) - 2A\left(\overline{d_i \sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]} - \overline{d_i}\,\overline{\sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]}\right) \right)^{-(T-1)/2},    (7)

where the bars indicate averages over trials. One obtains best estimates of the three model parameters by first computing the marginal posterior, i.e., by numerically integrating over two of the three dimensions of each of the posterior distributions. For example, integrating over the amplitude and phase in the posterior over sinewave-model parameters yields the posterior over the frequency parameter of that model.
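
A minimal sketch, assuming the 3-D posterior of Eq. 7 has already been evaluated on a grid, of how the marginal posteriors could be obtained by numerical integration (here, summation over grid cells of equal volume). The grid and variable names are hypothetical.

```python
import numpy as np

def marginals(log_post):
    """log_post has shape (n_f, n_A, n_phi): the unnormalized log posterior of
    Eq. 7 evaluated on uniform grids of frequency, amplitude, and phase."""
    post = np.exp(log_post - log_post.max())   # avoid numerical underflow
    post /= post.sum()                         # normalize on the grid
    p_f = post.sum(axis=(1, 2))                # integrate out A and phi
    p_A = post.sum(axis=(0, 2))                # integrate out f and phi
    p_phi = post.sum(axis=(0, 1))              # integrate out f and A
    return p_f, p_A, p_phi

# Best estimates are then the grid values at each marginal's peak, e.g.
# f_hat = f_grid[np.argmax(p_f)] for a hypothetical frequency grid f_grid.
```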

3. Results

3.1 Experiments

3.1.1 Signal detection

Average results of the step-function and sinewave adaptation experiments are shown in Figures 3A and 3B, respectively. The windowed mean of step-function data (solid light green) shows no obvious exponential decay during the adaptation or washout phases. On the other hand, there is an obvious sinusoidal shape to the mean of the sinewave adaptation data (solid light blue). Although a data average or windowed mean cannot be used to formally analyze these data, this difference in the visibility of the adaptive response seen in data averages is confirmed by signal-detection evidence measurements: The step-function experiment yields only 2.3 dB of evidence for the predicted adaptive response, whereas the sinewave experiment yields 21.2 dB of evidence in favor of an adaptive response. These values correspond to odds of only 1.7:1 in favor of the presence of adaptation in the step-function data, but about 130:1 in favor of adaptation having occurred based on the sinewave data.

Figure 3.


(A) Average experimental data from four subjects in the step-function experiment (black dots) and a 23 trial windowed mean of those data (light green). Data consistent with an adaptive response would display a positive jump relative to baseline during initial perturbed trials; this should gradually decay over the course of the perturbation phase of the experiment. Adaptation during the perturbation phase should also result in a negative aftereffect during initial washout trials. (B) Average experimental data from the same four subjects in the sinewave experiment (black dots) and a windowed mean of those data over trials (light blue). Data consistent with an adaptive response should have an overall shape that is a delayed negative image of the perturbing sinewave.

Although one is free to choose a reasonable evidence threshold for detection in a given experimental circumstance (ideally depending on the costs of false alarms, hits, etc.), we note that thresholds are typically chosen between 3 dB and 10 dB, or odds of between 2:1 and 10:1. Thus, under any reasonable threshold evidence level we would have to conclude that, despite yielding positive evidence, the evidence was insufficient to infer that adaptation had occurred in the step-function experiment. In contrast, there was strong evidence favoring adaptation in the sinewave experiment.

We used a very small perturbation relative to motor noise levels and fingertip widths to test the detectability of adaptation under statistically noisy conditions, as well as to ensure that no cognitive effects would contaminate the result. So while it is perhaps not surprising that we failed to find significant evidence for adaptation within the standard step-function adaptation experimental paradigm, the current experimental conditions highlight the advantages of the sinewave adaptation paradigm. Indeed, while we equated the two experiments for overall perturbation (4 mm), the root-mean-square (RMS) perturbation was substantially higher in the step-function experiment, suggesting it should have produced a more detectable response. To equate RMS perturbation in the two experiments, we would have had to reduce the number of perturbation-phase trials from 128 to 38, which would have further compromised the ability of the step experiment to produce a detectable adaptive response to perturbation.
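
The arithmetic behind this comparison can be checked directly: a 4 mm step applied on 128 of 303 trials has an RMS perturbation of about 2.6 mm, whereas a 2 mm sinewave has an RMS of about 1.4 mm, and matching the latter requires roughly 38 perturbation-phase trials. A short sketch of the calculation (our own, in Python/NumPy):

```python
import numpy as np

Tb, Tp, Tw = 75, 128, 100
T = Tb + Tp + Tw

# RMS perturbation over all 303 trials of each experiment.
rms_step = 4.0 * np.sqrt(Tp / T)        # 4 mm step on Tp of T trials -> ~2.6 mm
rms_sine = 2.0 / np.sqrt(2.0)           # 2 mm amplitude sinewave     -> ~1.4 mm

# Perturbation-phase trials needed for the step experiment to match
# the sinewave experiment's RMS perturbation.
Tp_matched = (rms_sine / 4.0) ** 2 * T  # ~37.9, i.e. about 38 trials
```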

Finally, we note that many step-function motor adaptation experiments (Lackner and Dizio, 1994; Scheidt et al., 2001b; Shadmehr and Mussa-Ivaldi, 1994) use a different method of perturbation than our false feedback method, generating perturbed endpoints by applying an external force to the moving arm. Data generated from such experiments require separate parameters to describe correction amplitude (A) and offset (o). We require only an offset parameter because in our experiment correction amplitude and offset are related exactly through P = A + o, and the perturbation-induced error magnitude (P) is a known value. If the perturbation-induced error were not a known quantity, one would have to fit both A and o. This would have the effect of further impairing the detectability of adaptation from step-experiment data.

3.1.2 Parameter estimates

Only once a signal corresponding to a particular model has been detected is it sensible to fit the parameters of that model; i.e., it is unclear what information is conveyed by the fitted values of a model that is not supported by the data. We therefore fit the sinusoidal model parameters, whose marginal probability distributions are given in Fig. 4. As can be seen clearly in its probability distribution, the frequency parameter is fit particularly precisely, with a peak ±1 SD of 4.04 ± 0.06 cycle/experiment, and closely conforms to the prediction for a linear system undergoing sinusoidal perturbation. The other two parameters are not fit as precisely, but nevertheless provide fitted values that are reasonably consistent with the literature for an adaptive response: The maximum a posteriori amplitude is nearly 80% of the perturbing amplitude with a phase lag of about 5 trials.

Figure 4.


Marginal posterior probability distributions over the parameters of the sinewave model. Circles are plotted on the abscissa at the peak probability values, and vertical dashed lines are plotted at the values of the perturbing sinewave. (A) Probability of the response frequency. The probability peaks at 4.04, i.e., at the perturbing frequency. (B) Probability of the amplitude gain. Amplitude gain, the ratio of the response to the perturbing amplitude, is between 0.75 and 0.8, suggesting that nearly 80% of the perturbation is corrected. (C) Probability of the response phase. The distribution peaks at a lag of 5 trials, suggesting that the peak of the response sinusoid lagged the peak of the perturbing sinusoid by only a few reaches.

3.2 Simulations

Simulation data were generated for both the step-function and sinewave adaptation paradigms to measure differences in signal detectability for data from the two experiments. Signal-detection measurements were made for a series of simulated noise values, in simulated experiments with between 1 and 6 subjects (Fig. 5).

Figure 5.


Evidence for adaptation in the two experiments based on simulation data (in which adaptation does indeed occur) under different levels of motor noise. In both plots, dashed lines extending from a given colored circle indicate evidence values for experiments with multiple subjects with the same σ. Upwardly-angled dashed lines extend to the right of their corresponding colored circle in one-subject intervals; i.e., the last black circle on each line shows average evidence obtained from an experiment with six simulated subjects. The horizontal lines correspond to evidence values of zero and 3 dB, above which significantly positive evidence in favor of an adaptive signal may be inferred. (A) For a step-function perturbation, adaptation is only detectable at the 3 dB criterion in single simulated subjects (green) when the noise is below about 6.5 mm. (B) For sinewave perturbation, adaptation is detectable by the same 3 dB criterion for single simulated subjects (blue) even at relatively high noise levels (up to σ = 14.5 mm).

In our simulations, evidence values for single subjects (colored circles) under both experimental paradigms decay to zero as σ increases; i.e., extremely noisy data cannot provide evidence for or against the hypothesis that adaptation had occurred. These evidence values indicate that the sinewave paradigm produces adaptation data that are substantially more detectable than under the step-function paradigm, even though the perturbation ranges are identical. Step-function evidence values decay to zero more quickly with increasing motor noise than sinewave-adaptation evidence, and increase more slowly as data from multiple subjects are combined (filled circles). We conclude that sinewave adaptation provides greater power than the standard step-function method for eliciting and detecting adaptation effects.

4. Discussion

Our signal-detection simulations represent a general method of comparing experimental paradigms designed to detect the same effect. In each experiment the adaptive response is designed to have a different functional form, and that response comprises the signal to be detected. To detect the adaptive signal, the likelihood of that signal under the observed data is compared to the likelihood of a comparable no-signal model under the same data. The ratio of these likelihoods for simulation data provides a measure of the ability of each experimental paradigm to produce a detectable signal in the data. For our simulations, the difference in detectability of sinewave over step-function signals in the experimentally relevant conditions (e.g., combining 6 subjects with motor noise of σ = 7.5) is roughly the difference in the detectability of a gunshot vs. a birdcall (Branch and Beland, 1970).

Simulation results can also point to the need for a more elaborate model of the effect under consideration, as they appear to do here. Simulations of both paradigms suggest that experimentally observed evidence values should have been higher, given typical motor noise values of between σ = 4 and 10 mm. This discrepancy suggests that there are consistent effects present in the experimental data that are unmodeled by Eqs. 3 and 4, and were therefore attributed to noise in the signal-detection computations.

This discrepancy in observed vs. predicted evidence values suggests that an additional nuisance parameter might allow the model to fit the data better; for example, a subject-specific linear trend in the bias, i.e., a bias that changes slowly over the course of an experimental session. Whatever the nature of the unmodeled effect or effects, however, the result will be to reduce the experimentally measured evidence. The reason is that unmodeled effects in the data are treated as noise by Eq. 4; but unlike noise, unmodeled effects are consistent from one data source to another. Therefore, in addition to artificially increasing the portion of the data attributed to noise from single sources, the ‘noise’ added by such effects is not reduced by the same error-cancellation mechanism that tends to reduce the effective motor noise as data from multiple sources are combined.

While we have shown that sinewave adaptation is more sensitive than step-function adaptation for detection of an adaptive response, for similar reasons it is also more reliable for estimation of the parameters of that response (e.g., gain and phase lag). This is obviously useful in modeling the adaptive response. A standard adaptation model (e.g., Kawato et al., 1987; Scheidt et al., 2001a; Thoroughman and Shadmehr, 2000; Wolpert and Kawato, 1998) suggests that the motor system has a calibration parameter and that a portion of the error from each trial is subtracted from that parameter so as to improve the calibration on subsequent trials. This is a linear model with an exponential impulse response. The sinewave-adaptation paradigm can be used as a test for linearity. First, a nonlinear adaptive response should result in distortion products — responses at the harmonics of the perturbing frequency — and our method is particularly sensitive for detecting harmonic responses. Second, one can test directly whether adaptation satisfies homogeneity (using two amplitudes of adapter) and superposition (measuring the response to two individual sinewaves as well as to their sum). Finally, by measuring the gain and phase lag of the response to multiple temporal frequencies, one can test whether the impulse response is exponential.
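
As an illustration of the first test, a response at harmonics of the perturbing frequency can be looked for in the discrete Fourier transform of the trial series. The sketch below is our own spectral check, not the Bayesian evidence computation used elsewhere in the paper, and the function and argument names are hypothetical.

```python
import numpy as np

def cycle_amplitudes(d, f_pert=4, n_harmonics=3):
    """Amplitude of the response at the perturbing frequency and its first
    few harmonics (in cycles per experiment), from a trial series d."""
    T = len(d)
    spectrum = np.fft.rfft(d - d.mean()) / T * 2.0    # single-sided amplitudes
    return {k * f_pert: abs(spectrum[k * f_pert])
            for k in range(1, n_harmonics + 1)
            if k * f_pert < len(spectrum)}

# A purely linear adaptive response should concentrate power at 4 cycles/expt;
# distortion products would show up at 8, 12, ... cycles/expt.
```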

The use of sinewave inputs to analyze both linear and nonlinear systems has a long history in sensory neuroscience (e.g., Hochstein and Shapley, 1976a, b; Hughes and Maffei, 1966). An alternative approach is to use white-noise stimuli. White-noise analysis has some advantages (Marmarelis and Marmarelis, 1978), and has been used both in sensory neuroscience (e.g., Victor, 1979; Victor et al., 1977) and to investigate sensory-motor calibration (Baddeley et al., 2003; Burge et al., 2008). In some sense, white-noise stimulation allows one to investigate the entire contrast sensitivity function of the system simultaneously, by providing input at all temporal frequencies. At the same time, the power at any given frequency is low, thus requiring a large amount of data to be collected. By concentrating the perturbation power at a single frequency, the sinewave method allows the researcher to measure the system response efficiently at one temporal frequency at a time.

In summary, we have presented a new experimental method for measuring motor adaptation based on a sinusoidal perturbation function. This paradigm should allow investigators to more precisely characterize the nature of the adaptive response to perturbation than has been possible with the standard step-function method. Furthermore, sinewave adaptation should prove useful for neuroscience experiments involving perceptual as well as motor adaptation, due to its enhanced ability to produce detectable effects, and the fact that no baseline condition is required. Moreover, in a situation where the underlying model may not capture all elements of the data, and thus produces lower-than-predicted evidence, it is all the more important to use the most sensitive detection procedure.

Indeed, while one is always interested in using the experimental paradigm best capable of yielding significant effects, the ability of this method to produce detectable effects could be particularly helpful in human neuroimaging experiments, where it is useful to have strong experimental effects and to minimize scanning time. In many visual neuroimaging experiments, early visual areas are first defined by establishing retinotopy using a visual stimulus (a rotating wedge or expanding ring) that is repeated multiple times, and has a temporal frequency for which fMRI signal-to-noise ratio is relatively high (DeYoe et al., 1996; Engel et al., 1994; Freeman et al., 2011; Sereno et al., 1995). The ‘traveling wave’ of activity in cortex is measured by computing coherence of the voxel responses with the known temporal frequency of stimulus repetition. The analysis methods we introduce, as well as the simulation results, help to explain why such sinusoidal experimental paradigms are so powerful.

Highlights.

We identify two potential problems with step-function adaptation paradigms

Large perturbations are often required to produce detectable results, but can lead to cognitive responses

Much of the data is collected during baseline or at steady-state responding

We present an adaptation paradigm using incremental perturbations following a sinusoidal pattern

Simulation and experimental data show that the sinewave adaptation paradigm remediates both issues

Acknowledgments

This work was supported by NIH grant EY08266.

Appendix A: Signal detection

To detect the presence of a signal in a noisy dataset, we must first specify the functional form of that signal. That is, we must select an appropriate model to describe the signal before it makes sense to fit the parameters of that model. The simplest form of model selection involves comparing a pair of nested models where one model consists entirely of theoretically uninteresting variables (such as subject-specific biases, noise levels, etc.), and the other consists of these variables and an additional set of signal variables. That is, we consider comparison of a signal-absent and a signal-present model of the data. That signal may be as simple as a single constant term. Here, however, the signal is described by Eq. 3a in the step experiment and Eq. 4a in the sinewave experiment. In this Appendix, we derive equations for performing model selection. Once there is sufficient evidence for the signal-present model, one can then go on to estimate the parameters of the signal. That is the subject of Appendix B.

As stated in the text, we measure evidence favoring the signal-present model of the data using Eq. 5. In the case of the step experiment, the evidence is λ = 10 log10 O(Mstep), where the odds-ratio term is derived from Eq. 6. Assuming equal prior odds for the two models (p(Mstep | ι) = p(M′step | ι)), the odds ratio for the signal-present model of the step-function data is:

O(M_{step}) = \frac{\int_{\alpha_{1,min}}^{\alpha_{1,max}} d\alpha_1\, p(\alpha_1|\iota) \int_{\alpha_{2,min}}^{\alpha_{2,max}} d\alpha_2\, p(\alpha_2|\iota) \int_{o_{min}}^{o_{max}} do\, p(o|\iota) \prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} d\beta\, p(\beta|\iota) \int_{\sigma_{min}}^{\sigma_{max}} d\sigma\, p(\sigma|\iota)\, p(d_i|\sigma\,\beta\,o\,\alpha_1\,\alpha_2\,M_{step}\,\iota)}{\prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} d\beta\, p(\beta|\iota) \int_{\sigma_{min}}^{\sigma_{max}} d\sigma\, p(\sigma|\iota)\, p(d_i|\sigma\,\beta\,M'_{step}\,\iota)}
= \frac{\int_{\alpha_{1,min}}^{\alpha_{1,max}} \frac{d\alpha_1}{\Delta\alpha_1} \int_{\alpha_{2,min}}^{\alpha_{2,max}} \frac{d\alpha_2}{\Delta\alpha_2} \int_{o_{min}}^{o_{max}} \frac{do}{\Delta o} \prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} \frac{d\beta}{\Delta\beta} \int_{\sigma_{min}}^{\sigma_{max}} d\sigma\, \frac{\sigma^{-(T+1)}}{\log(\sigma_{max}/\sigma_{min})}\, e^{-\sum_{t=1}^{T} (S_{step}(t)+\beta-d_i(t))^2 / 2\sigma^2}}{\prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} \frac{d\beta}{\Delta\beta} \int_{\sigma_{min}}^{\sigma_{max}} d\sigma\, \frac{\sigma^{-(T+1)}}{\log(\sigma_{max}/\sigma_{min})}\, e^{-\sum_{t=1}^{T} (S'_{step}(t)+\beta-d_i(t))^2 / 2\sigma^2}},    (A.1)
where S'_{step}(t) denotes the prediction of the signal-absent model (i.e., Eq. 3a with A = 0, and hence o = P).

Here, we assume a flat prior for most parameters, normalizing the prior by integrating over a finite range (e.g., Δβ = βmax − βmin). For σ we use a Jeffreys (1946) prior proportional to 1/σ, also normalized over a finite range. Within these integrals, data from different sources (individual experimental or simulated subjects) are combined after first integrating over the two nuisance parameters, because we assume that all subjects share the same values of α1, α2, and o, but that there are individual differences in the values of the nuisance parameters β and σ. The innermost integral over σ can be solved analytically. Pulling out the normalizing constant, it is of the form \int_{\sigma_{min}}^{\sigma_{max}} \sigma^{-(T+1)} e^{-C\sigma^{-2}} d\sigma, where in the numerator of Eq. A.1, for example, C = \sum_{t=1}^{T} (S_{step}(t)+\beta-d_i(t))^2 / 2 does not depend on σ. Setting t = Cσ−2 and performing a change of variables, we find:

\int_{\sigma_{min}}^{\sigma_{max}} \sigma^{-(T+1)} e^{-C\sigma^{-2}} d\sigma = \frac{1}{2} C^{-T/2}\, \Gamma\!\left(\frac{T}{2}\right) \left[ Q\!\left(\frac{T}{2}, C\sigma_{max}^{-2}\right) - Q\!\left(\frac{T}{2}, C\sigma_{min}^{-2}\right) \right],    (A.2)

where Q(a,x) = \Gamma(a)^{-1} \int_x^{\infty} t^{a-1} e^{-t}\, dt is the normalized upper incomplete gamma function, and \Gamma(a) = \int_0^{\infty} t^{a-1} e^{-t}\, dt is the gamma function. Substituting Eq. A.2 into Eq. A.1 and simplifying, we find:

O(M_{step}) = \frac{\int_{\alpha_{1,min}}^{\alpha_{1,max}} \frac{d\alpha_1}{\Delta\alpha_1} \int_{\alpha_{2,min}}^{\alpha_{2,max}} \frac{d\alpha_2}{\Delta\alpha_2} \int_{o_{min}}^{o_{max}} \frac{do}{\Delta o} \prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} d\beta \left( Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(S_{step}(t)+\beta-d_i(t))^2}{2\sigma_{max}^2}\right] - Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(S_{step}(t)+\beta-d_i(t))^2}{2\sigma_{min}^2}\right] \right) \left( \sum_{t=1}^{T}(S_{step}(t)+\beta-d_i(t))^2 \right)^{-T/2}}{\prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} d\beta \left( Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(S'_{step}(t)+\beta-d_i(t))^2}{2\sigma_{max}^2}\right] - Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(S'_{step}(t)+\beta-d_i(t))^2}{2\sigma_{min}^2}\right] \right) \left( \sum_{t=1}^{T}(S'_{step}(t)+\beta-d_i(t))^2 \right)^{-T/2}}.    (A.3)

To evaluate this odds ratio, all other parameters must be integrated numerically.
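
In practice, the normalized upper incomplete gamma function Q(a, x) used here is available in standard numerical libraries (e.g., scipy.special.gammaincc), so the analytic inner integral of Eq. A.2 can be evaluated directly. The following Python sketch is our own, with hypothetical function and argument names; it returns the logarithm of Eq. A.2 for one value of C.

```python
import numpy as np
from scipy.special import gammaincc, gammaln

def log_sigma_integral(C, T, sigma_min=0.2, sigma_max=20.0):
    """Log of Eq. A.2: int sigma^-(T+1) exp(-C/sigma^2) dsigma over
    [sigma_min, sigma_max], with C = sum_t (S(t) + beta - d(t))^2 / 2."""
    a = T / 2.0
    # gammaincc(a, x) is the regularized upper incomplete gamma Q(a, x).
    bracket = gammaincc(a, C / sigma_max**2) - gammaincc(a, C / sigma_min**2)
    return np.log(0.5) - a * np.log(C) + gammaln(a) + np.log(bracket)
```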

The analogous expression for signal detection in the sinewave experiment, derived similarly, defines the odds ratio as:

O(M_{sine}) = \frac{\int_{f_{min}}^{f_{max}} \frac{df}{\Delta f} \int_{\varphi_{min}}^{\varphi_{max}} \frac{d\varphi}{\Delta\varphi} \int_{A_{min}}^{A_{max}} \frac{dA}{\Delta A} \prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} d\beta \left( Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(S_{sine}(t)+\beta-d_i(t))^2}{2\sigma_{max}^2}\right] - Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(S_{sine}(t)+\beta-d_i(t))^2}{2\sigma_{min}^2}\right] \right) \left( \sum_{t=1}^{T}(S_{sine}(t)+\beta-d_i(t))^2 \right)^{-T/2}}{\prod_{i=1}^{N} \int_{\beta_{min}}^{\beta_{max}} d\beta \left( Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(\beta-d_i(t))^2}{2\sigma_{max}^2}\right] - Q\!\left[\frac{T}{2}, \frac{\sum_{t=1}^{T}(\beta-d_i(t))^2}{2\sigma_{min}^2}\right] \right) \left( \sum_{t=1}^{T}(\beta-d_i(t))^2 \right)^{-T/2}},    (A.4)
where the denominator reflects the signal-absent sinewave model, for which S'_{sine}(t) = 0 (Eq. 4a with A = 0).

Although posterior distributions used for parameter fitting (Appendix B) can be computed from integrals involving improper (unnormalized) prior distributions over parameter values, signal-detection integrals must, as mentioned above, be computed using proper prior probabilities. Here, improper priors are normalized by defining ranges over which variables are consistent with each of the two models. In our simulations we use ranges of: 0.2 < σ < 20 mm, 0 < α_i < 5 trial^{-1}, 0 < A < 1.5P, 0 < φ < 60 deg (i.e., a lag of up to 13 reaches), 0.75 f_sine < f < 1.25 f_sine, and −15 < β < 15 mm.

Appendix B: Parameter estimation

Once a signal has been detected, it is useful to estimate the parameter values that best characterize that signal. Each experiment has a different theoretical adaptive signal associated with it, and each of these signals has three parameters. Above, we find evidence for adaptation in the sinewave experiment and subsequently estimate the parameters of the sinewave model using Eq. 7, which defines the 3D posterior distribution p(f · A · φ | D · Msine). Here we first derive Eq. 7, and then subsequently give the corresponding expression for the 3D posterior over the step parameters, p(o · α1 · α2 | D · Mstep).

In the sinewave experiment the three parameters are frequency (f), amplitude (A), and phase (φ). The posterior over sinewave parameters is a marginal distribution that is obtained by integrating over the nuisance parameters defined by the signal-present model of the data. In the case we explore in depth in the paper there are two subject-specific nuisance parameters (β and σ), and the 3D posterior distribution is defined as:

p(f \cdot A \cdot \varphi \,|\, D \cdot M_{sine}) \propto p(f \cdot A \cdot \varphi \,|\, M_{sine})\, p(D \,|\, f \cdot A \cdot \varphi \cdot M_{sine}) = p(f \cdot A \cdot \varphi \,|\, M_{sine}) \prod_{i=1}^{N} \left[ \int_0^{\infty} d\sigma\, p(\sigma | M_{sine}) \int_{-\infty}^{\infty} d\beta\, p(\beta | M_{sine})\, p(d_i \,|\, \beta \cdot \sigma \cdot f \cdot A \cdot \varphi \cdot M_{sine}) \right],    (B.1)

which assumes that the nuisance parameters are independent [i.e., p(β · σ | Msine) = p(β | Msine)p(σ | Msine)]. Using uninformative priors over β and σ (uniform in β and proportional to 1/σ), this becomes:

p(f \cdot A \cdot \varphi \,|\, D \cdot M_{sine}) \propto p(f \cdot A \cdot \varphi \,|\, M_{sine}) \prod_{i=1}^{N} \left[ \int_0^{\infty} \frac{d\sigma}{\sigma} \int_{-\infty}^{\infty} d\beta\, \sigma^{-T} e^{-\sum_{t=1}^{T}(S_{sine}(t)+\beta-d_i(t))^2 / 2\sigma^2} \right]
\propto p(f \cdot A \cdot \varphi \,|\, M_{sine}) \prod_{i=1}^{N} \left[ \int_0^{\infty} d\sigma\, \sigma^{-(T+1)}\, e^{-T\left[(\overline{S_{sine}^2}-\overline{S_{sine}}^2)+(\overline{d_i^2}-\overline{d_i}^2)-2(\overline{d_i S_{sine}}-\overline{d_i}\,\overline{S_{sine}})\right] / 2\sigma^2} \int_{-\infty}^{\infty} d\beta\, e^{-T(\beta-\hat\beta_i)^2 / 2\sigma^2} \right],    (B.2)

where the term \hat{\beta}_i = \overline{d_i} - \overline{S_{sine}} is the maximum-likelihood estimate of the subject-specific bias. The innermost integral over β has the form of a Gaussian integral missing its normalizing constant. Thus, we have:

p(f \cdot A \cdot \varphi \,|\, D \cdot M_{sine}) \propto p(f \cdot A \cdot \varphi \,|\, M_{sine}) \prod_{i=1}^{N} \left[ \int_0^{\infty} d\sigma\, \sigma^{-T}\, e^{-T\left[(\overline{S_{sine}^2}-\overline{S_{sine}}^2)+(\overline{d_i^2}-\overline{d_i}^2)-2(\overline{d_i S_{sine}}-\overline{d_i}\,\overline{S_{sine}})\right] / 2\sigma^2} \right].    (B.3)

The bracketed portion of this expression is of the form \int_0^{\infty} \sigma^{-T} e^{-C\sigma^{-2}}\, d\sigma, where C does not depend on σ. We again perform a change of variables (t = Cσ−2), converting the integral to the form of a gamma function, resulting in:

\int_0^{\infty} d\sigma\, \sigma^{-T}\, e^{-T\left[(\overline{S_{sine}^2}-\overline{S_{sine}}^2)+(\overline{d_i^2}-\overline{d_i}^2)-2(\overline{d_i S_{sine}}-\overline{d_i}\,\overline{S_{sine}})\right] / 2\sigma^2} = 2^{(T-3)/2}\, T^{-(T-1)/2}\, \Gamma\!\left(\frac{T-1}{2}\right) \left[ (\overline{S_{sine}^2}-\overline{S_{sine}}^2)+(\overline{d_i^2}-\overline{d_i}^2)-2(\overline{d_i S_{sine}}-\overline{d_i}\,\overline{S_{sine}}) \right]^{-(T-1)/2}.    (B.4)

Dropping terms that are not functions of data or parameter values (and that only scale the result), our solution becomes:

p(f \cdot A \cdot \varphi \,|\, D \cdot M_{sine}) \propto p(f \cdot A \cdot \varphi \,|\, M_{sine}) \prod_{i=1}^{N} \left( \left[\overline{d_i^2}-\overline{d_i}^2\right] + A^2\left[\overline{\sin^2\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]} - \overline{\sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]}^{\,2}\right] - 2A\left[\overline{d_i \sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]} - \overline{d_i}\,\overline{\sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]}\right] \right)^{-(T-1)/2}.    (B.5)

Again, terms with an overbar are averages over trials. For example, \overline{d_i \sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right]} = \frac{1}{T}\sum_{t=1}^{T} d_i(t)\,\sin\!\left[\tfrac{2\pi f}{T}(t-1)-\varphi\right].
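
A minimal Python/NumPy sketch (our own; the function and argument names are hypothetical) of evaluating the unnormalized log posterior of Eq. B.5 for a given parameter triple, assuming flat priors over f, A, and φ:

```python
import numpy as np

def log_posterior_kernel(d_subjects, f, A, phi):
    """Unnormalized log posterior of Eq. B.5 for one (f, A, phi) triple,
    where d_subjects is a list of length-T arrays of unperturbed errors."""
    log_p = 0.0
    for d in d_subjects:
        T = len(d)
        t = np.arange(1, T + 1)
        s = np.sin(2 * np.pi * f / T * (t - 1) - phi)
        bracket = ((d**2).mean() - d.mean()**2
                   + A**2 * ((s**2).mean() - s.mean()**2)
                   - 2 * A * ((d * s).mean() - d.mean() * s.mean()))
        log_p += -(T - 1) / 2.0 * np.log(bracket)
    return log_p

# Evaluating this kernel on a grid of (f, A, phi) values and summing over two
# of the three dimensions yields the marginal posteriors described below.
```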

The probability distribution used to estimate an individual parameter is obtained by computing the marginal distribution for that parameter by numerical integration over the other parameters. For example, to obtain the posterior distribution of amplitude, we integrate Eq. B.5 over frequency and phase.

In the step-function perturbation experiment the three parameters are an offset parameter (o) and two decay parameters (α1 and α2). The derivation of the posterior distribution across the parameters is nearly identical to the derivation for the sinewave perturbation above. The resulting posterior is entirely analogous to Eq. B.5:

p(o \cdot \alpha_1 \cdot \alpha_2 \,|\, D \cdot M_{step}) \propto p(o \cdot \alpha_1 \cdot \alpha_2 \,|\, M_{step}) \prod_{i=1}^{N} \left[ (\overline{d_i^2}-\overline{d_i}^2) + (\overline{S_{step}^2}-\overline{S_{step}}^2) - 2(\overline{d_i S_{step}}-\overline{d_i}\,\overline{S_{step}}) \right]^{-(T-1)/2}.    (B.6)

Note that in computing probability distributions over parameter values the normalization term, p(D | Mstep) or p(D | Msine), may be dropped because it is constant for a given model of the data. We also make use of improper Jeffreys (1946) priors and (for simplicity) uniform priors [i.e., p(σ) ∝ 1/σ, and flat priors for all other parameters], although other reasonable choices have been suggested (cf. Bretthorst, 1988).


References

1. Baddeley RJ, Ingram HA, Miall RC. System identification applied to a visuomotor task: near-optimal human performance in a noisy changing task. J Neurosci. 2003;23:3066–75. doi: 10.1523/JNEUROSCI.23-07-03066.2003.
2. Branch MC, Beland RD. Outdoor Noise and the Metropolitan Environment; Case Study of Los Angeles, With Special Reference to Aircraft. Los Angeles, Dept. of City Planning; Los Angeles, CA: 1970.
3. Bretthorst GL. Bayesian Spectrum Analysis and Parameter Estimation. Springer-Verlag; New York: 1988.
4. Bruggeman H, Zosh W, Warren WH. Optic flow drives human visuo-locomotor adaptation. Curr Biol. 2007;17:2035–40. doi: 10.1016/j.cub.2007.10.059.
5. Burge J, Ernst MO, Banks MS. The statistical determinants of adaptation rate in human reaching. J Vis. 2008;8:20, 1–19. doi: 10.1167/8.4.20.
6. Davidson PR, Wolpert DM. Scaling down motor memories: de-adaptation after motor learning. Neurosci Lett. 2004;370:102–7. doi: 10.1016/j.neulet.2004.08.003.
7. DeYoe EA, Carman GJ, Bandettini P, Glickman S, Wieser J, Cox R, Miller D, Neitz J. Mapping striate and extrastriate visual areas in human cerebral cortex. Proc Natl Acad Sci U S A. 1996;93:2382–6. doi: 10.1073/pnas.93.6.2382.
8. Engel SA, Rumelhart DE, Wandell BA, Lee AT, Glover GH, Chichilnisky EJ, Shadlen MN. fMRI of human visual cortex. Nature. 1994;369:525. doi: 10.1038/369525a0.
9. Freeman J, Brouwer GJ, Heeger DJ, Merriam EP. Orientation decoding depends on maps, not columns. J Neurosci. 2011;31:4792–804. doi: 10.1523/JNEUROSCI.5160-10.2011.
10. Gaveau J, Paizis C, Berret B, Pozzo T, Papaxanthis C. Sensorimotor adaptation of point-to-point arm movements after spaceflight: the role of internal representation of gravity force in trajectory planning. J Neurophysiol. 2011;106:620–9. doi: 10.1152/jn.00081.2011.
11. Harris CS. Beware of the straight-ahead shift--a nonperceptual change in experiments on adaptation to displaced vision. Perception. 1974;3:461–76. doi: 10.1068/p030461.
12. Held R, Bossom J. Neonatal deprivation and adult rearrangement: complementary techniques for analyzing plastic sensory-motor coordinations. J Comp Physiol Psychol. 1961;54:33–7. doi: 10.1037/h0046207.
13. Hochstein S, Shapley RM. Linear and nonlinear spatial subunits in Y cat retinal ganglion cells. J Physiol. 1976a;262:265–84. doi: 10.1113/jphysiol.1976.sp011595.
14. Hochstein S, Shapley RM. Quantitative analysis of retinal ganglion cell classifications. J Physiol. 1976b;262:237–64. doi: 10.1113/jphysiol.1976.sp011594.
15. Hughes GW, Maffei L. Retinal ganglion cell response to sinusoidal light stimulation. J Neurophysiol. 1966;29:333–52. doi: 10.1152/jn.1966.29.3.333.
16. Jaynes ET. Probability Theory: The Logic of Science. Cambridge University Press; Cambridge, UK: 2003.
17. Jeffreys H. An invariant form for the prior probability in estimation problems. Proc R Soc Lond A. 1946;186:453–61. doi: 10.1098/rspa.1946.0056.
18. Kagerer FA, Contreras-Vidal JL, Stelmach GE. Adaptation to gradual as compared with sudden visuo-motor distortions. Exp Brain Res. 1997;115:557–61. doi: 10.1007/pl00005727.
19. Kawato M, Furukawa K, Suzuki R. A hierarchical neural-network model for control and learning of voluntary movement. Biol Cybern. 1987;57:169–85. doi: 10.1007/BF00364149.
20. Lackner JR, Dizio P. Rapid adaptation to Coriolis force perturbations of arm trajectory. J Neurophysiol. 1994;72:299–313. doi: 10.1152/jn.1994.72.1.299.
21. Linkenhoker BA, Knudsen EI. Incremental training increases the plasticity of the auditory space map in adult barn owls. Nature. 2002;419:293–6. doi: 10.1038/nature01002.
22. Marmarelis PZ, Marmarelis VZ. Analysis of Physiological Systems: The White-Noise Approach. Plenum Press; New York: 1978.
23. Scheidt RA, Dingwell JB, Mussa-Ivaldi FA. Learning to move amid uncertainty. J Neurophysiol. 2001a;86:971–85. doi: 10.1152/jn.2001.86.2.971.
24. Scheidt RA, Dingwell JB, Mussa-Ivaldi FA. Learning to move amid uncertainty. J Neurophysiol. 2001b;86:971–85. doi: 10.1152/jn.2001.86.2.971.
25. Schor CM, Johnson CA, Post RB. Adaptation of tonic accommodation. Ophthalmic Physiol Opt. 1984;4:133–7.
26. Sereno MI, Dale AM, Reppas JB, Kwong KK, Belliveau JW, Brady TJ, Rosen BR, Tootell RB. Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science. 1995;268:889–93. doi: 10.1126/science.7754376.
27. Shadmehr R, Mussa-Ivaldi FA. Adaptive representation of dynamics during learning of a motor task. J Neurosci. 1994;14:3208–24. doi: 10.1523/JNEUROSCI.14-05-03208.1994.
28. Thoroughman KA, Shadmehr R. Learning of action through adaptive combination of motor primitives. Nature. 2000;407:742–7. doi: 10.1038/35037588.
29. Victor JD. Nonlinear systems analysis: comparison of white noise and sum of sinusoids in a biological system. Proc Natl Acad Sci U S A. 1979;76:996–8. doi: 10.1073/pnas.76.2.996.
30. Victor JD, Shapley RM, Knight BW. Nonlinear analysis of cat retinal ganglion cells in the frequency domain. Proc Natl Acad Sci U S A. 1977;74:3068–72. doi: 10.1073/pnas.74.7.3068.
31. Wallman J, Velez J, Weinstein B, Green AE. Avian vestibuloocular reflex: adaptive plasticity and developmental changes. J Neurophysiol. 1982;48:952–67. doi: 10.1152/jn.1982.48.4.952.
32. Wolpert DM, Kawato M. Multiple paired forward and inverse models for motor control. Neural Netw. 1998;11:1317–29. doi: 10.1016/s0893-6080(98)00066-5.
33. Zwiers MP, Van Opstal AJ, Paige GD. Plasticity in human sound localization induced by compressed spatial vision. Nat Neurosci. 2003;6:175–81. doi: 10.1038/nn999.
