PLOS Computational Biology. 2023 Sep 8;19(9):e1010526. doi: 10.1371/journal.pcbi.1010526

Error-independent effect of sensory uncertainty on motor learning when both feedforward and feedback control processes are engaged

Christopher L Hewitson 1,#, David M Kaplan 2,3,#, Matthew J Crossley 2,3,*,#
Editor: Adrian M Haith4
PMCID: PMC10522034  PMID: 37683013

Abstract

Integrating sensory information during movement and adapting motor plans over successive movements are both essential for accurate, flexible motor behaviour. When an ongoing movement is off target, feedback control mechanisms update the descending motor commands to counter the sensed error. Over longer timescales, errors induce adaptation in feedforward planning so that future movements become more accurate and require less online adjustment from feedback control processes. Both the degree to which sensory feedback is integrated into an ongoing movement and the degree to which movement errors drive adaptive changes in feedforward motor plans have been shown to scale inversely with sensory uncertainty. However, since these processes have only been studied in isolation from one another, little is known about how they are influenced by sensory uncertainty in real-world movement contexts where they co-occur. Here, we show that sensory uncertainty may impact feedforward adaptation of reaching movements differently when feedback integration is present versus when it is absent. In particular, participants gradually adjust their movements from trial-to-trial in a manner that is well characterised by a slow and consistent envelope of error reduction. Riding on top of this slow envelope, participants exhibit large and abrupt changes in their initial movement vectors that are strongly correlated with the degree of sensory uncertainty present on the previous trial. However, these abrupt changes are insensitive to the magnitude and direction of the sensed movement error. These results prompt important questions for current models of sensorimotor learning under uncertainty and open up new avenues for future exploration in the field.

Author summary

A large body of literature shows that sensory uncertainty inversely scales the degree of error-driven corrections made to motor plans from one trial to the next. However, by limiting sensory feedback to the endpoint of movements, these studies prevent corrections from taking place during the movement. Here, we show that when such corrections are promoted, sensory uncertainty punctuates between-trial movement corrections with abrupt changes that closely track the degree of sensory uncertainty but are insensitive to the magnitude and direction of movement error. This result marks a significant departure from existing findings and opens up new paths for future exploration.

Introduction

During episodes of sensorimotor control, sensory information about the current state of the body and environment is used to generate motor commands to achieve a desired goal. In an ideal world, this process would be implemented perfectly and result in error-free motor behaviour. In the real world, however, every stage of the sensorimotor control process is contaminated by noise [1] and uncertainty [2]. Despite this, humans achieve remarkably accurate and appropriate motor behaviour by harnessing two complementary processes for error correction. First, sensory feedback is rapidly integrated to adjust ongoing movements and compensate for sensed deviations from the planned movement [3, 4]. Second, over successive movements, feedforward motor plans, which map behavioural goals to the motor commands needed to accomplish those goals, are adapted in response to movement errors [5, 6]. Throughout this paper, we refer to the former as feedback integration and the latter as feedforward adaptation [7].

An important question that has attracted attention recently concerns how these error-correction processes are influenced by sensory uncertainty. To date, most studies either investigate feedback integration or feedforward adaptation, but not both. For example, in their pioneering study, Körding and Wolpert [8] had participants perform a variation on a standard visuomotor adaptation task and showed that visual feedback provided briefly at the midpoint of the reach drives movement corrections that are inversely proportional to the level of uncertainty in the sensory feedback. In other words, when uncertainty is high, sensory feedback information is integrated less to correct the ongoing reach (and reliance on prior knowledge increases) compared to when uncertainty is low. Several follow-up studies have made similar observations about the influence of sensory uncertainty on feedback integration [9, 10].

Studies investigating the feedforward adaptation component have similarly shown that adaptation rates inversely scale with sensory uncertainty such that increasing sensory uncertainty leads to smaller updates to the feedforward plan (slower adaptation) and vice versa [9, 11–13]. These studies use endpoint feedback only and in doing so prevent feedback integration, effectively isolating the feedforward component.

Importantly, because the majority of studies investigate these processes in isolation (but see section “Can paradigm differences explain our divergent results?” for a more nuanced discussion of Körding and Wolpert [8]), little is known about how sensory uncertainty influences feedback integration and feedforward adaptation when they co-occur—as they do in most natural movement contexts. For example, if highly uncertain sensory feedback leads to relatively small online corrections during a movement, does it also drive similar adaptive changes in feedforward motor plans? To our knowledge, no existing studies address this key question.

Here, we examine how sensory uncertainty influences feedforward adaptation and feedback integration when they co-occur. Our results indicate that (1) the presence of feedforward adaptation has little to no effect on how sensory uncertainty influences feedback integration, but (2) in the presence of feedback integration, sensory uncertainty appears to punctuate a slow and steady envelope of error reduction with large and abrupt changes to initial movement vectors that are insensitive to the magnitude and direction of the sensed movement error. This latter finding represents a significant departure from the existing literature, which consistently reports that sensory uncertainty inversely scales an error-dependent response.

Results

The overarching aim of our experiments was to determine how different levels of sensory uncertainty impact feedforward adaptation and feedback integration when they co-occur. Participants made planar reaching movements using visual feedback about their hand position provided immediately before movement onset and at midpoint (Experiment 1), or immediately before movement onset, at midpoint, and at endpoint (Experiments 2 and 3). See Figs 1 and 2 for details. We quantify our behavioural results using traditional statistical methods (see the “Statistical modelling” section for details) as well as by fitting state-space models that make explicit assumptions about how motor commands are planned, executed, and updated over time, as well as how these processes are modulated by sensory uncertainty (see section “State-space modelling” for details).
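
The fitted models themselves are specified in the “State-space modelling” section; purely as an illustration of the general form such models take, a minimal single-state sketch in Python is given below. The retention factor, learning rate, and uncertainty weights used here are hypothetical placeholders, not fitted values.

```python
import numpy as np

def simulate_adaptation(perturbation, uncertainty_weight, A=0.98, B=0.15):
    """Minimal single-state trial-to-trial adaptation model (illustrative only).

    x[t+1] = A * x[t] + B * w[t] * e[t]

    x is the feedforward estimate of the perturbation, e[t] is the error sensed
    on trial t, and w[t] is a weight in [0, 1] that down-scales learning when
    sensory uncertainty is high. A (retention) and B (learning rate) are
    placeholder values, not estimates from the present data.
    """
    n_trials = len(perturbation)
    x = np.zeros(n_trials + 1)
    for t in range(n_trials):
        error = perturbation[t] - x[t]          # error experienced on trial t
        x[t + 1] = A * x[t] + B * uncertainty_weight[t] * error
    return x[:-1]

# Example: a constant 12-degree perturbation with alternating low/high uncertainty.
pert = np.full(100, 12.0)
w = np.tile([1.0, 0.5], 50)                     # assumed weights: 1.0 = low, 0.5 = high uncertainty
adaptation = simulate_adaptation(pert, w)
```

Models of this family differ mainly in how many internal states they track and in how the weight w[t] is tied to the uncertainty manipulation.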

Fig 1. Experimental apparatus and task structure.


Each participant made planar reaching movements while grasping the KINARM handle. A mirror system occluded vision of the hand and created the impression that the hand and visual targets were in the same plane. Red start target, green reach target, and white cursor feedback are shown.

Fig 2. Experimental protocols: Example adaptation phase trial conditions.


(A) Experiment 1: Midpoint-only feedback (large uncertainty). (B) Experiment 2: Matched midpoint and endpoint feedback (low uncertainty). (C) Experiment 3: Unmatched midpoint (moderate uncertainty) and endpoint (low uncertainty) feedback. Bottom, middle and top slides represent start, middle and end of reach respectively. Coloured panels represent the possible uncertainty conditions (blue: σL, orange: σM, green: σH, red: σ∞). The example condition applied is outlined in black. In all experiments, a no-feedback washout phase followed the adaptation phase.

Experiment 1

Feedforward adaptation

Existing studies show that feedforward adaptation—when studied in the absence of feedback control—inversely scales with the level of uncertainty present in the sensory feedback [9, 11, 12, 14–17]. The primary contribution of Experiment 1 is to reveal how feedforward adaptation is influenced by sensory uncertainty when feedback integration is also engaged. To do this, we provided uncertain sensory feedback at movement midpoint (see Fig 2a), and omitted endpoint feedback entirely. We focused on uncertainty at midpoint because feedback integration can only occur if feedback is provided at some point before movement offset. Presenting feedback briefly at midpoint is the simplest possible design in which feedback integration and feedforward adaptation both co-occur.

Previous computational and experimental work suggests that when the motor system performs feedback control, the feedback controller itself can be used as a teaching signal to drive adaptation in the feedforward controller [18–23]. It is therefore possible that feedback integration of sensory uncertainty at midpoint prevents or otherwise alters the influence of sensory uncertainty on feedforward adaptation. To our knowledge, this prediction has never been directly tested. Consequently, another key contribution of this experiment is to provide a clear test of this common assumption of computational models of motor learning.

Fig 3 shows group-averaged initial movement vectors for Experiment 1. Panel A is colour coded such that the colour of the dot at trial t indicates the sensory uncertainty experienced at midpoint on trial t − 1. Recall that the trial sequence of perturbations and sensory uncertainty levels was matched across participants, which is a fundamental feature of our experimental design that makes this plot informative. Washout trials are shown in purple, but from the participant’s perspective they are identical to the unlimited uncertainty trials shown in red.

Fig 3. Experiment 1 mean initial movement vector across participants per trial.


(a) Dot colour for trial t represents the level of sensory uncertainty applied at midpoint on the previous trial t − 1. Performance during the washout phase is shown by purple x’s. The inset bar graph shows the mean difference between the last 10 trials of adaptation and the first 10 trials of washout plotted separately for each uncertainty level. Error bars are 95% confidence intervals. (b) Dot colour for trial t represents the error at reach midpoint on the same trial. (c) Dot colour for trial t represents the error at the reach endpoint on the previous trial t − 1.

There are several key takeaways from Fig 3a. First, the change in initial movement vectors across trials during the adaptation phase is in a direction that—on average—tends to reduce error towards an adaptation extent of 7.13±1.9° (59%), averaged over the last 10 trials. Second, initial movement vectors decay smoothly back to baseline during the washout phase. Together, these observations are consistent with the idea that changes in initial movement vector across trials are driven by an incremental error-driven adaptive process. Third, and perhaps most strikingly, there is a clear stratification across sensory uncertainty conditions indicating that sensory uncertainty at midpoint has a dramatic and systematic effect on the evolution of initial movement vectors across trials.
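
For reference, the 59% figure follows directly from the average 12° perturbation described below; a one-line check using only the values reported in the text:

```python
adaptation_extent_deg = 7.13     # mean initial movement vector over the last 10 adaptation trials
perturbation_deg = 12.0          # average perturbation size (see below)
print(round(100 * adaptation_extent_deg / perturbation_deg))   # prints 59 (%)
```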

Recall that the perturbation experienced on trial t is on average 12° and therefore the error experienced on every trial should drive the subsequent trial’s initial movement vector in a more positive direction. Instead, we see changes in movement vectors across trials from a lower uncertainty level to a higher uncertainty level (e.g., σL→σM, σM→σH) that are in a direction leading to greater error on the subsequent trial. Considered the other way around, any change across trials from a higher to a lower uncertainty level (e.g., σM→σL, σH→σM) almost always results in a change in initial movement vector that is adaptive (i.e., in an error-reducing direction), but of a much greater magnitude than would be seen if the uncertainty level did not change at all. Motor noise is not a plausible explanation for this pattern because the correlation between the level of uncertainty on the previous trial and movement direction on the current trial is consistent both across the experiment for individual participants and also across all participants.

According to either of the above perspectives, the stratification in initial movement vectors based on the previous trial’s level of sensory uncertainty is difficult to reconcile with what is known about the incremental, error-driven nature of the implicit motor adaptation system. This naturally raises the question of whether the observed stratification reflects the process of motor adaptation at all, or may instead reflect the operation of some other system (e.g., whatever system is responsible for explicit aiming strategies [24]). Even though our paradigm was not designed to address this question, some insight can be gleaned by examining the no-feedback washout phase (shown in purple in Fig 3). For example, if initial movement vectors on low uncertainty trials (blue dots in Fig 3a) are an artifact of explicit aiming, there should be a large difference between the initial movement vectors observed for these trial types at the end of adaptation and those observed at the beginning of washout. Visual inspection of Fig 3 shows that this is indeed the case. To formalize this finding, we computed the difference between the mean accuracy achieved on the last 10 trials of the adaptation phase and the first 3 trials of the washout phase separately for each uncertainty trial type and separately for each subject. We used 3 washout trials instead of 10 in order to limit contamination from forgetting (i.e., the decay back to baseline seen during washout) which can lead to an underestimation of the initial washout state. A repeated-measures ANOVA indicated a significant difference in these difference scores across uncertainty trial types (F(3, 57) = 27.64, p < .001, ηG² = 0.44). Post hoc paired t-tests corrected for multiple comparisons using the Bonferroni method (see Table 1) revealed that these difference scores were larger for the low uncertainty condition than they were for any other uncertainty condition. These difference scores were not significantly different between any of the other uncertainty trial conditions.

Table 1. Experiment 1 pairwise comparisons examining differences between uncertainty trial types in adaptation − washout difference scores.

A and B indicate the uncertainty trial types being compared; T is the observed t-statistic; dof is the degrees of freedom of the test; p-corr is the Bonferroni-corrected p-value; hedges is the Hedges G measure of effect size.

row A B T dof p-corr hedges
1 σL σM 6.98 19.00 0.00 1.95
2 σL σH 4.59 19.00 0.00 1.30
3 σL σ∞ 7.01 19.00 0.00 1.86
4 σM σH -2.18 19.00 0.25 -0.53
5 σM σ∞ 0.16 19.00 1.00 0.04
6 σH σ∞ 2.25 19.00 0.22 0.51
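
The column layout of Table 1 (A, B, T, dof, p-corr, hedges) resembles the output of the Python pingouin package, although the software used is not named here. As an illustration only, an analysis of this form could be run as follows; the data frame and column names are placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# One adaptation-minus-washout difference score per subject and uncertainty level.
# Values are random placeholders, not the experimental data.
rng = np.random.default_rng(0)
levels = ['sigma_L', 'sigma_M', 'sigma_H', 'sigma_inf']
df = pd.DataFrame({
    'subject': np.repeat(np.arange(20), len(levels)),
    'uncertainty': np.tile(levels, 20),
    'diff_score': rng.normal(size=20 * len(levels)),
})

# Repeated-measures ANOVA across uncertainty levels, then Bonferroni-corrected
# post hoc paired t-tests with Hedges g effect sizes.
aov = pg.rm_anova(data=df, dv='diff_score', within='uncertainty', subject='subject')
posthoc = pg.pairwise_tests(data=df, dv='diff_score', within='uncertainty',
                            subject='subject', padjust='bonf', effsize='hedges')
print(aov)
print(posthoc[['A', 'B', 'T', 'dof', 'p-corr', 'hedges']])
```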

Thus, the likely state of adaptation at the beginning of washout appears much more closely aligned with the adaptation estimated by initial movement vectors preceded by medium, high, and infinite uncertainty trials (orange, green, and red dots in Fig 3) than it does with the adaptation estimated by initial movement vectors preceded by low uncertainty trials (blue dots in Fig 3). This is consistent with the possibility that the blue dots in Fig 3a are the output of a process distinct from motor adaptation (see the “Adaptation vs aiming” subsection of the Discussion).

One small, apparent puzzle in our data is that several no-feedback trials (σ∞, in red) result in initial movement vectors on the subsequent trial that appear closely aligned with the low uncertainty trials (σL, in blue). These occur at trials 31, 71, 136, 137, 160 and 165 (Fig 3a). While this pattern could be due to noise, it is striking that out of these 6 trials, 5 are directly preceded by a low-uncertainty trial, and one (trial 137) is preceded by a no-feedback trial. Throughout adaptation, there are only 6 trial pairs where σL precedes σ∞ (trials [[3,4], [30,31], [70,71], [135,136], [159,160], [164,165]]). Thus, in all but the earliest case (trials [3, 4]) a no-feedback trial following a low-uncertainty trial had the same effect as a low-uncertainty trial on subsequent initial movement vectors, suggesting that no-feedback trials may simply preserve the behaviour from the previous trial.
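
Because the trial sequence was matched across participants, the σL→σ∞ pairs referred to above can be recovered mechanically from the uncertainty schedule. A small sketch, using a made-up schedule rather than the actual trial sequence:

```python
# Condition label for each trial, in presentation order (placeholder schedule).
uncertainty = ['sigma_M', 'sigma_H', 'sigma_L', 'sigma_inf', 'sigma_M']

# 1-indexed trial pairs (t, t+1) where a no-feedback trial directly follows
# a low-uncertainty trial.
pairs = [(t, t + 1)
         for t in range(1, len(uncertainty))
         if uncertainty[t - 1] == 'sigma_L' and uncertainty[t] == 'sigma_inf']
print(pairs)   # -> [(3, 4)] for this placeholder schedule
```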

Fig 3b colour codes initial movement vector by the error experienced at midpoint on the previous trial, and Fig 3c colour codes by the error experienced at endpoint. Since no visual feedback was provided at endpoint in this experiment, participants could only estimate endpoint error based on proprioceptive feedback. Interestingly, the stratification pattern observed when colour coding by sensory uncertainty is no longer evident in either of these panels. Overall, this suggests that sensory uncertainty, and not movement error, is responsible for the stratification of initial movement vector across trials.

We formalised these observations by fitting a regression model that treated initial movement vector as the observed variable. Predictor variables in this model were trial, the error experienced at midpoint/endpoint, the sensory uncertainty experienced at midpoint/endpoint, and the interaction between the error terms and the sensory uncertainty terms. All error and sensory uncertainty predictors were taken from the previous trial. See the “Statistical modelling” section for more details.
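
The column layout of Table 2 (coef, se, T, pval, CI[2.5%], CI[97.5%], relimp) resembles the output of pingouin.linear_regression, although, again, the software used is not named here. Purely as an illustration of the design just described (lagged uncertainty dummies, lagged midpoint error, their interactions, and log(trial)), a sketch with placeholder data and assumed column names:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Trial-level placeholder data; column names are assumptions, not the study's.
rng = np.random.default_rng(1)
n = 180
df = pd.DataFrame({
    'trial': np.arange(1, n + 1),
    'imv': rng.normal(size=n),                              # initial movement vector
    'err_mp': rng.normal(size=n),                           # midpoint error
    'uncert': rng.choice(['L', 'M', 'H', 'inf'], size=n),   # midpoint uncertainty level
})

# Error and uncertainty predictors come from the previous trial.
df['err_mp_prev'] = df['err_mp'].shift(1)
dummies = pd.get_dummies(df['uncert'].shift(1), prefix='u', dtype=float)[['u_M', 'u_H', 'u_inf']]

X = pd.concat([dummies,                                      # 'L' serves as the reference level
               df[['err_mp_prev']],
               dummies.mul(df['err_mp_prev'], axis=0).add_suffix(':err_mp_prev'),
               np.log(df[['trial']]).rename(columns={'trial': 'log_trial'})],
              axis=1).iloc[1:]                               # drop trial 1 (no previous trial)
y = df['imv'].iloc[1:]

model = pg.linear_regression(X, y, relimp=True)              # relimp gives relative importance
print(model[['names', 'coef', 'se', 'T', 'pval', 'CI[2.5%]', 'CI[97.5%]', 'relimp']])
```

Note that simple dummy coding expresses each uncertainty coefficient relative to the σL reference level, whereas the contrast labels in Table 2 (e.g., σM − σL) compare adjacent uncertainty levels; the sketch is meant only to show the overall structure of the design.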

The predicted initial movement vectors from this regression model are shown in Fig 4a, with the best fitting beta coefficients along with their 95% confidence intervals shown by blue lines in Fig 4b. Table 2 includes all estimated beta coefficients and corresponds to the blue confidence intervals displayed in Fig 4b. The model was statistically significant (Adjusted R2 = 0.831, F(8,168) = 103.1, p < .001).

Fig 4. Experiment 1 linear regression fit to initial movement vector.


(a) Initial movement vector predictions from the regression model superimposed over the behavioural data. (b) Point and 95% confidence interval estimates from best fitting regression models. Coefficients of the regression for predicting initial movement vector are shown in blue and coefficients for predicting change in initial movement vector are shown in orange.

Table 2. Experiment 1 regression results for predicting initial movement vector from error and sensory uncertainty terms.

These results correspond to the blue confidence intervals displayed in Fig 4. The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 1.06 0.29 3.69 0.00 0.50 1.63 NaN
2 σM − σL -1.95 0.23 -8.52 0.00 -2.41 -1.50 0.17
3 σH − σM -0.95 0.24 -3.93 0.00 -1.43 -0.47 0.07
4 σ∞ − σH 0.90 0.25 3.65 0.00 0.41 1.39 0.01
5 δMP 0.00 0.01 0.31 0.76 -0.02 0.03 0.03
6 (σM − σL):δMP -0.01 0.04 -0.28 0.78 -0.08 0.06 0.05
7 (σH − σM):δMP 0.01 0.04 0.22 0.83 -0.06 0.08 0.02
8 (σ∞ − σH):δMP 0.04 0.04 1.16 0.25 -0.03 0.12 0.01
9 log(Trial) 1.33 0.06 21.68 0.00 1.20 1.45 0.47

Sensory uncertainty across all levels significantly predicted the initial movement vector on the following trial in a direction that inversely scaled with uncertainty (Table 2, rows 2–4). Initial movement vectors were significantly greater on trials following low sensory uncertainty than they were following moderate sensory uncertainty (σM − σL; Table 2, row 2) and they were significantly greater on trials following moderate sensory uncertainty than they were following high sensory uncertainty (σH − σM; Table 2, row 3). However, they were significantly lower on trials following high sensory uncertainty than they were following unlimited sensory uncertainty (σ∞ − σH; Table 2, row 4) possibly suggesting that the unlimited uncertainty condition should be treated in a qualitatively distinct fashion relative to the other uncertainty conditions. Error at midpoint (ϵMP; Table 2, row 5) was not a significant predictor of initial movement vector (but see the results from the regression below examining change in initial movement vector). Furthermore, no interaction term between midpoint error and sensory uncertainty was significant (Table 2, rows 6–8). Finally, log(Trial) significantly predicted initial movement vectors (Table 2, row 9), indicating that initial movement vectors tend to increase over the course of the adaptation phase.

The relative importance of the log(Trial) term in the regression model was 0.47, and the relative importance of the sum of uncertainty terms was 0.25. This indicates that the slow envelope of the adaptation curve is well captured by a simple logarithmic function, but that the effect of sensory uncertainty captures substantial variance. Overall, this regression analysis revealed a clear effect of sensory uncertainty that is independent of the error experienced at midpoint. We found no evidence that sensory uncertainty scales the response to movement error.

We also fit a regression model treating change in initial movement vector from trial to trial as the observed variable. Predictor variables for this model were identical to those described above, but with no predictor for trial. The best fitting beta coefficients along with their 95% confidence intervals are indicated by orange lines in Fig 4b. The model was statistically significant (Adjusted R2 = 0.445, F(7,169) = 19.33, p < .001).

An intuition for what this analysis should reveal emerges from careful inspection of Fig 5a, which depicts change in initial movement vector colour coded by the sensory uncertainty at midpoint on the previous trial. Here it is evident that change in initial movement vectors inversely tracks sensory uncertainty levels with the exception of the no feedback trials (σ∞). This result can be seen in the regression results by noting that change in initial movement vectors were significantly greater on trials following low sensory uncertainty than they were following moderate sensory uncertainty (σM − σL; Table 3, row 2), they were significantly greater on trials following moderate sensory uncertainty than they were following high sensory uncertainty (σH − σM; Table 3, row 3), and they were significantly less on trials following high sensory uncertainty than they were following total sensory uncertainty (σ∞ − σH; Table 3, row 4).

Fig 5. Experiment 1 change in initial movement vector and linear regression fits.


(a) Violin plot depicting the distribution of mean changes in initial movement vector across all adaptation phase trials of the experiment separately for each uncertainty level. The inset of each violin shows a box plot in which the white dot indicates the median data value, the black box spans the 25% to 75% percentiles, and the whiskers extend to the most extreme data points. (b) Scatter plot showing the mean change in initial movement vector as a function of error experienced at midpoint. Lines indicate fitted simple linear regression lines. These regression lines do not correspond to the coefficients included in Table 3 (rows 6–8) and are included only as a visual aid. Colours indicate uncertainty level.

Table 3. Experiment 1 regression results for predicting change in initial movement vector from error and sensory uncertainty terms. These results correspond to the orange confidence intervals displayed in Fig 4.

The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 -0.32 0.13 -2.39 0.02 -0.59 -0.06 NaN
2 σM − σL -1.44 0.37 -3.92 0.00 -2.16 -0.71 0.17
3 σH − σM -1.52 0.39 -3.93 0.00 -2.29 -0.76 0.12
4 σ∞ − σH 1.68 0.39 4.27 0.00 0.90 2.46 0.04
5 δMP -0.08 0.02 -3.80 0.00 -0.12 -0.04 0.04
6 (σM − σL):δMP 0.05 0.06 0.93 0.35 -0.06 0.17 0.07
7 (σH − σM):δMP -0.07 0.06 -1.26 0.21 -0.19 0.04 0.03
8 (σ∞ − σH):δMP 0.10 0.06 1.77 0.08 -0.01 0.22 0.01

Further understanding of this effect can be built by examining Fig 5b, which depicts the change in initial movement vectors as a function of midpoint error. The lines in this plot represent simple linear regression lines (they are not the same multiple regression model described above) and are colour coded by sensory uncertainty. Note that if sensory uncertainty scales the response to error, the slope of the blue line (low uncertainty) should be the steepest, the slope of the orange line (moderate uncertainty) should be the next steepest, the slope of the green line (high uncertainty) should be the next steepest, and the slope of the red line should be the shallowest. Fig 5b does not show this pattern. See Table 3 rows 6–8 for statistics corresponding to the interaction between midpoint error and sensory uncertainty at midpoint. However, this analysis does reveal that midpoint error itself significantly contributes to predicting change in hand angle (Table 3 row 5). This can be seen in Fig 5b by noting that the overall trend of all regression lines is negative.
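
The per-condition lines in Fig 5b are simple regressions of change in initial movement vector on midpoint error, fit separately within each uncertainty level. A sketch of how such slopes could be computed, again with placeholder data and assumed column names:

```python
import numpy as np
import pandas as pd

# delta_imv: trial-to-trial change in initial movement vector; err_mp_prev and
# uncert_prev: midpoint error and uncertainty level on the previous trial.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    'delta_imv': rng.normal(size=200),
    'err_mp_prev': rng.normal(size=200),
    'uncert_prev': rng.choice(['L', 'M', 'H', 'inf'], size=200),
})

# One slope per uncertainty level; if uncertainty scaled the error response,
# the slopes would become progressively shallower from L through inf.
slopes = {level: np.polyfit(g['err_mp_prev'], g['delta_imv'], 1)[0]
          for level, g in df.groupby('uncert_prev')}
print(slopes)
```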

Overall, the results of this regression model (taking change in initial movement vector as the observed variable) largely echo the results of the regression model built with initial movement vector taken as the observed variable. Specifically, while midpoint error clearly influences the magnitude of hand angle change on the next trial, it does not interact with the level of sensory uncertainty. Furthermore, the sum of the relative importance of the sensory uncertainty predictors was much greater than that of midpoint error (0.33 vs 0.04). Thus, the main takeaway from these analyses is that both sensory uncertainty and movement error influence feedforward adaptation as estimated by initial movement vectors, but (1) they do so independently of each other, and (2) the effect of sensory uncertainty is more influential than the effect of movement error.

Feedback integration

Fig 6a shows feedback integration (i.e., the difference between midpoint and endpoint hand angles) as a function of sensory uncertainty at midpoint, and Fig 6b shows feedback integration as a function of error at midpoint coloured by sensory uncertainty. The main pattern observed in Fig 6 is that there are no significant differences in overall feedback integration depending on sensory uncertainty level (panel A), but there are large differences in how feedback integration responds to midpoint error depending on sensory uncertainty level (i.e., slope differences in panel B).

Fig 6. Experiment 1 feedback integration (endpoint hand angle − initial movement vector).


(a) Violin plot depicting the distribution of mean feedback integration across all adaptation phase trials of the experiment separately for each uncertainty level. The inset of each violin shows a box plot in which the white dot indicates the median data value, the black box spans the 25% to 75% percentiles, and the whiskers extend to the most extreme data points. (b) Scatter plot showing the mean feedback integration as a function of error experienced at midpoint. Lines indicate fitted linear regression lines, corresponding to the coefficients included in Table 4 (rows 6–8). Point and line colour indicates uncertainty level.

We formalised these observations by fitting a regression model treating the difference between endpoint and midpoint hand angle as the observed variable. Predictor variables were the error experienced at midpoint, the sensory uncertainty experienced at midpoint, and the interaction between these two terms. In contrast to the regression models reported for feedforward adaptation, all error and sensory uncertainty predictors were taken from the current trial. This regression was statistically significant (Adjusted R2 = 0.905, F(7,169) = 241.5, p < .001). Beta coefficient estimates and corresponding statistics are listed in Table 4.
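
This feedback-integration regression differs from the feedforward regressions above mainly in its dependent variable (endpoint hand angle minus initial movement vector) and in using current-trial rather than previous-trial predictors. A sketch in the same style as before, with placeholder data and assumed column names:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n = 180
df = pd.DataFrame({
    'imv': rng.normal(size=n),                              # initial movement vector
    'ep_angle': rng.normal(size=n),                         # endpoint hand angle
    'err_mp': rng.normal(size=n),                           # midpoint error, current trial
    'uncert': rng.choice(['L', 'M', 'H', 'inf'], size=n),   # midpoint uncertainty, current trial
})

df['fb_integration'] = df['ep_angle'] - df['imv']           # dependent variable

dummies = pd.get_dummies(df['uncert'], prefix='u', dtype=float)[['u_M', 'u_H', 'u_inf']]
X = pd.concat([dummies,
               df[['err_mp']],
               dummies.mul(df['err_mp'], axis=0).add_suffix(':err_mp')],
              axis=1)

model = pg.linear_regression(X, df['fb_integration'], relimp=True)
print(model[['names', 'coef', 'se', 'T', 'pval', 'relimp']])
```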

Table 4. Experiment 1 regression results for predicting feedback integration (endpoint hand angle − initial movement vector) from error and sensory uncertainty terms.

The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 -0.15 0.11 -1.35 0.18 -0.37 0.07 NaN
2 σM − σL 0.21 0.30 0.70 0.48 -0.39 0.81 0.01
3 σH − σM -0.90 0.32 -2.81 0.01 -1.53 -0.27 0.05
4 σ∞ − σH 0.78 0.32 2.41 0.02 0.14 1.42 0.05
5 δMP -0.51 0.02 -30.96 0.00 -0.55 -0.48 0.49
6 (σM − σL):δMP 0.33 0.05 7.03 0.00 0.23 0.42 0.05
7 (σH − σM):δMP 0.05 0.05 1.01 0.31 -0.05 0.14 0.11
8 (σ∞ − σH):δMP 0.52 0.05 10.90 0.00 0.43 0.61 0.14

The most important result from this analysis is that the feedback integration response to increasing midpoint error (i.e., the slopes in Fig 6b) was significantly greater for low sensory uncertainty than it was for moderate uncertainty [(σM − σL) * ϵMP; Table 4, row 6], and was also significantly greater for high uncertainty than it was for unlimited uncertainty midpoint feedback [(σ∞ − σH) * ϵMP; Table 4, row 8]. The difference between moderate and high uncertainty was non-significant [(σH − σM) * ϵMP; Table 4, row 7]. Overall, these results are consistent with prior studies showing that sensory uncertainty has an error-scaling effect on feedback integration [8, 10].

Experiment 2

The results of Experiment 1 raise the possibility that the uncertainty of sensory feedback does not inversely scale the magnitude of the error-driven feedforward update as it does in an endpoint feedback-only paradigm [11–13, 15–17]. Rather, our analysis suggests that sensory uncertainty acts largely independently of the experienced error to induce large and abrupt changes in the adapted state that can be—in the case of a transition from a high uncertainty trial—in a direction opposite to the general adaptive trend.

These striking results suggest that the presence of feedback integration fundamentally alters how feedforward adaptation is affected by sensory uncertainty. However, another possibility is that feedforward adaptation is influenced by sensory uncertainty differently depending on the temporal proximity of the sensory feedback signal to movement offset, regardless of whether or not feedback integration occurred at the midpoint of the movement [25–27]. Experiment 2 tests this possibility by providing midpoint and endpoint feedback matched in their level of uncertainty (see Fig 2B).

Fig 7 shows group-averaged initial movement vectors per trial, colour coded such that the colour of the point at trial t indicates the sensory uncertainty experienced at midpoint on trial t − 1. The extent of adaptation was 9.69±2.18° (81%) averaged over the last 10 trials.

Fig 7. Experiment 2 mean initial movement vector across participants per trial.


(a) The colour of the dot on trial t represents the level of sensory uncertainty applied at midpoint on the previous trial t − 1. Performance during the washout phase is shown by purple x’s. The inset bar graph shows the mean difference between the last 10 trials of adaptation and the first 10 trials of washout plotted separately for each uncertainty level. Error bars are 95% confidence intervals. (b) The colour of the dots on trial t represents the error at the reach midpoint on the same trial. (c) The colour of the dots on trial t represents the error at the reach endpoint on the previous trial t − 1.

Feedforward adaptation

In terms of sensory uncertainty scaling, the same basic pattern observed in Experiment 1 (i.e. Fig 3) is present here as well. Specifically, the trial-by-trial variation in initial movement vector is large, but cannot be attributed to noise because there is a clear stratification in how a particular uncertainty level influences the subsequent trial. Even with the inclusion of congruent endpoint feedback, sensory uncertainty on trial t − 1 continues to exert a powerful effect on the subsequent initial movement vector.

As was already discussed for Experiment 1, a natural question arises about whether this stratification reflects implicit motor adaptation or some other process such as explicit aiming strategies [24]. We therefore followed the same logic as before and performed the same analysis outlined above. In particular, we computed the difference between the mean accuracy achieved on the last 10 trials of the adaptation phase and the first 3 trials of the washout phase separately for each uncertainty trial type and separately for each subject. A repeated-measures ANOVA indicated that there was a significant difference in these difference scores across uncertainty trial types (F(3, 57) = 56.02, p < .001, ηG² = 0.50). Post hoc paired t-tests corrected for multiple comparisons using the Bonferroni method (see Table 5) revealed that these difference scores were larger for the low uncertainty condition than they were for any other uncertainty condition. Additionally, the difference scores for the high uncertainty trial type (green dots in Fig 7a) were significantly smaller than those for the infinite uncertainty trial type (red dots in Fig 7a). These difference scores were not significantly different between any of the other uncertainty trial conditions.

Table 5. Experiment 2 pairwise comparisons examining differences between uncertainty trial types in adaptation − washout difference scores.

A and B indicate the uncertainty trial types being compared; T is the observed t-statistic; dof is the degrees of freedom of the test; p-corr is the Bonferroni-corrected p-value; hedges is the Hedges G measure of effect size.

row A B T dof p-corr hedges
1 σL σM 10.61 19.00 0.00 2.06
2 σL σH 7.11 19.00 0.00 1.56
3 σL σ∞ 10.53 19.00 0.00 2.49
4 σM σH -2.43 19.00 0.15 -0.42
5 σM σ∞ 2.92 19.00 0.05 0.64
6 σH σ∞ 4.77 19.00 0.00 0.99

As seen in Experiment 1, this is consistent with the idea that the likely state of adaptation at the beginning of washout is more closely aligned with the adaptation estimated by initial movement vectors preceded by medium, high, and unlimited uncertainty trials (orange, green, and red dots in Fig 7a) than with the adaptation estimated by initial movement vectors preceded by low uncertainty trials (blue dots in Fig 7a). In this case, adaptation appears most aligned with the level estimated by high uncertainty trials. These findings are consistent with the possibility that the blue dots in Fig 7a are the output of a process distinct from motor adaptation (see the “Adaptation vs aiming” subsection of the Discussion).

Fig 7b and 7c depict initial movement vector colour coded by the error experienced at midpoint or endpoint, respectively. As in Experiment 1, these panels show a clear pattern of stratification by sensory uncertainty and no stratification by movement error, suggesting that sensory uncertainty (and not movement error) is responsible for the stratification of initial movement vector observed across trials. As in Experiment 1, the no-feedback trials (σ∞, in red) that follow low-uncertainty trials (σL, in blue) have the same effect on subsequent initial movement vectors as a low uncertainty trial, again suggesting that no-feedback trials may preserve the behaviour from the previous trial.

We fit a regression model of the same form as that reported for Experiment 1 (see also the “Statistical modelling” section). The predicted initial movement vectors from this regression model are shown in Fig 8a and the best fitting beta coefficients along with their 95% confidence intervals are shown by blue lines in Fig 8b. Table 6 includes all estimated beta coefficients and corresponds to the blue confidence intervals displayed in Fig 8b. The model was statistically significant (Adjusted R2 = 0.845, F(12,165) = 81.17, p < .001).

Fig 8. Experiment 2 linear regression fit to initial movement vector.


(a) Initial movement vector predictions from the regression model superimposed over the behavioural data. (b) Point and 95% confidence interval estimates from best fitting regression models. Coefficients of the regression for predicting initial movement vector are shown in blue and coefficients for predicting change in initial movement vector are shown in orange.

Table 6. Experiment 2 regression results for predicting initial movement vector from error and sensory uncertainty terms.

These results correspond to the blue confidence intervals displayed in Fig 8. The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 2.01 0.33 6.12 0.00 1.36 2.66 NaN
2 σM − σL -2.15 0.25 -8.76 0.00 -2.64 -1.67 0.14
3 σH − σM -1.05 0.38 -2.74 0.01 -1.81 -0.29 0.07
4 σ∞ − σH 0.81 0.36 2.25 0.03 0.10 1.52 0.01
5 δMP -0.17 0.08 -2.10 0.04 -0.34 -0.01 0.03
6 (σM − σL):δMP 0.01 0.11 0.14 0.89 -0.19 0.22 0.01
7 (σH − σM):δMP -0.17 0.16 -1.05 0.29 -0.48 0.15 0.01
8 (σ∞ − σH):δMP -0.45 0.31 -1.44 0.15 -1.07 0.17 0.01
9 δEP 0.21 0.15 1.38 0.17 -0.09 0.50 0.05
10 (σM − σL):δEP -0.04 0.45 -0.09 0.92 -0.92 0.84 0.03
11 (σH − σM):δEP 0.41 0.42 0.98 0.33 -0.41 1.23 0.02
12 (σ∞ − σH):δEP 0.37 0.40 0.91 0.37 -0.43 1.16 0.01
13 log(Trial) 1.68 0.07 23.32 0.00 1.54 1.82 0.47

As in Experiment 1, sensory uncertainty across all levels significantly predicted the initial movement vector on the following trial in a direction that inversely scaled with uncertainty (Table 6, rows 2–4). Initial movement vectors were significantly greater on trials following low sensory uncertainty than they were following moderate sensory uncertainty (σM − σL; Table 6, row 2) and they were significantly greater on trials following moderate sensory uncertainty than they were following high sensory uncertainty (σH − σM; Table 6, row 3). However, they were significantly smaller on trials following high sensory uncertainty than they were following total sensory uncertainty (σ∞ − σH; Table 6, row 4), possibly suggesting that total uncertainty is playing a qualitatively distinct role relative to the other uncertainty conditions.

Error at midpoint (ϵMP; Table 6, row 5), but not error at endpoint (ϵEP; Table 6, row 9), was a significant predictor of initial movement vector. Importantly, no interaction terms between midpoint/endpoint error and sensory uncertainty (Table 6, rows 6–8 and 10–12) were significant. Finally, log(Trial) significantly predicted initial movement vectors (Table 6, row 13), indicating the trend of initial movement vectors to increase over the course of the adaptation phase. The relative importance of the log(Trial) term was 0.47, that for the sum of uncertainty terms was 0.22, and that for the midpoint error (δMP) term was 0.03. This suggests that the dominant source of variance is largely captured by the slow envelope of adaptation, but also shows that sensory uncertainty plays an important role.

Overall, this regression echoed the results of Experiment 1 in revealing that (1) both error and sensory uncertainty influence adaptation, but (2) they exert their influences independently of each other. We therefore failed to find any evidence that sensory uncertainty scales the response to movement error.

We also fit a regression model treating change in initial movement vector as the observed variable. Predictor variables for this model were identical to those described above, but with no trial predictor. The best fitting betas along with their 95% confidence intervals are shown by the orange lines in Fig 8b. The model was statistically significant (Adjusted R2 = 0.522, F(11,166) = 18.54, p < .001).

A strong expectation for what this analysis should reveal can be built with careful inspection of Fig 9a, which depicts change in initial movement vector colour coded by the sensory uncertainty at midpoint on the previous trial. Here it is clear that change in initial movement vectors inversely tracks sensory uncertainty levels with the exception of the no feedback trials (σ∞). This result can be seen in the regression results by noting that change in initial movement vectors were significantly greater on trials following low sensory uncertainty than they were following moderate sensory uncertainty (σM − σL; Table 7, row 2). However, they were not significantly greater on trials following moderate sensory uncertainty than they were following high sensory uncertainty (σH − σM; Table 7, row 3), nor were they significantly greater on trials following high sensory uncertainty than they were following unlimited sensory uncertainty (σ∞ − σH; Table 7, row 4). Thus, while the clear trend is for stratification according to sensory uncertainty, only the low uncertainty comparison reached significance.

Fig 9. Experiment 2 change in initial movement vector and linear regression fits.


(a) Violin plot depicting the distribution of mean changes in initial movement vector across all adaptation phase trials of the experiment separately for each midpoint/endpoint uncertainty combination, colour coded as per Fig 6a. The inset of each violin shows a box plot in which the white dot indicates the median data value, the black box spans the 25% to 75% percentiles, and the whiskers extend to the most extreme data points. (b) Scatter plot showing the mean change in initial movement vector as a function of error experienced at midpoint. Point and line colour indicates uncertainty level. (c) Scatter plot showing the mean change in initial movement vector as a function of error experienced at endpoint on the previous trial. Point and line colour indicates uncertainty level. The lines in panels B and C indicate fitted simple linear regression lines. These regression lines do not correspond to the coefficients included in Table 7 and are included only as a visual aid.

Table 7. Experiment 2 regression results for predicting change in initial movement vector from error and sensory uncertainty terms.

These results correspond to the orange confidence intervals displayed in Fig 8. The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 0.49 0.17 2.95 0.00 0.16 0.82 NaN
2 σM − σL -1.28 0.38 -3.40 0.00 -2.02 -0.54 0.14
3 σH − σM 0.01 0.59 0.02 0.98 -1.15 1.17 0.11
4 σ∞ − σH -0.38 0.55 -0.69 0.49 -1.47 0.71 0.02
5 δMP -0.48 0.13 -3.76 0.00 -0.73 -0.23 0.06
6 (σM − σL):δMP -0.09 0.16 -0.58 0.57 -0.41 0.23 0.02
7 (σH − σM):δMP -0.84 0.24 -3.44 0.00 -1.32 -0.36 0.03
8 (σ∞ − σH):δMP 0.77 0.48 1.60 0.11 -0.18 1.72 0.03
9 δEP 0.94 0.23 4.06 0.00 0.48 1.40 0.04
10 (σM − σL):δEP 0.21 0.69 0.31 0.76 -1.15 1.57 0.02
11 (σH − σM):δEP 1.50 0.64 2.35 0.02 0.24 2.76 0.02
12 (σ∞ − σH):δEP -1.85 0.62 -2.99 0.00 -3.07 -0.63 0.04

Further intuitions can be built by examining Fig 9b, which depicts the change in initial movement vectors as a function of midpoint error. The lines in this plot are colour coded by sensory uncertainty. Fig 9c shows essentially the same information but for endpoint instead of midpoint error. Note that if sensory uncertainty scales the response to error, then the slope of the blue line (low uncertainty) should be the steepest, the slope of the orange line (moderate uncertainty) should be the next steepest, the slope of the green line (high uncertainty) should be the next steepest, and the slope of the red line should be the shallowest. Neither Fig 9b nor Fig 9c shows this pattern. See Table 7 rows 6–8 for statistics corresponding to midpoint error, and rows 10–12 for statistics corresponding to endpoint error.

Overall, the results of this regression (using change in initial movement vector as the observed variable) echo the results of the regression modelling using initial movement vector as the observed variable. Furthermore, the results from both regression models are aligned with those from Experiment 1. Specifically, there appears to be a clear error-independent effect of sensory uncertainty—albeit less pronounced statistically than in the results from Experiment 1. Importantly, we find no evidence that sensory uncertainty scales the response to movement error.

Feedback integration

Fig 10a shows feedback integration (i.e., the difference between endpoint and midpoint hand angle) as a function of sensory uncertainty at midpoint, and Fig 10b shows feedback integration as a function of error at midpoint coloured by sensory uncertainty. The main pattern observed in Fig 10 is that there are no significant differences in overall feedback integration depending on sensory uncertainty level (panel A), but there are large differences in how feedback integration responds to midpoint error depending on sensory uncertainty level (i.e., slope differences in panel B).

Fig 10. Experiment 2 feedback integration (endpoint hand angle − initial movement vector).


(a) Violin plot depicting the distribution of mean feedback integration across all adaptation phase trials of the experiment separately for each uncertainty level. The inset of each violin shows a box plot in which the white dot indicates the median data value, the black box spans the 25% to 75% percentiles, and the whiskers extend to the most extreme data points. (b) Scatter plot showing the mean feedback integration as a function of error experienced at midpoint. Lines indicate fitted linear regression lines, corresponding to the coefficients included in Table 8 (rows 6–8). Point and line colour indicates uncertainty level.

We formalised these observations by fitting a regression model treating the difference between endpoint and midpoint hand angle as the observed variable. Predictor variables were the error experienced at midpoint, the sensory uncertainty experienced at midpoint, and the interaction between these two terms. In contrast to the regression models reported for feedforward adaptation, all error and sensory uncertainty predictors were taken from the current trial. This regression was statistically significant (Adjusted R2 = 0.911, F(7,170) = 258.7, p < .001). Beta coefficient estimates and corresponding statistics are listed in Table 8.

Table 8. Experiment 2 regression results for predicting feedback integration (endpoint hand angle − initial movement vector) from error and sensory uncertainty terms.

The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 -0.26 0.08 -3.40 0.00 -0.41 -0.11 NaN
2 σM − σL -0.16 0.21 -0.74 0.46 -0.58 0.26 0.01
3 σH − σM -0.85 0.22 -3.83 0.00 -1.28 -0.41 0.02
4 σ∞ − σH 1.11 0.22 5.02 0.00 0.67 1.55 0.02
5 δMP -0.50 0.01 -34.78 0.00 -0.53 -0.47 0.60
6 (σM − σL):δMP 0.34 0.04 8.54 0.00 0.26 0.42 0.05
7 (σH − σM):δMP -0.02 0.04 -0.45 0.65 -0.10 0.06 0.08
8 (σ∞ − σH):δMP 0.59 0.04 14.10 0.00 0.50 0.67 0.14

The most important result from this analysis is that the response of feedback integration to increasing midpoint error (i.e., the slopes in Fig 10b) was significantly greater for low sensory uncertainty than it was for moderate uncertainty [(σM − σL) * ϵMP; Table 8, row 6], and was also significantly greater for high uncertainty than it was for totally uncertain midpoint feedback [(σ∞ − σH) * ϵMP; Table 8, row 8]. The difference between moderate and high uncertainty was non-significant [(σH − σM) * ϵMP; Table 8, row 7].

Overall, the pattern of feedback integration seen in Experiment 2 is qualitatively identical to that observed in Experiment 1. Both are consistent with prior studies showing that sensory uncertainty has an error-scaling effect on feedback integration [8, 10].

Experiment 3

In Experiment 3, we sought to further probe how sensory uncertainty influences feedback and feedforward control processes. In particular, Experiment 3 dissociates the sensory uncertainty experienced at midpoint from that experienced at endpoint (see Fig 2c). This allows us to investigate whether sensory uncertainty at midpoint dominates sensory uncertainty at endpoint (as might be expected due to the feedback correction made at that time point), or whether endpoint dominates midpoint (as might be expected due to its temporal proximity to movement offset) [25, 27].

Fig 11a shows group-averaged initial movement vectors per trial colour coded such that the colour of the point at trial t indicates the midpoint and endpoint sensory uncertainty combinations (σLL, σLH, σHL, σHH) experienced on trial t − 1. Mean adaptation extent over the last 10 trials was 11.91±2.29° (99%). There is a clear stratification between trial types according to endpoint uncertainty, regardless of midpoint uncertainty. Furthermore, similar to Experiments 1 and 2, we see that in the transition from lower endpoint uncertainty trials to higher endpoint uncertainty trials (e.g., σLL→σLH; σHL→σHH) the change in movement vector is in a direction that tends to increase error on the subsequent trial.

Fig 11. Experiment 3 mean initial movement vector across participants per trial.


(a) The colour of the dot on trial t represents the combination of sensory uncertainty applied at midpoint and endpoint on the previous trial t − 1. Specifically, σLL in blue, σLH in orange, σHL in green, and σHH in red. Performance during the washout phase is shown by purple x’s. The inset bar graph shows the mean difference between the last 10 trials of adaptation and the first 10 trials of washout plotted separately for each trial type. Error bars are 95% confidence intervals. (b) The colour of the dots on trial t represents the error at the reach midpoint on the previous trial t − 1. (c) The colour of the dots on trial t represents the error at the reach endpoint on the previous trial t − 1.

Feedforward adaptation

As noted in the Results sections of Experiments 1 and 2, it is possible that the stratification of initial movement vector by endpoint uncertainty reflects implicit adaptation in the motor system. But it is also possible that it reflects the operation of some other system or process such as explicit aiming strategies [24]. We therefore followed the same logic and performed the same analysis outlined in earlier sections for those experiments. Visual inspection of Fig 11a reveals that initial movement vectors at the beginning of washout are closely matched to those observed at the end of adaptation for the trial types containing high uncertainty at endpoint. To formalise this observation, we computed the difference between the mean accuracy achieved on the last 10 trials of the adaptation phase and the first 3 trials of the washout phase separately for each uncertainty trial type and separately for each subject. A repeated-measures ANOVA indicated that there was a significant difference in these difference scores across uncertainty trial types (F(3, 57) = 16.55, p < .001, ηG² = 0.26). Post hoc paired t-tests corrected for multiple comparisons using the Bonferroni method (see Table 9) revealed that these difference scores were larger for trials in which low uncertainty was provided at endpoint than for trials in which high uncertainty was provided at endpoint. The uncertainty provided at midpoint did not make a difference.

Table 9. Experiment 3 pairwise comparisons examining differences between uncertainty trial types in adaptation − washout difference scores.

A and B indicate the uncertainty trial types being compared; T is the observed t-statistic; dof is the degrees of freedom of the test; p-corr is the Bonferroni-corrected p-value; hedges is the Hedges G measure of effect size.

row A B T dof p-corr hedges
1 σLL σLH 3.39 19.00 0.02 0.78
2 σLL σHL -1.03 19.00 1.00 -0.25
3 σLL σHH 4.19 19.00 0.00 0.98
4 σLH σHL -5.33 19.00 0.00 -1.29
5 σLH σHH 0.62 19.00 1.00 0.13
6 σHL σHH 9.87 19.00 0.00 1.71

These findings are consistent with the idea that the likely state of adaptation at the beginning of washout is more closely aligned with the adaptation estimated by initial movement vectors preceded by high endpoint uncertainty trials (orange and red dots in Fig 11a) than with the adaptation estimated by initial movement vectors preceded by low endpoint uncertainty trials (blue and green dots in Fig 11a). Thus, the adaptation envelope seen with low uncertainty trials may reflect the output of a process distinct from motor adaptation (see the “Adaptation vs aiming” subsection for further discussion).

Fig 11b and 11c depict initial movement vector colour coded by the error experienced at midpoint or endpoint, respectively. As in Experiments 1 and 2, these panels show that the stratification seen when colour coding by sensory uncertainty trial type is not present when colour coding by error, suggesting that sensory uncertainty and not movement error is responsible for the stratification of initial movement vector across trials.

We formalised these observations by fitting a regression model of the same form as that reported in Experiments 1 and 2 (see also the “Statistical modelling” section). The predicted initial movement vectors from this regression model are shown in Fig 12a and the best fitting betas along with their 95% confidence intervals are shown by blue lines in Fig 12b. Table 10 includes all estimated beta coefficients and corresponds to the blue confidence intervals displayed in Fig 12b. The model was statistically significant (Adjusted R2 = 0.917, F(12,165) = 163.2, p < .001).

Fig 12. Experiment 3 linear regression fit to initial movement vector.


(a) Initial movement vector predictions from the regression model superimposed over the behavioural data. (b) Point and 95% confidence interval estimates from best fitting regression models. Coefficients of the regression for predicting initial movement vector are shown in blue and coefficients for predicting change in initial movement vector are shown in orange.

Table 10. Experiment 3 regression results for predicting initial movement vector from error and sensory uncertainty terms.

These results correspond to the blue confidence intervals displayed in Fig 12. The coef column contains β coefficients, the se column contains standard errors of these coefficients, the T column contains corresponding t-statistic, the pval column contains corresponding p-values, the CI[2.5%] and CI[97.5%] columns give the 95% confidence interval, and the relimp column gives the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 2.97 0.23 13.00 0.00 2.51 3.42 NaN
2 σLH − σLL -2.45 0.13 -19.39 0.00 -2.70 -2.20 0.16
3 σHL − σLH 2.21 0.30 7.37 0.00 1.62 2.80 0.02
4 σHH − σHL -2.20 0.38 -5.79 0.00 -2.95 -1.45 0.06
5 δMP 0.01 0.03 0.22 0.83 -0.05 0.06 0.02
6 (σLH − σLL):δMP -0.05 0.04 -1.25 0.21 -0.14 0.03 0.00
7 (σHL − σLH):δMP 0.01 0.08 0.09 0.93 -0.15 0.16 0.01
8 (σHH − σHL):δMP 0.06 0.10 0.58 0.56 -0.14 0.25 0.02
9 δEP -0.06 0.12 -0.52 0.60 -0.29 0.17 0.01
10 (σLH − σLL):δEP 0.24 0.33 0.73 0.47 -0.41 0.88 0.01
11 (σHL − σLH):δEP -0.07 0.34 -0.21 0.83 -0.74 0.60 0.01
12 (σHH − σHL):δEP -0.16 0.33 -0.50 0.62 -0.81 0.48 0.05
13 log(Trial) 1.72 0.05 33.82 0.00 1.62 1.82 0.56

As in Experiments 1 and 2, sensory uncertainty across all conditions significantly influenced the initial movement vector on the following trial. Here, this influence was to increase initial movement vectors after low uncertainty at endpoint and to decrease them after high uncertainty at endpoint (Table 10, rows 2–4). Initial movement vectors were significantly greater on trials following low endpoint sensory uncertainty than they were following high endpoint sensory uncertainty (σLH − σLL and σHH − σHL; Table 10, rows 2 and 4). Neither error at midpoint (ϵMP; Table 10, row 5) nor error at endpoint (ϵEP; Table 10, row 9) was a significant predictor of initial movement vector. Importantly, no interaction terms between midpoint/endpoint error and sensory uncertainty (Table 10, rows 6–8 and 10–12) were significant. Finally, log(Trial) significantly predicted initial movement vectors (Table 10, row 13), indicating the trend of initial movement vectors to increase over the course of the adaptation phase. As in Experiments 1 and 2, the relative importance of the log(Trial) term was 0.56 and the sum of the relative importance of the uncertainty terms was 0.24, indicating that both captured important variance in the data.

Overall, this regression echoed the results of Experiments 1 and 2, revealing that the effect of sensory uncertainty on feedforward adaptation is independent of movement error and failing to provide any evidence that sensory uncertainty scales the response to movement error.

We also fit a regression model treating change in initial movement vector as the observed variable. Predictor variables for this model were identical to those described above but with no trial predictor. The best-fitting betas along with their 95% confidence intervals are shown by the orange lines in Fig 12b. The model was statistically significant (Adjusted R2 = 0.507, F(11,166) = 17.56, p < .001).

A strong intuition for what this analysis ought to reveal can be built by careful inspection of Fig 13a, which depicts change in initial movement vector colour coded by the combination of sensory uncertainty at midpoint and endpoint on the previous trial. Here it is clear that change in initial movement vectors inversely tracks sensory uncertainty levels at endpoint. This result can be seen in the regression results by noting that changes in initial movement vector were significantly smaller following σHH and σLH trials than they were following σHL and σLL trials (Table 11, rows 2–4).

Fig 13. Experiment 3 change in initial movement vector and linear regression fits.


(a) Violin plot depicting the distribution of mean changes in initial movement vector across all adaptation phase trials of the experiment, separately for each midpoint/endpoint uncertainty combination, colour coded as per Fig 11a. The inset of each violin shows a box plot in which the white dot indicates the median data value, the black box spans the 25th to 75th percentiles, and the whiskers extend to the most extreme data points. (b) Scatter plot showing the mean change in initial movement vector as a function of error experienced at midpoint. Point and line colour indicates uncertainty level. (c) Scatter plot showing the mean change in initial movement vector as a function of error experienced at endpoint on the previous trial. The lines in panels B and C indicate fitted simple linear regression lines. These regression lines do not correspond to the coefficients included in Table 11 and are included only as a visual aid.

Table 11. Experiment 3 regression results for predicting change in initial movement vector from error and sensory uncertainty terms.

These results correspond to the orange confidence intervals displayed in Fig 12b. The coef column contains the β coefficients, the se column their standard errors, the T column the corresponding t-statistics, the pval column the corresponding p-values, the CI[2.5%] and CI[97.5%] columns the 95% confidence interval, and the relimp column the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 0.47 0.21 2.25 0.03 0.06 0.88 NaN
2 σLH−σLL -2.49 0.27 -9.36 0.00 -3.02 -1.97 0.23
3 σHL−σLH 3.52 0.62 5.69 0.00 2.30 4.74 0.05
4 σHH−σHL -2.94 0.79 -3.70 0.00 -4.50 -1.37 0.08
5 δMP -0.24 0.06 -4.28 0.00 -0.35 -0.13 0.05
6 (σLH−σLL):δMP -0.03 0.09 -0.37 0.71 -0.21 0.14 0.00
7 (σHL−σLH):δMP -0.34 0.16 -2.09 0.04 -0.66 -0.02 0.01
8 (σHH−σHL):δMP 0.27 0.20 1.30 0.19 -0.14 0.67 0.02
9 δEP 0.58 0.24 2.38 0.02 0.10 1.05 0.02
10 (σLH−σLL):δEP -0.27 0.69 -0.39 0.70 -1.62 1.09 0.01
11 (σHL−σLH):δEP 1.33 0.71 1.88 0.06 -0.07 2.72 0.01
12 (σHH−σHL):δEP -0.83 0.68 -1.21 0.23 -2.18 0.52 0.06

Further intuitions can be built by examining Fig 13b, which depicts the change in initial movement vectors as a function of midpoint error. The lines in this plot are colour coded by sensory uncertainty trial type. Fig 13c shows essentially the same information but for endpoint instead of midpoint error.

First, note that every line across both of these plots has a negative slope, indicating a general trend for both error sources to drive changes in initial movement vector. This is reflected in Table 11, rows 5 and 9, which show that both error at midpoint (δMP) and error at endpoint (δEP) are significant predictors. Note that since the coefficient for midpoint error is negative, it drives changes in initial movement vector that reduce midpoint error on the next trial. However, the coefficient for endpoint error is positive, indicating that it drives changes in initial movement vector that increase error at endpoint on the next trial. We elect not to speculate further on this finding given that the relative importance of these two terms was only 0.05 and 0.02, respectively.

Importantly, if sensory uncertainty at endpoint scales the response to error, then the slopes of the blue and green lines (σLL and σHL) should be the steepest, and the slopes of the orange and red lines (σLH and σHH) should be the shallowest. Neither Fig 13b nor Fig 13c shows this pattern. See Table 11, rows 6–8 for statistics corresponding to the interaction between sensory uncertainty and midpoint error, and rows 10–12 for statistics corresponding to the interaction between sensory uncertainty and endpoint error. Of these, only the (σHL − σLH):δMP term was significant, and the relative importance of this term was only 0.01. On the other hand, the sum of the relative importance of the sensory uncertainty terms was 0.36.

Overall, the results of this regression show that while both sensory uncertainty and movement error influence feedforward adaptation, they do so independently of each other. Furthermore, the influence of sensory uncertainty appears to outweigh the influence of movement error. Finally, like before, we find no evidence that sensory uncertainty scales the response to movement error.

Feedback integration

Fig 14a shows feedback integration (i.e., the difference between endpoint and midpoint hand angle) as a function of sensory uncertainty at midpoint, and Fig 14b shows feedback integration as a function of error at midpoint coloured by sensory uncertainty. The main pattern observed in Fig 14 is that there are no significant differences in overall feedback integration depending on sensory uncertainty level (panel A), but there are large differences in how feedback integration responds to midpoint error depending on sensory uncertainty level (i.e., slope differences in panel B).

Fig 14. Experiment 3 feedback integration (endpoint hand angle minus initial movement vector).


(a) Violin plot depicting the distribution of mean feedback integration across all adaptation phase trials of the experiment, separately for each midpoint uncertainty level. The inset of each violin shows a box plot in which the white dot indicates the median data value, the black box spans the 25th to 75th percentiles, and the whiskers extend to the most extreme data points. (b) Scatter plot showing the mean feedback integration as a function of error experienced at midpoint. Lines indicate fitted linear regression lines, corresponding to the coefficients included in Table 12, row 4. Point and line colour indicates uncertainty level.

We formalised these observations by fitting a regression model treating the difference between endpoint and midpoint hand angle as the observed variable. Predictor variables were the error experienced at midpoint, the sensory uncertainty experienced at midpoint, and the interaction between these two terms. In contrast to the regression models reported for feedforward adaptation, all error and sensory uncertainty predictors were taken from the current trial. This regression was statistically significant (Adjusted R2 = 0.94, F(3,174) = 925.1, p < .001). Beta coefficient estimates and corresponding statistics are listed in Table 12.

Table 12. Experiment 3 regression results for predicting feedback integration (endpoint hand angle minus initial movement vector) from error and midpoint sensory uncertainty terms.

The coef column contains the β coefficients, the se column their standard errors, the T column the corresponding t-statistics, the pval column the corresponding p-values, the CI[2.5%] and CI[97.5%] columns the 95% confidence interval, and the relimp column the corresponding relative importance.

row names coef se T pval CI[2.5%] CI[97.5%] relimp
1 β0 -0.46 0.07 -6.44 0.0 -0.61 -0.32 NaN
2 σMP -1.04 0.14 -7.26 0.0 -1.33 -0.76 0.01
3 δMP -0.76 0.01 -50.92 0.0 -0.79 -0.73 0.88
4 σMP:δMP 0.28 0.03 9.41 0.0 0.22 0.34 0.05

The most important result from this analysis is that the response of feedback integration to increasing midpoint error (i.e., the slopes in Fig 14b) was significantly greater (i.e., steeper slope) for low sensory uncertainty at midpoint than it was for high uncertainty at midpoint (Table 12 row 4).

Overall, the pattern of feedback integration seen in Experiment 3 is qualitatively identical to that observed in Experiments 1 and 2. Both are consistent with prior studies showing that sensory uncertainty has an error-scaling effect on feedback integration [8, 10].

Model-based results

Experiments that study feedforward adaptation in the absence of feedback integration have shown that sensory uncertainty scales error-driven adaptation. Our experiments—which all provide a short 100 ms window of task-relevant midpoint feedback and therefore induce both feedback integration and feedforward adaptation—clearly show that while both movement error and sensory uncertainty influence feedforward adaptation, they appear to exert their influence independently of each other. Although it seems unlikely, it is possible that our data can be accounted for by a state-space model that embodies the classic view that sensory uncertainty scales the response to error. Another possibility is that error-driven motor adaptation drives the slow envelope of improving performance over the course of the experiment, and the effect of sensory uncertainty is to punctuate this envelope with abrupt changes (boosts or dips) that do not depend on the magnitude or direction of the experienced error.

Here we explore the ability of two classes of models to account for our data. The first class assumes that sensory uncertainty interacts with the updating of an adaptive learning process. This class of models contains the error-scaling, retention-scaling, and bias-scaling models described below. Of these, the error-scaling model embodies the classic view that sensory uncertainty scales an error-driven update. The second class of models assumes that sensory uncertainty interacts with an additional aiming process that is memoryless (i.e., it does not retain any value from one trial to the next and is instead completely determined by the experienced sensory uncertainty on the previous trial). This class of models contains the state-aim-scaling and output-aim-scaling models described below. This model class is in line with the idea that sensory uncertainty transiently activates explicit aiming. However, there is nothing intrinsic to this model that demands the aiming be driven by an explicit process (see the “Adaptation versus aiming” discussion section).

Each of these models makes importantly different assumptions about the role of sensory uncertainty on feedforward adaptation in the presence of feedback integration. The feedforward adaptation component of all five models was based on a standard linear dynamical systems model of sensorimotor adaptation [28]. This type of model assumes that internal state variables which map desired motor goals to motor plans are updated on a trial-by-trial basis according to three factors: (1) an error term that determines how internal states are updated after a movement error is detected, (2) a retention term that determines how quickly the internal state returns to baseline, and (3) a bias term that determines the baseline mapping that will be returned to in the absence of sensory input.

The error-scaling model assumes that the error term is inversely scaled by the level of sensory uncertainty. The retention-scaling model assumes that the retention term is inversely scaled by the level of sensory uncertainty. The bias-scaling model assumes that the bias term is inversely scaled by the level of sensory uncertainty. The state-aim-scaling model is equivalent to the bias-scaling model with zero retention and zero error sensitivity. In contrast to all other models, the output-aim-scaling model assumes that the output motor command—as opposed to some aspect of the internal state—is inversely scaled by sensory uncertainty. For all models, four parameters (γL, γM, γH and γ∞) encode the magnitude of scaling applied on low uncertainty, medium uncertainty, high uncertainty, and no-feedback trials.

Sensory uncertainty influences feedback integration in the same way across all five models: error experienced at reach midpoint generates a feedback motor command in the direction opposite to the sensed movement error, with a magnitude that scales inversely with the level of sensory uncertainty of that signal. The magnitude of this scaling is captured by four parameters (ηL, ηM, ηH and η∞).

We fit single-state and two-state variants of each of the models just described. As suggested by this nomenclature, single-state model variants assume a single state variable determines the mapping from motor goal to motor plan. Two-state variants assume this mapping is determined by two state variables, one fast but labile and the other slow but stable [29]. In our two-state models, uncertainty scaling was applied only to the fast state variable.

Finally, we fit two different versions of the two-state models that differed in the bounds that the parameters were allowed to take. In one version, these parameter bounds allowed the internal state variables to take negative values. In the other version, the bounds were constrained such that the internal state variables could not take negative values. See the supplemental figures for slow- and fast-state model fits (S1–S30 Figs), as well as distributions of best-fitting parameters (S1–S3 Tables), as described in the State-space modelling section of the Materials and methods. The BIC values across all models and experiments are presented in Fig 15.

Fig 15. Bar graph depicting model BIC values for Experiments 1, 2 and 3.


Error-scaling model variants in blue. Retention-scaling variants in orange. Bias-scaling variants in green. State-aim variants in red. Output-aim variants in purple. Opacity indicates model sub-type: non-negative two-state models have 100% opacity, two-state models have 50% opacity, and one-state models have 25% opacity. Error bars represent 95% confidence intervals.
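For readers who want to reproduce this comparison, the sketch below shows one way to compute a BIC score for a least-squares model fit. The paper does not spell out the exact BIC formula used, so the Gaussian-error form below, and all variable names, are assumptions for illustration only.

```python
import numpy as np

def bic_from_sse(sse, n_points, n_params):
    """Gaussian-error BIC for a model fit by least squares.

    sse      : sum of squared residuals returned by the fitting routine (cf. Eqs 23-24)
    n_points : number of fitted observations (e.g., midpoint plus endpoint angles)
    n_params : number of free parameters in the model
    """
    return n_points * np.log(sse / n_points) + n_params * np.log(n_points)

# Hypothetical comparison of a two-state and a one-state variant for one participant.
bic_two_state = bic_from_sse(sse=40.0, n_points=360, n_params=9)
bic_one_state = bic_from_sse(sse=55.0, n_points=360, n_params=6)
delta_bic = bic_one_state - bic_two_state  # positive values favour the two-state model
```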

We also performed a model rank analysis in which, for each participant within each experiment, we ordered the models from best fitting (smallest BIC) to worst fitting (largest BIC). We then counted the number of participants best fit (rank 1), second best fit (rank 2), third best fit (rank 3), and so on, for each model. The resulting counts are shown in Fig 16. This figure clearly reveals that in Experiment 1 the State-aim-scaling-two-state and Output-aim-scaling-two-state models provide the first and second best fits to the data. The non-negative versions of these models, in addition to the bias-scale-two-state model, also provide the third, fourth, and fifth best account of the data. A very similar pattern is observed in Experiment 2, with the State-aim-scaling-two-state and Output-aim-scaling-two-state models providing the majority of best and second best fits. However, here the retention-scale-two-state model provided the best fit for 4 participants, the second best fit for 3 participants, and the third best fit for 12 participants, while the bias-scale-two-state model provided the fourth best fit for 16 participants. The results for Experiment 3 are slightly more varied, with best and second best fits being split between the State-aim-scaling-two-state, Output-aim-scaling-two-state, retention-scale-two-state, and bias-scale-two-state models. These models, in addition to the bias-scale-two-state-non-neg model, are also the third best fitting models. None of the error-scaling model variants performed well by this analysis, being ranked between 6 and 10 across all experiments. Additionally, Figs 17, 18 and 19 illustrate the predicted behaviour from each model class in comparison to the behaviour observed in humans for Experiment 1, Experiment 2, and Experiment 3, respectively.
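The rank analysis itself is straightforward to express with pandas; the sketch below assumes a hypothetical long-format table of per-participant BIC values (the column names are ours, not taken from the authors' code).

```python
import pandas as pd

def rank_counts(bics: pd.DataFrame) -> pd.DataFrame:
    """Count how many participants give each model each rank.

    bics : long-format table with columns 'participant', 'model', and 'bic'
           (one BIC value per participant and model).
    """
    ranked = bics.copy()
    # Rank models within each participant: rank 1 = smallest (best) BIC.
    ranked['rank'] = ranked.groupby('participant')['bic'].rank(method='min').astype(int)
    # Rows = models, columns = ranks, cells = number of participants.
    return ranked.groupby(['model', 'rank']).size().unstack(fill_value=0)
```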

Fig 16. Model rank analysis showing the number of participants that were best fit (rank 1), second best fit (rank 2), third best fit (rank 3), etc. by a model of each type.

Results for Experiment 1 are shown on the left using shades of blue, results for Experiment 2 are shown in the middle using shades of green, and results for Experiment 3 are shown on the right using shades of red. Deeper colours indicate a greater count.

Fig 17. Experiment 1 model fits.


Left column shows initial movement vectors averaged across participants overlaid with the average full model prediction of the (A) Two-state error-scaling model. (C) Two-state retention-scaling model. (E) Two-state bias-scaling model. (G) Two-state state-aim-scaling model. (I) Two-state output-aim-scaling model. Right column shows corresponding model fits to endpoint hand angles. Here, human performance averaged across participants is shown in blue. Model predictions in orange. Fit lines and R2 values represent the average of models fit to individual subjects.

Fig 18. Experiment 2 model fits.


Left column shows initial movement vectors averaged across participants overlaid with the average full model prediction of the (A) Two-state error-scaling model. (C) Two-state retention-scaling model. (E) Two-state bias-scaling model. (G) Two-state state-aim-scaling model. (I) Two-state output-aim-scaling model. Right column shows corresponding model fits to endpoint hand angles. Here, human performance averaged across participants is shown in blue. Model predictions in orange. Fit lines and R2 values represent the average of models fit to individual subjects.

Fig 19. Experiment 3 model fits.


Left column shows initial movement vectors averaged across participants overlaid with the average full model prediction of the (A) Two-state error-scaling model. (C) Two-state retention-scaling model. (E) Two-state bias-scaling model. (G) Two-state state-aim-scaling model. (I) Two-state output-aim-scaling model. Right column shows the corresponding model fits to endpoint hand angles. Here, human performance averaged across participants is shown in blue. Model predictions in orange. Fit lines and R2 values represent the average of models fit to individual subjects.

We performed one paired-sample t-test per experiment to compare BIC values for two-state versus one-state model variants. These tests indicated that in all experiments the BIC values of the two-state models were significantly better (more negative) than those of the one-state models, with large effect sizes indicating a substantial difference between the two model types (Experiment 1: t(19) = 68.68, p < 0.001, g = 16.77; Experiment 2: t(19) = 106.66, p < 0.001, g = 27.51; Experiment 3: t(19) = 90.04, p < 0.001, g = 17.06). In Experiment 1, the mean BIC score for the two-state model was -359.41 (SD = 12.05) and for the one-state model was -169.77 (SD = 10.02). In Experiment 2, the mean BIC score for the two-state model was -372.48 (SD = 11.19) and for the one-state model was -116.42 (SD = 6.43). In Experiment 3, the mean BIC score for the two-state model was -437.17 (SD = 12.24) and for the one-state model was -225.94 (SD = 12.03).
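A comparison of this kind can be run with the pingouin library already used in the analysis pipeline; the sketch below is a minimal illustration with placeholder variable names, not the authors' analysis script.

```python
import pingouin as pg

def compare_variants(bic_two_state, bic_one_state):
    """Paired-sample t-test and Hedges' g for per-participant BIC values.

    bic_two_state, bic_one_state : arrays with one BIC value per participant
    for the two-state and one-state variants of the same model class.
    """
    ttest = pg.ttest(bic_two_state, bic_one_state, paired=True)
    g = pg.compute_effsize(bic_two_state, bic_one_state, paired=True, eftype='hedges')
    return ttest[['T', 'dof', 'p-val']], g
```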

We also performed paired-sample t-tests per experiment to compare BIC values for the different model classes (i.e., error-scaling, retention-scaling, bias-scaling, state-aim-scaling, and output-aim-scaling). The state-aim-scaling (mean BIC = -380.96, SD = 12.64) and output-aim-scaling (mean BIC = -380.95, SD = 12.63) models provided significantly better fits than any other model in Experiment 1, but could not be distinguished from each other (see S1 Table).

The same pattern was also found in Experiment 2, with the state-aim-scaling (mean BIC = -389.00, SD = 12.81) and output-aim-scaling (mean BIC = -388.88, SD = 12.64) models providing significantly better fits than any other model (see S2 Table). These models again could not be distinguished from each other.

In Experiment 3, the error-scaling model performed significantly worse than all other models, but the remaining models were all indistinguishable from each other (see S3 Table).

In summary, two-state models outperform one-state models, the state-aim-scaling and output-aim-scaling models are the most commonly preferred, and the error-scaling models are never preferred. Although the aim-scaling models provide the best fits, all other two-state models nonetheless also provide good fits (i.e., high R2 values). Consequently, any inferences drawn from the best-fitting model should be taken lightly, and further research is needed for more definitive model selection.

Discussion

The current study is the first to examine how sensory uncertainty influences feedforward adaptation and feedback integration when they co-occur (but see our discussion of Körding and Wolpert [8] in the “Divergence from existing work” subsection below). In line with previous research, we find that the extent to which sensory feedback is integrated into an ongoing reach is inversely scaled by its level of uncertainty, regardless of the presence or absence of feedforward adaptation [8–10, 30]. However, in sharp contrast to previous studies—all of which have found that sensory uncertainty inversely scales an error-driven update [8, 9, 11, 12, 14–17]—we show that the level of sensory uncertainty experienced on the previous trial punctuates a slow envelope of error reduction with large and abrupt changes to initial movement vectors that are insensitive to the magnitude and direction of the sensed movement error on the previous trial. Our results are highly novel and prompt important questions for future sensorimotor learning research to address.

Divergence from existing work

Standard models of motor learning assume a linear relation between adaptation rate and error size [14, 28, 31–35], and the influence of sensory uncertainty on this process has been thought to inversely scale the error-driven update [11–13, 36, 37] (i.e., our error-scaling model). These results have often been interpreted through the lens of Bayesian [36] or other optimality frameworks such as that of Kalman filters [38]. These frameworks assume that as sensory uncertainty increases, the motor system should limit adaptation in response to observed errors because they likely reflect greater sensory noise (to which adaptation would be sub-optimal) rather than actual changes in the external environment (to which adaptation would be optimal). Consequently, when sensory uncertainty is high, the motor system should adapt less to a given error.

The observed pattern in our data is inconsistent with this characterization. In particular, while both sensory uncertainty and movement error influence feedforward adaptation, they do so independently of each other (but see the “Adaptation versus aiming” subsection below). For instance, the change in initial movement vectors after high sensory uncertainty trials consistently led to increased error on the subsequent trial, regardless of the magnitude and direction of the movement error experienced on the previous trial. This finding is supported by our regression analyses, which revealed that the interaction terms between sensory uncertainty and error were consistently non-significant or exhibited a pattern inconsistent with error-scaling.

Motivated by this apparent insensitivity to error, we developed a set of models in which sensory uncertainty has no effect on the error-driven component of feedforward adaptation. In the bias-scaling model, sensory uncertainty scales a constant bias term in the feedforward update. In the retention-scaling model, sensory uncertainty scales the rate at which the system returns to baseline. In the state-aim-scaling model, sensory uncertainty scales a bias term in the absence of retention or error updating, and in the output-aim-scaling model, sensory uncertainty directly influences the feedforward motor output. In every experiment—and even in every participant—all of these “error-independent” models provided better fits than the error-scaling model. This is a significant point of divergence from the existing literature.

Adaptation versus aiming

Our results can be viewed from at least two perspectives. From one perspective, the initial movement vectors can be seen as reflecting the current state of feedforward adaptation. In this view, adaptation following low sensory uncertainty trials behaves as expected, leading to adjustments that tend to reduce errors in subsequent trials. However, trials following greater sensory uncertainty induce adjustments that increase errors in subsequent trials. It is this latter pattern of behaviour that is seemingly problematic for existing models of sensorimotor learning.

An alternative perspective suggests that the initial movement vectors result from two factors. First, they reflect the current state of feedforward adaptation, which dictates how actions are executed. Second, they also carry the influence of explicit aiming strategies that dictate the selection of what action to execute [24, 39–41]. From this perspective, explicit aiming may drive the abrupt changes in initial movement vectors following low sensory uncertainty trials. For example, low sensory uncertainty trials might offer a clear signal for participants to notice the mismatch between their aiming point and where the cursor actually lands. This observation might lead to the generation of a hypothesis about the direction and magnitude of the perturbation. Following trials with greater sensory uncertainty, participants’ confidence in this hypothesis may be eroded due to poor sensory feedback. As a result, participants might attempt to reach straight to the target (or as straight as their current level of adaptation allows) to obtain a better explicit estimate of the true perturbation.

This possibility is consistent with the pattern of initial movement vectors observed during the washout phase of all three experiments. Specifically, the initial movement vectors begin their decay back to baseline from the current state of the higher uncertainty trials, not from the level of the lower uncertainty trials. Furthermore, our modeling results strongly favored the state-aim-scaling and output-aim-scaling models, which are also consistent with this possibility. Recall that these models assume an aiming process with no retention from one trial to the next and solely rely on the sensory uncertainty experienced in the previous trial.

On the other hand, our proposed explanation for the involvement of explicit aiming in the observed data remains highly speculative. For instance, it is unclear why high uncertainty trials would prompt participants to abandon their previously successful explicit strategy, only to suddenly revert to the same strategy just a few trials later. Additionally, it is puzzling why almost all participants would invoke the same explicit strategy in the same way during these trials. This observation seems inconsistent with explicit aiming, as one would expect humans to devise different coping strategies.

Furthermore, it is important to acknowledge that our study was not designed to distinguish between implicit feedforward adaptation and explicit aiming processes. Additionally, all the models we tested yielded relatively high R2 values, and this suggests that models which do not resemble explicit aiming can still effectively account for our data. Ultimately, further research is needed to understand how sensory uncertainty affects the interplay between feedforward adaptation and explicit aiming.

Key paradigm differences

Our study closely resembles the design of Körding and Wolpert (2004) [8], which is one of the seminal studies in establishing error-scaling as a model of sensory uncertainty on feedforward adaptation. However, a crucial distinction lies in our specific focus on the first 180 trials of adaptation, while their study examined behavior after 2000 trials of adaptation had already taken place. This difference in design reflects the divergence in research questions between our two studies. Körding and Wolpert aimed to investigate if participants had learned a prior distribution of perturbations and whether they would incorporate new information into that prior in a Bayesian manner. Therefore, exposing participants to thousands of trials of the perturbation served as a practical way to impose a prior onto their subjects. They used the interplay between sensory uncertainty and feedback integration as a readout of what participants believed about the perturbation and how they updated these beliefs. In contrast, our study emphasizes understanding the interplay of feedback integration and feedforward adaptation as participants encounter the perturbation for the first time (e.g., when errors are likely to be relatively large and frequent [13, 42]). In principle, we could have directly compared our results to the early trials in Körding and Wolpert’s study. However, regrettably, they did not report this data, leaving us unable to make a direct comparison between our feedforward adaptation results and their findings.

Since the seminal work of Körding and Wolpert (2004) [8], several other studies have investigated the influence of sensory uncertainty on feedforward adaptation, and all of these have supported an error-scaling model. Our study differs from these in a few ways. First, we interleaved different uncertainty conditions pseudo-randomly across trials, whereas most other relevant studies used blocked designs [9, 13, 17, 43]. However, since Wei and Körding [12] also used a trial-interleaved design and found support for an error-scaling model, this is unlikely to be an important driver of our results.

Two remaining paradigm differences seem most compelling. In particular, our study is the first to investigate the influence of sensory uncertainty on feedforward adaptation when feedback integration and feedforward adaptation co-occur. In contrast, most existing studies only deliver feedback at movement endpoint (but see our discussion of Körding and Wolpert (2004) [8] at the top of this section). In doing so, they largely prevent corrections from occurring during the movement [11–13, 15–17]. This is the fundamental design feature we set out to manipulate in this study.

A final possibility is that our results may reflect the effect of sensory uncertainty on an explicit aiming process (as discussed above in the “Adaptation versus aiming” subsection), whereas some existing studies have used designs that limit the influence of explicit aiming. For example, one study employed a zero-mean variable perturbation [12] and another employed task-irrelevant clamped feedback [13]. Ultimately, adjudicating between an explanation based on the co-occurrence of feedback integration and feedforward adaptation processes and one based on changes to explicit aiming processes will require further research.

Conclusion

Both the degree to which sensory feedback is integrated into an ongoing movement and the degree to which movement errors drive changes in feedforward motor plans have been shown to scale inversely with sensory uncertainty. Yet, little is known about how they respond to sensory uncertainty in real-world movement contexts where they co-occur. Here, we show that in this context, participants gradually adjust their movements from trial-to-trial in a manner that is well characterised by a slow and consistent envelope of error reduction, but also exhibit large and abrupt changes in their initial movement vectors that correlate with the degree of sensory uncertainty present on the previous trial yet are insensitive to the magnitude and direction of the sensed movement error. This may be seen as contextual alteration to the adaptation of feedforward motor plans (i.e., changes in how actions are executed) or as contextual alteration to the selection of what action to execute (e.g., aiming). In either case, our results prompt important questions for current models of sensorimotor learning under uncertainty and open up interesting new avenues for future exploration in the field.

Materials and methods

Participants

A total of 60 naive participants (32 males, 28 females, age 17–33 years) with normal or corrected to normal vision and no history of motor impairments participated in the experimental study. All participants gave written informed consent before the experiment and were either paid and recruited from the Macquarie University Cognitive Science Participant Register or were Macquarie University undergraduates participating for course credit. Neither written nor verbal consent was obtained from parents or guardians of participants aged 17 years (n = 2) because these participants were deemed capable of providing their own consent according to our ethics protocol. All experimental protocols were approved by the Macquarie University Human Research Ethics Committee (protocol number: 52020339922086). Participants were randomly assigned to one of three experiments (n = 20 per experiment). Sample sizes were consistent with field-standard conventions for visuomotor adaptation experiments [4446].

Experimental apparatus

A unimanual KINARM endpoint robot (BKIN Technologies, Kingston, Ontario, Canada) was utilized in the experiments for motion tracking and stimulus presentation (Fig 1). The KINARM has a single graspable manipulandum that permits unrestricted 2D arm movement in the horizontal plane. A projection-mirror system enables presentation of visual stimuli that appear in this same plane. Participants received visual feedback about their hand position via a cursor (solid white circle, 2.5 mm diameter) controlled in real-time by moving the manipulandum. Mirror placement and an opaque apron attached just below the subject’s chin ensured that visual feedback from the real hand was not available for the duration of the experiment.

General experimental procedure

Participants performed reaches with their dominant (right) hand from a starting position located at the center of the workspace (solid red circle, 0.5 cm in diameter) to a single reach target (solid green circle, 0.5 cm in diameter) located straight ahead (0° in the frontal plane) at a distance of 10 cm. When participants moved the cursor within the boundary of the start target, its colour changed from red to green and the reach target appeared, indicating the start of a trial. Participants were free to reach at any time after the start target changed colour. Participants first completed a 20-trial baseline phase during which veridical online feedback was provided. Immediately following baseline, a 180-trial adaptation phase was completed. During the adaptation phase, once the cursor exited the start target, cursor feedback was extinguished and rotated counterclockwise (to the left) of the true hand position by an amount drawn at random on each trial from a Gaussian distribution with a fixed mean of 12° and standard deviation of 4°. Random trial-by-trial perturbations, the order of which was trial-matched across all participants, were applied to prevent completely predictable movements during the adaptation phase and to probe the effect of sensory uncertainty at a trial-by-trial resolution. Participants were instructed to use cursor feedback to guide their reaches whenever it was available, and to move their hand straight through the target as accurately as possible. There were no breaks between phases, and transitions between phases were not explicitly signaled to participants in any way.

Depending upon the specific experiment (see descriptions below for details), displaced cursor feedback was provided at reach midpoint (100ms duration) and/or at endpoint (100ms duration), or withheld altogether. To help guide the participant’s hand back to the starting position, a green ring centered over the starting position appeared with a radius equal to the distance between the hand and starting position. Once the participant’s hand was 1 cm from the starting position, the ring was removed and cursor feedback was reinstated.

To investigate the effect of sensory uncertainty on feedback integration and feedforward adaptation, the information provided about the visuomotor perturbation (true cursor position) was manipulated in the following way. One of four visual uncertainty levels (σL, σM, σH, σ∞) was selected and applied on a given trial according to the specific experimental protocol, with the trial sequence matched across participants. In the zero uncertainty condition (σL), feedback was a single white circle (0.5 cm in diameter; 5.73° arc-angle at midpoint, 2.86° at endpoint), identical to the initial cursor. In the moderate uncertainty condition (σM), feedback was one of 10 randomly generated point clouds comprised of 50 small translucent white circles (0.1 cm in diameter) distributed as a two-dimensional Gaussian with a standard deviation of 0.5 cm (5.73° arc-angle at midpoint, 2.86° at endpoint) and a mean centered over the true (perturbed) cursor position on the current trial. In the high uncertainty condition (σH), everything was the same as in the moderate uncertainty condition except that the point clouds had an SD of 1 cm (11.47° arc-angle at midpoint, 5.73° at endpoint). In the unlimited uncertainty condition (σ∞), no feedback was provided at all.

Immediately following the adaptation phase, participants experienced a 100-trial washout phase during which no cursor feedback was provided. The maximum allowable time to complete a reach was 1000 ms. Irrespective of the cursor’s position, if participants did not cross the lower bound of the end target radius (9.5 cm), the trial would time out and restart. If reaches exceeded the time limit or did not cross the lower bound of the target, the trial was repeated.

Experiment 1

All four feedback uncertainty types (σL, σM, σH, σ∞) were applied on 25% of trials each (45 trials per type) at midpoint only (Fig 2a). No feedback was provided at endpoint. The order of uncertainty conditions and perturbation values was randomised and trial-matched across all participants.

Experiment 2

The protocol employed in Experiment 2 was identical to Experiment 1 except that both midpoint and endpoint feedback were provided on each trial (Fig 2b). Midpoint and endpoint feedback had matched uncertainty levels.

Experiment 3

Experiment 3 consisted of four trial types (Fig 2c). Trial type 1 consisted of low uncertainty midpoint and low uncertainty endpoint feedback (σLL). Trial type 2 consisted of low uncertainty midpoint and high uncertainty endpoint feedback (σLH). Trial type 3 consisted of high uncertainty midpoint and low uncertainty endpoint feedback (σHL). Trial type 4 consisted of high uncertainty at midpoint and high uncertainty at endpoint feedback (σHH). Each of the four trial types occurred on 25% of trials (45 trials each).

Data analysis

Movement kinematics including hand position and velocity were recorded for all trials using BKIN’s Dexterit-E experimental control and data acquisition software (BKIN Technologies). Data was recorded at 200 Hz and logged in Dexterit-E. Custom scripts for data processing were written in MATLAB (R2013a). Data analysis and model fitting was done in Python (3.7.3) using the numpy (1.19.2) [47], SciPy (1.4.1) [48], pandas (1.1.3) [49], matplotlib (3.3.2) [50], and pingouin (0.3.11) [51] libraries. We report ΔBIC and compare the BIC distributions for model comparison via Dunnett’s post-hoc test and correct for multiple comparisons using the Bonferroni correction.

A combined spatial- and velocity-based criterion was used to determine movement onset, movement offset, and corresponding reach endpoints [52, 53]. Movement onset was defined as the first point in time at which the movement exceeded 5% of peak velocity after leaving the starting position. Movement offset was similarly defined as the first point in time at which the movement dropped below 5% of peak velocity after a minimum reach of 9.5 cm from the starting position in any radial direction, and reach endpoint was defined as the (x, y) coordinate at movement offset. The optimal movement trajectory is a straight line between the start and end targets. Accordingly, the initial movement vector (IMV) is the angular difference between the optimal vector and the movement vector at movement onset and endpoint hand angle is the angular difference between the optimal vector and the movement vector at movement offset. During the adaptation phase, initial movement vectors were analyzed to explore the influence of sensory uncertainty on feedforward adaptation. Endpoint hand angles were analysed to explore feedback integration. During the no-feedback washout phase, initial movement vectors were analysed to investigate adaptation aftereffects.
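A minimal sketch of this criterion is given below, assuming hand position and speed arrays sampled at 200 Hz; the handling of the start-position exit test is simplified relative to the full pipeline, and all function and argument names are ours.

```python
import numpy as np

def movement_onset_offset(x, y, speed, min_reach=9.5):
    """5%-of-peak-speed criterion for movement onset and offset (simplified sketch).

    x, y   : hand position samples in cm (first sample taken as the starting position)
    speed  : hand speed samples in cm/s
    Returns (onset_index, offset_index).
    """
    threshold = 0.05 * np.max(speed)
    onset = int(np.argmax(speed > threshold))          # first sample above threshold
    dist = np.hypot(x - x[0], y - y[0])                # radial distance from start
    past_min_reach = np.where(dist >= min_reach)[0]    # samples beyond 9.5 cm
    below = np.where(speed < threshold)[0]
    offset = int(below[below > past_min_reach[0]][0])  # first slow sample after min reach
    return onset, offset

def hand_angle(x0, y0, x1, y1, target_angle=90.0):
    """Angular deviation (deg) of the vector (x0, y0) -> (x1, y1) from the target
    direction, here assumed to be straight ahead along the y axis."""
    return np.degrees(np.arctan2(y1 - y0, x1 - x0)) - target_angle
```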

Statistical modelling

To quantify the effect of sensory uncertainty on feedforward adaptation, we fit a regression model to the data from each of our experiments. We treated initial movement vector as the observed variable. Predictor variables were trial, the error experienced at midpoint, the error experienced at endpoint, the sensory uncertainty experienced at midpoint and/or endpoint on the previous trial, and the interactions between the error terms and the sensory uncertainty terms. We omitted the terms for error experienced at endpoint and the corresponding interaction terms in our analysis of Experiment 1 because no endpoint feedback was provided on any trial in this experiment.

We used backward difference coding to enter sensory uncertainty (which we treat as ordinal) into this regression model. According to this coding scheme, the performance with sensory uncertainty at one level is compared with performance when sensory uncertainty is at the previous level. Thus, the regression models contain beta coefficients that capture the difference between (1) moderate uncertainty and low uncertainty, (2) high uncertainty and moderate uncertainty and (3) unlimited uncertainty and high uncertainty. This also applies to the interaction terms. In addition, we also fit a regression model using the change in initial movement vector from trial to trial as the observed variable. This regression used all of the same terms as the regression just described with the exception that we did not include trial as a predictor. If sensory uncertainty influences feedforward adaptation by scaling the feedforward controller’s response to experienced errors, we should expect to find significant interaction terms between sensory uncertainty and error in our regression model.
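For concreteness, the standard backward-difference contrast matrix for a four-level ordinal factor looks as follows; the level labels and function name below are ours, chosen for illustration.

```python
import numpy as np

# Backward-difference contrasts for four ordered uncertainty levels.
# Each column compares one level with the level immediately below it.
LEVELS = ['low', 'moderate', 'high', 'unlimited']
BACKWARD_DIFF = np.array([
    #  M-L    H-M    unl-H
    [-0.75, -0.50, -0.25],   # low
    [ 0.25, -0.50, -0.25],   # moderate
    [ 0.25,  0.50, -0.25],   # high
    [ 0.25,  0.50,  0.75],   # unlimited
])

def code_uncertainty(trial_levels):
    """Map each trial's uncertainty label to its backward-difference codes."""
    rows = [LEVELS.index(level) for level in trial_levels]
    return BACKWARD_DIFF[rows]
```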

We took a similar approach to determine how sensory uncertainty influences feedback integration. We fit regression models using endpoint hand angle as the observed variable. Predictor variables were the error experienced at midpoint, the sensory uncertainty experienced at midpoint on the current trial, and the interaction of these two terms. As before, we used backwards difference coding to enter sensory uncertainty into the regression model. Hence, the interaction terms in this model also indicate whether or not sensory uncertainty scales feedback integration. Finally, we also fit a regression model using the difference between endpoint hand angle and initial movement vector (i.e., feedback integration) as the observed variable. This regression used all of the same terms as the regression just described, with the exception that we did not include trial as a predictor.

In all models for which trial number was taken to be a predictor, trial number was transformed using the natural logarithm. This has the effect of turning our non-linear adaptation curves into straight lines, and thereby makes linear regression a more appropriate analysis tool for our research question. The models were fit to the group averaged data using ordinary least squares to obtain best-fitting parameter estimates. We also report the relative importance of each regressor following the methods developed in [54].
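Putting the pieces together, the initial-movement-vector regression can be sketched as a single design matrix fit with ordinary least squares. The layout below is an assumption consistent with the predictors described above (12 regressors plus an intercept); the relative-importance computation from [54] is not shown.

```python
import numpy as np

def fit_imv_regression(imv, trial, err_mp, err_ep, unc_codes):
    """OLS fit of initial movement vector (sketch with assumed argument names).

    imv       : group-averaged initial movement vector on each adaptation trial
    trial     : trial number (1-based), entered as log(trial)
    err_mp    : midpoint error on the previous trial
    err_ep    : endpoint error on the previous trial
    unc_codes : (n_trials x 3) backward-difference codes for previous-trial uncertainty
    """
    X = np.column_stack([
        np.ones_like(imv),             # intercept
        unc_codes,                     # uncertainty contrasts
        err_mp, err_ep,                # error terms
        unc_codes * err_mp[:, None],   # uncertainty x midpoint-error interactions
        unc_codes * err_ep[:, None],   # uncertainty x endpoint-error interactions
        np.log(trial),                 # log-transformed trial predictor
    ])
    betas, _, _, _ = np.linalg.lstsq(X, imv, rcond=None)
    return betas, X
```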

State-space modelling

To characterize how participants’ reaching behaviour changed over time, we also fit five different classes of linear dynamical system models to our data. At a coarse-grained level, each model is characterised by the following features:

  • A feedforward motor plan is computed at movement onset that is an attempt to reach in a straight line from the starting position to the target location, and a feedback motor command is computed at movement midpoint that is an attempt to correct the ongoing movement for any error experienced at midpoint.

  • Feedforward motor plans are adapted on a trial-by-trial basis using both the error experienced at midpoint and the error experienced at endpoint as learning signals.

  • The gain applied to feedback corrections is similarly adjusted on a trial-by-trial basis, but is sensitive only to the error experienced at endpoint.

  • The sensory uncertainty experienced at midpoint and/or endpoint modulates the between-trial feedforward update and the within-trial feedback correction, but not the between-trial feedback gain update (for the sake of simplicity).

The feedforward adaptation component of all five models is based on simple discrete-time linear dynamical systems—so-called state-space models [28]. The simplest version of these models assumes that an internal state variable x maps desired motor goals to motor plans y, and that x is updated on a trial-by-trial basis in response to sensory feedback about movement error. The update to x has (1) an error term that determines how the internal state is updated after a movement error is detected, (2) a bias term that determines the baseline mapping that will be returned to in the absence of sensory input, and (3) a retention term that determines how quickly the internal state returns to baseline after sensory feedback about error is removed. This arrangement is encapsulated in the following equations:

\[
\delta(n) = y^{*}(n) - y(n) \tag{1}
\]
\[
x(n+1) = \beta\, x(n) + \alpha\, \delta(n) + \lambda \tag{2}
\]
\[
y(n) = x(n) + r(n) \tag{3}
\]

where n is the current trial, δ(n) is the error (i.e., the angular distance between the reach endpoint and the target location), y*(n) is the desired output (e.g., the angular position of the reach target), y(n) is the motor output and corresponds to the angle of the movement that will be generated when trying to reach to the target (i.e., it is a readout of the sensorimotor state), x(n) is the state of the system (i.e., the sensorimotor transformation), β is a retention rate that describes how much is retained from the value of the state at the previous trial, α is a learning rate that describes how quickly states are updated in response to errors, λ is a constant bias, and r(n) is the imposed rotation.

Note that the bias term (λ) is applied in the state-update equation and not in the motor-output equation. In this form, the bias term will ultimately produce a stable bias unless it is acted upon by sensory uncertainty (as it does in the bias-scaling models—see below) in which case it will cause trial-to-trial changes in the underlying adapted states.
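As a concrete illustration of Eqs 1–3, the following sketch simulates the single-state model for a sequence of imposed rotations; the parameter values are arbitrary examples, not fitted estimates.

```python
import numpy as np

def simulate_single_state(rotations, alpha=0.1, beta=0.9, lam=0.0):
    """Simulate the single-state model of Eqs 1-3 (target direction y* = 0)."""
    x = 0.0                                  # internal state
    y = np.zeros(len(rotations))             # motor output read-out per trial
    for n, r in enumerate(rotations):
        y[n] = x + r                         # Eq 3: state plus imposed rotation
        delta = 0.0 - y[n]                   # Eq 1: error relative to the target
        x = beta * x + alpha * delta + lam   # Eq 2: trial-to-trial state update
    return y

# Example: 180 trials with rotations drawn as in the adaptation phase.
rotations = np.random.default_rng(1).normal(12, 4, 180)
hand_angles = simulate_single_state(rotations)
```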

These models are sometimes equipped with a second internal state variable [29, 39] as follows:

\[
\delta(n) = y^{*}(n) - y(n) \tag{4}
\]
\[
x_{f}(n+1) = \beta_{f}\, x_{f}(n) + \alpha_{f}\, \delta(n) + \lambda \tag{5}
\]
\[
x_{s}(n+1) = \beta_{s}\, x_{s}(n) + \alpha_{s}\, \delta(n) + \lambda \tag{6}
\]
\[
y(n) = x_{f}(n) + x_{s}(n) + r(n) \tag{7}
\]

where xf is a fast state variable, xs is a slow state variable, βf < βs, and αf > αs. That is, feedforward adaptation is often assumed to arise from the combination of a slow-but-stable system and a fast-but-labile system. Previous studies have not clearly established the appropriateness of one-state versus two-state models for capturing how sensory uncertainty influences feedforward adaptation. Consequently, we explore both one-state and two-state model variants in this paper.
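The two-state variant of Eqs 4–7 can be sketched in the same way; again the parameter values are illustrative only and simply respect βf < βs and αf > αs.

```python
import numpy as np

def simulate_two_state(rotations, alpha_f=0.2, beta_f=0.6,
                       alpha_s=0.02, beta_s=0.99, lam=0.0):
    """Simulate the two-state model of Eqs 4-7 (target direction y* = 0)."""
    xf = xs = 0.0
    y = np.zeros(len(rotations))
    for n, r in enumerate(rotations):
        y[n] = xf + xs + r                        # Eq 7: combined output plus rotation
        delta = -y[n]                             # Eq 4
        xf = beta_f * xf + alpha_f * delta + lam  # Eq 5: fast, labile state
        xs = beta_s * xs + alpha_s * delta + lam  # Eq 6: slow, stable state
    return y
```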

The simple state-space framework just described assumes motor output reflects the execution only of feedforward motor commands, whereas the behaviour observed in our experiments is also likely influenced by feedback motor commands. We therefore augment the simple state-space model as follows.

The total motor output of the model is defined at three discrete time points within each trial. We denote the time of reach initiation as t0, the time of midpoint crossing as tMP, and the time of endpoint crossing as tEP. The total motor output on trial n at any time t denoted y(n, t) is a combination of feedforward yff(n) and feedback yfb(n, t) motor commands as follows:

\[
y(n,t) = y_{ff}(n) + y_{fb}(n,t) \tag{8}
\]

Note that feedforward motor output is not a function of time within a trial because we assume that the feedforward motor output is computed at t0 and remains fixed throughout the rest of each trial. This is equivalent to assuming that the execution of the movement occurs too rapidly for new feedforward motor planning to influence the ongoing movement.

In the single-state models, the feedforward motor command yff(n) is determined by a single internal state variable denoted by xff(n) that maps the current movement goal to motor commands as follows:

\[
y_{ff}(n) = x_{ff}(n) \tag{9}
\]

In the two-state models, the feedforward motor command yff(n) is determined by two internal state variables denoted by xfff(n) and xffs(n) that map the current movement goal to motor commands as follows:

\[
y_{ff}(n) = x_{ff_{f}}(n) + x_{ff_{s}}(n) \tag{10}
\]

At reach initiation, sensory feedback has not yet been provided so the feedback motor command is zero:

\[
y_{fb}(n, t_{0}) = 0 \tag{11}
\]

If sensory feedback is provided at midpoint, then the following sensory prediction error is experienced:

\[
\delta(n, t_{MP}) = y(n, t_{0}) + r(n) \tag{12}
\]

Here, δ(n, tMP) is the sensory prediction error, and r(n) is the visuomotor rotation applied on trial n. Notice that the motor command issued at time t0 is responsible for generating the sensory prediction error at time tMP. In response to this sensory prediction error, the following compensatory feedback motor command is triggered:

\[
y_{fb}(n, t_{MP}) = -\, x_{fb}(n)\, \delta(n, t_{MP})\, \eta\, I(n) \tag{13}
\]

Here, xfb(n) is an internal state variable that represents the gain of the feedback controller, η = [η0, ηM, ηH, η∞] is a row vector of free parameters encoding the sensory uncertainty of the midpoint feedback (one value for each possible level of sensory uncertainty), and I(n) is a column vector that indicates what level of midpoint uncertainty was present on trial n. Notice that the feedback motor command is just some fraction of the experienced error in magnitude and in the opposite direction—because of the leading negative sign—hence it serves to reduce movement error.

If endpoint sensory feedback is provided, the following sensory prediction error is experienced:

\[
\delta(n, t_{EP}) = y(n, t_{MP}) + r(n) \tag{14}
\]

Notice that the motor command issued at time tMP is responsible for generating the sensory prediction error at time tEP. In the transition from trial n to trial n + 1, the gain of the feedback controller is updated in response to this sensory prediction error as follows:

\[
x_{fb}(n+1) = \beta_{fb}\, x_{fb}(n) + \alpha_{fb}\, \delta(n, t_{EP}) \tag{15}
\]

Note that we assume that updates to feedback gain are not sensitive to sensory uncertainty. Evidence that feedback controllers are well described by this process comes from studies of so-called gain adaptation [18–21, 55].
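The within-trial structure of Eqs 12–15 can be summarised in a few lines; the scalar eta argument below stands in for the η I(n) product that selects the uncertainty-scaling parameter for the current trial, and all names are ours.

```python
def trial_feedback(y_ff, rotation, x_fb, eta):
    """Within-trial feedback correction (Eqs 12-14), sketched with assumed names.

    y_ff     : feedforward hand angle issued at reach initiation (Eq 11: no feedback yet)
    rotation : imposed visuomotor rotation r(n)
    x_fb     : current gain of the feedback controller
    eta      : uncertainty-scaling value selected for the midpoint feedback on this trial
    """
    delta_mp = y_ff + rotation              # Eq 12: midpoint sensory prediction error
    y_fb_mp = -x_fb * delta_mp * eta        # Eq 13: compensatory feedback command
    delta_ep = (y_ff + y_fb_mp) + rotation  # Eq 14: endpoint sensory prediction error
    return delta_mp, y_fb_mp, delta_ep

def update_feedback_gain(x_fb, delta_ep, alpha_fb=0.05, beta_fb=0.95):
    """Between-trial update of the feedback gain (Eq 15); parameter values illustrative."""
    return beta_fb * x_fb + alpha_fb * delta_ep
```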

We built several models which fall into two classes of assumptions regarding how sensory uncertainty influences motor learning. The first class of models assumes that sensory uncertainty influences the updating of the adaptive learning process (i.e., it takes hold somewhere in the state-update equation). This class of models includes the error-scaling, retention-scaling, and bias-scaling models. Error-scaling models assume that sensory uncertainty scales the contribution of the error term (e.g., αδ(n) in Eq 2), retention-scaling models assume that sensory uncertainty scales the contribution of the retention term (e.g., βx(n) in Eq 2), and bias-scaling models assume that sensory uncertainty scales the contribution of the bias term (e.g., λ in Eq 2).

The second class of models assumes that sensory uncertainty influences an aiming process that contains no retention from one trial to the next (i.e., it is memoryless) and is completely determined by the sensory uncertainty experienced on the previous trial. This class of models contains the state-aim-scaling and output-aim-scaling models. State-aim-scaling models are equivalent to bias-scaling models but with the error and retention terms set to zero, and output-aim-scaling models assume that sensory uncertainty scales the motor output (y(n) in Eq 3). This model class is consistent with the idea that sensory uncertainty triggers explicit aiming (see the “Adaptation versus aiming” discussion section), though the aiming process need not strictly be explicit.

In the two-state version of these models, we assume that sensory uncertainty only influences xfff. That is, we assume that xffs is independent of sensory uncertainty. In particular, the feedforward internal state xffs is updated between trials in response to the sensory prediction errors experienced at midpoint and at endpoint as follows:

\[
x_{ff_{s}}(n+1) = \beta_{ff_{s}}\, x_{ff_{s}}(n) + \alpha_{ff_{s}}\, \delta(n, t_{MP}) + \alpha_{ff_{s}}\, \big[\delta(n, t_{EP}) - y_{fb}(n, t_{MP})\big] \tag{16}
\]

Here, αffs is a learning rate parameter and βffs is a retention parameter, both bounded between [0, 1]. Notice that the feedback command issued at midpoint is taken to be an error signal in these equations—in the term [δ(n, tEP) − yfb(n, tMP)]—which is a common assumption in models that join feedforward and feedback control [18–21].

Error-scaling models

The uncertainty of sensory feedback influences the update to xff in the single-state error-scaling model by acting as a gain on the learning rate αff. The update is given by:

\[
x_{ff}(n+1) = \beta_{ff}\, x_{ff}(n) + [\gamma]\,[\nu I_{MP}(n)]\,\big[\alpha_{ff}\, \delta(n, t_{MP})\big] + [1-\gamma]\,[\nu I_{EP}(n)]\,\big[\alpha_{ff}\,[\delta(n, t_{EP}) - y_{fb}(n, t_{MP})]\big] + \lambda \tag{17}
\]

Here, ν = [ν0, νM, νH, ν∞] is a row vector of free parameters (one value for each level of sensory uncertainty), bounded between [0, 1], that represents the scaling effect of sensory uncertainty. IMP(n) is a column vector that indicates the uncertainty of sensory feedback that was present on trial n at midpoint, IEP(n) is a column vector that indicates the uncertainty at endpoint, and γ is a temporal discounting parameter, bounded between [0, 1], that determines the relative weighting of midpoint versus endpoint feedback on the overall state update. For instance, if γ > 0.5, midpoint feedback drives the majority of the state update; if γ < 0.5, endpoint feedback drives the majority of the state update. Note that any interpretation of the temporal discounting parameter is relevant only in the case of Experiment 3, which is the only paradigm that applies unmatched midpoint and endpoint feedback, a feature required to demarcate the effect of the learning rate parameter from the effect of the temporal discounting parameter. The constant bias term λ is bounded between [-10, 10]. The update to xfff in the two-state error-scaling model follows exactly the same equation, while the update to xffs is denoted by Eq 16. The classic finding that sensory uncertainty inversely scales the magnitude of the error-driven component of the feedforward update [9, 11–13, 15–17] would be recapitulated here if (1) the error-scaling model provided the best-fitting account of our data, and (2) the best-fitting parameters were such that ν0 > νM > νH > ν∞.
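A sketch of the single-state error-scaling update in Eq 17 is given below; nu_mp and nu_ep stand in for the ν I_MP(n) and ν I_EP(n) products, and the default parameter values are placeholders.

```python
def error_scaling_update(x_ff, delta_mp, delta_ep, y_fb_mp, nu_mp, nu_ep,
                         alpha_ff=0.1, beta_ff=0.95, lam=0.0, gamma=0.5):
    """Single-state error-scaling update (Eq 17), with assumed argument names.

    nu_mp, nu_ep : uncertainty-scaling values in [0, 1] selected by the midpoint and
                   endpoint uncertainty levels experienced on the current trial
    gamma        : temporal discounting of midpoint versus endpoint feedback
    """
    mp_term = gamma * nu_mp * (alpha_ff * delta_mp)
    ep_term = (1.0 - gamma) * nu_ep * (alpha_ff * (delta_ep - y_fb_mp))
    return beta_ff * x_ff + mp_term + ep_term + lam
```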

Retention-scaling models

The uncertainty of sensory feedback influences the update to xff in the single-state retention-scaling model by acting as a gain on the retention term βff. The update is given by the following equation:

\[
x_{ff}(n+1) = \big[\gamma\, \nu I_{MP}(n) + (1-\gamma)\, \nu I_{EP}(n)\big]\,\big[\beta_{ff}\, x_{ff}(n)\big] + \big[\alpha_{ff}\, \delta(n, t_{MP})\big] + \big[\alpha_{ff}\,[\delta(n, t_{EP}) - y_{fb}(n, t_{MP})]\big] + \lambda \tag{18}
\]

All parameters, nomenclature and bounds are identical to those described above for the error-scaling model. The update to xfff in the two-state retention-scaling model follows exactly the same equation, while the update to xffs is denoted by Eq 16.

Bias-scaling models

The uncertainty of sensory feedback influences the update to xff in the single-state bias-scaling model by acting as a gain on the bias term λ. The update is given by:

\[
x_{ff_{f}}(n+1) = \big[\beta_{ff_{f}}\, x_{ff_{f}}(n)\big] + \big[\alpha_{ff_{f}}\, \delta(n, t_{MP})\big] + \big[\alpha_{ff_{f}}\,[\delta(n, t_{EP}) - y_{fb}(n, t_{MP})]\big] + \big[\gamma\, \nu I_{MP}(n) + (1-\gamma)\, \nu I_{EP}(n)\big]\, \lambda \tag{19}
\]

All parameters and nomenclature are identical to those described above for the error-scaling model. The update to xfff in the two-state bias-scaling model follows exactly the same equation, while the update to xffs is denoted by Eq 16.

State-aim-scaling models

The uncertainty of sensory feedback influences the update to xff in exactly the same fashion as it does for the bias-scaling models. The key difference is that the state-aim-scaling models have zero retention and zero error updating. The state update is given by:

x_{ff}(n+1) = \left[ \gamma\, \nu I_{MP}(n) + (1 - \gamma)\, \nu I_{EP}(n) \right] \lambda    (20)

All parameters and nomenclature are identical to those described above for the error-scaling model. The update to x_ff_f in the two-state state-aim-scaling model follows exactly the same equation, while the update to x_ff_s is given by Eq 16.

Output-aim-scaling models

In contrast to all other models described thus far, the uncertainty of sensory feedback does not influence the update to xff at all in output-aim-scaling models. Rather, these models assume that the uncertainty of sensory feedback directly influences the feedforward motor output as follows:

y(n) = y_{ff}(n) + y_{fb}(n) + y_{aim}(n)    (21)
y_{aim}(n) = \left[ \gamma\, \nu I_{MP}(n-1) + (1 - \gamma)\, \nu I_{EP}(n-1) \right] \lambda    (22)

All parameters and nomenclature are identical to those described above for the error-scaling model.
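To make the contrast with the state-scaling models concrete, a minimal Python sketch of this output composition is given below; the names (output_aim, I_mp_prev, I_ep_prev) are illustrative assumptions and are not taken from the published analysis code.

import numpy as np

def output_aim(y_ff, y_fb, nu, I_mp_prev, I_ep_prev, gamma, lam):
    # Eqs 21-22: the uncertainty presented on the previous trial gates an
    # aiming offset that is added directly to the motor output; in this model
    # family the feedforward state update itself is untouched by uncertainty.
    y_aim = (gamma * (nu @ I_mp_prev) + (1.0 - gamma) * (nu @ I_ep_prev)) * lam
    return y_ff + y_fb + y_aim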

Parameter estimation

For each model, we obtained best-fitting parameter estimates on a per-subject basis by minimising the following sum of squared differences between the observed and predicted midpoint and endpoint hand angles:

E = \sum_{i}^{N} \left[ y_{pred}(i, t_{MP}) - y_{obs}(i, t_{MP}) \right]^2 + \sum_{i}^{N} \left[ y_{pred}(i, t_{EP}) - y_{obs}(i, t_{EP}) \right]^2    (23, 24)

Here, N is the number of trials, ypred(i, tMP) and ypred(i, tEP) are the model-predicted hand angles at midpoint and endpoint, respectively, on trial i, and yobs(i, tMP) and yobs(i, tEP) are the corresponding hand angles observed from the participant. To find the parameter values that minimised E, we used the differential evolution optimization method [56] implemented in SciPy [48].
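A minimal sketch of this fitting step is given below; the simulate function is a placeholder for whichever model variant is being fit, and the bounds shown are illustrative rather than the values listed in Table 13.

import numpy as np
from scipy.optimize import differential_evolution

def sse(params, obs_mp, obs_ep, simulate):
    # Eqs 23-24: sum of squared differences between observed and model-predicted
    # hand angles at midpoint and endpoint across all trials.
    pred_mp, pred_ep = simulate(params)
    return np.sum((pred_mp - obs_mp) ** 2) + np.sum((pred_ep - obs_ep) ** 2)

# Placeholder bounds; the per-model bounds are listed in Table 13.
bounds = [(0, 1), (0, 1), (-10, 10), (0, 1)]

# result = differential_evolution(sse, bounds, args=(obs_mp, obs_ep, simulate), seed=0)
# best_params = result.x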

Parameter bounds

Table 13 shows the bounds under which parameter optimization was constrained. Note that we fit two different sets of bounds for the two-state model. The first, simply called two-state, allowed the internal state variables (xs and xf) to sometimes take negative values. If the fast internal state variable (xf) corresponds to an explicit aiming strategy, then owing to the inherent flexibility of such aiming strategies, these negative values do not seem particularly problematic or counter-intuitive. However, in our study at least, xf cannot be unambiguously linked to cognitive aiming strategies and may instead reflect the operation of an implicit adaptation system. In this case, negative internal state values may be less biologically plausible. For example, what conditions would induce an implicit adaptation system to drive one state variable in a highly positive direction and the other slightly negative? Such a system could surely exist in principle, but given the state of knowledge in our field, it seems less parsimonious than a system for which both slow and fast internal state variables remain positive. For this reason, we also fit a version of the two-state model with parameter bounds that ensured that both state variables remained positive. We call these non-neg-two-state models.

Table 13. State-space model parameter bounds.

Lower and upper bounds are indicated by (lb, ub), respectively. The nomenclature (-, -) indicates that the parameter was not present in the corresponding model. Blank entries indicate that the bounds were inherited from the one-state model.

Error- and retention- scaling Bias-scaling State-aim- and output-aim- scaling
one-state two-state non-neg one-state two-state non-neg one-state two-state non-neg
parameter (lb, ub) (lb, ub) (lb, ub) (lb, ub) (lb, ub) (lb, ub) (lb, ub) (lb, ub) (lb, ub)
α s (-, -) (0, 1) (-, -) (0, 1) (-, -) (0, 1)
β s (-, -) (0, 1) (-, -) (0, 1) (-, -) (0, 1)
λs (-, -) (0, 0) (-, -) (0, 0) (-, -) (0, 1)
α f (0, 1) (0, 1) (-, -)
β f (0, 1) (0, 1) (-, -)
λf (-10, 10) (0, 10) (-10, 10) (0, 10) (-, -) (0, 10)
α fb (0, 1) (0, 1) (0, 1)
β fb (-10, 10) (-10, 10) (-10, 10)
x fb init (-2, 2) (-2, 2) (-2, 2)
ν ff1 (0, 1) (0, 1) (0, 1)
ν ff2 (0, 1) (0, 1) (0, 1)
ν ff3 (0, 1) (0, 1) (0, 1)
ν ff4 (0, 1) (0, 1) (0, 1)
ν fb1 (0, 1) (-1, 1) (-20, 20) (0, 20)
ν fb2 (0, 1) (-1, 1) (-20, 20) (0, 20)
ν fb3 (0, 1) (-1, 1) (-20, 20) (0, 20)
ν fb4 (0, 1) (-1, 1) (0, 1) (-20, 20) (0, 20)
γ (0, 1) (0, 1) (0, 1)

Model comparison

For each model, we computed the Bayesian Information Criterion (BIC) as follows:

BIC = n \ln\!\left(1 - R^2\right) + k \ln(n)    (25)

Here, k is the number of free parameters in each model, n is the number of data points, and R2 is the proportion of variance explained by the optimised model. Models with lower BIC values are preferred [57].
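For reference, Eq 25 amounts to a one-line computation; the worked numbers below are arbitrary and purely illustrative.

import numpy as np

def bic(r_squared, n, k):
    # Eq 25: n * ln(1 - R^2) + k * ln(n); lower values are preferred.
    return n * np.log(1.0 - r_squared) + k * np.log(n)

# e.g. a model explaining 90% of the variance of 400 data points with 8 free parameters:
# bic(0.90, 400, 8) = 400 * ln(0.1) + 8 * ln(400) ≈ -921.0 + 47.9 ≈ -873.1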

Supporting information

S1 Table. Experiment 1 two-state model comparison statistics.

Abbreviations: std, standard deviation; T, t-statistic; dof, degrees of freedom; p-corr, p-value corrected for multiple comparisons; hedges, Hedges' g.

(PDF)

S2 Table. Experiment 2 two-state model comparison statistics.

Abbreviations: std, standard deviation; T, t-statistic; dof, degrees of freedom; p-corr, p-value corrected for multiple comparisons; hedges, Hedges' g.

(PDF)

S3 Table. Experiment 3 two-state model comparison statistics.

Abbreviations: std, standard deviation; T, t-statistic; dof, degrees of freedom; p-corr, p-value corrected for multiple comparisons; hedges, Hedges' g.

(PDF)

S1 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state error-scaling model.

(TIF)

S2 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state error-scaling non-negative λ model.

(TIF)

S3 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state retention-scaling model.

(TIF)

S4 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state retention-scaling non-negative λ model.

(TIF)

S5 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state bias-scaling model.

(TIF)

S6 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state bias-scaling non-negative λ model.

(TIF)

S7 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state output-scaling model.

(TIF)

S8 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state output-scaling non-negative λ model.

(TIF)

S9 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state aim-scaling model.

(TIF)

S10 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state aim-scaling non-negative λ model.

(TIF)

S11 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state error-scaling model.

(TIF)

S12 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state error-scaling non-negative λ model.

(TIF)

S13 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state retention-scaling model.

(TIF)

S14 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state retention-scaling non-negative λ model.

(TIF)

S15 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state bias-scaling model.

(TIF)

S16 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state bias-scaling non-negative λ model.

(TIF)

S17 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state output-scaling model.

(TIF)

S18 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state output-scaling non-negative λ model.

(TIF)

S19 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state aim-scaling model.

(TIF)

S20 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state aim-scaling non-negative λ model.

(TIF)

S21 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state error-scaling model.

(TIF)

S22 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state error-scaling non-negative λ model.

(TIF)

S23 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state retention-scaling model.

(TIF)

S24 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state retention-scaling non-negative λ model.

(TIF)

S25 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state bias-scaling model.

(TIF)

S26 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state bias-scaling non-negative λ model.

(TIF)

S27 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state output-scaling model.

(TIF)

S28 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state output-scaling non-negative λ model.

(TIF)

S29 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state aim-scaling model.

(TIF)

S30 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state aim-scaling non-negative λ model.

(TIF)

Data Availability

Data and analysis code can be accessed at: https://github.com/crossley/sensory_uncertainty_fffb.

Funding Statement

The author(s) received no specific funding for this work.

References

1. Faisal AA, Selen LPJ, Wolpert DM. Noise in the nervous system. Nature Reviews Neuroscience. 2008;9(4):292–303. doi: 10.1038/nrn2258
2. Bays PM, Wolpert DM. Computational principles of sensorimotor control that minimize uncertainty and variability. The Journal of Physiology. 2007;578(2):387–396. doi: 10.1113/jphysiol.2006.120121
3. Saunders JA, Knill DC. Visual feedback control of hand movements. Journal of Neuroscience. 2004;24(13):3223–3234. doi: 10.1523/JNEUROSCI.4319-03.2004
4. Saunders JA, Knill DC. Humans use continuous visual feedback from the hand to control both the direction and distance of pointing movements. Experimental Brain Research. 2005;162:458–473. doi: 10.1007/s00221-004-2064-1
5. Wolpert DM, Ghahramani Z. Computational principles of movement neuroscience. Nature Neuroscience. 2000;3(S11):1212–1217. doi: 10.1038/81497
6. Shadmehr R, Smith MA, Krakauer JW. Error Correction, Sensory Prediction, and Adaptation in Motor Control. Annual Review of Neuroscience. 2010;33(1):89–108. doi: 10.1146/annurev-neuro-060909-153135
7. Wagner MJ, Smith MA. Shared internal models for feedforward and feedback control. Journal of Neuroscience. 2008;28(42):10663–10673. doi: 10.1523/JNEUROSCI.5479-07.2008
8. Körding KP, Wolpert DM. Bayesian integration in sensorimotor learning. Nature. 2004;427(6971):244–247. doi: 10.1038/nature02169
9. Fernandes HL, Stevenson IH, Vilares I, Kording KP. The generalization of prior uncertainty during reaching. Journal of Neuroscience. 2014;34(34):11470–11484. doi: 10.1523/JNEUROSCI.3882-13.2014
10. Hewitson CL, Sowman PF, Kaplan DM. Interlimb Generalization of Learned Bayesian Visuomotor Prior Occurs in Extrinsic Coordinates. Eneuro. 2018;5(4). doi: 10.1523/ENEURO.0183-18.2018
11. Burge J, Ernst MO, Banks MS. The statistical determinants of adaptation rate in human reaching. Journal of Vision. 2008;8(4):20. doi: 10.1167/8.4.20
12. Wei K. Uncertainty of feedback and state estimation determines the speed of motor adaptation. Frontiers in Computational Neuroscience. 2010; doi: 10.3389/fncom.2010.00011
13. Tsay JS, Avraham G, Kim HE, Parvin DE, Wang Z, Ivry RB. The effect of visual uncertainty on implicit motor adaptation. Journal of Neurophysiology. 2021;125(1):12–22. doi: 10.1152/jn.00493.2020
14. Scheidt RA, Dingwell JB, Mussa-Ivaldi FA. Learning to Move Amid Uncertainty. Journal of Neurophysiology. 2001;86(2):971–985. doi: 10.1152/jn.2001.86.2.971
15. Baddeley RJ, Ingram HA, Miall RC. System Identification Applied to a Visuomotor Task: Near-Optimal Human Performance in a Noisy Changing Task. The Journal of Neuroscience. 2003;23(7):3066–3075. doi: 10.1523/JNEUROSCI.23-07-03066.2003
16. Verstynen T, Sabes PN. How Each Movement Changes the Next: An Experimental and Theoretical Study of Fast Adaptive Priors in Reaching. Journal of Neuroscience. 2011;31(27):10050–10059. doi: 10.1523/JNEUROSCI.6525-10.2011
17. Fernandes HL, Stevenson IH, Kording KP. Generalization of Stochastic Visuomotor Rotations. PLoS ONE. 2012;7(8):e43016. doi: 10.1371/journal.pone.0043016
18. Kawato M. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology. 1999;9(6):718–727. doi: 10.1016/S0959-4388(99)00028-8
19. Thoroughman KA, Shadmehr R. Electromyographic Correlates of Learning an Internal Model of Reaching Movements. The Journal of Neuroscience. 1999;19(19):8573–8588. doi: 10.1523/JNEUROSCI.19-19-08573.1999
20. White O, Diedrichsen J. Responsibility assignment in redundant systems. Current Biology. 2010;20(14):1290–1295. doi: 10.1016/j.cub.2010.05.069
21. Ito M. Error detection and representation in the olivo-cerebellar system. Frontiers in Neural Circuits. 2013;7:1. doi: 10.3389/fncir.2013.00001
22. Albert ST, Shadmehr R. The neural feedback response to error as a teaching signal for the motor learning system. Journal of Neuroscience. 2016;36(17):4832–4845. doi: 10.1523/JNEUROSCI.0159-16.2016
23. Maeda RS, Gribble PL, Pruszynski JA. Learning new feedforward motor commands based on feedback responses. Current Biology. 2020;30(10):1941–1948. doi: 10.1016/j.cub.2020.03.005
24. Taylor JA, Krakauer JW, Ivry RB. Explicit and Implicit Contributions to Learning in a Sensorimotor Adaptation Task. Journal of Neuroscience. 2014;34(8):3023–3032. doi: 10.1523/JNEUROSCI.3619-13.2014
25. Kitazawa S, Kohno T, Uka T. Effects of delayed visual information on the rate and amount of prism adaptation in the human. Journal of Neuroscience. 1995;15(11):7644–7652. doi: 10.1523/JNEUROSCI.15-11-07644.1995
26. Kitazawa S, Yin PB. Prism adaptation with delayed visual error signals in the monkey. Experimental Brain Research. 2002;144(2):258–261. doi: 10.1007/s00221-002-1089-6
27. Brudner SN, Kethidi N, Graeupner D, Ivry RB, Taylor JA. Delayed feedback during sensorimotor learning selectively disrupts adaptation but not strategy use. Journal of Neurophysiology. 2016;115(3):1499–1511. doi: 10.1152/jn.00066.2015
28. Cheng S, Sabes PN. Modeling Sensorimotor Learning with Linear Dynamical Systems. Neural Computation. 2006;18:760–793. doi: 10.1162/089976606775774651
29. Smith MA, Ghazizadeh A, Shadmehr R. Interacting Adaptive Processes with Different Timescales Underlie Short-Term Motor Learning. PLoS Biology. 2006;4(6):e179. doi: 10.1371/journal.pbio.0040179
30. Yin C, Wang H, Wei K, Körding KP. Sensorimotor priors are effector dependent. Journal of Neurophysiology. 2019;122(1):389–397. doi: 10.1152/jn.00228.2018
31. Kawato M, Furukawa K, Suzuki R. A hierarchical neural-network model for control and learning of voluntary movement. Biological Cybernetics. 1987;57(3):169–185. doi: 10.1007/BF00364149
32. Wolpert DM, Miall RC, Kawato M. Internal models in the cerebellum. Trends in Cognitive Sciences. 1998;2(9):338–347. doi: 10.1016/S1364-6613(98)01221-2
33. Thoroughman KA, Shadmehr R. Learning of action through adaptive combination of motor primitives. Nature. 2000;407(6805):742–747. doi: 10.1038/35037588
34. Tanaka H, Sejnowski TJ, Krakauer JW. Adaptation to visuomotor rotation through interaction between posterior parietal and motor cortical areas. Journal of Neurophysiology. 2009;102(5):2921–2932. doi: 10.1152/jn.90834.2008
35. Tanaka H, Krakauer JW, Sejnowski TJ. Generalization and multirate models of motor adaptation. Neural Computation. 2012;24(4):939–966. doi: 10.1162/NECO_a_00262
36. Korenberg AT, Ghahramani Z. A Bayesian view of motor adaptation. Current Psychology of Cognition. 2002;21(4/5):537–564.
37. Wei K, Kording K. Relevance of error: what drives motor adaptation? Journal of Neurophysiology. 2009;101(2):655–664. doi: 10.1152/jn.90545.2008
38. Kalman RE. A new approach to linear filtering and prediction problems. Journal of Basic Engineering. 1960;82:35–45. doi: 10.1115/1.3662552
39. Taylor JA, Ivry RB. Flexible Cognitive Strategies during Motor Learning. PLoS Computational Biology. 2011;7(3):e1001096. doi: 10.1371/journal.pcbi.1001096
40. McDougle SD, Ivry RB, Taylor JA. Taking Aim at the Cognitive Side of Learning in Sensorimotor Adaptation Tasks. Trends in Cognitive Sciences. 2016;20(7):535–544. doi: 10.1016/j.tics.2016.05.002
41. Tsay JS, Haith AM, Ivry RB, Kim HE. Interactions between sensory prediction error and task error during implicit motor learning. PLoS Computational Biology. 2022;18(3):e1010005. doi: 10.1371/journal.pcbi.1010005
42. Kim HE, Morehead JR, Parvin DE, Moazzezi R, Ivry RB. Invariant errors reveal limitations in motor correction rather than constraints on error sensitivity. Communications Biology. 2018;1(1):1–7. doi: 10.1038/s42003-018-0021-y
43. Albert ST, Jang J, Sheahan HR, Teunissen L, Vandevoorde K, Herzfeld DJ, et al. An implicit memory of errors limits human sensorimotor adaptation. Nature Human Behaviour. 2021; p. 1–15.
44. Krakauer JW, Pine ZM, Ghilardi MF, Ghez C. Learning of Visuomotor Transformations for Vectorial Planning of Reaching Trajectories. The Journal of Neuroscience. 2000;20(23):8916–8924. doi: 10.1523/JNEUROSCI.20-23-08916.2000
45. Brayanov JB, Press DZ, Smith MA. Motor Memory Is Encoded as a Gain-Field Combination of Intrinsic and Extrinsic Action Representations. Journal of Neuroscience. 2012;32(43):14951–14965. doi: 10.1523/JNEUROSCI.1928-12.2012
46. Leukel C, Gollhofer A, Taube W. In Experts, underlying processes that drive visuomotor adaptation are different than in Novices. Frontiers in Human Neuroscience. 2015;9. doi: 10.3389/fnhum.2015.00050
47. Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, et al. Array programming with NumPy. Nature. 2020;585(7825):357–362. doi: 10.1038/s41586-020-2649-2
48. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods. 2020;17(3):261–272. doi: 10.1038/s41592-019-0686-2
49. McKinney W, et al. Data structures for statistical computing in Python. In: Proceedings of the 9th Python in Science Conference. vol. 445. Austin, TX; 2010. p. 51–56.
50. Hunter JD. Matplotlib: A 2D graphics environment. Computing in Science & Engineering. 2007;9(03):90–95. doi: 10.1109/MCSE.2007.55
51. Vallat R. Pingouin: statistics in Python. Journal of Open Source Software. 2018;3(31):1026. doi: 10.21105/joss.01026
52. Georgopoulos A, Kalaska J, Caminiti R, Massey J. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. The Journal of Neuroscience. 1982;2(11):1527–1537. doi: 10.1523/JNEUROSCI.02-11-01527.1982
53. Scott SH, Gribble PL, Graham KM, Cabel DW. Dissociation between hand motion and population vectors from neural activity in motor cortex. Nature. 2001;413(6852):161–165. doi: 10.1038/35093102
54. Lindeman RH, Merenda PF, Gold RZ. Introduction to bivariate and multivariate analysis. 1980.
55. Franklin S, Wolpert DM, Franklin DW. Visuomotor feedback gains upregulate during the learning of novel dynamics. Journal of Neurophysiology. 2012;108(2):467–478. doi: 10.1152/jn.01123.2011
56. Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997;11(4):341–359. doi: 10.1023/A:1008202821328
57. Kass RE, Raftery AE. Bayes factors. Journal of the American Statistical Association. 1995;90(430):773–795. doi: 10.1080/01621459.1995.10476572
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1010526.r001

Decision Letter 0

Adrian M Haith, Daniele Marinazzo

18 Oct 2022

Dear Dr Crossley,

Thank you very much for submitting your manuscript "Sensory uncertainty punctuates motor learning independently of movement error when both feedforward and feedback control processes are engaged" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers raised a number of significant issues in the manuscript. In particular, they had major concerns about the validity of the computational model and the ultimate interpretation of the findings. The reviewers were in agreement that the experiments and results were interesting and novel but that, at present, the paper falls short of providing a fully rigorous and convincing contribution to our understanding of motor adaptation. We would, however, be willing to consider a revised version of the manuscript in which the concerns raised by the reviewers are thoroughly addressed. Given the scope and nature of the concerns raised by the reviewers, the revisions would likely need to be extensive. Your revised manuscript will be sent out to the reviewers again to be re-evaluated and we cannot make any decision on this submission until after that time.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Adrian M Haith

Academic Editor

PLOS Computational Biology

Daniele Marinazzo

Section Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: review uploaded as attachment

Reviewer #2: Summary:

The study investigates how visual uncertainty of movement feedback affects motor adaptation when the feedback is provided during and after the movement. Traditional motor adaptation studies on sensory uncertainty focus on how adaptation evolves between trials in a feedforward fashion. Instead, the paper studies a combined case when in-movement sensory integration and between-trial adaptation co-occur. With three behavioral experiments, the authors found that randomly presented high-uncertainty trials (implemented by modifying visual feedback during and after movements) lead to abrupt changes in feedforward adaptation, which are dependent on uncertainty level but not on movement error size. Model comparisons revealed that scaling the retention or trial-by-trial bias instead of the error adaptation rate can better explain the data. These findings are drastically different from those in the literature, where sensory uncertainty has been repetitively shown to affect the adaptation rate. 

The study is original, with a strong focus on probing its study aim. It is also novel in combining online movement feedback integration and offline motor adaptation. The technical part of modeling is also sound. However, the seemingly surprising results and their explanations invite scrutiny as the task design probably invokes unwanted effects, making their conclusions thus far ungrounded. 

Major concerns:

First, some important behavioral patterns and modeling results have not been fully analyzed or explained, preventing our thorough understanding of the interesting phenomena: 

1. The main selling point is that the mid-movement feedback enables a feedback integration component in addition to the feedforward adaptation. However, the paper only analyzed the initial movement direction to study feedforward adaptation. Is there any proof that feedback integration happens during the movement? Do we see a curved movement trajectory or a change in movement endpoint? How does this potential behavioral correction relate to the model-fitting results? It is well possible that the so-called feedback integration is negligible. And the influence of mid-movement feedback should be considered as part of feedforward adaptation, which only affects the next trial. This is a vital question before we try to understand what these peculiar findings mean.

2. A cross-experiment comparison is lacking. It is apparent that exp2 and 3 have larger adaptations than exp1. Do the three experiments have the same decay in the washout phase? How do the experiment differences inform the modeling? I see the slow state during the decay from all models is negative in all three experiments, up to -10 in exp3. This is rather surprising given the current understanding of the slow state, which probably reflects the implicit adaptation or the true re-calibration of the internal model. But it would never be negative. Note the decay portion of the data largely determines the model fitting.

3. The quick transitions between low and high uncertainty trials should be plotted better using other means, such as a scatter plot of uncertainty vs. movement direction (at the adaptation plateau). What do the adaptation changes look like for each kind of transition? Currently, this part of vital results is not even visible.  

4. What does the transition from the last adaptation trial to the first washout trial look like? This is another vital piece of missing information to unpack what constitutes the adaptation here. If the authors give clear instructions to drop the re-aiming strategy at the beginning of the washout, we should expect a sudden decrease in the movement direction. And, then the second hypothesis mentioned in the Discussion (strategic learning leads to fast changes during adaptation) can be examined.

5. Why does the state-scaling model win over the bias-scaling model in exp2 and 3 while we see the opposite in exp1? Note that the overall learning patterns are similar across experiments. Is it possible that the retention term and the bias term co-vary to generate the effect?

Second, though some of the behavioral data were not shown clearly in the paper, I think the task design has introduced some unwanted effects, which are not possibly modeled by the error-based model variants here.  

The perturbations are random rotation angles, ~ N(12,16) degrees. The gaussian point clouds are specified in cm (the paper should convert one or another to make a clear description of what the perturbation would look like). By my calculation, the mid-movement perturbation is, on average, centered at 1.06cm away from the desired straight line, and the endpoint perturbation is, on average, 2.12cm. The high-uncertainty dot cloud (with a SD of 1cm) would “touch” the straight line, thus giving people the impression that there is no perturbation (zero error). The medium-uncertainty dot cloud (with a SD of 0.5cm) would also touch the straight line if presented at the mid-movement. This is probably the reason that these trials lead to fast trial-by-trial changes, riding on top of a slow time-scale learning curve. These interleaved “target-hit” trials would affect strategic re-aiming and implicit adaptation. The former is driven by performance error (for the target hit, it is zero). The latter is also expected since, say, Richard Ivry group (Kim et al., 2019) have shown that touching the target, even barely without a direct hit, would effectively reduce the implicit adaptation. The mechanism for the damped implicit learning by this zero or reduced performance error (not sensory prediction error modeled here) is not clear, though reinforcement learning or motivational factors have been suggested. 

In other words, the study started off by implementing cloud dots to manipulate sensory uncertainty (precision) but triggered unwanted learning mechanisms that depend on performance error (bias). This target hit would be more severe for mid-movement feedback, which is the study subject of the paper. Thus, it is not fair, as claimed by the paper, that the sensory uncertainty at the midpoint, as opposed to the perturbation size, determines the feedforward adaptation here. Or, as it claims in the title, “sensory uncertainty punctuates motor learning independently of movement error…” It is the manipulation of the sensory uncertainty that accidentally nullifies the perceived error and thus leads to a temporary withholding of the adaptation process (note all these cloud-dot trials are interleaved with the small-uncertainty trials, which I believe is clearly perceived off the straight line). 

In this light, the so-called anti-adaptive effect, seen in angle changes between two cloud-dot trials (sigma_L to sigma_M, or sigma_M to sigma_H), is totally expected. Note the reduction of learning amounts to > 30% after single mid-movement feedback. This kind of fast change is indeed not expected by any model incorporating the effect of sensory uncertainty but is expected with a combination of a fast strategic re-aiming and damped implicit learning. Given these extremely volatile changes, the single-state models will not work, and the error-scaling model will not work either, given the normal range of learning rate. 

Minor:

I don’t understand why we can simply assume that the uncertainty only affects the fast state in the two-state variants of the models. I know that fast, abrupt adaptation changes are prominent in this data set (possibly due to the reasons I give above). But in the domain of motor adaptation, the slow process has been shown to be dependent on sensory uncertainty (e.g., an implicit adaptation that is supposedly governed by slow processes is affected by proprioceptive uncertainty). Theoretically, it is explicit learning (one of the fast processes) that is less affected by sensory uncertainty (precision) since the mean performance error (bias) matters more for strategical learning. Anyhow, in this sense, the results strongly suggest that the fast changes, which are uncertainty-dependent, reflect the explicit learning component. 

Figures 3, 5, and 7 can use a different color coding to distinguish the perturbation size better while making the zero perturbation a completely different color off the color-coding scale. 

Line614: x_fb(n) denotes the so-called feedback gain, but why feedback gain follows a state-space model as specified by eq.15? 

Eq14: y(n,t_MP) is unspecified. Note how this variable relates to eq.13 is also not given. 

As it currently stands, the model is poorly described. It is better to give a graphic illustration of how the modeled variables are related in this movement paradigm. Simply providing a list of equations would not give the readers a gist of what is modeled, especially when the model consists of two time scales, two time points of updating, and two learning mechanisms (feedback and feedforward). To simplify the modeling part, I suggest only presenting the two-state model variants since they are well accepted as the default model for motor adaptation and also fit the data well. If possible, put the single-state model results in the supplementary and focus on comparing different scalings in the main text.

Line632-635: eq.2, not 3?

eq.17: Is the gamma parameter necessary? Why do we need to assume that the mid-point feedback and the endpoint feedback compete?

Instead of giving out ANOVA results (Line185) to show significance, the author should provide condition means to give us a full picture of how this effect changes across uncertainty conditions.

Figures fonts are too small to read. 

Line349: unclear to me what the authors are trying to convey here. “Two uncertainty conditions” instead of “two high uncertainty conditions”?

Line393: Experiment 3

Reviewer #3: In this paper, Hewitson and colleagues investigate the effect of visual uncertainty on feedback and feedforward processes during motor adaptation. These influences had previously been studied separately for feedforward and feedback processes, and here the authors study them at the same time. The authors found that sensory uncertainty had the expected effect on the feedback process (less correction when more uncertainty) but it had an unexpected effect on the feedforward process (increasing uncertainty decreased adaptation).

I found the results intriguing and interesting. It is a pity that the authors did not investigate further whether the observed pattern could be due to the explicit component of adaptation. I am very skeptical of the modelling approach and there is little rationale for the different modelling choices. These and other comments are developed below.

Major comments:

1. The authors discuss the idea that the changes due to sensory uncertainty might be due to the explicit component of adaptation. They left it open without a real conclusion because they did not actually test it. This is, in my opinion, detrimental to the paper because it left it without a real conclusion. The results now are not really interpretable because we don't know what the nature of the rapid changes is. I believe that this paper should test whether the explicit component of adaptation is responsible for the rapid shift or not. If it is due to explicit strategies, then it is not comparable to what happened to the feedback process which is likely implicit. If it is due to the explicit component of adaptation, then the model does not really make sense as explicit adaptation is driven by task performance error and not by sensory prediction error like the implicit component is. Yet, in current models in the paper, both states are driven by sensory prediction error.

2. I have several problems with the model. The model seems to be ad-hoc and not really supported by previous research or experimental observations:

a. The authors seem to confound task performance error and sensory prediction error. Sensory prediction error drives the implicit adaptation while the task performance error (midpoint or endpoint) drives the explicit component of adaptation and feedback responses. The model mixes the two continuously.

b. The authors model the feedback gains as following a state-space model (Eq.15). Where is the evidence for such a model? Feedback gains can be changed within a trial (work of Fred Crevecoeur on small and large targets) and are tuned continuously in function of the sensory feedback. I don’t think that the model would be able to capture that.

c. The model contains the idea that endpoint feedback is used to tune the gains on the next trial. Where is the evidence for that statement (Eq.16)? Eq.16 does not make sense to me. If it is a state-space, you need to update it once for midpoint and once for endpoint. Updating it based on two errors is weird.

d. Eq.16 also suggests that the sensory prediction error takes the feedback response into account. What is the evidence that this is actually the case?

e. Eq.17 suggests that sensory feedback from midpoint and endpoint is weighted to compute a weighted average error. What is the evidence that this weighting exists? In addition, I don’t see how this would generalize to a condition where visual feedback is continuously available.

f. I wonder where the parameter Lambda comes from. This bias-parameter is absent from previous versions of the two-state models and there is no real rationale to add it to the model. In addition, the authors decided to bound it [-10,10] without any further explanation.

g. In Eq.18 and 19, the authors use a weighted sum of sensory uncertainties (on retention rate and lambda). This seems weird to me and I wonder what the rationale for that would be. Do the authors think that the influence of sensory uncertainty on the retention rate from midpoint and endpoint feedback is simply the average of their sensory uncertainty? That does not make sense.

In other words, I don’t think that the models are useful because they seem to have been built to specifically fit the results of the current experiments and will not be generalizable to any other experimental results.

3. There is a lack of information about the quality of the fit for individual participants. In Figs 4, 6 and 8, R2 values are given for all participants together, but the quality of the fit for individual participants is never given. Coltman and Gribble (https://pubmed.ncbi.nlm.nih.gov/30840553/) showed that fits of two-state models are not very reliable for single participants while they were at the group level. Could the authors provide more information about the quality of the fit for the individual participants and a sensitivity analysis of the obtained parameters?

4. I found the title weird. There is no evidence in the paper that the sensory uncertainty does not punctuate motor learning. The slow system is not interrupted by the sensory uncertainty. None of the parameters become zero or so. The fast system is simply very volatile but this is added on top of a slow component that is still learning.

Minor comments:

1. Instructions delivered during the washout phase need to be specified. It is written that the participants were asked to reach straight to the target but, despite this instruction, the pattern of after-effect looks very different than what others have reported with similar instructions (e.g. work of Taylor). It is also unclear how the participant distinguished between no-vision trials during the adaptation and during the washout. In other words, how was the washout period signaled to the participant?

2. There are several problems with the report of the statistics. There are many p “>” 0.001, which should be “<”. Some statistics are weird. For instance, line 190-192, the statistics is wrongly reported. The p-value is p<0.001 and not p=0.99. This effect size is actually huge. Please, check all the statistics of the paper. Maybe use StatCheck?

3. The simulation with single state-space models is useless. From the experimental results, it is pretty clear that a model with a single state will never be able to reproduce such pattern.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: None

Reviewer #2: Yes

Reviewer #3: No: code is available, data is not

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Reviewer #3: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Attachment

Submitted filename: review3.docx

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1010526.r003

Decision Letter 1

Adrian M Haith, Daniele Marinazzo

23 May 2023

Dear Dr Crossley,

Thank you very much for submitting your manuscript "Error-independent effect of sensory uncertainty on motor learning when both feedforward and feedback control processes are engaged" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by two of the reviewers who reviewed the original submission. 

The reviewers appreciated the substantial efforts in revising the paper based on their prior comments, but also raised a number of remaining concerns, some conceptual in nature and others more technical. 

While I believe the reviewers have highlighted some excellent and very important points and have made numerous constructive suggestions, I actually don't find there to be many concerns in the reviews that I would consider essential to address before the paper can be published. The extensive comments from the reviewers are indicative that this is a very surprising and stimulating set of results. While there is undoubtedly scope to further improve the conceptual framing, the analytical approach, and other aspects of the paper, I also think it's clear that there is a lot of scope for further investigation here, and much of this feedback can be incorporated into future work, rather than being necessary to resolve ahead of publishing this paper.

Regarding the concerns related to interpretational issues. Some of these surround the role of implicit versus explicit processes in the results. I think it will be difficult and ultimately not really necessary to completely resolve these concerns here and now. While I agree that knowing for sure whether these effects relate to the "explicit" component would provide valuable context for the results, I don't think it would substantially impact or diminish the contributions of the present paper. There were also some concerns about terminology. However, I think this is inevitably a challenge when combining different strands of research. Provided the terminology is clearly defined (which I think it is), I think the readers will be able to follow the reasoning.

Regarding the technical issues relating to the modeling and analysis, the reviewers made numerous suggestions for how the paper could be further improved in this regard. In particular, I think the model recovery analysis suggested by Reviewer 1 would be helpful in clarifying how clearly the various models can be distinguished - see also Wilson and Collins, eLife 2019 for a thorough discussion of this. Overall, however, I consider most of these points to be constructive suggestions for further improvement, rather than critical concerns.

In short, I feel the paper is very close to being acceptable, but the reviewers have made numerous constructive suggestions for how the paper might be further improved. I am therefore recommending a "Minor Revision" in which I encourage you to seriously engage with the reviewers' suggestions and make revisions where you feel these are appropriate and improve the paper. While I would like to see a point-by-point response to the reviewers' comments, I do not expect you to implement all the suggestions of the reviewers.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Adrian M Haith

Academic Editor

PLOS Computational Biology

Daniele Marinazzo

Section Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: see attached comments

Reviewer #2: The revision answers some of my previous concerns and, more importantly, provides more experimental and modeling findings. Interestingly, all the reviewers pointed to one most probable explanation of the seemingly surprising data: the abrupt changes in initial movement direction are probably caused by rapid explicit strategy changes. The revision provides supporting evidence that the explicit learning, indirectly shown as the abrupt drop at the beginning of the washout, was predominant for the low-uncertainty condition. This condition showed the surprising abrupt "adaptation" on top of the slow adaptation with other uncertainty conditions. Furthermore, the modeling results also support the role of explicit strategy since the models with abrupt biasing terms (bias scale, state aim, and output aim models) outperform other models. It seems convincing that the main findings can be accounted for by the explicit error correction triggered by the low-uncertainty feedback. 

However, the authors still interpret the findings as supporting that the presence of feedback integration (aka., the midpoint feedback as opposed to the endpoint feedback that was used in previous studies) alters the effect of uncertainty on feedforward adaptation. I have concerns over the study rationale, the interpretation of the findings, and the way how the study relates to existing literature.


About the relation between the current study and existing studies. The paper claims it is the first to examine sensory uncertainty when feedforward adaptation and feedback integration co-occur. This is an overstatement. The Kording & Wolpert 2004 paper, the seminal paper the current study is based on, arguably combined these two processes. They made people adapt to a lateral shift (adaptation) and, on top of that, used midpoint feedback to probe the "feedback integration" to test within-trial response, just like the current paper did. Furthermore, it is not fair to say that the midpoint feedback is for feedback processes and the endpoint feedback is for feedforward processes. I appreciate that the authors give their definitions of feedback integration and feedforward adaptation in the text, but I respectfully disagree with these terminologies as they create confusion. Endpoint feedback IS involved in feedback processes and feedback integration; it is just that its effect can only be seen in the next trial for reaching paradigms. Redefining the term will unnecessarily obscure the message of the paper. The authors can call the behavioral measures within-trial correction and cross-trial correction.


The paper starts with the recognition that sensory uncertainty is believed to slow down adaptation. In the abstract, they mentioned, "Both the degree to which sensory feedback is integrated into an ongoing movement and the degree to which movement errors drive adaptive changes in feedforward motor plans have been shown to scale inversely with sensory uncertainty." However, references 15-22 (quoted on L65 in the Introduction) are not just for showing that sensory uncertainty slows down adaptation. Some quoted studies manipulated the variability of perturbation/error size, targeting uncertainty about perturbation size (or mapping uncertainty in Burge2008's terminology, or environmental/perturbation consistency in Herzfeld2014's terminology) but not about sensory uncertainty. The Kording2004 study showed the inverse scaling for midpoint feedback, which was replicated by the current study (on a side note: this replication by itself runs against the title of the paper, i.e., the error-independent effect of sensory uncertainty on motor learning. The within-trial response is also part of motor learning; Kording2004 study was based on this idea as they quantified learning by the within-trial response to perturbations). Other quoted studies (Burge2008, Wei2010, Tsay2021) are the ones that manipulated the endpoint sensory uncertainty, and they indeed showed that sensory uncertainty slows adaptation. 


Thus, the current study only challenges the three papers claiming an inverse scaling of adaptation and endpoint sensory uncertainty. However, as I put above, the experimental findings here appear intriguing, but they can be accounted for by simple or even trivial explanations that are not adequately discussed in the current manuscript. The initial movement direction, aka the feedforward adaptation, exhibited an abrupt increase in learning following low-uncertainty trials (a 4~5 degree increase). The paper repetitively emphasizes that the effect of sensory uncertainty is independent of movement error. Even with the recognition that explicit aiming strategies might underlie these abrupt changes (and I guess we all agree so), the discussion of relevant findings departs from the current theorization of visuomotor rotation adaptation. For example, the observed initial movement vectors are suggested by the authors to reflect "true adapted state of the motor system". However, it is more or less a consensus in the field that the VMR adaptation to large perturbations consists of implicit and explicit processes. I guess the authors are referring to the former as the "true adapted state." If so, please state it clearly, as readers would not understand what the true adaptation means. Critically, the initial movement vector reflects the sum of implicit and explicit learning; this is the foundation to attribute the abrupt changes following low-uncertainty trials to explicit error correction. The other obvious misread of the literature is on L647 where the authors stated that some studies with no apparent link to strategy actually reported a large drop from adaptation to washout and that this finding thus questioned the validity of initial movement vectors as a reliable estimate of the true level of adaptation. First, I don't understand the reasoning here. Second, the quoted studies are mostly old ones before re-aiming strategies are well understood or measured. Third, the washout trial with an exclusion instruction (i.e., excluding the use of strategy) is widely accepted as a reliable measure of implicit learning. This is just another sign that the authors should interpret and discuss the current findings with a clear view of the current understanding of motor adaptation.


Going back to the three studies that the current findings appear to contradict. All three of them are probably free from the contamination of explicit strategy. Burge2008 used a relatively small step change of perturbations (8.2 degrees) in both directions with blurring cursor feedback in their experiment 1. These small angles might make the detection of external perturbations hard (see Oh & Schweighofer, 2019), and thus adaptation is mostly driven by slow implicit learning (see their data). Tsay2021 study used task-irrelevant error clamps that presumably elicit implicit adaptation only. Wei2010 study had a zero-mean perturbation size across trials, thus preventing a stable explicit strategy and a slow learning envelope. This is not correctly recognized by the authors (Line 666). Thus, careful examination of the literature and the current findings should lead us to conclude that explicit strategy use overshadows the effect of sensory uncertainty on adaptation, which is primarily implicit in relevant studies.


In summary, the study did not fulfill its aims through its experimental design since explicit strategy explains away the major findings. The study aims to study the effect of sensory uncertainty when both feedback integration (midpoint feedback if more precisely called) and feedforward adaptation (i.e., endpoint feedback) are present during visuomotor rotation adaptation. However, having midpoint feedback is not equivalent to independently invoking feedback integration, and having endpoint feedback is not equivalent to independently invoking feedforward adaptation. As I put above, midpoint feedback also drives feedforward adaptation, and endpoint feedback can also be part of the feedback process (it just does not lead to within-trial correction). Thus, the theoretical framework of the study and its actual implementation do not align. More importantly, the blurring of cursor feedback, designed to manipulate sensory uncertainty, leads to undesired consequences, e.g., "touching" the ideal direction or the target, which in turn causes strategic corrections. These strategic corrections dominate the otherwise monotonic learning pattern. On the surface, the current study challenges the Bayesian view of sensory uncertainty in motor adaptation; in reality, it is the experimental specifics that produced some intriguing data.  

I suggest the authors quote the relevant studies correctly, define their terms precisely, and interpret their findings modestly, in light of the current understanding in the field. This requires an overhaul of the Introduction and the Discussion, but it would certainly improve the study's contribution to the field.

Minor:

Line147: I do not see why checking these t-tests would indicate that the washout decay is "somewhere between moderate and high uncertainty trials." This statement is wrong on multiple levels: a t statistic and its p-value do not directly quantify effect size. If explicit learning is to be quantified, a common approach is to compute the difference between the last few adaptation trials and the first washout trial, provided participants are instructed to refrain from using a re-aiming strategy before washout (see the illustrative sketch below).
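A minimal sketch of this computation, assuming a per-participant array of initial hand angles and hypothetical trial indices for the end of adaptation and the start of washout:

```python
import numpy as np

def explicit_learning_estimate(hand_angles, last_adapt_trials, first_washout_trial):
    """Estimate explicit learning as the drop from late adaptation to the first
    washout trial (valid only if participants were told to stop re-aiming).

    hand_angles: 1-D array of initial movement angles (degrees), one per trial.
    last_adapt_trials: indices of the final few adaptation trials.
    first_washout_trial: index of the first washout trial.
    """
    late_adaptation = np.mean(hand_angles[last_adapt_trials])
    return late_adaptation - hand_angles[first_washout_trial]

# Hypothetical usage: 120 adaptation trials followed by washout.
angles = np.random.default_rng(1).normal(loc=12.0, scale=2.0, size=140)
print(explicit_learning_estimate(angles, np.arange(115, 120), 120))
```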

Please correct various writing errors, e.g., L809: "Th is"; Line 285: "?"; Line 475: "Experiment 3".

Line288: what caveats?

Line640: combine this small paragraph with the one below. 

Line656: a high R² means little for demonstrating the validity of the models. The adaptation pattern here follows a simple logarithmic function, and log(trial) alone can explain most of the variance in the data. The additional uncertainty-bias terms and the retention and learning betas are icing on the cake. Please report partial eta-squared values for each term to drive this message home (a sketch of the computation follows).
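A minimal sketch of the decomposition being requested, assuming a hypothetical trial-level data frame with columns `angle`, `trial`, and `uncertainty` (none of these names come from the authors' code); partial eta-squared for each term is computed from an OLS ANOVA table:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: adaptation roughly follows log(trial), plus an uncertainty factor.
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "trial": np.arange(1, n + 1),
    "uncertainty": rng.choice(["low", "mid", "high"], size=n),
})
df["angle"] = 10 * np.log(df["trial"]) + rng.normal(scale=3.0, size=n)

# Fit a simple model with a log(trial) term and an uncertainty term.
model = smf.ols("angle ~ np.log(trial) + C(uncertainty)", data=df).fit()
aov = anova_lm(model, typ=2)

# Partial eta-squared: SS_effect / (SS_effect + SS_residual) for each term.
ss_resid = aov.loc["Residual", "sum_sq"]
aov["partial_eta_sq"] = aov["sum_sq"] / (aov["sum_sq"] + ss_resid)
print(aov.drop(index="Residual")[["sum_sq", "partial_eta_sq"]])
```

Reporting the partial eta-squared per term makes it immediately clear how much of the explained variance is carried by the log(trial) envelope versus the uncertainty-related terms.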

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: None

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Attachment

Submitted filename: review3.docx

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1010526.r005

Decision Letter 2

Adrian M Haith, Daniele Marinazzo

15 Aug 2023

Dear Dr Crossley,

Thank you for your diligent responses to the Reviewer's previous comments and further revisions to the manuscript. We are pleased to inform you that your manuscript 'Error-independent effect of sensory uncertainty on motor learning when both feedforward and feedback control processes are engaged' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Adrian M Haith

Academic Editor

PLOS Computational Biology

Daniele Marinazzo

Section Editor

PLOS Computational Biology

***********************************************************

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1010526.r006

Acceptance letter

Adrian M Haith, Daniele Marinazzo

4 Sep 2023

PCOMPBIOL-D-22-01297R2

Error-independent effect of sensory uncertainty on motor learning when both feedforward and feedback control processes are engaged

Dear Dr Crossley,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Dorothy Lannert

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Experiment 1 two-state model comparison statistics.

    Abbreviations: std, standard deviation; T, t-statistic; dof, degrees of freedom; p-corr, p-value corrected for multiple comparisons; hedges, Hedges' g.

    (PDF)

    S2 Table. Experiment 2 two-state model comparison statistics.

    Abbreviations: std, standard deviation; T, t-statistic; dof, degrees of freedom; p-corr, p-value corrected for multiple comparisons; hedges, Hedges' g.

    (PDF)

    S3 Table. Experiment 3 two-state model comparison statistics.

    Abbreviations: std, standard deviation; T, t-statistic; dof, degrees of freedom; p-corr, p-value corrected for multiple comparisons; hedges, Hedges' g.

    (PDF)

    S1 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state error-scaling model.

    (TIF)

    S2 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state error-scaling non-negative λ model.

    (TIF)

    S3 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state retention-scaling model.

    (TIF)

    S4 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state retention-scaling non-negative λ model.

    (TIF)

    S5 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state bias-scaling model.

    (TIF)

    S6 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state bias-scaling non-negative λ model.

    (TIF)

    S7 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state output-scaling model.

    (TIF)

    S8 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state output-scaling non-negative λ model.

    (TIF)

    S9 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state aim-scaling model.

    (TIF)

    S10 Fig. Experiment 1: Model fit and optimized parameter distributions for the two-state aim-scaling non-negative λ model.

    (TIF)

    S11 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state error-scaling model.

    (TIF)

    S12 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state error-scaling non-negative λ model.

    (TIF)

    S13 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state retention-scaling model.

    (TIF)

    S14 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state retention-scaling non-negative λ model.

    (TIF)

    S15 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state bias-scaling model.

    (TIF)

    S16 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state bias-scaling non-negative λ model.

    (TIF)

    S17 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state output-scaling model.

    (TIF)

    S18 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state output-scaling non-negative λ model.

    (TIF)

    S19 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state aim-scaling model.

    (TIF)

    S20 Fig. Experiment 2: Model fit and optimized parameter distributions for the two-state aim-scaling non-negative λ model.

    (TIF)

    S21 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state error-scaling model.

    (TIF)

    S22 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state error-scaling non-negative λ model.

    (TIF)

    S23 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state retention-scaling model.

    (TIF)

    S24 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state retention-scaling non-negative λ model.

    (TIF)

    S25 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state bias-scaling model.

    (TIF)

    S26 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state bias-scaling non-negative λ model.

    (TIF)

    S27 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state output-scaling model.

    (TIF)

    S28 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state output-scaling non-negative λ model.

    (TIF)

    S29 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state aim-scaling model.

    (TIF)

    S30 Fig. Experiment 3: Model fit and optimized parameter distributions for the two-state aim-scaling non-negative λ model.

    (TIF)

    Attachment

    Submitted filename: review3.docx

    Attachment

    Submitted filename: response_to_reviews_ploscompbio.pdf

    Attachment

    Submitted filename: review3.docx

    Attachment

    Submitted filename: response_to_reviews_ploscomp_bio_june_2023.pdf

    Data Availability Statement

    Data and analysis code can be accessed at: https://github.com/crossley/sensory_uncertainty_fffb.

