eLife. 2022 Feb 28;11:e65361. doi: 10.7554/eLife.65361

Competition between parallel sensorimotor learning systems

Scott T Albert 1,2, Jihoon Jang 1,3, Shanaathanan Modchalingam 4, Bernard Marius 't Hart 4, Denise Henriques 4, Gonzalo Lerner 5, Valeria Della-Maggiore 5, Adrian M Haith 6, John W Krakauer 6,7,8, Reza Shadmehr 1
Editors: Kunlin Wei9, Michael J Frank10
PMCID: PMC9068222  PMID: 35225229

Abstract

Sensorimotor learning is supported by at least two parallel systems: a strategic process that benefits from explicit knowledge and an implicit process that adapts subconsciously. How do these systems interact? Does one system’s contribution suppress the other’s, or do they operate independently? Here, we illustrate that during reaching, implicit and explicit systems both learn from visual target errors. This shared error leads to competition such that an increase in the explicit system’s response siphons away resources that are needed for implicit adaptation, thus reducing its learning. As a result, steady-state implicit learning can vary across experimental conditions, due to changes in strategy. Furthermore, strategies can mask changes in implicit learning properties, such as its error sensitivity. These ideas, however, become more complex in conditions where subjects adapt using multiple visual landmarks, a situation which introduces learning from sensory prediction errors in addition to target errors. These two types of implicit errors can oppose each other, leading to another type of competition. Thus, during sensorimotor adaptation, implicit and explicit learning systems compete for a common resource: error.

Research organism: Human

Introduction

When our movements are perturbed, we become aware of our errors and, through our own strategy or instructions from a coach, engage an explicit learning system to improve our outcome (Morehead et al., 2017; Mazzoni and Krakauer, 2006). This awareness is not required to adapt; our brain also uses an implicit learning system that partially corrects behavior without our conscious awareness (Morehead et al., 2017; Mazzoni and Krakauer, 2006). How do these two systems interact during sensorimotor adaptation?

Suppose that both systems learn from the same error. In this case, when one system adapts, it will reduce the error that drives learning in the other system; thus, the two parallel systems will compete to ‘consume’ a common error. Alternatively, suppose the two systems learn from separate errors, and each produces an output to minimize its own error. In this case, when one system adapts to its error, it could change behavior in ways that paradoxically increase the other system’s error.

Current models suggest that adaptation is driven by two distinct error sources: a task error (Leow et al., 2020; Körding and Wolpert, 2004; Langsdorf et al., 2021), and a prediction error (Mazzoni and Krakauer, 2006; Tseng et al., 2007; Kawato, 1999). One leading theory suggests that the explicit system acts to decrease errors in task performance, while the implicit system acts to reduce errors in predicting sensory outcomes (Mazzoni and Krakauer, 2006; Taylor and Ivry, 2011; Wong and Shelhamer, 2012). In this model, strategies have no impact on implicit learning. A second theory suggests that task errors can drive learning in both systems (Leow et al., 2020; Kim et al., 2019; McDougle et al., 2015; Miyamoto et al., 2020). In this model, implicit and explicit systems will compete with one another.

Suppose implicit and explicit systems share at least one common error source. What will happen when experimental conditions enhance one’s explicit strategy? In this case, increases in explicit strategy will siphon away the error that the implicit system needs to adapt, thus reducing total implicit learning without directly changing implicit learning properties (e.g. its memory retention or sensitivity to error). This reduction in implicit learning creates the illusion that the implicit system was directly altered by the experimental manipulation, when in truth, it was only responding to changes in strategy.

Competitive interactions like this highlight the need to distinguish between an adaptive system’s learning properties, such as its sensitivity to error, and its learning timecourse, that is, the contribution it makes to overall adaptation at any point in time. In a competitive system, an adaptive process’s learning timecourse depends not only on its own learning properties, but also on its competitors’ learning properties. In cases where implicit and explicit systems share an error source, one system’s behavior can be shaped not only by its past experience, but also by changes in the other system. Thus, competition may play an important role in savings (Haith et al., 2015; Coltman et al., 2019; Kojima et al., 2004; Medina et al., 2001; Mawase et al., 2014) and interference paradigms (Sing and Smith, 2010; Lerner et al., 2020; Caithness et al., 2004) where learning properties change over time. Measuring the interdependence between implicit and explicit learning may help to explain the disconnect between studies that have suggested acceleration in motor learning is subserved solely by explicit strategy (Haith et al., 2015; Huberdeau et al., 2019; Morehead et al., 2015; Avraham et al., 2020; Avraham et al., 2021), and studies that have pointed to concomitant changes in implicit learning systems (Leow et al., 2020; Yin and Wei, 2020; Albert et al., 2021).

Here, we begin by mathematically (McDougle et al., 2015; Miyamoto et al., 2020; Smith et al., 2006; Albert and Shadmehr, 2018; Thoroughman and Shadmehr, 2000) considering the extent to which implicit and explicit systems are engaged by task errors and prediction errors. The hypotheses make diverging predictions, which we test in various contexts. Our work suggests that in some contexts (Mazzoni and Krakauer, 2006; Taylor and Ivry, 2011), prediction errors and task errors both make important contributions to implicit learning (Results Part 3). In other contexts, the data suggest that the implicit system is primarily driven by task errors shared with the explicit system (Results Part 1). In this latter case, the competition theory explains why increases (Neville and Cressman, 2018; Benson et al., 2011) or decreases (Fernandez-Ruiz et al., 2011; Saijo and Gomi, 2010) in explicit strategy cause an opposite change in implicit learning. This model explains why in some cases implicit adaptation can saturate as perturbations grow (Neville and Cressman, 2018; Bond and Taylor, 2015; Tsay et al., 2021a), but not in others (Tsay et al., 2021a; Salomonczyk et al., 2011). The model also explains why participants who utilize large explicit strategies can exhibit less implicit (Miyamoto et al., 2020) or procedural learning (Fernandez-Ruiz et al., 2011) than those who do not. Finally, the theory provides an alternate way to interpret implicit contributions to two learning hallmarks: savings (Haith et al., 2015) and interference (Lerner et al., 2020) (Results Part 2).

Altogether, our results illustrate that sensorimotor adaptation is shaped by competition between parallel learning systems, both engaged by task errors.

Results

In visuomotor rotation paradigms, participants move a cursor that travels along a rotated path (Figure 1A). This perturbation causes adaptation, resulting in both implicit recalibration (Figure 1A, implicit) and explicit (intentional) re-aiming (Figure 1A, aim) (Mazzoni and Krakauer, 2006; Taylor and Ivry, 2011; Taylor et al., 2014; Shadmehr et al., 1998).

Figure 1. Total implicit learning is shaped by competition with explicit strategy.

(A). Schematic of visuomotor rotation. Participants move from start to target. Hand path is composed of explicit (aim) and implicit corrections. Cursor path is perturbed by rotation. We explored two hypotheses: prediction error (H1, aim vs. cursor) vs. target error (H2, target vs. cursor) drives implicit learning. (B) Prediction error hypothesis predicts that enhancing aiming (dashed magenta) will not change implicit learning (black vs. dashed cyan) according to the independence equation. Target error hypothesis predicts that enhancing aiming (dashed magenta) will decrease implicit adaptation (black vs. dashed cyan). (C) Data reported by Neville and Cressman, 2018. Participants were exposed to either a 20°, 40°, or 60° rotation. Learning curves are shown. The “no aiming” inset shows implicit learning measured via exclusion trials at the end of adaptation. Explicit strategy was calculated as the voluntary reduction in reach angle during the no aiming period. (D) Implicit learning measured during no aiming period in Neville and Cressman yielded a ‘saturation’ phenotype. (E) Explicit strategies calculated in Neville & Cressman dataset by subtracting exclusion trial reach angles from the total adapted reach angle. (F) The implicit learning driving force in the competition theory: difference between rotation and explicit learning in Neville and Cressman. (G) Implicit learning predicted by the competition and independence models in Neville and Cressman. Models were fit assuming that the implicit learning gain was identical across rotation sizes. (H) Experiment 1. Subjects in the stepwise group (n = 37) experienced a 60° rotation gradually in four steps: 15°, 30°, 45°, and 60°. Implicit learning was measured via exclusion trials (points) twice in each rotation period (gray ‘no aiming’). (I) Total implicit learning calculated during each rotation period in the stepwise group yielded a ‘scaling’ phenotype. 
(J) Explicit strategies were calculated in the stepwise group by subtracting exclusion trial reach angles from the total adapted reach angle. (K) The implicit learning driving force in the competition theory: difference between rotation and explicit learning in the stepwise group. (L) Implicit learning predicted by the competition and independence models in the stepwise group. Models were fit assuming that implicit learning gain was constant across rotation size. (M) Data reported by Tsay et al., 2021a. Participants were exposed to either a 15°, 30°, 60°, or 90° rotation. Learning curves are shown. The “no aiming” inset shows implicit learning measured via exclusion trials at the end of adaptation. (N) Implicit learning measured during no aiming period in Tsay et al. yielded a ‘non-monotonic’ phenotype. (O) Explicit strategies calculated in Tsay et al. dataset by subtracting exclusion trial reach angles from the total adapted reach angle. (P) Implicit learning driving force in the competition theory: difference between rotation and explicit learning in Tsay et al. (Q) Total implicit learning predicted by the competition and independence models in Tsay et al. Models were fit assuming that the implicit learning gain was identical across rotation sizes. Error bars show mean ± SEM, except in the independence predictions in G, L, and Q; independence predictions show mean and standard deviation across 10,000 bootstrapped samples. Points in H, J, M, and O show individual participants.

Figure 1—source code 1. Figure 1 data and analysis code.


Figure 1—figure supplement 1. Implicit learning can exhibit various phenotypes in the competition theory.


Here we consider how implicit learning can respond to changes in rotation size in the competition theory. (A) Total implicit learning in the competition theory is altered by explicit strategy. We show three cases: (1) strategy increases at the same rate as rotation size (‘same’, gain = 1), (2) strategy increases more slowly than rotation size (‘slower’, gain <1), (3) strategy increases faster than rotation size (‘faster’, gain >1). Gain here is equal to each line’s slope (it is not dependent on the intercept, which is non-zero). (B) In the competition theory, the driving input to the implicit system is the error (i.e. difference) between the rotation and steady-state explicit strategy. Thus, when explicit strategy and rotation size grow by the same amount (‘same’), the implicit driving force remains constant. When explicit strategy grows more than the rotation (‘faster’), the implicit driving force decreases as the rotation gets larger. When explicit strategy grows less than the rotation (‘slower’), the implicit driving force increases as the rotation gets larger. (C) In the competition theory (equation at top), implicit learning is proportional to the implicit driving forces depicted in B. The proportionality constant, pi, depends on implicit error sensitivity and retention (see Equation 4). Thus, in the ‘same’ scenario in A and B, implicit learning will remain the same across rotation sizes. In the ‘slower’ scenario in A and B, implicit learning will increase with the rotation. In the ‘faster’ scenario in A and B, implicit learning will decrease as the rotation increases.

Figure 1—figure supplement 2. Variations between total learning and implicit learning are consistent with the competition model.


In Figure 1, we evaluate how well the competition equation matches data across three distinct implicit learning phenotypes: saturation, scaling, and non-monotonic responses. These three implicit learning phenotypes are shown again here (data, black bars; each group from left to right shows a different phenotype). The competition model is intuitively stated as a relationship between implicit learning and explicit strategy. This equation is denoted in blue: ‘competition model 1’. Blue bars show how much implicit learning was predicted in each experiment, using explicit strategy and ‘competition model 1’ (‘model-1’ under each set of bars). The competition model can be stated another way. Noting that total adaptation is equal to the sum of implicit and explicit learning, we can replace explicit learning in ‘competition model 1’ with total adaptation minus implicit learning. Algebraic simplification yields ‘competition model 2’, shown in gray. This is an equivalent competition model, only this time, it is stated as a relationship between implicit learning and total adaptation (which were measured on separate trials). The gray bars (‘model-2’) show how much implicit learning was predicted by ‘competition model 2’, using measured total adaptation. Competition models 1 (implicit predicted using explicit) and 2 (implicit predicted using total adaptation) yielded nearly identical predictions. More detail on these comparisons is provided in Appendix 3.
Figure 1—figure supplement 2—source code 1. Figure 1—figure supplement 2 data and analysis code.

Figure 1—figure supplement 3. Scaling, saturation, and non-monotonic phenotypes across the implicit learning timecourse.


(A) A ‘base’ simulation where implicit and explicit systems adapt to target error. A response to a 30° rotation is shown. This response matches the gray bars in B and C. Note the vertical lines. These indicate moments in time where the implicit and explicit responses were calculated in B and C: from left to right, 5, 10, 20, 40, and 150 rotation cycles. Also note the red dashed ‘approximated implicit’ line. This shows the implicit approximation detailed in Appendix 1.1, where xe is replaced with the average explicit strategy up until that cycle number. In B and C we show implicit and explicit responses measured at each vertical bar in A. Left to right shows the early-to-late evolution of each adaptive process. (B) shows implicit learning. (C) shows explicit learning. Green, blue, and purple bars correspond to a 45° rotation response. In addition to changing the rotation magnitude, explicit error sensitivity (b_e) was also modulated to create the scaling, saturation, and nonmonotonic implicit learning modes. In green, b_e remained at 0.15 (the same as the gray ‘base’ simulation). In blue, b_e was increased to 0.435. In purple, b_e was increased dramatically to 0.93. The scale, saturate, and nonmonotonic phenotypes can be seen at all timepoints in B.
Figure 1—figure supplement 3—source code 1. Figure 1—figure supplement 3 analysis code.

Figure 1—figure supplement 4. Changes in implicit learning across blocks.


(A) Implicit learning measured during each block in the stepwise group in Exp. 1. (B) Implicit learning measured during each block in the abrupt group in Exp. 1. (C) Implicit learning measured in a stepwise condition in Salomonczyk et al., 2011. (D) Implicit learning measured in a 30° group over three learning blocks in Salomonczyk et al., 2011. (E) Implicit learning measured in a 20° group over three learning blocks in Neville and Cressman, 2018. (F) Same as E, but for a 40° group.
Figure 1—figure supplement 4—source code 1. Figure 1—figure supplement 4 data and analysis code.

Current models suggest that the rotation r creates two distinct error sources. One error source is the deviation between cursor and target: a target error (Leow et al., 2020; Körding and Wolpert, 2004; Langsdorf et al., 2021). Notably, this target error (Figure 1A, target error) is altered by both implicit (xi) and explicit (xe) adaptation:

$$e_{\mathrm{target}}(n) = r(n) - \left( x_i(n) + x_e(n) \right) \tag{1}$$

In addition, a second error is created due to our expectation that the cursor should move toward where we aimed our movement: a sensory prediction error (SPE) (Mazzoni and Krakauer, 2006; Tseng et al., 2007; Kawato, 1999). SPE is the deviation between the aiming direction (the expected cursor motion) and where we observed the cursor’s actual motion (Figure 1A, sensory prediction error). Critically, because this error is anchored to our aim location, it changes over time in response to implicit adaptation alone:

$$e_{\mathrm{SPE}}(n) = r(n) - x_i(n) \tag{2}$$

How does the implicit learning system respond to these two error sources? State-space models describe implicit adaptation as a process of learning and forgetting (McDougle et al., 2015; Miyamoto et al., 2020; Smith et al., 2006; Albert and Shadmehr, 2018; Thoroughman and Shadmehr, 2000):

$$x_i(n+1) = a_i \, x_i(n) + b_i \, e(n) \tag{3}$$

Forgetting is controlled by a retention factor (ai) which determines how strongly we retain the adapted state. Learning is controlled by error sensitivity (bi) which determines the amount we adapt in response to an error (e.g. an SPE or a target error).
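The trial-by-trial dynamics of Equation 3 can be sketched in a few lines of code. This is a minimal simulation with illustrative parameter values (a_i = 0.9, b_i = 0.1 are assumptions, not values fit to any dataset in the paper):

```python
def simulate_state_space(r, a_i=0.9, b_i=0.1, n_trials=300):
    """Single-state model x(n+1) = a_i*x(n) + b_i*e(n),
    here driven by the error e(n) = r - x(n) (no explicit strategy)."""
    x = [0.0]
    for n in range(n_trials):
        e = r - x[n]                        # error on trial n
        x.append(a_i * x[n] + b_i * e)      # retain, then learn from error
    return x

x = simulate_state_space(r=30.0)

# Learning and forgetting counterbalance at the analytic asymptote
# b_i / (1 - a_i + b_i) * r, roughly 15 degrees for these parameters.
x_ss = 0.1 / (1 - 0.9 + 0.1) * 30.0
```

Running the simulation confirms that the iterated update converges to the same asymptote derived analytically below (Equations 4 and 5 are this fixed point under the two candidate error signals).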

Here, we will contrast two possibilities: (1) the implicit system responds primarily to target error, or (2) the implicit system responds primarily to SPE. In a target error learning system, explicit strategy will reduce the target error in Equation 1. This decrease in target error will lead to competition between the implicit and explicit systems; that is, increasing explicit strategy reduces target error, which in turn decreases implicit learning. Competition in a target error model will occur over the entire learning timecourse and can lead to unintuitive implicit learning phenotypes (Appendix 1.2). While these implicit behaviors can be observed at any point during adaptation, they are easiest to examine during steady-state adaptation (Appendix 1.1).

Consider how Equation 3 behaves in the steady-state condition. Like adapted behavior (Kim et al., 2019; Albert et al., 2021; Vaswani et al., 2015; Kim et al., 2018), Equation 3 approaches an asymptote with extended exposure to a rotation. This steady-state (Figure 1B, implicit) occurs when learning and forgetting counterbalance each other.

Consider a system where target errors alone drive implicit learning. In this system, total (steady-state) implicit learning is determined by Equations 1 and 3:

$$x_i^{ss} = \frac{b_i}{1 - a_i + b_i} \left( r - x_e^{ss} \right) \tag{4}$$

Equation 4 demonstrates a competition between implicit and explicit systems; the total amount of implicit adaptation (xiss) is driven by the difference between the rotation r and total explicit adaptation (xess).

Now consider a system where SPEs drive implicit learning. SPEs (Equation 2) are unaltered by strategy. In this case, total implicit learning is determined by Equations 2 and 3:

$$x_i^{ss} = \frac{b_i}{1 - a_i + b_i} \, r \tag{5}$$

Equation 5 demonstrates an independence between implicit and explicit systems; the total amount of implicit adaptation depends solely on the rotation’s magnitude, not one’s explicit strategy.

Here, we explore how implicit learning systems respond to explicit strategy, and whether behavior is more consistent with competition or independence. Competition and independence can be studied at any point during the adaptation timecourse (Appendix 1). We will primarily examine steady-state learning, where the competition equation (Equation 4) and independence equation (Equation 5) make simple predictions. The critical insight is that in an independent system (SPE learning), increasing the explicit strategy (Figure 1B, magenta solid and dashed) does not alter implicit adaptation (Figure 1B, independence, compare black and cyan). However, in a competitive system (Equation 4), the same increase in strategy will indirectly decrease implicit learning (Figure 1B, competition, compare black and cyan).
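The contrast between Equations 4 and 5 can be made concrete with a short numerical sketch. Parameter values here are illustrative assumptions, not fits: the point is only that the independence prediction ignores explicit strategy, while the competition prediction shrinks as strategy grows.

```python
def steady_state(a_i, b_i, drive):
    """Asymptote of Eq. 3 when the per-trial error is (drive - x)."""
    return b_i / (1 - a_i + b_i) * drive

a_i, b_i, r = 0.9, 0.1, 60.0

# Independence (Eq. 5): implicit asymptote depends only on the rotation.
indep = steady_state(a_i, b_i, r)

# Competition (Eq. 4): the drive is r - x_e, so a larger explicit
# strategy leaves less error for the implicit system to consume.
comp_small_xe = steady_state(a_i, b_i, r - 10.0)   # modest strategy
comp_large_xe = steady_state(a_i, b_i, r - 40.0)   # large strategy
```

Under independence the prediction is identical for both strategy levels; under competition the large-strategy group is predicted to show substantially less implicit learning at the same rotation size.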

To analyze these possibilities, we begin by examining how changes in explicit strategy alter implicit learning in response to variations in rotation magnitude, experimental instructions, rotation type, and at the individual participant level (Part 1). Next, we describe how competition between implicit and explicit systems could in principle mask changes in implicit learning (Part 2). Finally, we will examine studies which suggest implicit error sources vary across experimental conditions due to the presence and/or absence of multiple visual stimuli in the experimental workspace (Part 3).

Part 1: Measuring how implicit learning responds to changes in explicit strategy

Here, we measure how implicit learning and explicit strategy vary across several factors: (1) rotation size, (2) instructions, (3) gradual versus abrupt rotations, and (4) individual subjects. We will ask whether the variations in implicit and explicit learning are consistent with the competition or independence theories.

Implicit responses to rotation size suggest a competition with explicit strategy

Over extended exposure to a rotation, adaptation appears to saturate (Morehead et al., 2017; Albert et al., 2021; Vaswani et al., 2015; Kim et al., 2018). How does implicit learning contribute to steady-state saturation, and what learning model best describes its behavior?

In Neville and Cressman, 2018, participants adapted to a 20°, 40°, or 60° rotation (Figure 1C). As is common, adaptation reached a steady-state prior to eliminating the target error (Albert et al., 2021; Figure 1C, solid vs. dashed lines). To measure implicit learning, participants were instructed to reach to the target without aiming (Figure 1C, no aiming). The independence model (Equation 5) predicts that the implicit response should scale as the rotation increases. On the contrary, total implicit learning was insensitive to rotation size; it reached only 10° and remained constant despite a threefold increase in rotation magnitude (Figure 1D). To estimate explicit strategy, we subtracted the implicit learning measure from the total adapted response. Opposite to implicit learning, explicit strategy increased proportionally with the rotation’s size (Bond and Taylor, 2015; Tsay et al., 2021a; Figure 1E).

In the competition model, implicit learning is driven by the difference between the rotation and explicit strategy (r – xess in Equation 4). As a result, when an increase in rotation magnitude is matched by an equal increase in explicit strategy (Figure 1—figure supplement 1A, same), the implicit learning system’s driving force will remain constant (Figure 1—figure supplement 1B, same). This constant driving input leads to a phenotype where implicit learning appears to ‘saturate’ with increases in rotation size (Figure 1—figure supplement 1C, same).

To investigate whether this mechanism is consistent with the implicit response, we examined how explicit strategy and the implicit driving force varied with rotation size. As rotation size increased, explicit strategies increased substantially (Figure 1E). Under the competition model, these rapid changes in explicit strategy produced an implicit driving force that responded little to rotation magnitude; while the rotation increased by 40°, the driving force changed by less than 2.5° (Figure 1F). Thus, the competition Equation (Figure 1G, competition) suggested that implicit learning would not vary with rotation size, as we observed in the measured data (Figure 1G, data).

In other words, the competition model suggests that the implicit system can exhibit an unintuitive saturation when its driving input remains constant. The key prediction is that altering explicit strategy will change this driving input, thereby changing the implicit response to rotation size. One possibility is to weaken the explicit system’s response to the rotation (Figure 1—figure supplement 1A, slower), which should increase the steady-state of the implicit system (Figure 1—figure supplement 1C, slower).

To test this idea, we used a stepwise rotation (Yin and Wei, 2020). In Experiment 1, participants (n = 37) adapted to a stepwise perturbation which started at 15° but increased to 60° in 15° increments (Figure 1H). Twice toward the end of each rotation block, we assessed implicit adaptation by instructing participants to aim directly to the target (Figure 1H, gray regions). Supplemental analysis suggested that the implicit system reached its steady-state during each learning period (Appendix 2), although this is not required to test the competition theory (Appendix 1.2). Critically, the stepwise rotation onset decreased explicit responses relative to the abrupt rotations used by Neville and Cressman, 2018; explicit strategies increased with a 94.9% gain (change in strategy divided by change in rotation) across the abrupt groups in Figure 1E, but only a 55.5% gain in the stepwise condition shown in Figure 1J. In the competition model, this reduction in strategy increased the implicit system’s driving input (Figure 1K). The increased driving input produced a “scaling” phenotype in the competition model’s implicit response (Figure 1L, competition) which closely matched the measured implicit data (Figure 1I and 1L, data; rm-ANOVA, F(3,108) = 99.9, p < 0.001, η_p² = 0.735).
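The gain statistic used above, and its consequence for the implicit driving force, can be sketched as follows. The strategy values below are hypothetical numbers chosen only to mimic a high-gain (abrupt-like, ~95%) and a low-gain (stepwise-like, ~55%) condition; they are not the measured group means.

```python
def explicit_gain(strategies, rotations):
    """Change in explicit strategy divided by change in rotation size,
    estimated here from the endpoint values."""
    return (strategies[-1] - strategies[0]) / (rotations[-1] - rotations[0])

# Hypothetical values for illustration only
rotations         = [20.0, 40.0, 60.0]
abrupt_strategy   = [12.0, 31.0, 50.0]   # rises nearly one-for-one with rotation
stepwise_strategy = [ 6.0, 17.0, 28.0]   # rises more slowly

# Competition-model driving force for the implicit system: r - x_e
abrupt_drive   = [r - xe for r, xe in zip(rotations, abrupt_strategy)]
stepwise_drive = [r - xe for r, xe in zip(rotations, stepwise_strategy)]
```

With a gain near 1, the driving force barely moves across rotation sizes (a saturation phenotype); with a gain well below 1, the driving force grows with the rotation (a scaling phenotype).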

Thus, the implicit system can exhibit both saturation (Figure 1G) and scaling (Figure 1L), consistent with the competition model. Recent work by Tsay et al., 2021a suggests a third steady-state implicit phenotype: non-monotonicity. In their study, the authors examined a wider range of rotation sizes, 15° to 90° (Figure 1M). A no-aiming period revealed total implicit adaptation in each group (n = 25/group). Curiously, whereas implicit learning increased between the 15° and 30° rotations, it appeared similar in the 60° rotation group, and then decreased in the 90° rotation group (Figure 1N). This non-monotonic behavior was inconsistent with the independence model, where implicit learning is proportional to rotation size (Figure 1Q, independence).

To determine whether this non-monotonicity could be captured by the competition theory, we considered again how explicit re-aiming increased with rotation size (Figure 1O). We observed an intriguing pattern. When the rotation increased from 15° to 30°, explicit strategy responded with a very low gain (4.5%, change in strategy divided by change in rotation). An increase in rotation size to 60° was associated with a medium-sized gain (80.1%). The last increase to 90° caused a marked change in the explicit system: a 53.3° increase in explicit strategy (177.7% gain). Thus, explicit strategy increased more than the rotation had. Critically, this condition produces a decrease in the implicit driving input in the competition theory (Figure 1—figure supplement 1, faster). Overall, we estimated that this large variation in explicit learning gain (4.5%, 80.1%, and 177.7%) should yield non-monotonic behavior in the implicit driving input (Figure 1P): an increase between 15° and 30°, no change between 30° and 60°, and a decrease between 60° and 90°. As a result, the competition theory (Figure 1Q, competition) exhibited a non-monotonic envelope, which closely tracked the measured data (Figure 1Q, data).

Unfortunately, there is a potential problem in our analysis: implicit and explicit learning measures were not independent, because explicit strategy was estimated using implicit reach angles (i.e. explicit learning equals total learning minus implicit learning). Did this bias our analysis towards the competition model? To answer this question, Equation 4 can be stated as $x_i^{ss} = p_i (r - x_e^{ss})$, where $p_i$ is the learning gain determined by the implicit system’s retention and error sensitivity (i.e. $a_i$ and $b_i$). We can eliminate the explicit strategy ($x_e^{ss}$) from this equation by noting that $x_e^{ss} = x_T^{ss} - x_i^{ss}$, where $x_T^{ss}$ equals total steady-state adaptation. With this substitution, the model relates implicit learning to total learning, $x_i^{ss} = p_i (1 - p_i)^{-1} (r - x_T^{ss})$, rather than to explicit learning, and can therefore be used to test the competition model without correlated learning measures (see Appendix 3). We reexamined all three experiments in Figure 1, using total adaptation to predict implicit learning with the competition model (Figure 1—figure supplement 2). This alternate method yielded nearly identical predictions (Figure 1—figure supplement 2, ‘model-2’) as Equation 4 (Figure 1—figure supplement 2, ‘model-1’). Thus, the qualitative and quantitative correspondence between the competition model and the measured data was not due to how we operationalized implicit and explicit learning (see Appendix 3).
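The algebraic equivalence of the two competition statements is easy to verify numerically. The parameter values below are arbitrary illustrations; the check holds for any $p_i$ between 0 and 1.

```python
def implicit_from_explicit(r, x_e_ss, p_i):
    """Competition model 1: x_i = p_i * (r - x_e)."""
    return p_i * (r - x_e_ss)

def implicit_from_total(r, x_T_ss, p_i):
    """Competition model 2: x_i = p_i / (1 - p_i) * (r - x_T)."""
    return p_i / (1 - p_i) * (r - x_T_ss)

p_i, r, x_e = 0.4, 60.0, 35.0                 # illustrative values
x_i  = implicit_from_explicit(r, x_e, p_i)    # model 1 prediction
x_T  = x_i + x_e                              # total = implicit + explicit
x_i2 = implicit_from_total(r, x_T, p_i)       # model 2, same prediction
```

Because $r - x_T^{ss} = (1 - p_i)(r - x_e^{ss})$ whenever total adaptation is the sum of the two components, the $(1 - p_i)^{-1}$ factor exactly cancels and both forms return the same implicit prediction.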

Collectively, these studies demonstrate that the implicit system can exhibit at least three distinct behavioral phenomena: saturation, scaling, or non-monotonicity. The competition model matched all three phenotypes, due to the implicit system’s response to explicit strategy. The SPE learning model described by the independence equation, however, could only produce a scaling phenotype (Figure 1I). Could the SPE learning model be altered to produce implicit learning phenotypes other than scaling? One possibility is that a saturation phenotype (Figure 1D) could be built into the SPE model by adding a restriction, that is an upper bound, on total implicit adaptation, as observed in studies where participants experience invariant error perturbations (Morehead et al., 2017; Kim et al., 2018). With that said, the 10° implicit responses observed across the three rotations in Neville and Cressman, 2018, are much lower than the 20°–25° ceiling suggested by recent error-clamp studies (Kim et al., 2018), and the 35–45° implicit responses observed in some standard rotation studies (Salomonczyk et al., 2011; Maresch et al., 2021). More importantly, a learning model with a rotation-insensitive upper bound on implicit learning would be inconsistent with the scaling (Figure 1I) and nonmonotonic (Figure 1N; see Appendix 6.6) phenotypes we observed. We will explore other extensions to this SPE model in several analyses in the Control analyses section below.
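The limitation of the bounded-SPE variant can be made explicit with a one-line sketch. The 22° ceiling and other parameters below are illustrative assumptions; the point is structural: clipping the independence prediction at a ceiling can produce saturation, but the prediction can never decrease as the rotation grows, so it cannot produce non-monotonicity.

```python
def independence_with_ceiling(r, a_i, b_i, ceiling):
    """SPE model with a hard upper bound on implicit adaptation
    (one proposed extension to the independence equation)."""
    return min(b_i / (1 - a_i + b_i) * r, ceiling)

a_i, b_i, cap = 0.9, 0.1, 22.0
preds = [independence_with_ceiling(r, a_i, b_i, cap) for r in (15, 30, 60, 90)]
# scales at small rotations, then flattens at the ceiling; never decreases
```

Contrast this with the measured 90° group, where implicit learning dropped below the 60° group, a pattern no monotone non-decreasing function of rotation size can reproduce.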

Increase in explicit strategy suppresses implicit learning

The competition model predicts that increasing explicit strategy will decrease implicit learning, even when the rotation size is the same. In contrast, the independence theory predicts that implicit learning will be insensitive to differences in explicit strategy (extensions to this model are considered in Control analyses).

To test these ideas, we considered another condition tested by Neville and Cressman, 2018 where participants were exposed to the same 20°, 40°, or 60° rotation, but received coaching instructions. The coaching sharply improved adaptation over the non-instructed group (Figure 2A, compare purple with black). To understand how implicit and explicit learning contributed to these changes, we analyzed the mean implicit and explicit reach angles measured across all three rotation sizes (each individual response is shown in Figure 2—figure supplement 1).

Figure 2. Increases or decreases in explicit strategy oppositely impact implicit adaptation.

(A) Neville and Cressman, 2018 tested participants in two conditions: an uninstructed condition (black) and an instructed condition (purple), where subjects were briefed about the upcoming rotation and its solution. Instruction increased the adaptation rate across three rotation sizes: 20°, 40°, and 60°. Insets in gray shaded area show implicit adaptation measured via exclusion trials at the end of adaptation. (B) Here, we show the average strategy across all rotation sizes in the instructed (purple) and uninstructed (black) conditions. Explicit strategy was calculated by subtracting implicit learning (exclusion trials) from total adaptation. Instruction increased explicit strategy use. (C) The data show implicit adaptation averaged across all three rotation sizes. The independent (SPE learning) and competition (target error learning) models were fit to these data assuming that implicit error sensitivity and retention were identical across rotation sizes and instruction conditions (i.e. identical ai and bi across all six groups). Error bars for model predictions refer to mean and standard deviation across 10,000 bootstrapped samples. (D) In Experiment 1, we tested participants in either an abrupt condition or a stepwise (gradual) condition. Here, we show the rotation schedule. (E) Here, we show learning curves in the abrupt and stepwise conditions in Experiment 1. Bars show implicit adaptation measured during each rotation period (four blocks total) via exclusion trials. Individual learning measures are shown in the terminal 60° learning period for both groups (points at bottom-right). (F) We calculated explicit strategies during the terminal 60° learning period by subtracting implicit learning measures from total adaptation (mean over last 20 trials). Gradual onset reduced explicit strategy use. (G) The data show total implicit learning measured in the 60° rotation period. 
The competition (blue) and independence (green) models were fit to the data assuming that the implicit learning parameters were the same across the abrupt and stepwise groups. Error bars for the model show the mean and standard deviation across 1,000 bootstrapped samples. Statistics in B, F, and G denote a two-sample t-test: *p < 0.05, ***p < 0.001. Error bars in A, B, C (data), E, F, and G (data) denote mean ± SEM. Points in E and F show individual participants.

Figure 2—source code 1. Figure 2 data and analysis code.


Figure 2—figure supplement 1. Changes in implicit adaptation in response to awareness and rotation size.


Data reported from Neville and Cressman, 2018. (A) Participants were separated into 1 of 6 groups. Groups differed based on verbal instruction (instructed purple; non-instructed black) and rotation magnitude (20° left; 40° middle; 60° right). Here, we show implicit learning measured using exclusion trials (reach without re-aiming) at the end of adaptation. (B) Here, we show implicit aftereffects predicted by a model where the implicit system learns from SPE only. (C) Here, we show implicit aftereffects predicted by a model where the implicit system learns from target error only. (D) The competition theory (target error learning) predicts that implicit learning will be proportional to the difference between the rotation size and the total explicit strategy. Here, we show this quantity for all six experimental groups. Note that model predictions in B and C assume that the implicit learning gain is the same across all six experimental groups. Error bars for data show mean ± SEM. Error bars for model predictions refer to mean and standard deviation across 10,000 bootstrapped samples.
Figure 2—figure supplement 1—source code 1. Figure 2—figure supplement 1 data and analysis code.
Figure 2—figure supplement 2. Total implicit adaptation varies slowly with changes in implicit error sensitivity.


We used a sensitivity analysis to explore whether changes in error sensitivity could explain the variations in implicit learning in Exp. 1 (abrupt vs. stepwise) and Neville and Cressman (instruction vs. no instruction). Above, we compare a ‘reference’ and a ‘test’ condition. We chose several possible implicit error sensitivity and retention levels for the ‘reference’ condition. From left to right, we test implicit retention factors between 0.95 and 0.99. The colors in each inset denote different reference error sensitivities: from 0.1 to 0.35. Each curve shows how much implicit learning will increase in a ‘test’ condition (i.e. the y-axis) over the reference condition; the y-axis denotes the percent change in total implicit learning and the x-axis denotes error sensitivity in the test condition. For example, consider the point highlighted by the black arrow in the second column: this point shows that total implicit learning will increase by about 20% (y-axis) in a scenario where implicit retention = 0.96, and implicit error sensitivity increases from 0.1625 (i.e. it is on the red line) in the reference condition to 0.8 (i.e. the x-axis value) in the test condition.
Figure 2—figure supplement 2—source code 1. Figure 2—figure supplement 2 analysis code.
Figure 2—figure supplement 3. Suppressing explicit strategy increases total implicit adaptation.


Data reported from Saijo and Gomi, 2010. (A) Participants experienced an abrupt or gradual 60° rotation (followed by washout). (B) We explored two hypotheses for the error that drives implicit learning: prediction error (H1, aim vs. cursor) and target error (H2, target vs. cursor). The prediction error hypothesis predicts that suppressing aiming (dashed magenta) through gradual perturbation onset will not change implicit learning (black vs. dashed cyan). The target error hypothesis predicts that suppressing aiming (dashed magenta) will increase implicit adaptation (black vs. dashed cyan). (C) Directional error during adaptation. Note that while the abrupt group exhibited greater adaptation during the rotation, they also showed a smaller aftereffect, suggesting less implicit adaptation. (D) We simulated a state-space model where the implicit system learned from SPE. The model parameters were selected to best fit the data in C. In the middle row, a hypothetical abrupt explicit strategy was simulated based on data reported by Neville and Cressman, 2018 (yellow points). The gradual explicit strategy was assumed to be zero because participants were less aware. At bottom, we show implicit learning predicted by an SPE error source. Note the identical saturation levels. (E) Same as in D, but for implicit adaptation based on target error. Note the greater implicit learning in the gradual condition in the bottom row. Models in D and E were fit assuming that implicit error sensitivity and retention are identical across abrupt and gradual conditions. (F) Here, we show the implicit aftereffect on the first washout cycle (12 total trials). Model predictions for SPE learning (indep.) and target error learning (competition) are shown. Data show mean ± SEM across participants. Error bars for model are mean and standard deviation across 20,000 bootstrapped samples.
Figure 2—figure supplement 3—source code 1. Figure 2—figure supplement 3 data and analysis code.
Figure 2—figure supplement 4. The competition model is compatible with various explicit strategy levels in Saijo and Gomi, 2010.


In Appendix 5, we analyzed the Saijo and Gomi, 2010 abrupt and gradual learning conditions with both the competition and independence models. We tested whether each model could predict total implicit learning, under the assumption that the initial washout reach angle primarily reflected implicit adaptation. These analyses assumed that participants in the gradual condition did not use explicit strategy. In the right column, we reproduce the competition model’s predictions under this zero-strategy assumption, as in Figure 2—figure supplement 3E and F. Next, we repeated these analyses in an alternate scenario, where explicit strategy was assumed to be 10° during the rotation period. This new control analysis is shown in the left column. We observed that model predictions were qualitatively similar whether gradual learning was simulated with 0° strategy or 10° strategy. (A) shows directional errors predicted by competition. (B) shows the explicit strategies used as input in these simulations. (C) shows the implicit response to the rotation and explicit strategies predicted by the competition model. (D) shows the aftereffect predicted by the competition model (blue), the independence model (green), and that observed in the actual experiment (data). For the model predictions, we used the implicit angle on the initial washout cycle.
Figure 2—figure supplement 4—source code 1. Figure 2—figure supplement 4 data and analysis code.

Unsurprisingly, explicit adaptation was enhanced in the participants who received coaching instructions. Explicit re-aiming increased by approximately 10° (Figure 2B, t(61)=2.29, p = 0.026, d = 0.56). However, while instruction enhanced explicit strategy, it suppressed implicit learning, reducing it by approximately 32% (Figure 2C, data, t(61)=2.62, p = 0.011, d = 0.66). To interpret this implicit response, we fit the competition (Equation 4) and independence (Equation 5) equations to the behavior across all experimental conditions (six groups: 3 rotation magnitudes, 2 instruction conditions), while holding the implicit learning parameters in the model constant (i.e. holding ai and bi constant across all conditions).

As in Figure 1, implicit learning in the independence model does not respond to explicit strategy, and is not altered by instruction (Figure 2C, implicit learning, indep.). On the other hand, the competition model accurately predicted that total implicit learning would decrease by approximately 3° (the data showed a 2.98° decrease; the model produced a 2.92° decrease) in response to increases in explicit strategy (Figure 2C, implicit learning, competition, t(61)=2.05, p = 0.045, d = 0.52). Altogether, the competition theory parsimoniously captured how the implicit system responded to explicit instruction (Figure 2C) as well as changes in rotation size (Figure 1G) with the same model parameter set (same ai and bi in the competition equation).

Decrease in explicit strategy enhances implicit learning

Next, we examined how implicit learning responds to decreases in explicit strategy. Yin and Wei, 2020 recently demonstrated that explicit strategies can be suppressed using gradual rotations. The competition theory predicts that decreasing explicit strategy will lead to greater implicit adaptation. We tested this prediction in Exp. 1. Participants were exposed to a 60° rotation, either abruptly (n = 36), or in a stepwise manner (n = 37) where perturbation magnitude increased by 15° across four distinct learning blocks (Figure 2D). We measured implicit and explicit learning during each block, as in Figure 1. To compare gradual and abrupt learning, we analyzed reach angles during the 4th learning block, where both groups experienced the 60° rotation size (Figure 2E).

As in Yin and Wei, 2020, participants in the stepwise condition exhibited a 10° reduction in explicit re-aiming (Figure 2F, two-sample t-test, t(71)=4.97, p < 0.001, d = 1.16). Reductions in strategy led to a decrease in total adaptation in the stepwise group by approximately 4°, relative to the abrupt group (Figure 2E, right-most gray region (last 20 trials); two-sample t-test, t(71)=3.33, p = 0.001, d = 0.78), but an increase in implicit learning by approximately 80% (Figure 2G, data, two-sample t-test, t(71)=6.4, p < 0.001, d = 1.5). Thus, the data presented a curious pattern: greater total adaptation in the abrupt condition was paradoxically associated with reduced implicit adaptation. As expected, these surprising patterns did not match the independence model (Figure 2G, indep.), in which implicit learning does not respond to changes in explicit strategy.

To test whether implicit learning patterns matched the competition model we fit Equation 4 to implicit and explicit reach angles measured in Blocks 1–4, across the stepwise and abrupt conditions, while holding the model’s implicit learning parameters (ai and bi) constant. The competition model correctly predicted that the decrease in strategy in the gradual condition should produce an increase in implicit learning (Figure 2G, comp., two-sample t-test, t(71)=4.97, p < 0.001, d = 1.16). In addition, the competition model predicted a decrease in total learning, consistent again with the data (the model yielded 53.47° total adaptation in abrupt, and 50.42° in gradual: values not provided in Figure 2). The model’s negative correlation between implicit learning and total adaptation occurred in two steps: (1) greater abrupt strategies increased overall adaptation, but (2) siphoned away target errors, reducing implicit adaptation.

We also considered an alternative hypothesis: that changes in implicit adaptation were caused by variation in error sensitivity (e.g. greater implicit error sensitivity in the stepwise condition), rather than competition. Note, however, that the implicit learning gain, pi, is given by pi = bi/(1 – ai + bi). Because the bi term appears in both the numerator and denominator, total implicit learning varies slowly with changes in bi (Appendix 4). Accordingly, supplemental analyses (Appendix 4, Figure 2—figure supplement 2) showed that no change in bi could yield either the 80% increase in stepwise implicit learning in Figure 2G or the 46% increase in implicit learning in the no-instruction group in Figure 2C. Thus, while variation in implicit error sensitivity might contribute to changes in steady-state learning, its role is minor compared to error competition.
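The insensitivity of the gain pi to error sensitivity can be verified directly. The illustrative check below plugs in the example values highlighted in Figure 2—figure supplement 2 (retention 0.96; error sensitivity rising from 0.1625 to 0.8) and shows that even a nearly five-fold increase in error sensitivity raises asymptotic implicit learning by only about a fifth:

```python
def implicit_gain(a_i, b_i):
    # p_i = b_i / (1 - a_i + b_i): because b_i appears in both the
    # numerator and denominator, the gain saturates as b_i grows
    return b_i / (1.0 - a_i + b_i)

a_i = 0.96                          # retention factor (per supplement)
base = implicit_gain(a_i, 0.1625)   # reference error sensitivity
test = implicit_gain(a_i, 0.80)     # nearly five-fold larger sensitivity
pct_increase = 100.0 * (test - base) / base
print(f"{pct_increase:.1f}%")       # roughly 19%, nowhere near 80%
```

This reproduces the approximately 20% effect described for the highlighted point in the supplement, far short of the 80% increase observed in the stepwise data.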

In summary, we observed that explicit strategies could be suppressed by increasing the rotation gradually. Reductions in explicit strategy were associated with increased implicit adaptation (Figure 2G) as predicted by the competition theory. Furthermore, the same competition theory parameter set (i.e. same ai and bi, see Materials and methods) accurately matched the extent to which implicit learning responded to decreases in explicit strategy (Figure 2G) as well as increases in rotation size (Figure 1L). It is interesting to note that these implicit patterns are broadly consistent with the observation that gradual rotations improve procedural learning (Saijo and Gomi, 2010; Kagerer et al., 1997), although these earlier studies did not properly tease apart implicit and explicit adaptation (see the Saijo and Gomi analysis described in Appendix 5).

Implicit adaptation responds to between-subject differences in explicit adaptation

Use of explicit strategy is highly variable between individuals (Miyamoto et al., 2020; Fernandez-Ruiz et al., 2011; Bromberg et al., 2019). According to the competition theory (Equation 4), implicit and explicit learning will negatively co-vary according to a line whose slope and bias are determined by the properties of the implicit learning system (ai and bi). In Experiment 2, we tested this prediction. In one group, we limited preparation time to inhibit time-consuming explicit strategies (Fernandez-Ruiz et al., 2011; McDougle and Taylor, 2019; Figure 3D–F, Limit PT). In the other group, we imposed no preparation time constraints (Figure 3A–C, No PT Limit). We measured ai and bi in the Limit PT group and used these values to predict the implicit-explicit relationship across No PT Limit participants.

Figure 3. Strategy suppresses implicit learning across individual participants.

(A–C) In Experiment 2, participants in the No PT Limit (no preparation time limit) group adapted to a 30° rotation. The paradigm is shown in A. The learning curve is shown in B. Implicit learning was measured via exclusion trials (no aiming). Preparation time is shown in C (movement start minus target onset). (D–F) Same as in A–C, but in a limited preparation time condition (Limit PT). Participants in the Limit PT group had to execute movements with restricted preparation time (F). The task ended with a prolonged no visual feedback period where memory retention was measured (E, gray region). (G) Total implicit and explicit adaptation in each participant in the No PT Limit condition (points). Implicit learning measured during the terminal no aiming probe. Explicit learning represents difference between total adaptation (last 10 rotation cycles) and implicit probe. The black line shows a linear regression. The blue line shows the theoretical relationship predicted by the competition equation which assumes implicit system adapts to target error. The parameters for this model prediction (implicit error sensitivity and retention) were measured in the Limit PT group. (H–J) In Experiment 3, participants adapted to a 30° rotation using a personal computer in the No PT Limit condition. The paradigm is shown in H. The learning curve is shown in I. Implicit learning was measured at the end of adaptation over a 20-cycle period where participants were instructed to reach straight to the target without aiming and without feedback (no aiming seen in I). We measured explicit adaptation as difference between total adaptation and reach angle on first no aiming cycle. We measured ‘early’ implicit aftereffect as reach angle on first no aiming cycle. We measured ‘late’ implicit aftereffect as mean reach angle over last 15 no aiming cycles. (K–M) Same as in H–J, but for a Limit PT condition. 
(N) Explicit adaptation measured in the No PT Limit condition in Experiment 2 (E2), the No PT Limit condition in Experiment 3 (E3, black), and the Limit PT condition in Experiment 3 (E3, red). (O) Late implicit learning in the Experiment 3 No PT Limit group (No Lim.) and Experiment 3 Limit PT group (PT Limit). (P) Correspondence between late implicit learning and explicit strategy in the Experiment 3 No PT Limit group. (Q) Same as in G, but where model parameters are obtained from the Limit PT group in Experiment 3, and points represent subjects in the No PT Limit group in Experiment 3. Early implicit learning is used. Throughout all insets, error bars indicate mean ± SEM across participants. Statistics in N and O are two-sample t-tests: n.s. means p > 0.05, ***p < 0.001.

Figure 3—source code 1. Figure 3 data and analysis code.


Figure 3—figure supplement 1. Implicit error sensitivity varies with error.


(A) We empirically estimated error sensitivity on each trial in the limited preparation time (Limit PT) group in Experiment 2. The dashed horizontal line indicates the steady-state error sensitivity used in our competition theory predictions in Figure 3G. (B) Here, we show error on each trial in the Limit PT group in Experiment 2. The horizontal cyan line shows the terminal error over the last 10 cycles. (C) Error sensitivity curves reported in Kim et al., 2018, denoted E1 (Morehead et al., 2017 results) and E2 (Kim et al., 2018 results). These two studies used invariant error-clamp tasks to isolate implicit learning. We compared our implicit learning measure in the Limit PT condition in Exp. 2 to these values. The vertical blue line shows the terminal error in B. The red star shows the terminal error sensitivity measured in A. In panels D–F, we show the same data as in A–C, except for the Limit PT condition in Experiment 3, where participants were tested on a personal laptop. Shaded error bars denote mean ± SEM across participants.
Figure 3—figure supplement 1—source code 1. Figure 3—figure supplement 1 data and analysis code.
Figure 3—figure supplement 2. Comparing implicit and explicit adaptation via reported strategies.


In Figure 3, when analyzing the No PT Limit group (no preparation time limit) in Experiment 2, we measured implicit learning using exclusion trials at the end of adaptation. Next, we estimated explicit strategies by subtracting this reach-based implicit learning measure from the total adaptation measured over the last 10 cycles of adaptation (reach-based explicit measure). In addition, we also asked participants to report their explicit strategies after the probe period. Participants were shown a ring of circles surrounding each target and asked to indicate which circle best represented their aiming direction at the end of the experiment. We averaged this report-based explicit measure across all four adaptation targets, taking the absolute value for any misreported strategies (25% of all reports in opposite direction). We estimated report-based implicit learning by subtracting the reported explicit strategy from the total adaptation measured over the last 10 rotation cycles. (A) Here, we compare report-based explicit strategy with reach-based explicit strategy. Each point represents an individual participant. The solid line is the unity line. The bars at right show the mean value for each explicit measure. (B) Similar to A except here we compare report-based implicit learning with reach-based implicit learning. (C) Here we compared report-based implicit and report-based explicit learning measures. We also show the relationship predicted by the competition theory in blue (same as in Figure 3G). Error bars show mean ± SEM across participants. Statistics in A and B show paired t-tests: *p < 0.05.
Figure 3—figure supplement 2—source code 1. Figure 3—figure supplement 2 data and analysis code.
Figure 3—figure supplement 3. Movement paths in Experiment 3 were straight and brisk.


In Exp. 3, we tested participants remotely in a laptop-based rotation study. Here, we show movement paths recorded by the computer in two example subjects: one in the Limit PT group (top row) and one in the No PT Limit group (bottom row). The left column shows trajectories during the baseline period. Note the four different groupings reflect the four different targets used in the task. The middle column shows trajectories during the rotation period. The color indicates the rotation trial number (blue is early in the rotation period, red is late in the rotation period). The data show a clear rotation in participant movement angle. Reach trajectories remained straight. Finally, the right column shows movement paths during the terminal period where participants were instructed to move straight to each target, without any cursor feedback.
Figure 3—figure supplement 3—source code 1. Figure 3—figure supplement 3 data and analysis code.

As expected, Limit PT participants dramatically reduced their reach latencies throughout the adaptation period (Figure 3F), whereas the No PT Limit participants exhibited a sharp increase in movement preparation time after perturbation onset (Figure 3C), indicating explicit re-aiming (Langsdorf et al., 2021; Haith et al., 2015; Albert et al., 2021; Fernandez-Ruiz et al., 2011; McDougle and Taylor, 2019). Consistent with explicit strategy suppression, learning proceeded more slowly and was less complete under the preparation time limit (compare Figure 3B&E; two-sample t-test on last 10 adaptation epochs: t(20)=3.27, p = 0.004, d = 1.42).

Next, we measured the retention factor ai during a terminal no-feedback period (Figure 3E, dark gray, no feedback) and the error sensitivity bi during the steady-state adaptation period. Steady-state implicit error sensitivity (note that errors are small at steady state, yielding a high bi) was consistent with recent literature (Figure 3—figure supplement 1A-C). Together, this retention factor (ai = 0.943) and error sensitivity (bi = 0.35) produced a specific form of Equation 4: xi = 0.86(30 – xe). We used this result to predict how implicit and explicit learning should vary across participants in the No PT Limit group (Figure 3G, blue line).
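As a back-of-the-envelope check (not the fitting pipeline itself), plugging these measured parameters into the implicit gain formula recovers the quoted line:

```python
a_i, b_i, r = 0.943, 0.35, 30.0   # measured Limit PT parameters (per text)
p_i = b_i / (1.0 - a_i + b_i)     # implicit gain p_i = b_i / (1 - a_i + b_i)
print(round(p_i, 2))              # 0.86, i.e. x_i = 0.86 * (30 - x_e)
slope, intercept = -p_i, p_i * r
print(round(slope, 2), round(intercept, 1))
```

The predicted slope is about -0.86 and the intercept about 25.8°; the small difference from the 25.74° intercept quoted later presumably reflects rounding of the reported parameter values.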

To measure implicit and explicit learning in the No PT Limit group, we instructed participants to move their hand through the target without any re-aiming at the end of the rotation period (Figure 3B, no aiming). The precipitous change in reaching angle revealed implicit and explicit components of adaptation (post-instruction reveals implicit; voluntary decrease in reach angle reveals explicit). We observed a striking correspondence between the No PT Limit implicit-explicit relationship (Figure 3G, black dot for each participant; ρ = −0.95) and that predicted by the competition equation (Figure 3G, blue). The slope and bias predicted by Equation 4 (–0.86 and 25.74°, respectively) differed from the measured linear regression by less than 5% (Figure 3G, black line, R2 = 0.91; slope is –0.9 with 95% CI [-1.16,–0.65] and intercept is 25.46° with 95% CI [22.54°, 28.38°]).

In addition, we also asked participants to verbally report their aiming angles prior to concluding the experiment. These responses were variable, with 25% reported in the incorrect direction. Because strategies are susceptible to sign-flipped errors (McDougle and Taylor, 2019), we assumed these misreported strategies represented the correct magnitude, but the incorrect sign, and thus took their absolute value. While reported explicit strategies were on average greater than our probe-based measure, and report-based implicit learning was on average smaller than our probe-based measure (Figure 3—figure supplement 2A&B; paired t-test, t(8)=2.59, p = 0.032, d = 0.7), the two report-based measures exhibited a strong correlation which aligned with the competition theory’s prediction (Figure 3—figure supplement 2C; R2 = 0.95; slope is –0.93 with 95% CI [-1.11,–0.75] and intercept is 25.51° with 95% CI [22.69°, 28.34°]).

In summary, individual participants exhibited an inverse relationship between implicit and explicit learning; participants who used large explicit strategies inadvertently suppressed their implicit learning, a pattern consistent with error-based competition.

Limiting reaction time strongly suppresses explicit strategy and increases implicit learning

Our analysis in Experiment 2 had two important limitations. First, the competition theory used implicit learning parameters measured under limited preparation time conditions (Leow et al., 2020; Fernandez-Ruiz et al., 2011; Leow et al., 2017): how effectively does this condition suppress explicit learning? Second, our individual-level implicit and explicit learning measures were intrinsically correlated because they both depended on probe-based reach angles (i.e. implicit is no aiming probe, and explicit is total learning minus no aiming probe).

To address these limitations, we conducted a laptop-based control experiment (Experiment 3). Participants (n = 35) adapted to a 30° rotation (Figure 3I), but this time, we measured implicit adaptation using the no-aiming instruction over an extended 20-cycle period (Figure 3I, no aiming). We calculated early (the first no-aiming cycle; Figure 3Q) and late (last 15 no-aiming cycles; Figure 3P) implicit learning measures. Explicit strategy was estimated by subtracting the first no-aiming cycle from total adaptation. Thus, our explicit strategy measure was not calculated using late implicit learning trials; these two measures were no longer spuriously correlated. Regardless, we still observed a strong relationship between explicit strategy and late implicit learning; greater strategy use was associated with reduced late implicit adaptation (Figure 3P, ρ = −0.78, p< 0.001).

Next, we repeated this experiment, but under limited preparation time conditions in a separate participant cohort (Figure 3L, Experiment 3, Limit PT, n = 21). As for the Limit PT group in Exp. 2, we imposed a strict bound on reaction time to suppress movement preparation time (compare Figure 3J&M). Once the rotation period ended, participants were told to stop re-aiming. The decrease in reach angle revealed each participant’s explicit strategy (Figure 3N). When no reaction time limit was imposed (No PT Limit), re-aiming totaled 11.86° (Figure 3N, black). In addition, we did not detect a statistically significant difference in re-aiming across Exps. 2 and 3 (t(42)=0.50, p = 0.621). As in earlier reports (Leow et al., 2020; Albert et al., 2021; Fernandez-Ruiz et al., 2011; Leow et al., 2017), limiting reaction time dramatically suppressed explicit strategy, yielding only 2.09° of re-aiming (Figure 3N, red). Thus, these data showed that our limited reaction time technique was highly effective at suppressing explicit strategy.

Consistent with the competition theory, suppressing explicit strategy increased implicit learning by approximately 40% (Figure 3O, No PT Limit vs. Limit PT, two-sample t-test, t(54)=3.56, p < 0.001, d = 0.98). We again used the Limit PT group’s behavior to estimate implicit learning parameters (ai and bi) as we did in Exp. 2 (Figure 3G). Using these parameters, the competition theory (Equation 4) predicted that implicit and explicit adaptation should be related by the line: xi = 0.658(30 – xe). As in Exp. 2, we observed a striking correspondence between this model (Figure 3Q, bottom, model) and the actual implicit-explicit relationship measured in participants in the No PT Limit group (Figure 3Q, bottom, points). The slope and bias predicted by Equation 4 (–0.665 and 19.95°, respectively) differed from the measured linear regression by less than 5% (Figure 3Q, bottom brown line, R2 = 0.78; slope is –0.63 with 95% CI [-0.74,–0.51] and intercept is 19.7° with 95% CI [18.2°, 21.3°]).

In summary, Exp. 3 provided additional evidence that implicit and explicit systems compete with one another at the individual-participant level. Participants who relied more on strategy exhibited reductions in implicit learning, as predicted by the competition theory. Moreover, by limiting preparation time on each trial, explicit strategies were strongly suppressed, allowing us to estimate the time course of the implicit system’s adaptation.

Control analyses

Implicit learning exhibits generalization: a decay in adaptation measured when subjects move to positions across the workspace (Hwang and Shadmehr, 2005; Krakauer et al., 2000; Fernandes et al., 2012). Implicit generalization is centered where participants aim (Day et al., 2016; McDougle et al., 2017). For this reason, implicit learning measured when aiming towards the target can underapproximate total implicit learning. Subjects who aim more (larger strategy) can exhibit a larger reduction in measured implicit learning. Might this contribute to the negative implicit-explicit correlations in Exps. 1–3?

To test this idea, we compared our data to generalization curves measured in past studies (Krakauer et al., 2000; Day et al., 2016; McDougle et al., 2017; Figure 4). Absolute implicit responses are shown in Figure 4B, and normalized measures are shown in Figure 4A (see Appendix 6.1). Implicit learning in Exps. 2&3 declined 300% more rapidly than predicted by past generalization studies (Figure 4A&B). However, even this comparison in Figure 4A–C is not strictly appropriate under the generalization hypothesis. In Exps. 2&3, explicit strategies are estimated as total learning minus implicit learning. If implicit learning measured at the target underapproximates total implicit learning measured at the aim location, then the explicit strategies we calculate will overapproximate the actual strategy used by each participant. We must correct these strategies before comparing them to past generalization curves (Appendix 6.2). The corrected generalization curves (Figure 4C, E2 and E3 lines) that produce the patterns in Figure 4A&B exhibited an unphysiological narrowing: their standard deviation (width) was 85% smaller than that reported in recent studies (Krakauer et al., 2000; Day et al., 2016; McDougle et al., 2017) (σ is about 5.5° versus 37.76° in McDougle et al., see Appendix 6.1). These same issues occurred in the group-level phenomena that we analyzed in Figures 1 and 2: no plausible generalization curve could explain the implicit response to instruction, rotation onset (abrupt/gradual), and rotation size (Appendices 6.4 and 6.5). As an example, the variations in implicit learning across the abrupt and stepwise groups in Exp. 1 would require a generalization curve that is 90% narrower than recent estimates (McDougle et al., 2017) (see Appendix 6.4 and Figure 4—figure supplement 1; σ = 3.87° versus 37.76° in McDougle et al., 2017).
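The width argument can be made concrete with a simple Gaussian generalization curve (an illustrative sketch; the cited studies report richer empirical curves). With the roughly 37.76° width attributed to McDougle et al., 2017, re-aiming by 10° would shave only a few percent off the implicit learning measured at the target, whereas the roughly 5.5° width needed to reproduce Exps. 2 and 3 implies a collapse of about 80%:

```python
import math

def gaussian_generalization(x_e, sigma):
    # Fraction of implicit adaptation (centered at the aim point,
    # x_e degrees from the target) that is measured at the target
    return math.exp(-x_e**2 / (2.0 * sigma**2))

sigma_reported = 37.76  # width from McDougle et al., 2017 (per the text)
sigma_required = 5.5    # width needed to reproduce Exps. 2-3 (per the text)
x_e = 10.0              # a typical amount of re-aiming (deg)
print(round(gaussian_generalization(x_e, sigma_reported), 3))  # ~0.966
print(round(gaussian_generalization(x_e, sigma_required), 3))  # ~0.192
```

That is, a physiologically plausible generalization width predicts an approximately 3% reduction in measured implicit learning for 10° of re-aiming, far too small to account for the steep implicit-explicit trade-off observed in the data.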

Figure 4. Correlations between implicit and explicit learning are consistent with competition, not SPE generalization.

(A) Aim-centered generalization could create the illusion that implicit and explicit systems compete. To evaluate this possibility, we compared the implicit-explicit relationship in Exps. 2 and 3 to generalization curves reported in Krakauer et al., 2000, Day et al., 2016, and McDougle et al., 2017. The 1T, 2T, 4T, and 8T labels correspond to the number of adaptation targets in Krakauer et al. The gold McDougle et al. curve is particularly relevant because the authors controlled aiming direction on generalization trials and counterbalanced CW and CCW rotations. Data in Exps. 2 and 3 are shown overlaid in the inset. Implicit learning declined about 300% more rapidly with increases in re-aiming than observed by Day et al. The solid black and brown lines show the competition theory predictions. Implicit learning in Experiments 2 and 3 was normalized to its theoretical maximum, reached when re-aiming is equal to zero. The value used to normalize was determined via linear regression (25.5° in Exp. 2, 19.7° in Exp. 3). (B) Same as in A, but without normalizing implicit learning. Generalization curves were converted to degrees by multiplying the curves in A by the maximum implicit learning value in Exp. 2 (25.5°) or Exp. 3 (19.7°). (C) The comparisons in A and B are not correct. Under the generalization hypothesis, each data point’s explicit strategy needs to be corrected according to generalization. This inset shows the true implicit-explicit generalization curve that would be required to produce the data in A and B. The E2 and E3 lines show the Exp. 2 and Exp. 3 curves. (D) Points show implicit and explicit learning measured in individual stepwise participants studied in Exp. 1 (B1 is the 15° period, B2 the 30° period, B3 the 45° period, and B4 the 60° period). Three models were fit to participant data in the 60° period. The competition model fit is shown in black. A linear generalization (SPE gen. linear) with slope set by McDougle et al. is shown in cyan. A Gaussian generalization (SPE gen. normal) with width set by McDougle et al. is shown in gold. Since models were fit to B4 data, the B1, B2, and B3 lines represent predicted behavior. (E) The prediction error (RMSE) in each model’s implicit learning curve across the held-out 15°, 30°, and 45° periods in D. (F) Linear regressions fit to each rotation block in (D). Brown points and lines (data) show the regression slope and 95% CI. The black (competition), cyan (SPE gen. linear), and gold (SPE gen. normal) markers are model predictions; lines show 95% CIs estimated via bootstrapping. (G) All three models in D–F were fit to individual participant behavior in the stepwise group. At left, the AIC for each model is compared to that of the competition model. At right, the total number of subjects best captured by each model is shown. (H) Same as E and G but where the generalization width was varied in a sensitivity analysis. We tested values between one-half the McDougle et al. generalization curve (–50%) and twice the McDougle et al. generalization curve (+100%). Error bars in E show mean ± SEM. Statistics in E are post-hoc tests following one-way rm-ANOVA: **p < 0.01.

Figure 4—source code 1. Figure 4 data and analysis code.


Figure 4—figure supplement 1. Implicit variations are inconsistent with generalization.


(A) We fit a Gaussian generalization curve to the implicit and explicit learning measures in the stepwise (red) and abrupt (brown) groups in Exp. 1. The curve’s peak indicates total steady-state implicit learning. For both the stepwise and abrupt groups to have the same total implicit learning, total implicit learning would need to equal 47.8° (total steady-state implicit learning in the graph) with a standard deviation of 23.6° (σ in the graph). (B) The analysis in A does not yield physiological results. Total adaptation would be calculated by adding total steady-state implicit learning to the explicit strategy (x-axis in A). This yields total learning of 87.3° and 77.7° in the abrupt and stepwise groups, respectively (bars labeled generalization model). Yet the rotation was only 60°: gray line (rotation size). Thus, total learning would have to exceed the rotation to yield the measurements in A. Total learning in both the stepwise and abrupt groups is also provided in the inset (bars labeled data). (C&D) The violation in A&B arises because measured explicit strategy is estimated as total adaptation minus measured implicit learning; this estimate will greatly overestimate the true explicit strategy. In these insets, we corrected for this in the model. In (C), we show the model when implicit learning decreases due to generalization, and strategy is estimated as total adaptation minus generalized implicit learning (best model). In D, we show the true underlying generalization curve (corrected model) that produces the implicit-explicit measures in C. Note that such a curve is physiologically inconsistent with previously measured implicit learning properties. In E and F, we conduct a similar analysis, but this time analyze the response to the 20°, 40°, and 60° no-instruction rotations in Neville and Cressman, 2018. (E) shows the model’s behavior when implicit and explicit learning are both biased by generalization. The three curves correspond to each rotation size (total implicit learning scales because it is equal to p_i·r). In F, we show the true Gaussian generalization curve (corrected model) needed to produce the data in E. Last, in G and H, we analyzed the response to instruction in Neville and Cressman. These insets are analogous to C and D.
Figure 4—figure supplement 1—source code 1. Figure 4—figure supplement 1 data and analysis code.
Figure 4—figure supplement 2. Differences in generalization across visuomotor rotation tasks.


(A) Data collected by Day et al., 2016, reported in Figure 2 of the original manuscript (Figure 4 in the current manuscript). Here, participants were exposed to a 45° rotation while reaching to a single target. On each trial they were asked to report their aiming direction using a ring of visual landmarks. In the ‘target’ group, implicit aftereffects were measured at the trained target location. In the ‘aim’ group, implicit aftereffects were probed at a target location 30° away from the trained target, consistent with the direction of the most frequently reported aim. Here we show data from the first aftereffect cycle after the rotation period. (B) Similar to A, except for data reported by McDougle et al., 2017 (Figure 3A of their manuscript). Participants were also exposed to a 45° rotation while reaching to a single target. At the end of the experiment, participants completed an aftereffect block in which they were told to move straight to the target without re-aiming. Here we take two relevant points from the generalization curve measured at the end of learning. The ‘target’ condition represents aftereffects probed at the training target. The ‘aim’ condition shows the aftereffect measured 22.5° away from the primary target, which was the target closest to the mean reported explicit re-aiming strategy of 26.2°. (C) Data again from Day et al. The ‘probe’ implicit learning measure is the same as in A. The ‘report’ condition shows the amount of implicit learning estimated by subtracting the reported explicit strategy from the reported reach angle on the last cycle of the rotation. (D) Similar to C, but for the intermittent reporting (IR-E) group reported by Maresch et al. (Figure 4b of their manuscript). In this group, aim was only intermittently reported (four trials for every 80 normal adaptation trials). Thus, in most cases, participants only had to attend to a single target when reaching. The authors also used eight training targets (as opposed to one in A–C). The ‘probe’ condition corresponds to the total implicit learning measured at the end of adaptation by telling participants to reach without re-aiming. The ‘report’ condition corresponds to the total implicit learning estimated at the end of adaptation by subtracting the reported aim direction from the measured reach angle. (E) Here, we report implicit learning measured using the ‘probe’ and ‘report’ conditions in Experiment 2, analogous to the measures described in D. Error bars show mean ± SEM.
Figure 4—figure supplement 2—source code 1. Figure 4—figure supplement 2 data and analysis code.

We extended the independence model with implicit generalization and compared its behavior to the competition theory. The competition model is given by x_i^ss = p_i(r – x_e^ss), where p_i is an implicit learning gain. The SPE generalization model is x_i^measured = p_i·r·g(x_e^ss), where g(x_e^ss) encodes generalization (derivation in Appendix 6.2). We specified g(x_e^ss) using the generalization curve measured by McDougle et al., 2017. We considered models where g(x_e^ss) was linear (Figure 4D–F, SPE gen. linear) or normal (SPE gen. normal). We then fit each model’s p_i to match implicit learning during the 60° stepwise rotation in Exp. 1 and used this gain to predict the implicit-explicit relationship across the three earlier learning periods (B1–B3 in Figure 4D). The generalization models yielded poor matches to the held-out data (model RMSE in Figure 4E, rm-ANOVA, F(2,72)=13.7, p < 0.001, ηp2 = 0.276). Further, a model comparison showed that competition best described individual subject data, minimizing AIC in 84% of stepwise participants (Figure 4G, Appendix 6.3). Poor SPE generalization model performance was not due to misestimating generalization curve properties: in a sensitivity analysis in which we varied the generalization curve’s width, the competition model was superior across the entire range (Figure 4H, Appendix 6.3).
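The fit-then-predict logic can be sketched as follows. This is a toy simulation, not the paper’s fitting code: the data are synthetic, the true gain (0.35) and strategy ranges are hypothetical, and the Gaussian generalization uses the width from McDougle et al.:

```python
import numpy as np

rng = np.random.default_rng(0)

def competition(r, xe, pi):
    # shared target error: x_i^ss = p_i * (r - x_e^ss)
    return pi * (r - xe)

def spe_generalization(r, xe, pi, sigma=37.76):
    # SPE-driven implicit learning (p_i * r), measured at the target
    # after aim-centered Gaussian generalization g(x_e^ss)
    return pi * r * np.exp(-xe**2 / (2 * sigma**2))

# hypothetical steady-state data from a 60-degree rotation, generated
# from the competition model plus measurement noise
r_fit, pi_true = 60.0, 0.35
xe_fit = rng.uniform(10, 50, 30)                 # explicit strategies (deg)
xi_fit = competition(r_fit, xe_fit, pi_true) + rng.normal(0, 1, 30)

# fit each model's gain by least squares on the 60-degree block
pi_comp = np.sum(xi_fit * (r_fit - xe_fit)) / np.sum((r_fit - xe_fit) ** 2)
g = r_fit * np.exp(-xe_fit**2 / (2 * 37.76**2))
pi_spe = np.sum(xi_fit * g) / np.sum(g**2)

# predict held-out behavior during a smaller (30-degree) rotation
r_test = 30.0
xe_test = rng.uniform(5, 25, 30)
xi_test = competition(r_test, xe_test, pi_true)
rmse_comp = np.sqrt(np.mean((competition(r_test, xe_test, pi_comp) - xi_test) ** 2))
rmse_spe = np.sqrt(np.mean((spe_generalization(r_test, xe_test, pi_spe) - xi_test) ** 2))
```

Because the two models differ in functional form (implicit learning scaled by the residual error r − x_e^ss versus by the rotation r times a generalization factor), a gain fit on one rotation size transfers to other sizes only for the model whose form matches the data.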

To understand why only the competition theory generalized across rotation sizes, we fit linear regressions to the data in each rotation period. The regression slopes and 95% CIs are shown in Figure 4F (data). Remarkably, the measured implicit-explicit slope appeared constant across all rotation sizes. This invariance was directly consistent with the competition theory (Figure 4F, competition), which possesses an implicit gain p_i that remains constant across rotations (like the data). In the generalization models (Figure 4F, generalization), by contrast, the gain relating implicit and explicit learning is not constant; it changes as the rotation grows (see Appendix 6.3). In sum, data in Exps. 1–3 were poorly explained by an SPE model extended with generalization.

We considered one last control analysis. The competition equation predicts that implicit-explicit correlations are caused by the implicit system’s response to variations in strategy. An SPE learning model could create correlations in the opposite causal direction: individuals who possess less implicit learning compensate by increasing their explicit strategy. This scenario can be described by x_e^ss = p_e(r – x_i^ss), where p_e is the explicit response gain. This model has three properties (Appendix 7.2). First, implicit and explicit learning will show a negative relationship (Figure 5A). Second, increases in implicit learning will tend to increase total adaptation (Figure 5C). Finally, increasing implicit learning leaves smaller errors to drive explicit strategy, resulting in a negative correlation between strategy and total adaptation (Figure 5B). While the competition model also predicts negative implicit-explicit correlations (Figure 5D), the other pairwise correlations differ (Appendix 7.1). Under competition, increases in explicit strategy lead to greater total learning (Figure 5E) but reduce the error that drives implicit learning, leading to a negative correlation between implicit learning and total adaptation (Figure 5F).
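These sign predictions follow directly from the steady-state equations; a short derivation in the paper’s notation (with 0 < p_i, p_e < 1):

```latex
% Competition: implicit learns from the residual target error
x_i^{ss} = p_i\,(r - x_e^{ss}), \qquad x_T^{ss} = x_i^{ss} + x_e^{ss}
\;\Rightarrow\; x_T^{ss} = p_i r + (1 - p_i)\,x_e^{ss}
% => explicit strategy and total adaptation covary positively.
% Substituting x_e^{ss} = r - x_i^{ss}/p_i instead gives
x_T^{ss} = r - \frac{1 - p_i}{p_i}\,x_i^{ss}
% => implicit learning and total adaptation covary negatively.

% Alternative (SPE learning): explicit responds to implicit variability
x_e^{ss} = p_e\,(r - x_i^{ss})
\;\Rightarrow\; x_T^{ss} = p_e r + (1 - p_e)\,x_i^{ss}, \qquad
x_T^{ss} = r - \frac{1 - p_e}{p_e}\,x_e^{ss}
% => the predicted correlation signs with total adaptation flip.
```

Thus the two models agree on the implicit-explicit correlation but make opposite predictions about how each component relates to total adaptation, which is what the data in Figure 5 can discriminate.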

Figure 5. Implicit-explicit correlations with total adaptation match the competition theory.

The competition equation states that x_i^ss = p_i(r – x_e^ss), where p_i is a scalar learning gain depending on a_i and b_i. The competition between steady-state implicit (x_i^ss) and explicit (x_e^ss) adaptation predicted by this model is simulated in D across 250 hypothetical participants. The model’s p_i is fit to data in Experiment 3. Total learning is given by x_T^ss = x_i^ss + x_e^ss. These two equations can be used to derive expressions relating total learning (x_T^ss) to steady-state implicit (x_i^ss) and explicit (x_e^ss) learning. In E, we show that the competition theory predicts a positive relationship between explicit learning and total adaptation (equation at top derived in Appendix 7; green denotes a positive gain). In F, we show that the competition theory predicts a negative relationship between implicit learning and total adaptation (equation at top derived in Appendix 7; red shading denotes a negative gain). In (A–C), we consider an alternative model. Suppose that implicit learning is immune to explicit strategy and varies independently across participants. This is equivalent to the SPE learning model. But in this case, the explicit system could respond to variability in implicit learning via another competition equation: x_e^ss = p_e(r – x_i^ss). Here, p_e is an explicit learning gain (it must be less than one to yield a stable system). In A, we show the negative relationship between implicit and explicit adaptation predicted by this alternate SPE learning model. In B, we show that when the explicit system responds to implicit variability (SPE learning), there is a negative relationship between total adaptation and explicit strategy. The equation at top is derived in Appendix 7. In C, we show that the SPE learning model yields a positive relationship between implicit learning and total adaptation (equation at top derived in Appendix 7). (G) We measured the relationship between explicit strategy and total adaptation in Exp. 3 (No PT Limit group). Total learning exhibits a positive correlation with explicit strategy. (H) Same concept as in G, but here we show the relationship between total learning and implicit adaptation. The patterns in G and H are consistent with the competition theory (compare with E and F).

Figure 5—source code 1. Figure 5 data and analysis code.


Figure 5—figure supplement 1. Relationships between implicit, explicit, and total learning indicate competition.


Data were analyzed across three experiments. In the left column, we report participants in the CR, IR-E, and IR-EI groups in Maresch et al., 2021. In the middle column, we report participants collapsed across the abrupt and stepwise 60° rotation periods in Experiment 1. In the right column, we report participants in the 60° rotation group in Tsay et al., 2021a. In each experiment, we analyzed implicit learning, explicit learning, and total adaptation. Implicit and explicit learning were estimated with exclusion (‘no aiming’) trials. (A–C) The relationship between implicit learning and total adaptation. (D–F) The relationship between explicit learning and total adaptation. (G–I) The relationship between implicit and explicit learning. All lines in A–I denote a linear regression. The associated R2 statistic is shown in each inset. All relationships were statistically significant (p < 0.05). Dots in each inset denote individual participants.
Figure 5—figure supplement 1—source code 1. Figure 5—figure supplement 1 data and analysis code.
Figure 5—figure supplement 2. Factors that weaken the correlation between implicit learning and total adaptation.


(A) At left, we reproduce the relationship between implicit learning and total adaptation in the No PT Limit group in Experiment 3. In the middle inset, the same analysis is shown for participants in the 30° group in Tsay et al., 2021a. At right, the same analysis is shown for participants in the stepwise 30° rotation period in Experiment 1. Relationships between implicit learning and total adaptation were not statistically significant (p > 0.05) at middle and right. In B–E, we explore factors that can weaken the relationship between implicit learning and total adaptation in the competition theory. The four factors are: (B) the total number of aftereffect trials used to measure implicit learning, (C) motor variability in the reach, (D) between-subject variability in strategy use, and (E) total strategy use in the subject population. At left in each inset, we conducted a power analysis. In this power analysis, n = 30 participants were simulated. Explicit strategies were randomly sampled. Implicit learning was then obtained via the competition equation. Implicit, explicit, and total learning were calculated for each simulated participant by averaging over a set number of trials. Simulations were repeated 40,000 times. The probability that a negative relationship (red line), positive relationship (green line), or no relationship (black line) occurred is shown in the left inset. In B, at left, we show that with fewer trials to measure implicit learning, the probability that an experiment will yield a statistically significant relationship between implicit learning and total adaptation decreases substantially. At right, we compare the total number of “no aiming” trials used to measure implicit learning in Exp. 3, Tsay et al., and Exp. 1 (stepwise). In C, at left, we show that increases in trial-to-trial reach variability (i.e., motor execution noise) dramatically reduce the probability that an experiment will produce a statistically significant relationship between implicit learning and total adaptation. At right, we analyze trial-to-trial variability during the no aiming period in each experiment. In D, at left, we show that little variability in strategy use across participants reduces the probability that an experiment will yield a negative relationship between implicit learning and total adaptation. At right, we show the standard deviation in explicit strategies across subjects in the three experiments. In (E), at left, we show that little overall strategy use in the subject population decreases the probability that an experiment will yield a negative relationship between implicit learning and total adaptation. At right, we compare explicit strategies across the three experiments. Statistics in C and E denote a one-way ANOVA.
Figure 5—figure supplement 2—source code 1. Figure 5—figure supplement 2 data and analysis code.
Figure 5—figure supplement 3. Correlations between explicit learning and total adaptation are more robust to between-subject implicit variability.


(A) Here, we show the correlation between explicit strategy and total adaptation in the 30° rotation group in Tsay et al., 2021a. (B) Same as A, but for the stepwise 30° rotation period in Experiment 1. In C–F, we show that implicit and explicit correlations with total adaptation can be weakened by four factors: (C) the total number of aftereffect trials used to measure implicit and explicit learning, (D) motor variability in the reach, (E) between-subject variability in strategy use, and (F) total strategy use in the subject population. At left in each inset, we conducted a power analysis. In this power analysis, n = 30 participants were simulated. Explicit strategies were randomly sampled. Implicit learning was then obtained via the competition equation. Implicit, explicit, and total learning were calculated for each simulated participant by averaging over a set number of trials. Simulations were repeated 40,000 times. At top, we show the probability over these iterations that a statistically significant positive relationship between explicit strategy and total adaptation (green lines) and a negative relationship between implicit learning and total adaptation (red lines) occur. At bottom, we calculated the average R2 value for the implicit-total and explicit-total regressions. Each point compares the two R2 values for each simulation condition above, with the unity line (black). In C, we show that more aftereffect trials improve the probability of obtaining statistically significant correlations, but the explicit-total correlation is stronger than the implicit-total correlation. In D, we show that less motor variability improves the probability of obtaining statistically significant correlations, but the explicit-total correlation is stronger than the implicit-total correlation.
In E, we show that greater subject-to-subject variability in strategy improves the probability of obtaining statistically significant correlations, but the explicit-total correlation is stronger than the implicit-total correlation. In F, we show that greater overall strategy improves the probability of obtaining statistically significant correlations, but the explicit-total correlation is stronger than the implicit-total correlation.
Figure 5—figure supplement 3—source code 1. Figure 5—figure supplement 3 data and analysis code.
Figure 5—figure supplement 4. Variance in implicit learning properties weakens the relationship between implicit learning and total adaptation in the competition theory.


Here, we consider two sources of variability: (1) subject-to-subject variability in explicit strategy, and (2) subject-to-subject variability in implicit learning. We simulate explicit strategies across 35 participants. The explicit strategies are identical in the left and right columns (labeled same explicit). They are sampled from a normal distribution (mean = 12°, SD = 4°). However, the left and right columns differ in terms of implicit variability. In the left column, there is no implicit variability across subjects: implicit learning was calculated using the competition theory with the same implicit learning gain for every participant (p_i = 0.8). In the right column, we added variability to this implicit learning gain. This represents the more realistic scenario in which implicit retention and error sensitivity vary across participants. The implicit learning gain was randomly sampled from a normal distribution (mean = 0.8, SD = 0.1). In A, we show the relationship between implicit learning and total adaptation in these toy cases. In B, we show the relationship between explicit learning and total adaptation. In C, we show the relationship between implicit and explicit learning. Note that the relationship between implicit learning and total adaptation (A, at right) was uniquely susceptible to variability (p = 0.225), whereas the other correlations remained strong and statistically significant (B and C, at right).
Figure 5—figure supplement 4—source code 1. Figure 5—figure supplement 4 analysis code.

We analyzed these predictions in the No PT Limit group in Exp. 3 (Appendix 7.4). Our observations matched the competition theory: greater explicit strategy was associated with greater total adaptation (Figure 5G, ρ = 0.84, p < 0.001), whereas greater implicit learning was associated with lower total adaptation (Figure 5H, ρ = −0.70, p < 0.001). We repeated these analyses in other datasets (Appendix 7.4) that measured implicit learning with no-aiming probe trials: (1) the 60° rotation groups in Experiment 1 (combined across abrupt and stepwise groups), (2) the 60° groups reported by Maresch et al., 2021 (combined across the CR, IR-E, and IR-EI groups), and (3) the 60° rotation group in Tsay et al., 2021a. These data matched the competition theory: negative implicit-explicit correlations (Figure 5—figure supplement 1G-I), positive explicit-total correlations (Figure 5—figure supplement 1D-F), and negative implicit-total correlations (Figure 5—figure supplement 1A-C).
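A quick simulation illustrates the predicted correlation pattern (parameter values are hypothetical; Pearson correlations are used here for simplicity, whereas the paper reports Spearman ρ):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, pi = 250, 30.0, 0.6  # hypothetical cohort size, rotation, and gain

# explicit strategies vary across participants; implicit learning
# follows the competition equation x_i^ss = p_i * (r - x_e^ss)
xe = rng.normal(12.0, 4.0, n)
xi = pi * (r - xe)
xT = xi + xe

corr_ie = np.corrcoef(xi, xe)[0, 1]  # implicit vs explicit: negative
corr_eT = np.corrcoef(xe, xT)[0, 1]  # explicit vs total:    positive
corr_iT = np.corrcoef(xi, xT)[0, 1]  # implicit vs total:    negative

# adding between-subject variability in p_i (differences in retention
# and error sensitivity) selectively weakens the implicit-total
# correlation, the caveat raised in the text
pi_var = rng.normal(pi, 0.1, n)
xi_v = pi_var * (r - xe)
corr_iT_var = np.corrcoef(xi_v, xi_v + xe)[0, 1]
```

With a common gain the three correlations are deterministic and match the signs observed in the data; once the gain itself varies across subjects, only the implicit-total correlation collapses toward zero.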

In summary, while an SPE learning model could exhibit negative correlations between implicit and explicit adaptation, it does not predict a negative correlation between steady-state implicit learning and total adaptation (nor a positive relationship between steady-state explicit strategy and total adaptation), as we observed in the data. The data were consistent with the competition theory, where the implicit system responds to variations in explicit strategy. However, there is a critical caveat. The predictions outlined above assumed that implicit learning properties (contained within pi) are the same across every participant. This is unlikely to be true, and variation in pi across subjects (e.g. changes in error sensitivity) will undermine some correlations in Figure 5, particularly the relationship between implicit learning and total adaptation. This phenomenon and past studies where it appears to occur are treated in Appendix 8.

Part 2: Competition with explicit learning can mask changes in the implicit learning system

Here, we show that in the competition model, implicit learning may undergo savings without changing its learning time course. Next, we limit preparation time to detect increases and decreases in implicit learning.

Two ways to interpret the implicit response in a savings paradigm

When participants are exposed to the same perturbation twice, they adapt more quickly the second time. This phenomenon is known as savings and is a hallmark of sensorimotor adaptation (Smith et al., 2006; Herzfeld et al., 2014; Zarahn et al., 2008). Multiple studies have attributed this process solely to changes in explicit strategy (Haith et al., 2015; Huberdeau et al., 2019; Morehead et al., 2015; Avraham et al., 2021; Huberdeau et al., 2015).

For example, in an earlier work (Haith et al., 2015), we trained participants (n = 14) to reach to one of two targets, coincident with an audio tone (Figure 6A). By shifting the displayed target approximately 300 ms prior to tone onset on a minority of trials (20%), we forced participants to execute movements with limited preparation time (Low preparation time; Figure 6A, middle). On all other trials (80%), the target did not switch, resulting in high preparation time movements (Figure 6A, left). We measured adaptation to a 30° rotation on high preparation time trials (Figure 6B, left) and low preparation time trials (Figure 6B, middle) across two separate exposures (Day 1 and Day 2).

Figure 6. Competition predicts changes in implicit error sensitivity without changes in implicit learning rate.


(A) Haith et al., 2015 instructed participants to reach to Targets T1 and T2 (right). Participants were exposed to a 30° visuomotor rotation at Target T1 only. Participants reached to the target coincident with a tone. Four tones were played with a 500 ms inter-tone interval. On most trials (80%), the same target was displayed during all four tones (left, High preparation time or High PT). On some trials (20%), the target switched approximately 300 ms prior to the fourth tone (middle, Low preparation time or Low PT). (B) On Day 1, participants adapted to a 30° visuomotor rotation (Day 1, black) followed by a washout period. On Day 2, participants again experienced a 30° rotation (Day 2, blue). At left, we show the reach angle expressed on High PT trials during Days 1 and 2. The dashed vertical line shows perturbation onset. At middle, we show the same but for Low PT trials. At right, we show the learning rate on High and Low PT trials during each block. (C) As an alternative to the rate measure shown at right in B, we calculated the difference between reach angles on Days 1 and 2. At left and middle, we show the learning curve differences for High and Low PT trials, respectively. At right, we show the difference in learning curves before and after the rotation. ‘Pre-rotation’ shows the average of Day 2 – Day 1 prior to rotation onset. ‘Post-rotation’ shows the average of Day 2 – Day 1 after rotation onset. (D) We fit a state-space model to the learning curves on Days 1 and 2, assuming that target errors drove implicit adaptation. Low PT trials captured the implicit system (blue). High PT trials captured the sum of implicit and explicit systems (green). The explicit trace (magenta) is the difference between the High and Low PT predictions. At right, we show error sensitivities predicted by the model. (E) Same as in D, but for a state-space model where implicit learning is driven by SPE, not target error. Model-predicted error sensitivities are shown. Error bars across all insets show mean ± SEM, except for the learning rate in B, which displays the median. Two-way repeated-measures ANOVAs were used in B, C, D, and E. For B and C, exposure number and preparation time condition were main effects. For D and E, exposure number and learning system (implicit vs explicit) were main effects. Significant interactions in B, C, and E prompted follow-up one-way repeated-measures ANOVAs (to test simple main effects). Statistical bars where two sets of asterisks appear (at left and right) indicate interactions. Statistical bars with one centered set show main effects or simple main effects. Statistics: n.s. means p > 0.05, *p < 0.05, **p < 0.01.

Figure 6—source code 1. Figure 6 data and analysis code.

To detect savings, we calculated the learning rate on low and high preparation time trials. Savings appeared to require high preparation time; learning rate increased during the second exposure on high preparation time trials, but not low preparation time trials (Figure 6B, right; two-way rm-ANOVA, preparation time by exposure number interaction, F(1,13)=5.29, p = 0.039; significant interaction followed by one-way rm-ANOVA across Days 1 and 2: high prep. time with F(1,13)=6.53, p = 0.024, ηp2=0.335; low preparation time with F(1,13)=1.11, p = 0.312, ηp2=0.079). To corroborate this rate analysis, we also measured savings via early changes in reach angle (first 5 rotation cycles) across Days 1 and 2 (Figure 6C, left and middle). Only high preparation time trials exhibited a statistically significant increase in reach angle, consistent with savings (Figure 6C, right; two-way rm-ANOVA, prep. time by exposure interaction, F(1,13)=13.79, p = 0.003; significant interaction followed by one-way rm-ANOVA across days: high prep. time with F(1,13)=11.84, p = 0.004, ηp2=0.477; low prep. time with F(1,13)=0.029, p = 0.867, ηp2=0.002).

Because explicit strategies can be suppressed by limiting movement preparation time under some conditions (Huberdeau et al., 2019; Fernandez-Ruiz et al., 2011; McDougle and Taylor, 2019), in our initial study we interpreted these data to mean that savings relied solely on time-consuming explicit strategies. Multiple studies have reached similar conclusions (Haith et al., 2015; Huberdeau et al., 2019; Morehead et al., 2015; Avraham et al., 2021; Huberdeau et al., 2015), suggesting that the implicit learning system is not improved by multiple exposures to a rotation.

However, the competition theory provides an alternate possibility: changes in the implicit learning system may occur but are hidden because of competition with explicit learning. To show this unintuitive phenomenon, we fit the competition model to individual participant behavior under the assumption that low preparation time trials relied solely on implicit adaptation, but high preparation time trials relied on both implicit and explicit adaptation. The model generated implicit (Figure 6D, blue) and explicit (Figure 6D, magenta) states that tracked the behavior well on high preparation time trials (Figure 6D, solid black line) and also low preparation time trials (Figure 6D, dashed black line).

Next, we considered the implicit and explicit error sensitivities estimated by the model, which are commonly linked to changes in learning rate (Coltman et al., 2019; Mawase et al., 2014; Lerner et al., 2020; Albert et al., 2021; Herzfeld et al., 2014). The model unmasked a surprising possibility: even though savings was observed on high preparation time trials but not low preparation time trials (Figure 6B&C), the model suggested that both the implicit and explicit systems exhibited a statistically significant increase in error sensitivity (Figure 6D, right; two-way rm-ANOVA, within-subject effect of exposure number, F(1,13)=10.14, p = 0.007, ηp2=0.438; within-subject effect of learning process, F(1,13)=0.051, p = 0.824, ηp2=0.004; exposure by learning process interaction, F(1,13)=1.24, p = 0.285).

In contrast, a model where the implicit system adapted to SPEs as opposed to target errors (the independence model) suggested that only the explicit system exhibited a statistically significant increase in error sensitivity (Figure 6E; two-way rm-ANOVA, learning process (i.e. implicit vs explicit) by exposure interaction, F(1,13)=7.016, p = 0.02; significant interaction followed by one-way rm-ANOVA across exposures: explicit system, F(1,13)=9.518, p = 0.009, ηp2=0.423; implicit system, F(1,13)=2.328, p = 0.151, ηp2=0.152).

In summary, when we reanalyzed our earlier data, the competition and independence theories suggested that our data could be explained by two contrasting hypothetical outcomes. If we assumed that implicit and explicit systems were independent, then only explicit learning contributed to savings, as we concluded in our original report. However, if we assumed that the implicit and explicit systems learned from the same error (competition model), then both implicit and explicit systems contributed to savings. Which interpretation is more consistent with measured behavior?

Competition with explicit strategy can alter measurement of implicit learning

The idea that implicit error sensitivity can increase without any change in total implicit learning (Figure 6) is not intuitive. What the competition model suggests is that when the explicit system increases its error sensitivity, as in Figure 6D, it leaves a smaller target error to drive implicit learning. However, despite this decrease in target error, low preparation time learning was similar on Days 1 and 2 (Figure 6B). Because we assumed that low preparation time learning relied on the implicit system, the competition theory required that the implicit system must have experienced an increase in error sensitivity to counterbalance the reduction in target error magnitude. In other words, though the increase in implicit error sensitivity did not increase total implicit learning, it still contributed to savings. That is, had implicit error sensitivity remained the same, low preparation time learning would have decreased on Day 2, and less overall savings would have occurred.

To understand how our ability to detect changes in implicit adaptation can be altered by explicit strategy, we constructed a competition map (Figure 7A). Imagine that we want to compare behavior across two timepoints or conditions. Figure 7A shows how changes in implicit error sensitivity (x-axis) and explicit error sensitivity (y-axis) both contribute to measured implicit aftereffects (denoted by map colors), based on the competition equation (note that the origin denotes a 0% change in error sensitivity relative to Day 1 adaptation in Haith et al., 2015). The left region of the map (cooler colors) denotes combinations of implicit and explicit changes that decrease implicit adaptation. The right region of the map (hotter colors) denotes combinations that increase implicit adaptation. The middle black region represents combinations that manifest as a perceived invariance in implicit adaptation (<5% absolute change in implicit adaptation).
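
The map's colors follow from the model's steady state. For standard state-space updates, the steady-state shared error is e* = r / (1 + b_i/(1 − a_i) + b_e/(1 − a_e)), and steady-state implicit adaptation is the implicit gain b_i/(1 − a_i) times e*. Below is a minimal sketch using hypothetical baseline retention and sensitivity values (not the study's fitted parameters; the article's Equation 8, not reproduced in this excerpt, serves this role):

```python
# Steady-state implicit adaptation in the competition model, computed from
# illustrative baseline parameters (hypothetical, not fit to data).
def implicit_ss(b_i, b_e, a_i=0.98, a_e=0.90, r=30.0):
    gi = b_i / (1.0 - a_i)       # implicit gain
    ge = b_e / (1.0 - a_e)       # explicit gain
    e_ss = r / (1.0 + gi + ge)   # steady-state shared target error
    return gi * e_ss

base = implicit_ss(b_i=0.05, b_e=0.10)  # reference condition

def percent_change(d_bi, d_be):
    """Map coordinates: fractional changes in implicit (x-axis) and
    explicit (y-axis) error sensitivity relative to the reference."""
    test = implicit_ss(b_i=0.05 * (1 + d_bi), b_e=0.10 * (1 + d_be))
    return 100.0 * (test - base) / base

# Raising only explicit sensitivity suppresses implicit adaptation...
assert percent_change(0.0, 1.0) < -5.0
# ...while the joint increases estimated for Haith et al., 2015
# (+41.5% implicit, +70.6% explicit) fall in the near-invariant region.
assert abs(percent_change(0.415, 0.706)) < 5.0
```

Whether a particular pair of changes lands in the black region depends on the baseline parameters; the point is that invariance of measured implicit learning does not imply invariance of implicit error sensitivity.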

Figure 7. Changes in implicit adaptation depend on both implicit and explicit error sensitivity.


(A) Here we depict the competition map. The x-axis shows change in implicit error sensitivity between reference and test conditions. The y-axis shows change in explicit error sensitivity. Colors indicate the percent change in implicit adaptation (measured at steady-state) from the reference to test conditions. Black region denotes an absolute change less than 5%. The map was constructed with Equation 8. (B) The map can be described in terms of five different regions. In Region A (matching decrease), implicit error sensitivity and total implicit adaptation both decrease in test condition. Region D is the same, but for increases in error sensitivity and total adaptation. In Region B (mismatching decrease), implicit learning decreases though its error sensitivity is higher or same. In Region E (mismatching increase), implicit learning increases though its error sensitivity is lower or same. Region C shows a perceived invariance where implicit adaptation changes less than 5%. (C) Row 1: effect of enhancing explicit learning. Row 2: total learning increases. Row 3: implicit and explicit learning shown in Blocks 1 and 2, where the only difference is a 100% increase in explicit error sensitivity. Row 4: change in implicit learning (Block 2–1). (D) Row 1: effect of suppressing explicit learning. Row 2: total learning decreases. Row 3: implicit and explicit learning shown in Blocks 1 and 2, where explicit error sensitivity decreases 100%. Row 4: implicit learning change (Block 2–1). (E) Row 1: model simulation for Haith et al., 2015. Row 2: Total learning increases. Row 3: implicit and explicit learning during Blocks 1 and 2 where implicit error sensitivity increases by 41.5% and explicit error sensitivity increases by 70.6%. Row 4: negligible change in implicit learning (Block 2–1). (F) Same as in E except here explicit strategy is suppressed during Blocks 1 and 2.

Figure 7—source code 1. Figure 7 analysis code.

This map defines several distinct areas (Figure 7B). Region A denotes a ‘matching’ decrease between implicit adaptation and error sensitivity; total implicit learning will decline across two separate learning periods due to a reduction in implicit error sensitivity. Region D is its counterpart: total implicit learning will increase across two separate learning periods due to an increase in implicit error sensitivity.

The other regions show less intuitive cases. In Region B, there is a ‘mismatching’ change in total implicit learning and implicit error sensitivity; here total implicit learning decreases even though implicit error sensitivity has increased or stayed the same. Likewise, in Region E, total implicit learning will increase across two separate learning periods, though implicit error sensitivity has decreased or stayed the same.

Indeed, we have already described these cases in Figure 2. For example, by enhancing the explicit system via coaching (Figure 2A–C), implicit learning decreased. This scenario is equivalent to moving up the y-axis of the map (Figure 7C, top). The same implicit system will decrease its output (Figure 7C, bottom) when normal levels of explicit strategy are increased (Figure 7C, middle). On the other hand, suppressing explicit strategy by gradually increasing the rotation (Figure 2D–G), or limiting reaction time (Figure 3N&O), increased implicit learning without changing any implicit learning properties. This scenario is equivalent to moving down the y-axis of the competition map (Figure 7D, top). The same implicit system will increase its output (Figure 7D, bottom) when normal levels of explicit strategy are then suppressed (Figure 7D, middle).

Now, let us consider the savings experiment in Figure 6. The competition theory predicted (Figure 6D) that explicit error sensitivity increased by approximately 70.6% during the second exposure, whereas the implicit system’s error sensitivity increased by approximately 41.5% (Figure 7E, middle). These changes in implicit and explicit error sensitivity describe a single point in the competition map, denoted by the gray circle in Figure 7E (top). This experiment occupies Region C, which indicates that despite the 41.5% increase in implicit error sensitivity, the total implicit learning will increase by less than 5% (Figure 7E, bottom). In other words, the competition model suggests the possibility that implicit learning improved between Exposures 1 and 2, but this change was hidden by a dramatic increase in explicit strategy (which suppressed implicit learning during Exposure 2).

To test this prediction, we can suppress explicit adaptation, thus eliminating competition (Figure 7F, middle). Such an intervention would move our experiment from Region C to Region D (Figure 7F, top), where we would observe a greater change in the implicit process (Figure 7F, bottom). We examined this possibility in a new experiment.

Savings in implicit learning is unmasked by suppression of explicit strategy

In Exp. 4 (Figure 8), participants experienced two 30° rotations, separated by washout trials with veridical feedback (mean reach angle over the last three washout cycles was 0.55 ± 0.47°, one-sample t-test against zero, t(9)=1.16, p = 0.28; not shown in Figure 8). To suppress explicit strategy, we restricted reaction time on every trial, which, in Exp. 3, greatly reduced explicit learning (Figure 3N; re-aiming decreased from 12° to about 2°). Under these reaction time constraints, participants exhibited reach latencies of approximately 200 ms (Figure 8B, top).

Figure 8. Removing explicit strategy reveals savings in implicit adaptation.

(A) Top: Low preparation time (Low PT) trials in Haith et al., 2015 used to isolate implicit learning. Middle: learning during Low PT in Blocks 1 and 2. Bottom: difference in Low PT learning between Blocks 1 and 2. (B) Similar to A, but here (Experiment 4) explicit learning was suppressed on every trial, as opposed to only 20% of trials. To suppress explicit strategy, we restricted reaction time on every trial. The reaction time during Blocks 1 and 2 is shown at top. At middle, we show how participants adapted to the rotation under constrained reaction time. At bottom, we show the difference between the learning curves in Blocks 1 and 2. These two periods were separated by washout cycles with veridical feedback (not shown). (C) Here, we measured savings in Haith et al. (20% of trials had reaction time limit) and Experiment 4 (100% of trials had reaction time limit). Top row: we quantify savings by fitting an exponential curve to each learning curve. Data are the rate parameter associated with the exponential. Left column shows group-level data (median). Right column shows individual participants. Bottom row: we quantify savings by comparing how Blocks 1 and 2 differed before perturbation onset (black), and after perturbation onset (purple and yellow). At left, error bars show mean ± SEM. At right, individual participants are shown. Error bars in A and B indicate mean ± SEM. Statistics in C show mixed-ANOVA (exposure number is within-subject factor, experiment type is between-subject factor). Significant interactions were observed both in rate (top) and angular (bottom) savings measures. Follow-up simple main effects were assessed via one-way repeated-measures ANOVA. Statistical bars where two sets of asterisks appear (at left and right) indicate interactions. Statistical bars with a centered set show simple main effects. Statistics: n.s. means p > 0.05, *p < 0.05, **p < 0.01.

Figure 8—source code 1. Figure 8 data and analysis code.


Figure 8—figure supplement 1. Limiting preparation time eliminates explicit strategy use.


(A) Participants were exposed to a 30° rotation. The perturbation schedule is shown at top. The reach angle on each cycle is shown in the middle. The preparation time on each cycle is shown at bottom. On each trial, we imposed a strict upper limit on reaction time in an attempt to suppress explicit strategy. Participants were separated into two groups. These groups had an identical perturbation schedule but received different instructions prior to the terminal no feedback period. In the Limit PT group (n = 21, red), participants were instructed to aim directly at the target without re-aiming. In the decay-only group (n = 12, black), participants were instructed to imagine the rotation was still present, and to continue trying to move the “imagined cursor” through the target. Change in reach angle in the Limit PT no feedback condition could be due to abandoning explicit strategy and also decay in a temporally-labile implicit process (instruction period was about 30 s in duration). Change in reach angle in the decay-only no feedback condition was intended to isolate any decay in a temporally-labile implicit process. (B) Change in reach angle due to instruction. Here, we subtracted the initial reach angle during the no feedback period from the total adaptation measured prior to instruction. We observed no statistically significant difference between the two groups (two-sample t-test) suggesting that change in reach angle during the no feedback period was almost entirely due to involuntary decay in a temporally-labile implicit process. Error bars indicate mean ± SEM.
Figure 8—figure supplement 1—source code 1. Figure 8—figure supplement 1 data and analysis code.

While limiting preparation time on only occasional trials revealed no savings in Haith et al., 2015 (Figure 8A, low preparation time on 20% of trials), inhibiting strategy use on every trial in Experiment 4 yielded the opposite outcome (Figure 8B). Low preparation time learning rates increased by more than 80% in Experiment 4 (Figure 8C top; mixed-ANOVA exposure number by experiment type interaction, F(1,22)=5.993, p = 0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p = 0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p = 0.045, ηp2=0.377). Statistically significant increases in reach angle were detected immediately following rotation onset in Experiment 4 (Figure 8B, bottom), but not in our earlier data (Figure 8C, bottom; mixed-ANOVA exposure number by experiment interaction, F(1,22)=4.411, p = 0.047; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p = 0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p = 0.008, ηp2=0.556).

In sum, when explicit learning was inhibited on every trial, low preparation time behavior showed savings (Figure 8B). But when explicit learning was inhibited less frequently, low preparation time behavior did not exhibit a statistically significant increase in learning rate (Figure 8A). The competition theory provided a possible explanation: an implicit system expressible at low preparation time exhibits savings, but these changes in implicit error sensitivity can be masked by competition with explicit strategy.

However, the savings we measured at limited preparation time may not be due solely to changes in implicit learning, but also to cached explicit strategies (Huberdeau et al., 2019; McDougle and Taylor, 2019). Indeed, when we limited preparation time in Exp. 3, participants still exhibited a small decrease (2.09°) in reach angle when we instructed them to stop aiming (Figure 3L, no aiming; Figure 3N and E, red). These small residual strategies could have contributed to the 8° reach angle measured early during the second rotation in Exp. 4 (Figure 8C, implicit difference, no comp.).

With that said, the ‘aiming angle’ we measured in the Limit PT group in Exp. 3 may overestimate the extent to which participants can use explicit strategy in our limited preparation time paradigm. That is, the decrease in reach angle we observed when participants were told to stop aiming (Figure 3L, no aiming) may be due to time-based decay in implicit learning (Neville and Cressman, 2018; Maresch et al., 2021) over the 30 s instruction period, as opposed to a voluntary reduction in strategy.

To test this alternate interpretation, we collected another limited preparation time group (n = 12, Figure 8—figure supplement 1A, decay-only, black). This time, however, participants were instructed that the perturbation was still present, and that they should continue to move the ‘imagined’ cursor through the target during the terminal no feedback period. Despite this instruction, reach angles decreased by approximately 2.1° (Figure 8—figure supplement 1B, black). Indeed, we detected no statistically significant difference between the change in reach angle in this decay-only group and that in the Limit PT group in Experiment 3 (Figure 8—figure supplement 1B; two-sample t-test, t(31)=0.016, p = 0.987).

This control experiment suggested that the ‘explicit strategies’ we measured in the Limit PT condition were more likely caused by time-dependent decay in implicit learning. Indeed, our Limit PT protocol may have eliminated explicit strategy altogether. This additional analysis lends further credence to the hypothesis that savings in Experiment 4 was primarily due to changes in the implicit system rather than cached explicit strategies.

Impairments in implicit learning contribute to anterograde interference

Exp. 4 suggested that the implicit system can exhibit savings. We next wondered whether these changes are bidirectional: can the implicit learning rate decrease? When subjects learn two opposing perturbations in sequence, their adaptation slows due to another hallmark of adaptation, anterograde interference.

In Experiment 5, we exposed two groups of participants to opposing visuomotor rotations of 30° and –30° in sequence. In one group, the perturbations were separated by a 5-min break (Figure 9A). In a second group, the break was 24 hr in duration (Figure 9B). We inhibited explicit strategies by strictly limiting reaction time. Under these constraints, participants executed movements at latencies near 200 ms (Figure 9A&B, middle, blue). These reaction times were approximately 50% lower than those observed when no reaction time constraints were imposed on participants, as in our earlier work (Lerner et al., 2020; Figure 9A&B, middle, green).

Figure 9. Removing explicit strategy reveals anterograde interference in implicit adaptation.


(A) Top: participants were adapted to a 30° rotation (A). Following a 5-min break, participants were then exposed to a –30° rotation (B). This A-B paradigm was similar to that of Lerner et al., 2020. Middle: to isolate implicit adaptation, we imposed strict reaction time constraints on every trial. Under these constraints, reaction time (blue) was reduced by approximately 50% over that observed in the self-paced condition (green) studied by Lerner et al., 2020. Bottom: learning curves during A and B in Experiment 5; under reaction time constraints, the interference paradigm produced a strong impairment in the rate of implicit adaptation. To compare learning during A and B, B period learning was reflected across the y-axis. Furthermore, the curves were temporally aligned such that an exponential fit to the A period and an exponential fit to the B period intersected when the reach angle crossed 0°. This alignment visually highlights differences in the learning rate during the A and B periods. (B) Here, we show the same analysis as in A but when exposures A and B were separated by 24 hr. (C) To measure the amount of anterograde interference on the implicit learning system, we fit an exponential to the A and B period behavior. Here, we show the B period exponential rate parameter divided by the A period rate parameter (values less than one indicate a slowing of adaptation). At left, group-level statistics are shown. At right, individual participants are shown. Data in the Limit PT (limited preparation time) condition in Experiment 5 are shown in blue. Data from Lerner & Albert et al. (no preparation time limit) are shown in green. A two-way ANOVA was used to test for differences in interference (preparation time condition (i.e. experiment type) was one between-subject factor; time elapsed between exposures (5 min vs 24 hr) was the other between-subject factor). Statistical bars indicate each main effect. Statistics: *p < 0.05, **p < 0.01.
Error bars in each inset show mean ± SEM.

Figure 9—source code 1. Figure 9 data and analysis code.

To assess changes in low preparation time learning, we measured the adaptation rate during each rotation period. In addition, we re-analyzed the adaptation rates obtained in our earlier work (Lerner et al., 2020) where participants were tested in a similar paradigm but without any reaction time constraints. While both low preparation time and high preparation time trials exhibited decreases in learning rate that lessened with the passage of time (Figure 9C; two-way ANOVA, main effect of time delay, F(1,50)=5.643, p = 0.021, ηp2=0.101), these impairments were greatly exacerbated by limiting preparation time (Figure 9C; two-way ANOVA, main effect of preparation time, F(1,50)=11.747, p = 0.001, ηp2=0.19). This result was unrelated to initial differences in error across rotation exposures; we obtained analogous results (see Materials and methods) when learning rate was calculated after the ‘zero-crossing’ in reach angle (two-way ANOVA, main effect of time delay, F(1,50)=4.23, p = 0.045, ηp2=0.067; main effect of prep. time, F(1,50)=8.303, p = 0.006, ηp2=0.132).
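
The rate measure used in this analysis can be approximated with a simple saturating-exponential fit. The sketch below uses a coarse grid search over the rate parameter (the study's actual fitting procedure may differ):

```python
import math

# Fit y(n) = A * (1 - exp(-rate * n)) by grid search over the rate; for each
# candidate rate, the best amplitude A has a closed-form least-squares solution.
def fit_rate(y, rates=None):
    rates = rates or [r / 1000.0 for r in range(1, 501)]
    best_rate, best_sse = rates[0], float("inf")
    for rate in rates:
        basis = [1.0 - math.exp(-rate * n) for n in range(len(y))]
        denom = sum(b * b for b in basis)
        A = sum(b * yi for b, yi in zip(basis, y)) / denom if denom else 0.0
        sse = sum((A * b - yi) ** 2 for b, yi in zip(basis, y))
        if sse < best_sse:
            best_rate, best_sse = rate, sse
    return best_rate

# Synthetic learning curves: interference slows relearning, so the fitted
# B-period rate divided by the A-period rate falls below 1.
slow = [25.0 * (1 - math.exp(-0.05 * n)) for n in range(60)]
fast = [25.0 * (1 - math.exp(-0.10 * n)) for n in range(60)]
assert fit_rate(slow) / fit_rate(fast) < 1.0
```

The interference measure in Figure 9C is this kind of rate quotient: values below one indicate that the second exposure was learned more slowly than the first.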

Thus, inhibiting explicit strategy via preparation time constraints revealed a strong and sustained anterograde deficit in implicit learning. Under normal reaction time conditions, adaptation rates were less impaired, suggesting that explicit strategies may have partially compensated and masked lingering deficits in the implicit system’s sensitivity to error.

Part 3: Limitations of the competition theory

The competition theory assumes that learning in the implicit system is driven by only one error. Here we show that this single error hypothesis is unlikely to be true in every condition. To demonstrate the theory’s limitations, we examine two earlier studies and speculate how the theory might be extended to account for these more sophisticated behaviors.

The implicit system may adapt to multiple target errors at the same time

In Mazzoni and Krakauer, 2006, we tested two sets of participants. In a no-strategy group, participants adapted to a standard 45° rotation (Figure 10A, blue, no-strategy, adaptation) followed by washout (Figure 10A, blue, no-strategy, washout). In a second group, participants made two initial movements with the rotation (Figure 10A, red, strategy, 2 movements no instruction). Then we coached subjects to aim toward a neighboring target (45° away), which entirely compensated for the rotation. Participants adopted the aiming strategy, bringing the primary target error to zero (Figure 10A, red, strategy, instruction). Curiously, even though the primary target error had now been eliminated, reaching movements gradually drifted beyond the primary target, overcompensating for the rotation. These involuntary changes implicated an implicit process.

Figure 10. Two visual targets create two implicit error sources.


(A) Data reported in Mazzoni and Krakauer, 2006. Blue shows error between primary target and cursor during adaptation and washout. Red shows the same, but in a strategy group that was instructed to aim to a neighboring target (instruction) to eliminate target errors, once participants experienced two large errors (two cycles no instruction). (B) The error between the cursor and the aimed target during the adaptation period. These curves are the same as in A except we use the aimed target rather than primary target, so as to better compare learning curves across groups. (C) The washout period reported in A. Here, error is relative to primary target, though in this case aimed and primary targets are the same. (D) We modeled behavior when implicit learning adapts to primary target errors e1. Note that the no-strategy learning group resembles data. However, strategy learning exhibits no drift because the implicit system has zero error. Note here that the primary target error of 0° is a 45° aimed target error in the strategy group. (E) Similar to D, except here the implicit system adapts to errors between the cursor and aimed target, termed e2. (F) In this model, the strategy group adapts to both the primary target error and the aimed target error (e1 and e2 at top). The no-strategy group adapts only to the primary target error. Learning parameters are identical across groups. (G) We show how aiming target and primary target errors evolve in the strategy group in F. (H) A potential neural substrate for implicit learning. The primary target error and aiming target error engage two different sub-populations of Purkinje cells in the cerebellar cortex. These two implicit learning modules combine at the deep nucleus. (I) Data reported in Taylor and Ivry, 2011. Before adaptation, subjects were taught to re-aim their reach angles. In the ‘instruction with target’ group, participants re-aimed during adaptation with the aid of neighboring aiming targets (top-left).
In the ‘instruction without target’ group, participants re-aimed during adaptation without any aiming targets, solely based on the remembered instruction from the baseline period. The middle shows learning curves. In both groups, the first two movements were uninstructed, resulting in large errors (two movements no instruction). Note in the ‘instruction with target’ group, there is an implicit drift as in A, but participants eventually reverse this by changing explicit strategy. There is no drift in the ‘instruction without target’ group. At right, we show the implicit aftereffect measured by telling participants not to aim (first no feedback, no aiming cycle post-adaptation). Greater implicit adaptation resulted from physical target. Error bars show mean ± SEM. Statistics: *p < 0.05, ***p < 0.001.

Figure 10—source code 1. Figure 10 data and analysis code.

When we compared the rate of learning with and without strategy in Mazzoni and Krakauer, 2006, we found that it was not different during the initial exposure to the perturbation (Figure 10B, gray, mean adaptation over rotation trials 1–24, Wilcoxon rank sum, p = 0.223). This statistical test led us to conclude in Mazzoni and Krakauer, 2006, that implicit adaptation was driven by a sensory prediction error that did not depend on the primary target and was not altered by explicit strategy.

However, there remained an unsolved puzzle. While the initial rates of adaptation were the same irrespective of strategy, adaptation diverged later in learning (Figure 10B, compare strategy and no-strategy curves after initial gray region; two-sample t-test, p < 0.005), with the no-strategy group exhibiting a larger aftereffect (see aftereffect in Figure 10C; two-sample t-test, p < 0.005). Might these late differences have been caused by participants in the strategy group abandoning their explicit strategy as it led to larger and larger errors? This possibility seemed unlikely. When we asked participants to stop using their aiming strategy and to move instead toward the primary target (Figure 10A, do not aim rotation on) their movement angle changed by 47.8° (difference between three movements before and three movements after instruction), indicating that they had continued to maintain the instructed explicit re-aiming strategy near 45°.

We wondered if interactions between implicit and explicit learning could help solve this puzzle. First, we considered the competition model that best described the experiments in Figures 1–7. In this model, the implicit system is driven exclusively by error with respect to the primary target (Equation 1) (Figure 10D, top, e1). While this model predicted learning in the standard no-strategy condition, it failed to account for the drift observed when participants were given an explicit strategy (Figure 10D, no learning in strategy group). This was not surprising. If implicit learning is driven by the primary target’s error, it will not adapt in the strategy group because participants explicitly reduce target error to zero at the start of adaptation (note that 45° in Figure 10D means a 0° primary target error).

We next considered the possibility that implicit learning was driven exclusively by an error with respect to the aimed target (target 2, Figure 10E, top, e2), as we concluded in our original study (Mazzoni and Krakauer, 2006). While this model correctly predicted non-zero implicit learning in the no-strategy and strategy groups, it could not account for any differences in learning that emerged later during the adaptation period (Figure 10E, bottom).

Finally, we noted that participants in the strategy group were given two contrasting goals. One goal was to aim for the neighboring target, whereas the other goal was to move the cursor through the primary target (both targets were always visible). Therefore, we wondered if participants in the strategy group learned from two distinct target errors (Figure 10F, top): e1, the error of the cursor with respect to target 1, and e2, the error of the cursor with respect to target 2. In contrast, participants in the no-strategy group attended solely to the primary target, and thus learned only from the error between the cursor and target 1:

x_{i,1}(n+1) = a_i x_{i,1}(n) + b_i e_1(n)
x_{i,2}(n+1) = a_i x_{i,2}(n) + b_i e_2(n)    (6)

These two modules then combined to determine the total amount of implicit learning (i.e., x_i = x_{i,1} + x_{i,2}).

Interestingly, when we applied the dual target error model (Equation 6) to the strategy group, and the single target error model (Equations 1 and 3) to the no-strategy group, the same implicit learning parameters (a_i and b_i) closely tracked the observed group behaviors (black model in Figure 10B). These models correctly predicted that initial learning would be similar across the strategy and no-strategy conditions but would diverge later during adaptation (Figure 10F). How was this possible?

In Figure 10G, we show how the primary target error and aiming target error evolved over time in the instructed strategy group. Initially, strategy reduces primary target error to zero (Figure 10G, primary target error). Thus, early in learning, the implicit system is driven predominantly by aiming target error. For this reason, initial learning will appear similar to the no-strategy group which also adapts to only one error. However, as the error with respect to the aimed target decreases, error with respect to the primary target increases but in the opposite direction (Figure 10G; see schematic in Figure 10F). Therefore, the primary target error opposes adaptation to the aiming target error. This counteracting force causes implicit adaptation to saturate prematurely. Hence, participants in the no-strategy group, who do not experience this error conflict, adapt more.
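
The dynamics described in this paragraph can be sketched directly from Equation 6, with hypothetical parameters and a simplified geometry (primary target at 0°, aimed target at 45°, and a 45° rotation fully offset by the instructed aim):

```python
# Dual target error model for the strategy group (Equation 6), with
# hypothetical parameters. Two implicit modules adapt to e1 (primary target)
# and e2 (aimed target); their sum drives the drift of the cursor.
def simulate_strategy(a_i=0.98, b_i=0.05, rot=45.0, n_trials=100):
    x1, x2 = 0.0, 0.0       # implicit modules for e1 and e2
    aim = rot               # instructed re-aim exactly offsets the rotation
    drift = []
    for _ in range(n_trials):
        reach = aim + x1 + x2    # explicit aim plus total implicit learning
        cursor = reach - rot     # cursor position after the rotation
        e1 = 0.0 - cursor        # primary target error (opposes the drift)
        e2 = rot - cursor        # aimed target error (drives the drift)
        x1 = a_i * x1 + b_i * e1
        x2 = a_i * x2 + b_i * e2
        drift.append(cursor)     # overshoot past the primary target
    return drift

drift = simulate_strategy()
# Drift grows from zero but saturates near ~19 deg with these parameters,
# well short of the ~32 deg asymptote that aimed-target error alone would give.
assert 0.0 < drift[-1] < 45.0
```

The premature saturation arises because e1 and e2 always differ by exactly the rotation, so they pull in opposite directions once the cursor drifts past the primary target.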

It is important, however, to note a limitation in these analyses. Our earlier study did not employ the standard conditions used to measure implicit aftereffects: that is, instructing participants to aim directly at the target and removing any visual feedback. Thus, the proposed model relies on the assumption that differences in washout were primarily related to the implicit system. These assumptions need to be tested more completely in future experiments.

In summary, the conditions tested by Mazzoni and Krakauer show that the simplistic idea that adaptation is driven by only one target error, or only one SPE, cannot be true in general (Tsay et al., 2021b). We propose a new hypothesis that when people move a cursor to one visual target, while aiming at another visual target, cursor error with respect to each target contributes to implicit learning. When one target error conflicts with the other target error, the implicit learning system may exhibit an attenuation in total adaptation.

This experiment alone does not reveal the nature of aiming target error. That is, in the strategy group, the error between the aim direction and the cursor is both an SPE and a target error (because participants are aiming at a neighboring target). We explore this distinction in the next section.

The persistence of sensory prediction error, in the absence of target error

Our analysis in Figure 10A–G suggested that when participants see two targets, one to aim toward with their hand and one to move the cursor to, the landmarks can act as two different target errors. To what extent do these errors depend on the target’s physical presence in the workspace? Taylor and Ivry, 2011 tested this idea, repeating the instruction paradigm used by Mazzoni and Krakauer, though with nearly four times the number of adaptation trials (Figure 10I, instruction with target, black). Interestingly, while the reach angle exhibited the same implicit drift described by Mazzoni and Krakauer, with many more trials participants eventually counteracted this drift by modifying their explicit strategies, bringing their target error back to zero (Figure 10I, black). At the end of adaptation, participants exhibited large implicit aftereffects when instructed to stop aiming (Figure 10I, right, aftereffect; t(9)=5.16, p < 0.001, Cohen’s d = 1.63).

In a second experiment, participants were taught how to re-aim their reach angles during an initial baseline period, but during adaptation itself, they were not provided with physical aiming targets (Figure 10I, instruction without target). In this case, only SPEs (not a target error) could drive implicit learning towards the aimed location. Even without physical aiming landmarks, participants immediately eliminated error at the primary target after being instructed to re-aim (Figure 10I, middle, yellow). Curiously, without the physical aiming target, these participants did not exhibit an implicit drift in reach angle at any point during the adaptation period and exhibited only a small implicit aftereffect during the washout period (Figure 10I, right, t(9)=3.11, p = 0.012, Cohen’s d = 0.985). In fact, the aftereffect was approximately three times larger when participants aimed towards a physical target during adaptation than when this target was absent (Figure 10I, right, aftereffect; two-sample t-test, t(18)=2.85, p = 0.012, Cohen’s d = 0.935).

A target error (competition) model is consistent with some of these results, but not all. The model correctly predicts that when only a single target is present, performance during adaptation will not exhibit a drift, even though people are aiming. However, it does not explain why this condition still leads to the small aftereffect. Further, with two targets, it correctly predicts that adaptation will drift, as in Mazzoni and Krakauer, but it does not explain how this is eliminated late during adaptation; this reversal in drift would seem to indicate a compensatory and gradual reduction in explicit strategy (Taylor and Ivry, 2011; McDougle et al., 2015; Taylor et al., 2014).

Together, the data suggested a remarkable depth to the implicit system’s response to error. While implicit learning was greatest in response to target error, removing the physical target still permitted SPE-driven learning, albeit to a smaller degree. Whether this aiming-related error is both a target error and an SPE occurring together, or solely an SPE enhanced by a salient visual stimulus, remains unknown.

Discussion

Sensorimotor adaptation relies on an explicit process shaped by intention (Taylor et al., 2014; Hwang et al., 2006), and an implicit process driven by unconscious correction (Morehead et al., 2017; Mazzoni and Krakauer, 2006; Kim et al., 2018). Here, we examined the possibility that these two parallel systems can become entangled when they respond to a common error source: target (i.e. task) error (Leow et al., 2020; Kim et al., 2019). The data suggested that this coupling resembles a competition by which enhancing the explicit system’s response rapidly depletes error, decreasing the driving force for implicit adaptation. Thus, providing instructions on how to reduce errors enhances the explicit system, but comes at the cost of robbing the implicit system of what it needs to adapt.

This simple rule explained why the implicit system can operate in three modes: one that appears insensitive to perturbation magnitude, another that scales with the perturbation’s size, and a third that exhibits non-monotonic behavior (Figure 1). It also predicted that priming or suppressing explicit awareness can inversely change implicit adaptation (Figure 2). As a result, subjects who use strategies inadvertently suppress their implicit learning (Figures 3–5). This suppression can be so strong that improvements in implicit learning (e.g. savings) are masked by dramatic upregulation in strategic learning (Figures 6–8).

The task-error driven implicit system likely exists in parallel with other implicit processes (Leow et al., 2020; Kim et al., 2019; Morehead and Orban de Xivry, 2021). For example, in cases where primary target errors are eliminated, small amounts of implicit adaptation persist (Figure 10). These residual changes are likely due to sensory prediction errors (Mazzoni and Krakauer, 2006; Leow et al., 2020; Taylor and Ivry, 2011; Kim et al., 2019) as well as other target errors that remain in the workspace (Figure 10I). When these error sources oppose one another, competition between parallel implicit learning modules may inhibit the overall implicit response (Figure 10A–C).

In a broader sense, these competitive interactions extend beyond implicit and explicit processes to other parallel neural circuits that respond to a common error. Changes in one neural circuit’s response to error may be indirectly driven, or hidden, by a parallel circuit. Thus, competition may lead to long-range interactions between neuroanatomical regions that subserve separate neural processes. For example, strategic learning systems housed within the cortex (Shadmehr et al., 1998; Milner, 1962; Gabrieli et al., 1993) may exert indirect changes on a subcortical structure like the cerebellum, which is widely implicated in subconscious adaptation (Tseng et al., 2007; Donchin et al., 2012; Smith and Shadmehr, 2005; Izawa et al., 2012; Wong et al., 2019; Becker and Person, 2019; Morton and Bastian, 2006).

Flexibility in the implicit response to error and its contribution to savings

When two similar perturbations are experienced in sequence, the rate of relearning is enhanced during the second exposure (Haith et al., 2015; Coltman et al., 2019; Mawase et al., 2014; Zarahn et al., 2008; Kording et al., 2007). This hallmark of memory (MacLeod, 1988; Ebbinghaus, 1885) is referred to as savings, which is often quantified based on differences in the learning curves for each exposure (Haith et al., 2015; Morehead et al., 2015), or the rate of adaptation (Kitago et al., 2013). These conventions are based on an underlying assumption: when a learning system is enhanced, its total adaptation will also change. Here, we showed that this intuition is incorrect.

The state space model (Smith et al., 2006; Albert and Shadmehr, 2018; Thoroughman and Shadmehr, 2000) quantified behavior using two processes: learning and forgetting. This model described savings as a change in sensitivity to error (Coltman et al., 2019; Mawase et al., 2014; Herzfeld et al., 2014). When similar errors are experienced on consecutive trials, the brain becomes more sensitive to their occurrence and responds more strongly on subsequent trials (Albert et al., 2021; Herzfeld et al., 2014; Leow et al., 2016). Generally, as error sensitivity increases, so too does the rate at which we adapt to the perturbation (e.g. High PT trials in Figure 6). However, under certain circumstances, changes in one’s implicit sensitivity to error may not lead to differences in measured behavior (e.g. Low PT trials in Figure 6).

The reason is competition. When strategy is enhanced, it reduces the error available for implicit learning. Therefore, although the implicit system may become more sensitive to error, this increase in sensitivity is canceled out by the decrease in error size.
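
This steady-state trade-off can be sketched with a minimal two-state simulation. This is an illustrative sketch, not the authors' fitted model: all parameter values, and the assumption that the explicit state follows the same retention-plus-learning update, are hypothetical.

```python
# Minimal sketch of competition between implicit and explicit learning.
# Both states update from the same residual target error, so increasing
# explicit error sensitivity (b_e) depletes the error that drives the
# implicit state. All parameter values are illustrative assumptions.

def simulate(rotation, a_i=0.95, b_i=0.1, a_e=0.9, b_e=0.05, n_trials=1000):
    x_i, x_e = 0.0, 0.0
    for _ in range(n_trials):
        error = rotation - x_i - x_e    # shared target error
        x_i = a_i * x_i + b_i * error   # implicit: retention + learning
        x_e = a_e * x_e + b_e * error   # explicit: retention + learning
    return x_i, x_e

imp_low, _ = simulate(rotation=30.0, b_e=0.05)   # weak strategy
imp_high, _ = simulate(rotation=30.0, b_e=0.4)   # strong strategy
print(round(imp_low, 1), round(imp_high, 1))     # -> 17.1 8.6
```

In this sketch the implicit error sensitivity b_i never changes, yet raising b_e roughly halves steady-state implicit learning, because the implicit steady-state is proportional to the residual error left over by the explicit strategy.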

For example, recent lines of work have suggested that increases in learning rate depend solely on the explicit recall of past actions. Implicit adaptation does not seem to contribute to faster re-learning, whether implicit learning is estimated via reported strategies (Morehead et al., 2015), or by intermittently restricting movement preparation time (Haith et al., 2015; Huberdeau et al., 2019; Figure 6). These results suggested that implicit processes do not show savings. Our data suggest a different possibility. When we limited reaction time on all trials in Experiment 4, thus suppressing explicit contributions to behavior, we found that the implicit system exhibited savings (Figure 8). The disconnect between studies that have detected changes in both implicit and explicit learning rates (Leow et al., 2020; Yin and Wei, 2020; Albert et al., 2021), versus studies that have only observed changes in explicit learning (Haith et al., 2015; Huberdeau et al., 2019; Morehead et al., 2015; Avraham et al., 2020; Avraham et al., 2021), can be resolved by the competition equation (Equation 4).

The competition equation links steady-state implicit learning to both implicit and explicit learning properties (Figure 7). When both implicit and explicit systems become more sensitive to error, the explicit response can hide changes in the implicit response (Figure 7B, Region C). Moreover, a dramatic enhancement in explicit adaptation could lead to a decrease in implicit learning even when implicit error sensitivity has increased (Figure 7B, Region B). Indeed, this prediction can explain cases where re-exposure to a rotation increases explicit strategies but attenuates implicit learning (Huberdeau et al., 2019; Avraham et al., 2021; Wilterson and Taylor, 2021). For example, in a recent study by Huberdeau et al., 2019, seven exposures to a rotation dramatically enhanced the strategic learning system, but simultaneously attenuated implicit learning. Prolonged multi-day exposure to a rotation appears to have a similar outcome (Wilterson and Taylor, 2021).

It is critical to distinguish between cases where implicit learning is indirectly reduced by increases in explicit strategy, versus contexts that lead to direct impairments in the implicit system’s sensitivity to error. For example, when two opposing perturbations are experienced sequentially, the response to the second exposure is impaired by anterograde interference (Sing and Smith, 2010; Caithness et al., 2004; Smith et al., 2006; Miall et al., 2004). Recently, we linked these impairments in learning rate to a transient reduction in error sensitivity which recovers over time (Lerner et al., 2020). Here, we limited reaction time to try to isolate the implicit contributions to this impairment. Impairments in learning at low preparation time were long-lasting, persisting even after 24 hr, and exceeded those measured at normal movement preparation times (Figure 9C). These results suggested that less-inhibited explicit strategies may sometimes compensate, at least in part, for lingering deficits in implicit adaptation (Leow et al., 2020; Huberdeau et al., 2019). Our analysis in Figure 9, however, compares Exp. 5 to our earlier work in Lerner et al., 2020, where we did not tease apart implicit and explicit learning. Thus, future work needs to test these ideas more carefully.

There is a possible limitation in this interpretation. Recent studies have demonstrated that with multiple exposures to a rotation, explicit responses can be expressed at lower reaction times: a process termed caching (Huberdeau et al., 2019; McDougle and Taylor, 2019). Thus, changes in low preparation time adaptation commonly ascribed to the implicit system may be contaminated by cached explicit strategies. This possibility seems unlikely to have altered our results. First, it is not clear why caching would occur in Experiment 4, but not our earlier study in Haith et al., 2015; Figure 8; these earlier data implied that caching remains limited with only two exposures to a rotation (at least during the initial exposure to the second rotation over which savings was assessed). Nevertheless, to test the caching hypothesis, we measured explicit re-aiming under limited preparation time conditions in Experiment 3. We found that our method restricted explicit re-aiming to only 2°, compared to about 12° in the standard condition (Figure 3N). Moreover, this 2° decrement in reach angle was more likely due to forgetting in implicit learning (Neville and Cressman, 2018; Hadjiosif and Smith, 2015; Alhussein et al., 2019; Hosseini et al., 2017; Joiner et al., 2017; Zhou et al., 2017). That is, a similar 2° decrease in reach angle occurred over the 30 s instruction period, even when participants were not told to stop aiming (Figure 8—figure supplement 1). Thus, while caching appears to have played little role here, these findings should still be interpreted cautiously. It is critical that future studies investigate how caching varies across experimental methodologies, and how cached strategies interact with implicit learning. In addition, such experiments should dissociate these cached explicit responses from associative implicit memories that may be rapidly instantiated in the appropriate context.

Competition-driven enhancement and suppression of implicit adaptation

The competition theory cautions that increases or decreases in implicit learning do not necessarily imply that the implicit system has altered its response to error. That is, changes in implicit learning may occur indirectly through competition with explicit strategies.

For example, when participants are coached about a visuomotor rotation prior to its onset, their explicit strategies are greatly enhanced (Neville and Cressman, 2018; Benson et al., 2011). These increases in explicit strategy are coupled to decreases in implicit adaptation (Figure 2). A similar phenomenon is observed in other experiments where participants report their strategy using visual landmarks. In such paradigms, increased reporting frequency leads to increased explicit strategy, but decreased implicit learning (Maresch et al., 2021; Bromberg et al., 2019; de Brouwer et al., 2018). Subjects themselves exhibit substantial variations in strategic learning, leading to negative individual-level correlations between implicit and explicit learning (Neville and Cressman, 2018; Benson et al., 2011; Fernandez-Ruiz et al., 2011; Figure 3).

The competition theory helps to reveal the input that drives implicit learning. This competitive relationship (Equation 4) naturally arises when implicit systems are driven by errors in task outcome (Equation 1). We can observe these negative interactions not solely when enhancing explicit strategy, but also when suppressing re-aiming. For example, in cases where perturbations are introduced gradually, thus reducing conscious awareness, implicit “procedural” adaptation appears to increase (Yin and Wei, 2020; Saijo and Gomi, 2010; Kagerer et al., 1997; Figure 2, Figure 2—figure supplement 3, and Appendix 5). Similarly, when participants are required to move with minimal preparation time, thus suppressing time-consuming explicit re-aiming (Haith et al., 2015; Fernandez-Ruiz et al., 2011; McDougle and Taylor, 2019), the total extent of implicit adaptation also appears to increase (Figure 3O; Albert et al., 2021; Fernandez-Ruiz et al., 2011).

Although the implicit system varies with experimental conditions, a common phenomenon is its invariant response to changes in rotation size (Morehead et al., 2017; Neville and Cressman, 2018; Bond and Taylor, 2015; Tsay et al., 2021a; Kim et al., 2018). For example, in the Neville and Cressman, 2018 data examined in Figure 1, total implicit learning remained constant despite tripling the rotation’s magnitude. While this saturation in implicit learning is sometimes due to a restriction in implicit adaptability (Morehead et al., 2017; Kim et al., 2018), in other cases this rotation-insensitivity may have another cause entirely: competition. That is, when rotations increase in magnitude, rapid scaling in the explicit response may prevent increases in total implicit adaptation. In the competition theory, implicit learning is driven not by the rotation, but by the residual error that remains between the rotation and explicit strategy. Thus, when we used gradual rotations to reduce explicit adaptation (Experiment 1), prior invariance in the implicit response was lifted: as the rotation increased, so too did implicit learning (Salomonczyk et al., 2011; Figure 1I). The competition theory readily described these two implicit learning phenotypes: saturation and scaling (Figure 1G&L). Furthermore, it also provided insight as to why implicit learning can even exhibit a non-monotonic response, as in Tsay et al., 2021a.

With that said, changes in implicit learning occur not solely due to error-based competition, but also due to variations in implicit learning properties such as error sensitivity. For example, Morehead et al., 2017 show that total implicit learning paradoxically decreases when rotations exceed about 90°. A possible cause is error sensitivity, which declines as errors become larger (Kim et al., 2018; Marko et al., 2012; Wei and Körding, 2009). Because no aiming was permitted in their study, steady-state errors were >80°, which would dramatically reduce error sensitivity. Reductions in error sensitivity could contribute to the non-monotonic phenotype we described in Tsay et al., 2021a. On the other hand, Tsay et al. permitted aiming, so steady-state errors were only about 5° in the 90° rotation group. These residual errors would not be associated with a dramatic reduction in error sensitivity, so error-based competition seems a more likely mechanism (Figure 1Q). In addition, note that total implicit learning varies strongly with error but not error sensitivity; the implicit learning gain, p_i = b_i(1 − a_i + b_i)^(−1), responds weakly to changes in b_i (see Appendix 4). Thus, large changes in total implicit learning are much more likely driven by a competition for error than by changes in implicit error sensitivity (Appendix 6.6 provides additional comparisons between Morehead et al. and Tsay et al.).
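
The gain's weak dependence on b_i can be checked with a short numeric sketch; the retention factor a_i used here is an illustrative value, not one fitted to data.

```python
# The implicit learning gain p_i = b_i / (1 - a_i + b_i) changes slowly
# with implicit error sensitivity b_i. The value a_i = 0.9 is an
# illustrative assumption.

def implicit_gain(a_i, b_i):
    return b_i / (1.0 - a_i + b_i)

a_i = 0.9
gains = {b_i: round(implicit_gain(a_i, b_i), 3) for b_i in (0.05, 0.1, 0.2, 0.4)}
print(gains)  # an 8-fold increase in b_i raises the gain less than 3-fold
```

By contrast, the residual error entering the implicit update scales directly with the rotation minus the explicit strategy, which is why error-based competition can move total implicit learning far more than plausible changes in b_i.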

In addition, there may be other ways to cast the adaptation model that also produce competition between implicit learning and explicit strategy. Here, implicit and explicit systems are treated as parallel states that adapt to the same error. A recent inference-based model of motor adaptation (Heald et al., 2021) suggests the possibility that implicit and explicit systems participate in a credit assignment problem: with the explicit state estimating the external perturbation, and the implicit state estimating the mismatch between vision and proprioception. This inference model will also produce a competition because both states attempt to sum to the total state feedback. When more credit is assigned to the external perturbation, explicit adaptation will increase and implicit adaptation will decrease. All in all, this model will produce phenotypes similar to the competition equation, given that both describe a competitive learning process.

Variations in individual learning unveil competition between implicit and explicit processes

Individuals exhibit substantial variation in how they adapt to rotations (Miyamoto et al., 2020; Fernandez-Ruiz et al., 2011; Tsay et al., 2021d). For example, in Experiments 1–3, we observed that individuals who relied more on explicit strategy inadvertently suppressed their own implicit learning. In one prime example, Miyamoto et al., 2020 exposed participants to sum-of-sines rotations. Curiously, participants with more vigorous explicit responses to the perturbation exhibited less vigorous implicit learning. In a second case, Fernandez-Ruiz et al., 2011 observed that increases in movement preparation time helped participants adapt more rapidly, but led to reductions in aftereffects. As a third example, when Bromberg et al., 2019 measured eye movements during adaptation, participants who tended to look toward their re-aiming locations not only exhibited greater explicit strategies, but less implicit adaptation.

These results suggest that a subject’s strategy suppresses their implicit learning (Fernandez-Ruiz et al., 2011). To explain these individual-level correlations, Miyamoto et al., 2020 suggested that there may be an intrinsic relationship between implicit and explicit sensitivity to error: when an individual’s explicit error sensitivity is high, their implicit error sensitivity is low. Here, our results describe another way to account for a similar observation (Figure 3). In Exps. 2 and 3, we used the competition equation (Equation 4) to predict an individual’s implicit adaptation from their measured explicit strategy, assuming each participant had the same sensitivity to error. This equation could accurately predict the negative relationship between implicit and explicit learning. Thus, negative individual-level correlations between implicit and explicit adaptation can arise from variation in strategy, even under an extreme scenario where implicit error sensitivity is constant across participants.

There are alternate ways that such negative correlations between implicit and explicit learning might arise. For example, here we described an implicit-centered competition equation where explicit strategies suppress implicit learning. The opposite is also possible; implicit learning might be immune to explicit strategy, but strategies respond to variation in implicit learning. These contrasting possibilities both predict negative relationships between implicit learning and explicit strategy but diverge in how total adaptation should vary with implicit and explicit states (Figure 5). When we tested these ideas in Experiment 3, our data were highly consistent with the competition model: increases in total learning were associated with greater strategy, but less implicit learning (Figure 5G&H). We observed similar phenomena across three additional studies (Figure 5—figure supplement 1). Thus, in cases where implicit learning is dominated by target errors, greater total adaptation may be supported by less implicit learning. Note, however, that negative correlations at the individual-level are more nuanced. Variation in implicit learning properties will weaken the relationship between implicit learning and total adaptation (Appendices 7 and 8). Further, in conditions with enhanced SPE learning (e.g. multiple visual landmarks), these correlations can easily be invalidated.

These results imply that implicit learning responds to variations in explicit strategy, but strategies are immune to implicit learning. A similar phenomenon was noted by Miyamoto et al., 2020, using structural equation modeling. This unidirectional causality, however, is not true in general. For example, early during learning, it is common that explicit strategies increase, peak, and then decline. That is, when errors are initially large, strategies increase rapidly. But as implicit learning builds, the explicit system’s response can decline in a compensatory manner (Taylor and Ivry, 2011; McDougle et al., 2015; Taylor et al., 2014). This dynamic phenomenon can also occur in the competition theory, where both implicit and explicit systems respond to target error (Figure 6D). But in many cases, a second error source may drive this behavioral phenotype. That is, in cases with aiming landmarks (Taylor and Ivry, 2011; McDougle et al., 2015; Taylor et al., 2014), errors between the cursor and primary target can be eliminated, but implicit learning persists. This implicit learning is likely driven by SPEs and target errors that remain between the cursor and aiming landmark (Taylor and Ivry, 2011). Persistent implicit learning is counteracted by decreasing explicit strategy to avoid overcompensation. In sum, competition between implicit learning and explicit strategy is complex. Both systems can respond to one another in ways that change with experimental conditions.

Comparisons to invariant error-clamp experiments

The competition and independence models described here apply solely to standard visuomotor rotations where target errors decrease throughout the adaptation process. Another popular visuomotor paradigm is an invariant error-clamp: experiments where the target error is fixed to a constant value, noncontingent on the participant’s movement. In this paradigm, implicit adaptation reaches a ceiling whose value varies somewhere between 15° (Morehead et al., 2017) and 25° (Kim et al., 2018) and does not change with rotation size. It is important not to conflate this rotation-invariant saturation with the implicit saturation phenotype we explored with the competition model in our Neville and Cressman, 2018 analysis (Figure 1G). The ceiling in the invariant error-clamp paradigm appears to be due to an upper bound on implicit corrections (Kim et al., 2018). The saturation phenotype in Figure 1G is due to implicit competition with explicit strategy.

In invariant error-clamp studies, there is no explicit strategy. In such a case, the competition and independence models are equivalent. However, the models encoded in Equations (4) and (5) only describe the standard rotation learning conditions considered in our Results. When there is no explicit strategy, these models predict implicit learning via: x_i^ss = b_i(1 − a_i + b_i)^(−1) r. In an error-clamp study, however, the correct model would be x_i^ss = b_i(1 − a_i)^(−1) r. These equations differ in their implicit learning gains: b_i(1 − a_i)^(−1) for the constant error-clamp and b_i(1 − a_i + b_i)^(−1) for standard rotations. This has critical implications. For example, in an error-clamp condition, for a_i = 0.98 and b_i = 0.3, the state-space model predicts an implicit steady-state of 15 times the imposed rotation, r. In other words, implicit adaptation would need to exceed the rotation size by more than an order of magnitude to reach its steady-state: a 5° error-clamp would require 75° of implicit learning to reach a dynamic steady-state, and a 30° clamp would require 450°. In sum, error-clamp rotations require implicit learning that cannot reach the dynamic steady-state described by the state-space model. For these reasons, the steady-states reached in error-clamp studies are likely caused by another mechanism: the ceiling effect shown in Morehead et al., 2017 and Kim et al., 2018. In a standard rotation, by contrast, implicit learning must be less than the rotation size (proportional to the difference between rotation size and explicit strategy, with a proportionality constant between 0.6 and 0.8 in the data sets we consider here). Under these conditions, the dynamic steady-state described by the competition model is attainable.
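
The contrast between the two gains can be laid out in a short sketch, using the parameter values quoted in the text (a_i = 0.98, b_i = 0.3):

```python
# Steady-state implicit gains for the two paradigms, using the example
# parameters from the text (a_i = 0.98, b_i = 0.3).

a_i, b_i = 0.98, 0.3

clamp_gain = b_i / (1.0 - a_i)           # invariant error-clamp: error never shrinks
rotation_gain = b_i / (1.0 - a_i + b_i)  # standard rotation: learning consumes the error

print(round(clamp_gain, 2))     # 15.0: a 5 deg clamp would demand 75 deg of learning
print(round(rotation_gain, 2))  # 0.94: implicit learning stays below the rotation size
```

The difference arises because in a clamp the driving error is fixed at r regardless of learning, so only forgetting (1 − a_i) limits the steady-state, whereas in a standard rotation each degree of learning removes a degree of error.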

A separate but related question is what sets the implicit system’s upper limit, and whether it varies across experimental settings. We suspect it does. For example, in Morehead et al., 2017 the implicit system was limited to 10–15° of learning, but in Kim et al., 2018 this limit increased to 20–25°. It may be that these limits relate to a reliance on proprioceptive error signals (Tsay et al., 2021c): implicit learning may be ‘halted’ by some unknown mechanism when the hand deviates too far from the target. This would make sense, as participants are told to move their hand straight to the target and ignore the cursor in this paradigm. In standard rotation paradigms, however, visual errors between the cursor and target may dominate this proprioceptive signal, extending the implicit system’s capacity. This might explain why some studies have observed implicit learning levels (e.g. about 35° in Salomonczyk et al., 2011, and even 45° in Maresch et al., 2021) which greatly exceed the error-clamp limits observed in Morehead et al. and Kim et al.

A critical puzzle that remains, however, is savings. The savings in implicit adaptation observed in Exp. 4 (Figure 8) contrasts with error-clamp behavior (Avraham et al., 2021), where implicit learning decreases during the second exposure. We can only speculate why these phenotypes differ. The discrepancy may relate to a divergence in goals. In error-clamp studies, the overall objective is to move straight to the target: to not change one’s reach angle. In standard rotation studies, the objective is to move the cursor to the target: to change one’s reach angle. This goal could play a role in enhancing or suppressing the implicit system’s response; some utility associated with adapting more rapidly may be necessary to obtain savings. On the other hand, responses to visual errors may be suppressed over time during error-clamp, as they are irrelevant to the arm’s motion. Interestingly, interacting with the visual target in error-clamp does appear to attenuate the implicit response to the rotation (Kim et al., 2019).

A second idea relates to the reward system. Sedaghat-Nejad and Shadmehr, 2021 have shown that saccade adaptation is accelerated when learning improves task success: learning speeds up when it increases reward probability, but adaptation rates do not improve when learning has no impact on reward probability. In other words, a higher level ‘desire’ to obtain reward may be needed to increase learning rate. Such motivation is clear in standard rotation experiments, where adaptation improves task success and reward probability. There is no motivation to adapt more rapidly in error-clamp paradigms; participants are never rewarded. Moreover, as noted above, hitting the target in invariant error-clamp paradigms appears to attenuate the implicit response (Kim et al., 2019). Interestingly, a link between reward and savings may be present in the cerebellum. Several studies (Medina, 2019) have shown that both granule cell layers (Wagner et al., 2017) and climbing fiber inputs (Heffley et al., 2018; Kostadinov et al., 2019) carry reward-related signals to the cerebellum. Thus, it may be that the cerebellum, a potential locus for implicit adaptation (Tseng et al., 2007; Donchin et al., 2012; Smith and Shadmehr, 2005; Izawa et al., 2012; Wong et al., 2019; Becker and Person, 2019; Morton and Bastian, 2006), responds to errors differently when rewards are not attainable (Morehead et al., 2017; Avraham et al., 2021; Kim et al., 2018), as in error-clamp paradigms, versus conventional rotations where more rapid learning promotes reward acquisition. These ideas are speculative and remain to be tested.

Overall, our data suggest that some implicit learning properties may vary across standard rotation and error-clamp paradigms. Considerable future work is needed to better compare these paradigms and test the suppositions outlined above.

The relationship between competition and implicit generalization

One potential limitation in our analyses relates to implicit generalization. Earlier studies have shown that implicit learning generalizes around the reported aiming direction (Day et al., 2016; McDougle et al., 2017). Thus, participants who aim further away from the target may show smaller implicit adaptation when asked to ‘move straight to the target’. While generalization could have contributed to the negative implicit-explicit correlation, its role would be small relative to competition. In earlier studies (Figure 4B&C), implicit learning decayed only 5° or so with 22.5°–30° changes in aiming (Figure 4—figure supplement 2B shows 22.5° re-aiming, McDougle et al., 2017; Figure 4—figure supplement 2A shows 30° re-aiming, Day et al., 2016). However, in Exps. 1–3, we observed 15°–20° changes in implicit learning (see Figure 4B&C) over similar ranges in explicit strategy. Thus, generalization-based decay in implicit learning would need to occur more than 300% faster than in earlier reports to match our data.

Critically, in Exps. 1–3 explicit strategy was estimated as total adaptation minus implicit learning. Had generalization reduced the implicit measures, it would falsely inflate our explicit measures. While it is tempting to compare our data in Figure 2 or Figure 4 with past generalization curves, this should not be done without correcting the explicit strategy measures. These corrections revealed that implicit generalization would need to exhibit an implausible narrowing to explain our group-level (e.g., response to stepwise rotation, or instruction) and individual-level results (Appendices 6.1–6.5, Figure 4C, and Figure 4—figure supplement 1). Altogether, generalization is not a viable alternative to the competition theory.

Generalization may have played a smaller role in the studies we analyzed, because participants trained with 2 (Tsay et al., 2021a), 3 (Exp. 1, Neville and Cressman, 2018), 4 (Exps. 2–4), 8 (Exp. 5, Maresch et al., 2021), or 12 (Saijo and Gomi, 2010) targets. Past studies that measured plan-based generalization used only one training target (Day et al., 2016; McDougle et al., 2017; Figure 4—figure supplement 2A and B). Thus, decreases in implicit learning would likely be smaller in our studies, because the generalization curve widens with additional training targets (Krakauer et al., 2000; Tanaka et al., 2009). For example, in Neville and Cressman, 2018, subjects trained with three targets. Given the targets’ geometry, two of the targets coincided with a neighboring target’s aim direction, but one did not. A narrow generalization curve would predict a larger aftereffect for the targets that coincided with aim directions, yet no variations in implicit learning were detected across targets (see their supplementary analyses).

Note that unlike past generalization studies, we did not use aiming reports to measure explicit strategy (Day et al., 2016; McDougle et al., 2017). We speculate that this difference may play a role in generalization, given that aiming landmarks themselves drive implicit learning (Taylor and Ivry, 2011; Figure 10). For example, past generalization studies observed a discrepancy between exclusion-based implicit learning and report-based implicit learning: the exclusion measures were smaller due to plan-based generalization (Figure 4—figure supplement 2C). But in Exp. 2, the opposite occurred: exclusion-based implicit learning was larger than implicit learning estimated with reporting (Figure 4—figure supplement 2E). The same phenomenon was noted by Maresch and colleagues (Maresch et al., 2021) in a condition where reporting was used sparsely during adaptation (Figure 4—figure supplement 2D).

Exp. 1 provided a direct way to test how our data may have been impacted by generalization. In Exp. 1, a 60° rotation resulted in 22° of implicit learning, whereas a 15° rotation caused about 7° (Figure 1I). Suppose that implicit learning exhibits about 20% generalization-based decay with a 15° change in aiming direction, as in McDougle et al., 2017. This decay would cause a (0.2)(22°) = 4.4° decrease in implicit learning in the 60° rotation, but only a (0.2)(7°) = 1.4° decrease in the 15° rotation. Thus, the absolute change in implicit learning driven by generalization depends on the total implicit learning achieved at steady-state, or in Exp. 1, the rotation’s size. This is not true in the competition theory: Equation 4 predicts that the gain relating implicit and explicit adaptation does not depend on rotation size. We tested these diverging predictions in Figure 4D–F. Critically, behavior matched the competition theory (Figure 4F). AIC indicated that the competition model better described participant behavior than SPE learning models extended with plan-based generalization (Figure 4G&H, Appendix 6.3).
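The scaling at the heart of this argument can be sketched numerically. Under the generalization hypothesis, the absolute decrease in implicit learning is proportional to the steady-state implicit learning achieved; the 20% decay per 15° aiming shift is the value assumed in the worked example above, and the code is ours, not the paper's analysis.

```python
# Generalization hypothesis: the absolute decrease in implicit learning scales
# with the steady-state implicit learning achieved (values from the example above).
decay_fraction = 0.2  # assumed ~20% decay per 15 deg aiming shift (McDougle et al., 2017)

for rotation, implicit_ss in [(60, 22.0), (15, 7.0)]:
    drop = decay_fraction * implicit_ss
    print(f"{rotation} deg rotation: predicted decrease = {drop:.1f} deg")
```

Because the predicted drop scales with total implicit learning, the generalization account ties the implicit-explicit gain to rotation size, whereas the competition equation does not.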

With that said, while the generalization hypothesis did not match important patterns in our data, it remains an important phenomenon that may alter implicit learning measurements. Implicit generalization should be examined more thoroughly to determine how it varies across experimental methodologies. These data will be needed to accurately evaluate the competitive relationship between implicit and explicit learning.

Error sources that drive implicit adaptation

Mazzoni and Krakauer, 2006 exposed participants to a visuomotor rotation, but also provided instructions for how to re-aim their hand to achieve success. While participants immediately used this strategy to move the cursor through the target, the elimination of task error failed to stop implicit adaptation. These data suggested that the implicit system responded to errors in the predicted sensory consequences of their actions (Tseng et al., 2007; Shadmehr et al., 2010), rather than errors in hitting the target.

However, such a model, where the implicit system learns solely from the angle between the aiming direction and the cursor (Equation 2), could not account for the implicit-explicit interactions we observed in our data (Figures 1–5). These interactions could only be described by an implicit error source that is altered by explicit strategy, such as the angle between the cursor and the target (Equation 1). For example, in Experiments 2 and 3, participants did not aim straight to the target, but rather adjusted their aiming angle by 5–20° (Figure 3). These changes in re-aiming appeared to alter implicit adaptation via errors between the cursor and the target. This target-cursor error source (Equation 1) appeared to provide an accurate account of short-term visuomotor adaptation across a number of studies (McDougle et al., 2015; Miyamoto et al., 2020; Albert et al., 2021; Neville and Cressman, 2018; Benson et al., 2011; Fernandez-Ruiz et al., 2011; Saijo and Gomi, 2010).
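To make the contrast concrete, here is a minimal sketch of the two candidate implicit error sources: a target-cursor error that is reduced by re-aiming (as in Equation 1) versus an aim-cursor error that is not (as in Equation 2). The retention factor, error sensitivity, rotation size, and fixed explicit strategy below are illustrative values of ours, not fits from the paper.

```python
# Two candidate drivers of implicit adaptation, iterated to steady state.
a_i, b_i = 0.96, 0.10        # illustrative implicit retention and error sensitivity
r, x_e = 30.0, 20.0          # rotation size and a fixed explicit strategy (deg)

x_target = x_spe = 0.0
for _ in range(1000):
    x_target = a_i * x_target + b_i * (r - x_e - x_target)  # target-cursor error
    x_spe = a_i * x_spe + b_i * (r - x_spe)                 # aim-cursor error (SPE)

# The target-error system saturates well below the SPE-driven system, because
# the explicit strategy removes part of its driving error.
print(round(x_target, 2), round(x_spe, 2))
```

Only the target-error variant produces implicit learning that depends on the explicit strategy, which is the signature the interactions in our data required.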

We do not mean to suggest, however, that implicit adaptation is solely driven by a single target error. In fact, there are many cases where this idea fails (Leow et al., 2020; Taylor and Ivry, 2011; Taylor et al., 2014). We speculate that one feature which alters implicit learning is the simultaneous presence of multiple visual targets. In Figures 1–9, there was only one visual target on the screen at a time. However, in Mazzoni and Krakauer (Figure 10), there were two important visual targets: the adjacent target towards which participants explicitly aimed their hand, and the original target toward which the cursor should move. In theory, the brain could calculate errors with respect to both targets. When we considered the idea that the implicit system adapted to both errors at the same time, we could more completely account for these earlier data (Figure 10F).

The idea that both kinds of visual error (cursor with respect to the primary target, and cursor with respect to the aimed target) drive implicit learning could account for other surprising observations. For example, in cases where landmarks are provided to report explicit aiming (McDougle et al., 2015; Taylor et al., 2014; Day et al., 2016), target-cursor error is often rapidly eliminated, but implicit adaptation persists. A dual-error model (Equation 6) would explain this continued adaptation based on persistent aim-cursor error. In other words, aiming landmarks may continue to drive adaptation even when primary target errors have been eliminated.

However, the nature of aim-cursor errors remains uncertain. For example, while this error source generates strong adaptation when the aim location coincides with a physical target (Figure 10I, instruction with target), implicit learning is observed even in the absence of a physical aiming landmark (Taylor and Ivry, 2011; Figure 10I, instruction without target), albeit to a smaller degree. This latter condition may implicate SPE learning that does not require an aiming target. Thus, it may be that the aim-cursor error in Mazzoni and Krakauer is actually an SPE that is enhanced by the presence of a physical target. In this view, implicit learning is driven by a target error module and an SPE module that is enhanced by a visual target error (Leow et al., 2020; Kim et al., 2019; Leow et al., 2018).

These various implicit learning modules are likely strongly dependent on experimental contexts, in ways we do not yet understand. For example, Taylor and Ivry, 2011 would suggest that all experiments produce some implicit SPE learning, but less so in paradigms with no aiming targets. Yet, the competition equation accurately matched single-target behavior in Figures 1–9 without an SPE learning module. It is not clear why SPE learning would be absent in these experiments. One idea may be that the aftereffect observed by Taylor and Ivry, 2011 in the absence of an aiming target was a lingering associative motor memory that was reinforced by successfully hitting the target during the rotation period. Indeed, such a model-free learning mechanism (Huang et al., 2011) should be included in a more complete implicit learning model. It is currently overlooked in error-based systems such as the competition and independence equations.

Another idea is that some SPE learning did occur in the no aiming target experiments we analyzed in Figures 1–9 but was overshadowed by the implicit system’s response to target error. A third possibility is that the SPE learning observed by Taylor and Ivry, 2011 was contextually enhanced by participants implicitly recalling the aiming landmark locations provided during the baseline period. This possibility would suggest that SPEs vary along a complex spectrum: (1) never providing an aiming target causes little or no SPE learning (as in our experiments), (2) providing an aiming target during past training allows implicit recall that leads to small SPE learning, (3) providing an aiming target that disappears during the movement promotes better recall and leads to medium-sized SPE learning (i.e., the disappearing target condition in Taylor and Ivry), and (4) an aiming target that always remains visible leads to the largest SPE learning levels. This context-dependent SPE hypothesis may be related to recent work suggesting that target errors and SPEs drive implicit learning, but SPEs are altered by distraction (Tsay et al., 2021b).

We speculate that the cerebellum might play an important role in supporting multiple implicit learning modules (Smith and Shadmehr, 2005; Wong et al., 2019; Hanajima et al., 2015; Kojima and Soetedjo, 2018; Bastian et al., 1996). Current models propose that complex spikes in Purkinje cells (P-cells) of the cerebellar cortex cause long-term depression (LTD) at parallel fiber synapses (the Marr-Albus-Ito hypothesis). These complex spikes are reliably evoked by olivary input in response to a sensory error (Kojima and Soetedjo, 2018; Herzfeld et al., 2018; Herzfeld et al., 2015). However, different P-cells are activated by different error directions, thus organizing P-cells into error-specific subpopulations (Herzfeld et al., 2018; Herzfeld et al., 2015). Therefore, our model suggests that two different sources of error might simultaneously drive learning in two different P-cell subpopulations, which then combine their adapted states into a total implicit correction at the level of the deep nuclei. Thus, errors based on the original target and the aiming target might simultaneously activate two implicit learning modules in the cerebellum (Figure 10H).

Alternatively, it is equally possible that these aim-cursor errors and target-cursor errors engage separate brain regions both inside and outside the cerebellum. In this view, an interesting possibility is that patients with cerebellar disorders (Tseng et al., 2007; Gabrieli et al., 1993; Izawa et al., 2012; Maschke et al., 2004; Martin et al., 1996) may have learning deficits specific to one error but not the other, as recent results suggest (Wong et al., 2019). These possibilities remain to be fully tested.

Materials and methods

Our work involves reevaluation of earlier literature; this includes data from Haith et al., 2015 in Figures 6 and 8, data from Lerner et al., 2020 in Figure 9, data from Neville and Cressman, 2018 in Figures 1 and 2, data from Saijo and Gomi, 2010 in Figure 2—figure supplement 3, data from Mazzoni and Krakauer, 2006 in Figure 10, data from Taylor and Ivry, 2011 in Figure 10, data from McDougle et al., 2017 in Figure 4, data from Day et al., 2016 in Figure 4, data from Tsay et al., 2021a in Figure 1, data from Maresch et al., 2021 in Figure 5—figure supplement 1, and data from Krakauer et al., 2000 in Figure 4. Relevant details for all studies are summarized in the sections below alongside the new data collected for this work (Exps. 1–5). Note that some methods are described in Appendices 1–8.

Participants

Here we report the sample sizes of the past studies analyzed in this work: Haith and colleagues (Haith et al., 2015) (n = 14), Lerner et al., 2020 (n = 16 for the 5 min group, n = 18 for the 24 hr group), Neville and Cressman, 2018 (no strategy: n = 11 for 20°, n = 10 for 40°, n = 10 for 60°; strategy: n = 10 for 20°, n = 11 for 40°, n = 10 for 60°), Mazzoni and Krakauer, 2006 (n = 18), Saijo and Gomi, 2010 (n = 9 for abrupt, n = 9 for gradual), Maresch et al., 2021 (n = 40 across the CR, IR-E, and IR-EI groups), Tsay et al., 2021a (n = 25/rotation size), McDougle et al., 2017 (n = 15), and Taylor and Ivry, 2011 (n = 10 for instruction with visual target, n = 10 for instruction without visual target).

All volunteers (ages 18–62) in Experiments 1–5 were neurologically healthy and right-handed. Experiment 1 included n = 36 participants in the abrupt group (12 Male, 24 Female) and n = 37 participants in the stepwise group (6 Male, 30 Female, 1 opted to not report). Experiment 2 included n = 9 participants (5 Male, 4 Female) in the No PT Limit group and n = 13 participants (6 Male, 7 Female) in the Limit PT group. Experiment 3 included n = 21 participants in the No PT Limit group (7 Male, 14 Female), n = 35 participants in the Limit PT group (20 Male, 15 Female), and n = 12 participants (5 Male, 7 Female) in the decay-only group. Experiment 4 included n = 10 participants (6 Male, 4 Female). Experiment 5 included n = 20 participants (10 Male, 10 Female), with n = 9 in the 5 min group and n = 11 in the 24 hr group. Experiment 1 was approved by the York Human Participants Review Sub-committee. Experiments 2–5 were approved by the Institutional Review Board at the Johns Hopkins School of Medicine.

Data extraction

When acquiring data from published figures, we first attempted to open the figure in Adobe Illustrator. Depending on how the figure was saved and embedded, it could occasionally be decomposed into its layers. This allowed us to extract the x and y pixel values for each data point (which appeared as an object) and interpolate the necessary data from the figure. However, in some cases, objects and layers could not be obtained in Illustrator. In these cases, we used the utility GRABIT in MATLAB to extract the necessary data. We clearly indicate which approach was used when discussing each dataset below. Note that the authors provided source data for our Maresch et al., 2021 and Tsay et al., 2021a analyses.

Apparatus

In Experiments 1, 2, 4, and 5 participants held the handle of a robotic arm and made reaching movements to different target locations in the horizontal plane. The forearm was obscured from view by an opaque screen. An overhead projector displayed a small white cursor (diameter = 3 mm) on the screen that tracked the hand’s motion. We recorded the position of the handle at submillimeter precision with a differential encoder. Data were recorded at 200 Hz. Protocol details were similar for Haith et al., 2015, Neville and Cressman, 2018, Saijo and Gomi, 2010, and Maresch et al., 2021 in that participants gripped a two-link robotic manipulandum, were prevented from viewing their arm, and received visual feedback of their hand position in the form of a visual cursor. In Lerner et al., 2020, participants performed pointing movements with their thumb and index finger while gripping a joystick with their right hand. In Mazzoni and Krakauer, 2006, participants rotated their hand to displace an infrared marker placed on the index finger. In Taylor and Ivry, 2011, hand position was tracked via a sensor attached to the index finger while participants made horizontal reaching movements along the surface of a table. In Day et al., 2016, Krakauer et al., 2000, and McDougle et al., 2017, participants moved a stylus over a digitizing tablet. In Experiment 3, participants were tested remotely on a personal computer. They moved a cursor on the screen by sliding their index finger along the track pad. These conditions were similar in Tsay et al., 2021a.

Visuomotor rotation

Experiments 1–5 followed a similar protocol. At the start of each trial, the participant brought their hand to a center starting position (circle with 1 cm diameter). After maintaining the hand within the start circle, a target circle (1 cm diameter) appeared in 1 of 4 positions (0°, 90°, 180°, and 270°) at a displacement of 8 cm (Experiments 2, 4, and 5). In Experiment 5, eight targets were used, spaced in increments of 45°. In Experiment 1, three targets were used, positioned in a triangular wedge (45°, 90°, and 135°). Participants made a brisk movement that terminated on (Exp. 1) or moved through (Exps. 2–5) the target. Each experiment consisted of epochs of four trials (three trials for Experiment 1, eight trials for Experiment 5) in which each target was visited once in a pseudorandom order.

Participants were provided audiovisual feedback about their movement speed and accuracy. If a movement was too fast (duration <75 ms) or too slow (duration >325 ms), the target turned red or blue, respectively. If the movement was the correct speed, but the cursor missed the target, the target turned white. Successful movements were rewarded with a point (total score displayed on-screen), an on-screen animation, and a pleasing tone (1000 Hz). If the movement was unsuccessful, no point was awarded, and a negative tone was played (200 Hz). Participants were instructed to obtain as many points as possible throughout the experimental session. Experiment 1 was similar but used 10 cm reach displacements and had no upper bound on movement duration.
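The feedback rules above can be summarized as a small decision function. This is a sketch of ours: the thresholds come from the text, but the function name and return strings are hypothetical.

```python
def movement_feedback(duration_ms: float, hit_target: bool) -> str:
    """Classify a trial per the audiovisual feedback rules described above."""
    if duration_ms < 75:
        return "too fast: target red"
    if duration_ms > 325:
        return "too slow: target blue"
    if not hit_target:
        return "miss: target white"
    return "success: +1 point, 1000 Hz tone"
```

For example, a 200 ms movement that misses the target turns the target white, while a 50 ms movement is flagged as too fast regardless of accuracy.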

Once the hand reached the target, visual cursor feedback was removed, and a yellow marker was frozen on-screen to indicate the final hand position. At this point, participants were instructed to move their hand back to the starting position (in Exp. 1, this return movement was aided by a circle centered on the start position, whose radius matched the hand’s displacement). The cursor remained hidden until the hand was moved within 2 cm of the starting circle (1 cm in Exp. 1).

Movements were performed in one of three conditions: null trials, rotation trials, and no feedback trials. On null trials, veridical feedback of hand position was provided. On rotation trials, the on-screen cursor was rotated relative to the start position. On no feedback trials, the subject cursor was hidden during the entire trial. No feedback was given regarding movement endpoint, accuracy, or timing.

As a measure of adaptation, we analyzed the reach angle on each trial. The reach angle was measured as the angle between the hand and the target (relative to the start position), at the moment where the hand exceeded 95% of the target displacement. In Experiment 1, reach angles were measured at the hand’s maximum velocity.
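As a concrete illustration, the reach-angle measure can be computed as follows. This is a sketch; the synthetic trajectory, sampling, and variable names are ours, not the study code.

```python
import math

# Signed angle between hand and target (about the start position) at the
# first sample where the hand exceeds 95% of the target displacement.
start, target = (0.0, 0.0), (0.0, 8.0)                   # 8 cm target displacement
trajectory = [(0.02 * k, 0.08 * k) for k in range(101)]  # synthetic reach, veering right

target_disp = math.dist(start, target)
idx = next(k for k, p in enumerate(trajectory)
           if math.dist(start, p) >= 0.95 * target_disp)
vx, vy = trajectory[idx][0] - start[0], trajectory[idx][1] - start[1]
ux, uy = target[0] - start[0], target[1] - start[1]
reach_angle = math.degrees(math.atan2(vx * uy - vy * ux, vx * ux + vy * uy))
print(round(reach_angle, 2))
```

Here the sign convention (positive for rightward deviation) is arbitrary; the synthetic trajectory deviates by a constant angle, so the 95%-displacement criterion and a maximum-velocity criterion would agree.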

Experiments in Haith et al., 2015, Lerner et al., 2020, McDougle et al., 2017, Taylor and Ivry, 2011, Neville and Cressman, 2018, Saijo and Gomi, 2010, Day et al., 2016, Krakauer et al., 2000, Maresch et al., 2021, Tsay et al., 2021a, and Mazzoni and Krakauer, 2006 were collected using similar, but separate protocols. Important differences between these studies and the rotation protocol described above are briefly noted in the sections below.

Statistics

Parametric t-tests were performed in MATLAB R2018a. For these tests, we report the t-statistic, p-value, and Cohen’s d as a measure of effect size. A repeated measures ANOVA (rm-ANOVA) was used to measure differences in prediction error in Figure 4E. Two-way repeated measures ANOVAs were used in Figure 6B–D to measure how preparation time (low vs high) and exposure number (Day 1 vs. Day 2) altered learning rate, reach angle, and model-based error sensitivity measurements, respectively. Mixed-ANOVAs were used in Figure 8C to examine how learning (both rate and mean over initial trials) was altered by preparation time conditions (between-subject factor: Haith et al., 2015 vs Experiment 4) and exposure number (within-subjects factor, exposure 1 vs exposure 2). A two-way ANOVA was used in Figure 9C to determine how interference patterns changed with movement preparation time (no limit vs limit) and time passage (5 min vs 24 hr). For all two-way and mixed-ANOVAs, we initially determined whether there was a statistically significant interaction effect between each factor. In cases where this interaction effect was statistically significant, we next measured simple main effects via one-way ANOVA.

Competition map

In Figure 7, we created a competition map to describe the interactions between explicit strategy and implicit learning predicted by the competition theory. To generate this map, we used a state-space model (Equations 1-3) where implicit learning and explicit learning were both driven by target errors:

$$x_i^{(n+1)} = a_i\,x_i^{(n)} + b_i\,e^{(n)}, \qquad x_e^{(n+1)} = a_e\,x_e^{(n)} + b_e\,e^{(n)} \tag{7}$$

The terms ai and ae represent implicit retention and explicit retention. The terms bi and be represent implicit error sensitivity and explicit error sensitivity.
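A minimal simulation of Equation 7 makes the shared-error structure explicit. The parameter values below are the reference fit to Haith et al., 2015 reported later in this section; the code itself is a sketch of ours.

```python
# Simulate the coupled state-space model (Equation 7): implicit and explicit
# systems adapt to the same target error e = r - (x_i + x_e).
a_i, a_e = 0.9829, 0.9278   # retention factors (reference fit, Haith et al., 2015)
b_i, b_e = 0.0629, 0.0632   # error sensitivities
r = 30.0                    # rotation size (deg)

x_i = x_e = 0.0
for _ in range(2000):
    e = r - (x_i + x_e)     # shared target error
    x_i, x_e = a_i * x_i + b_i * e, a_e * x_e + b_e * e

print(round(x_i, 2), round(x_e, 2))  # steady-state implicit and explicit adaptation
```

Because both update rules consume the same residual error, any manipulation that increases the explicit response necessarily shrinks the error left to drive the implicit system.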

Because implicit and explicit systems share a common error source in this target error model, their responses will exhibit competition. That is, increases in explicit adaptation will necessarily be coupled to decreases in implicit adaptation. To summarize this interaction, we created a competition map. The competition map describes common scenarios in which the goal is to compare two different learning curves. For example, one might want to compare the response to a 30° visuomotor rotation under two different experimental conditions. Another example would be savings, where we compare adaptation to the same perturbation at two different timepoints. In these cases, it is common to measure the amount of implicit and explicit adaptation, and then compare these across conditions or timepoints.

The critical point is that changes in the amount of implicit adaptation reflect the modulation of both implicit and explicit responses to error. This competition will occur at all points during the adaptation timecourse (Appendix 1), but is easiest to mathematically validate at steady-state. As described in the main text, the steady-state level of implicit adaptation can be derived from Equations 1-3. This derivation resulted in the competition equation shown in Equation 4. Note that Equation 4 predicts the steady-state level of implicit learning from the implicit retention factor, implicit error sensitivity, mean of the perturbation, and critically, the steady-state explicit strategy. If the explicit system is also described using a state-space model as in Equation 7, it can be shown that Equation 4 can be equivalently expressed in terms of the implicit and explicit learning parameters according to Equation 8:

$$x_i^{ss} = \frac{b_i\,(1-a_e)}{(1-a_i+b_i)\,(1-a_e+b_e) - b_i\,b_e}\;r \tag{8}$$

Equation 8 provides the total amount of implicit adaptation as a function of the retention factors, ai and ae, as well as the error sensitivities, bi and be. We used Equation 8 to construct the competition map in Figure 7A, by comparing the total amount of implicit learning across a reference condition and a test condition.

For our reference condition, we fit our state space model to the mean behavior in Haith et al., 2015 (Figure 6B, Day 1, left). This model best described adaptation during the first perturbation exposure using the parameter set: ai = 0.9829, ae = 0.9278, bi = 0.0629, be = 0.0632. Next, we imagined that implicit error sensitivity and explicit error sensitivity differed across the reference and test conditions. On the x-axis of the map, we show a percent change in bi from the reference condition to the test condition. On the y-axis of the map, we show a percent change in be from the reference condition to the test condition. The retention factors were held constant across conditions. Then for each condition we calculated the total amount of implicit learning using Equation 8. The color at each point in the map represents the percent change in the total amount of implicit learning from the reference condition to the test condition.

As described in the main text, the competition map (Figure 7A) is composed of several important regions (Figure 7B). In Region A, there is a decrease in implicit error sensitivity (from reference to test) as well as a decrease in the total amount of implicit adaptation predicted by Equation 8. In Region B, Equation 8 predicts a decrease in implicit adaptation, despite an increase in implicit error sensitivity. In Region D, there is an increase both in implicit error sensitivity as well as steady-state implicit learning. In Region E, there is an increase in implicit adaptation, despite a decrease in implicit error sensitivity. Finally, Region C shows cases where there are changes in implicit error sensitivity, but the total absolute change in implicit adaptation (Equation 8) is less than 5%. To localize this region, we solved for the linear bounds that describe a 5% increase or a 5% decrease in the output of Equation 8.
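The map construction described above can be sketched as follows; the code is ours, using the reference parameter values and Equation 8, with the same percent-change axes for implicit and explicit error sensitivity.

```python
# Sketch of the competition-map construction: percent change in steady-state
# implicit learning (Equation 8) as b_i and b_e vary from the reference fit.
a_i, a_e = 0.9829, 0.9278
b_i0, b_e0 = 0.0629, 0.0632

def implicit_ss(b_i, b_e, r=1.0):
    """Steady-state implicit adaptation (Equation 8); r cancels in % change."""
    return b_i * (1 - a_e) * r / ((1 - a_i + b_i) * (1 - a_e + b_e) - b_i * b_e)

ref = implicit_ss(b_i0, b_e0)
pct = range(-50, 51)                      # percent changes tested on each axis
comp_map = [[100 * (implicit_ss(b_i0 * (1 + dbi / 100),
                                b_e0 * (1 + dbe / 100)) / ref - 1)
             for dbi in pct] for dbe in pct]

# Rows index the change in b_e, columns the change in b_i; the center is 0%.
print(len(comp_map), len(comp_map[0]), round(comp_map[50][50], 6))
```

Moving right along a row (increasing b_i alone) raises steady-state implicit learning, while moving up a column (increasing b_e alone) lowers it, which is what produces the map's off-diagonal regions B and E.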

Neville and Cressman, 2018

To understand how enhancing explicit strategy might alter implicit learning, we considered data collected by Neville and Cressman, 2018. Here, the authors tested how awareness of a visuomotor rotation altered the adaptation process. To do this, participants (n = 63) were divided into several groups. In the instructed groups (Figure 2A, purple), participants were instructed about the rotation and a compensatory strategy prior to perturbation onset. In other groups, no instruction was provided (Figure 1C; Figure 2A, black). During rotation periods, participants reached to three potential targets. Implicit contributions to behavior were measured at four different periods using ‘exclusion’ trials. During exclusion trials, the authors instructed participants to reach (without visual feedback) as they did during the baseline period prior to perturbation onset (without using any knowledge of the perturbation gained thus far). Exclusion trial reach angles served as our implicit learning measure. The difference between total adaptation and exclusion trial reach angles served as our explicit learning measure.

At the start of the experiment, all participants performed a baseline period without a rotation for 30 trials. Baseline implicit and explicit reach angles were then assayed. At this point, participants in the strategy group were briefed about the perturbation with an image that depicted how feedback would be rotated, and how they could compensate for it. Then all groups were exposed to the first block of a visuomotor rotation for 30 trials. Some participants experienced a 20° rotation, others a 40° rotation, and others a 60° rotation. After this first block, implicit and explicit learning were assayed. This block structure was repeated two more times.

Here, we focused on implicit and explicit adaptation measures obtained at the end of the final block. To obtain these data, we extracted the mean participant response and the associated standard error of the mean, directly from the primary figures reported by Neville and Cressman, 2018 using Adobe Illustrator CS6. The implicit and explicit responses in all six groups are shown in Figure 2—figure supplement 1. The marginal effect of instruction (average over rotation sizes) is shown in Figure 2B and C.

Finally, we tested whether the competition equation (Equation 4) or independence equation (Equation 5) could account for the levels of implicit learning observed across rotation magnitude and awareness conditions. To do this, we used a bootstrapping approach. Using the mean and standard deviation obtained from the primary figures, we sampled hypothetical explicit and implicit aftereffects for 10 participants. We then calculated the mean across these 10 simulated participants. After this, we used fmincon in MATLAB R2018a to find an implicit error sensitivity that minimized the following cost function:

$$\theta_{\mathrm{fit}} = \operatorname*{arg\,min}_{\theta}\ \sum_{n=1}^{6}\left(x_{i,n}^{ss} - \hat{x}_{i,n}^{ss}\right)^{2} \tag{9}$$

This cost function represents the difference between the simulated level of implicit adaptation, and the amount of implicit learning that would be predicted for a given perturbation size and simulated explicit adaptation, according to our competition framework (Equation 4) or independence framework (Equation 5). For this process, we set the implicit retention factor to 0.9565 (see Measuring properties of implicit learning). Therefore, only the implicit error sensitivity remained as a free parameter. In sum, we aimed to determine if a single implicit error sensitivity could account for the amount of adaptation across the no instruction group, instruction group, and each of the three perturbation magnitudes (20, 40, and 60°). The combination of instruction and perturbation magnitude yielded six groups, hence the upper limit on the sum in Equation 9. We repeated this process for a total of 10,000 simulated groups.
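The fitting step can be sketched as follows. The group means below are placeholders of ours (not the extracted Neville and Cressman data), and a simple grid search stands in for MATLAB's fmincon; only the implicit error sensitivity is free, as in the text.

```python
# Fit a single implicit error sensitivity b_i to six groups at once, using the
# competition equation (Equation 4) with the implicit retention factor fixed.
a_i = 0.9565                       # fixed implicit retention factor

# (rotation, steady-state explicit, steady-state implicit), deg; placeholder values.
groups = [(20, 5, 10), (40, 15, 16), (60, 30, 18),
          (20, 10, 8), (40, 25, 12), (60, 45, 11)]

def cost(b_i):
    p_i = b_i / (1 - a_i + b_i)    # gain relating implicit learning to (r - x_e)
    return sum((xi - p_i * (r - xe)) ** 2 for r, xe, xi in groups)

# Grid search stands in for fmincon's bound-constrained minimization.
b_fit = min((i / 10000 for i in range(1, 10001)), key=cost)
print(round(b_fit, 4))
```

For the independence framework, the residual would instead be xi minus p_i·r, with the explicit term dropped from the prediction.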

In Figure 2B&C, we show the marginal effect of instruction on the implicit aftereffect. This was obtained by averaging across each of the three rotation magnitudes shown in Figure 2—figure supplement 1, for each model. In Figure 1, we show the implicit learning levels predicted by the model across all rotation sizes in the no-instruction group. Model predictions across all rotation sizes in the instruction group are shown in Figure 2—figure supplement 1. Again, all model predictions were made using the same underlying implicit learning parameter set.

Experiment 1

To examine how changes in rotation onset and magnitude altered implicit learning, we recruited two participant groups. In the abrupt group, subjects (n = 36) experienced a 60° visuomotor rotation abruptly. In the stepwise group, subjects (n = 37) experienced four separate rotation magnitudes in sequence: 15°, 30°, 45°, and 60°. Thus, the experiment had four learning periods, one for each rotation size. Each period lasted 66 trials, over which three targets (45°, 90°, 135°) were visited 22 times. This same structure was used in the abrupt group, though the rotation magnitude remained constant at 60° across all four learning blocks. Twice during each block (about 75% into each block and again at the end), exclusion trials were used to measure implicit adaptation. On these trials, subjects were told to stop using explicit strategies and to reach as they had during the baseline period. The average exclusion trial reach angle (across both probe periods in each block) served as our implicit learning measure. The difference between total adaptation and the average exclusion trial reach angle served as our explicit learning measure. Total adaptation was calculated as the average reach angle on the last 20 trials in each learning period.

Here, we focused on implicit and explicit adaptation measures obtained during each block. These measures are shown in Figures 1I and 2J. In Figures 1 and 2, we tested how well these measures were predicted by the competition and independence equations. The same model parameters were used in Figures 1 and 2, although Figure 1 only shows data in the stepwise condition. Note that the competition equation can be written as \(x_i^{ss} = p_i\,(r - x_e^{ss})\) and the independence equation as \(x_i^{ss} = p_i\,r\), where \(p_i\) is a scalar gain determined by \(a_i\) and \(b_i\). Thus, the gain \(p_i\) is the only unknown model parameter.

Our goal was to identify one gain (one for each model) that could parsimoniously explain behavior across the stepwise and abrupt groups. Thus, we identified the optimal gain that minimized the squared error between the model predictions and implicit adaptation across five measures: 15° stepwise learning, 30° stepwise learning, 45° stepwise learning, 60° stepwise learning, and 60° abrupt learning. For the abrupt condition, we did not observe a statistically significant difference in implicit aftereffect across the four learning periods (rm-ANOVA, F(3,105) = 2.21, p = 0.091, ηp² = 0.059); thus, we averaged across learning periods to obtain a single implicit measure. We then identified the pi parameter that minimized squared error according to Equation 9, with all five terms described above appearing in the sum.

To construct the model predictions shown in Figures 1 and 2, we used a bootstrapping approach. Participants in the stepwise and abrupt group were resampled with replacement 1000 times. Each time the average implicit learning measure was calculated across the five conditions described above. Each model was then fit to these average data. Thus, Figures 1 and 2 show the mean implicit learning predicted by each model across all 1000 iterations, as well as the associated standard deviation.

In the main text, we also report a statistical comparison between implicit learning predicted by the competition theory in the 60° stepwise and 60° abrupt conditions. This statistic was obtained using a different procedure. Here the optimal pi was determined again using Equation 9, but without bootstrapping. Average across-subject implicit adaptation in the 15° stepwise period, 30° stepwise period, 45° stepwise period, 60° stepwise period, and 60° abrupt period appeared within the sum in Equation 9. Then implicit learning was predicted using Equation 4 assuming that each participant had the same pi learning gain. We then conducted a paired t-test between 60° stepwise and 60° abrupt implicit learning predicted by the model.

Exp. 1 was used extensively to compare the competition theory with an SPE generalization model. All details concerning this analysis are provided in Appendix 6. Results are depicted in Figure 4.

Finally, we analyzed subject-to-subject pairwise relationships between implicit learning, explicit strategy, and total adaptation (average over the last 40 rotation trials) in Figure 5—figure supplement 1B,E&H. For these analyses, we combined subjects across the 60° rotation period in the abrupt and stepwise groups. Note that we excluded three outlying participants whose reach angles differed from the total population by more than three median absolute deviations on at least 33% of all trials. This yielded a total dataset of n = 70. To analyze each pairwise relationship, we used linear regressions. In addition, we analyzed the same relationships during the 30° rotation period in the stepwise group (Figure 5—figure supplements 2A and 3B).
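The exclusion criterion can be sketched as below; computing the median and MAD per trial (rather than pooled across trials) is our assumption:

```python
import numpy as np

def flag_outliers(reach, n_mad=3.0, frac=1/3):
    """reach: (n_subjects, n_trials) reach angles (deg). Flag subjects whose
    reach angle deviates from the per-trial population median by more than
    n_mad median absolute deviations on at least `frac` of trials."""
    med = np.median(reach, axis=0)                 # per-trial population median
    mad = np.median(np.abs(reach - med), axis=0)   # per-trial MAD
    deviant = np.abs(reach - med) > n_mad * mad    # (subject, trial) mask
    return deviant.mean(axis=1) >= frac            # per-subject exclusion flag
```

The MAD-based rule is robust to the outliers it is trying to detect, unlike a criterion based on the mean and standard deviation.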

Tsay et al., 2021a

To evaluate the competition and independence models, we analyzed how implicit and explicit systems responded to rotation sizes between 15° and 90° in experiments conducted by Tsay et al., 2021a. Data in these experiments were collected remotely via a laptop-based task. Participants moved to targets at 45° and 135°, which alternated across trials. Participants were exposed to a 15°, 30°, 60°, or 90° rotation (n = 25/rotation size). The reach angles during an initial baseline period, rotation period, and terminal no-aiming period are shown in Figure 1M. During the no-aiming period, participants reached to each target 10 times (20 trials total). To calculate implicit learning (Figure 1M, no aiming; Figure 1N&Q, data), we averaged the reach angle across the 20 no-aiming trials. To calculate total adaptation, we measured the average reach angle over the last 40 reaching trials (Figure 5—figure supplement 1C,1F,2A&3A). To calculate explicit strategy, we computed the difference between total adaptation and implicit learning (Figure 1O). We also calculated the implicit driving input in the competition theory (rotation minus explicit strategy) in Figure 1P. Finally, we reported an explicit gain in the main text. This gain was calculated by dividing the difference between the explicit strategies measured at two rotation sizes by the difference between those rotation sizes (multiplied by 100 to obtain a percentage).

To investigate the non-monotonic relationship between implicit learning and rotation size (Figure 1N), we used the competition and independence models. In Figure 1Q, we fit each model to the measured data. To do this, we estimated the implicit retention factor using the reach angle decay rate during the terminal no aiming period (see Measuring properties of implicit learning, estimate = 0.974). Next, we used a least-squares approach to determine the optimal implicit error sensitivity (bi) that best matched the implicit reach angles measured across all four rotation sizes. Note that since ai and bi appear together in the implicit learning gain, pi, fitting the gain directly would produce the same results.

For the competition theory, we averaged the implicit and explicit responses within each rotation group, and then identified the bi value that best predicted implicit learning across rotation sizes according to the competition equation (Equation 4). To do this, we used the fminbnd utility in MATLAB R2018a. This yielded bi = 0.0319. We then used the same ai and bi parameter values to predict total implicit learning across all four rotation sizes via Equation 4, assuming all participants had the same implicit learning parameters (Figure 1Q, competition). Again, this is equivalent to directly fitting the implicit learning gain pi.
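As a sketch, this fit can be reproduced with a one-dimensional search (a grid search standing in for MATLAB's fminbnd). We assume the competition steady state x = b_i(r − e)/(1 − a_i + b_i), which follows from the state update x(n+1) = a_i·x(n) + b_i·(r − e − x(n)); the group means below are hypothetical placeholders:

```python
import numpy as np

a_i = 0.974  # implicit retention factor estimated from no-aiming decay

# Hypothetical group means (deg) for the four rotation sizes.
rotations = np.array([15.0, 30.0, 60.0, 90.0])
explicit  = np.array([ 2.0,  8.0, 30.0, 60.0])   # explicit strategy
implicit  = np.array([10.0, 17.0, 22.0, 23.0])   # measured implicit learning

def predict(b_i):
    # Steady state of x(n+1) = a_i*x(n) + b_i*(r - e - x(n)).
    return b_i * (rotations - explicit) / (1.0 - a_i + b_i)

# One-dimensional least-squares search for b_i (fminbnd analogue).
grid = np.linspace(1e-4, 0.5, 5000)
sse = [np.sum((implicit - predict(b)) ** 2) for b in grid]
b_i_hat = grid[int(np.argmin(sse))]
p_hat = b_i_hat / (1.0 - a_i + b_i_hat)   # equivalent implicit learning gain
```

As the text notes, because a_i and b_i enter only through the gain p_i = b_i/(1 − a_i + b_i), fitting b_i with a_i fixed is equivalent to fitting p_i directly.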

We used a bootstrapping procedure to identify the optimal bi parameter in the independence model. To do this, we sampled participants in each rotation group with replacement 10,000 times. Each time, we calculated the average implicit response, and then minimized the squared error (fminbnd in MATLAB R2018a) between this implicit response and that predicted by the independence model (Equation 5), across all four rotation sizes. We then used the ai and bi estimated in the bootstrapping procedure to predict total implicit learning according to the independence model (Figure 1Q, independence).

Finally, we analyzed subject-to-subject pairwise relationships between implicit learning, explicit strategy, and total adaptation in Figure 5—figure supplement 1C, F and I. For this, we considered participants in the 60° rotation group. To analyze each relationship, we used linear regressions. We also analyzed these same relationships during the 30° rotation period (Figure 5—figure supplements 2A and 3B).

Note that Tsay et al. also tested participants in an invariant error-clamp experiment. We did not analyze these data here for two reasons. First, no strategy is used in invariant error-clamp paradigms, so SPEs and target errors are identical and the competition and independence models make the same predictions; they cannot be distinguished. Second, as described in our Discussion (see the section on invariant error-clamp learning), the competition and independence models derived in Equations (4) and (5) only apply to standard rotation learning. The implicit learning gain in the invariant error-clamp paradigm is not the same and predicts implicit learning levels that cannot be physically achieved (see Discussion).

Experiment 2

To test whether changes in explicit strategy altered implicit learning at the individual level, we tested two adaptation conditions. In the first condition, participants adapted to a visuomotor rotation without any limits applied to preparation time (No PT Limit), thus allowing them to use explicit strategy. In the second condition, we strictly limited preparation time in order to suppress explicit strategy (Limit PT).

Participants in the No PT Limit condition began with 10 epochs of null trials (one epoch = 4 trials), followed by a rotation period of 60 epochs. Other details concerning the experiment paradigm are described in Visuomotor rotation. At the end of the perturbation period, we measured the amount of implicit and explicit learning. To do this, participants were instructed to forget about the cursor and instead move their hand through the target without applying any strategy to compensate for the perturbation. Furthermore, visual feedback was completely removed during these trials. All four targets were tested in a randomized sequence. To quantify the total amount of implicit learning, we averaged the reach angle across all targets (Figure 3B&G). To calculate the amount of explicit adaptation, we subtracted this measure of implicit learning from the mean reach angle measured over the last 10 epochs of the perturbation prior to the verbal instruction (results did not change whether we used 5, 10, 15 or 20 epochs to calculate total learning). Explicit measures are shown in Figure 3G&N (E2).

In the Limit PT group, we suppressed explicit adaptation for the duration of the experiment by limiting the time participants had to prepare their movements. To enforce this, we limited the amount of time available for the participants to start their movement after the target location was shown. This upper bound on reaction time was set to 225ms (we corrected reaction times by the average screen delay, 55ms). If the reaction time of the participant exceeded the desired upper bound, the participant was punished with a screen timeout after providing feedback of the movement endpoint. In addition, a low unpleasant tone (200 Hz) was played. This condition was effective in limiting reaction time (Figure 3F). This experiment started with 10 epochs (one epoch = 4 trials) of null trials. After this, the visuomotor rotation was introduced for 60 epochs. At the end of the perturbation period, we measured retention of the visuomotor memory in a series of 15 epochs of no feedback trials (Figure 3E, no feedback).

Our goal was to test whether the putative implicit learning properties measured in the Limit PT group could be used to predict the subject-to-subject relationship between implicit and explicit adaptation in the No PT Limit group (according to Equation 4). To do this, we measured each participant’s implicit retention factor and error sensitivity in the Limit PT condition (see Measuring properties of implicit learning below). We then averaged each parameter across participants. Next, we inserted these mean parameters into Equation 4. With these variables specified, Equation 4 predicted a specific linear relationship between implicit and explicit learning (Figure 3G, model). We overlaid this prediction on the actual amounts of implicit and explicit adaptation measured in each No PT Limit participant (Figure 3G, black dots). We performed a linear regression across these measured data (Figure 3G, black line, measured). We report the slope and intercept of this regression as well as the corresponding 95% confidence intervals.

Lastly, we also asked participants to verbally report their explicit strategy. After the implicit probe trials, we showed each target once again, with a ring of small white landmarks placed at an equal radial distance around the screen (McDougle et al., 2015). A total of 108 landmarks was used to uniformly cover the circle. Each landmark was labeled with an alphanumeric string. At the end of the experiment, subjects were asked to report the landmark nearest to where they had aimed in order to move the cursor through the target while the rotation was on. The mean angle reported across all four targets was calculated to provide an additional assay of explicit adaptation. However, several reports (25% across all participants and trials) appeared inaccurate in that they had the incorrect sign (participants reported aiming with, not opposite to, the rotation). Noting that explicit re-aiming reports are prone to erroneous sign errors (McDougle and Taylor, 2019) (errors of the same magnitude, opposite sign), we took each report’s absolute value when calculating explicit recalibration.

Next, we calculated a report-based implicit measure by subtracting report-based explicit strategy from total adaptation. While report-based implicit learning was smaller than reach-based implicit learning (Figure 3—figure supplement 2B), and report-based explicit strategy was larger than reach-based strategy (Figure 3—figure supplement 2A), the two exhibited close correspondence with Equation 4 (Figure 3—figure supplement 2C).

Lastly, we also analyzed our reach-based implicit and explicit learning measures in a generalization analysis (Figure 5A&B). This analysis is described in Appendix 6.

Experiment 3

We remotely tested three participant groups (No PT Limit, Limit PT, and decay-only). Participants controlled a cursor by moving their index finger across the track pad of their personal computer. The experiment was coded in Java. To familiarize themselves with the task, participants watched a 3 minute instructional video. In this video, the trial structure, point system, and feedback structure were described. After this video, there was a practice period. During the practice period, the software tracked the participant’s reach angle on each trial. If the participant achieved success on fewer than 65% of trials (measured based on an angular target-cursor discrepancy ≤30°, reaction time ≤1 sec, and movement duration ≤0.6 sec), they had to re-watch the instructional video and re-do the practice period. Movements were brisk and straight, as in standard in-person rotation studies (two example participants are shown in the No PT Limit and Limit PT groups in Figure 3—figure supplement 3).
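The practice-period gate can be sketched as below (thresholds from the text; the function names are ours):

```python
def trial_success(angular_error_deg, reaction_time_s, movement_duration_s):
    """A practice trial succeeds if the angular target-cursor discrepancy is
    at most 30 deg, reaction time at most 1 s, and movement duration at most
    0.6 s (criteria from the text)."""
    return (abs(angular_error_deg) <= 30.0
            and reaction_time_s <= 1.0
            and movement_duration_s <= 0.6)

def must_repeat_practice(successes, threshold=0.65):
    """Participants re-watch the instructional video and redo the practice
    period if fewer than 65% of practice trials were successful."""
    return sum(successes) / len(successes) < threshold
```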

After the practice period ended, the testing period began. This testing period was similar to the No PT Limit condition in Experiment 2. On each trial, participants reached to 1 of 4 targets (up, down, left, and right). Each target was visited once pseudorandomly in a cycle of 4 targets. After an initial 10-cycle null period, a 30° visuomotor rotation was imposed that lasted for 60 epochs. At the end of the rotation period, we measured implicit and explicit adaptation. The experiment briefly paused, and an audiovisual recording was played that instructed participants to not use any strategy and to move their hand straight through the target. After the experiment resumed, feedback was removed, and participants performed 20 cycles of no-feedback probe trials. In the No PT-Limit group, participants were told to stop aiming on these no-feedback trials, and to move their hand straight to the target.

We measured subject-to-subject correlations between implicit and explicit adaptation in the No PT Limit group. For this, we calculated two implicit learning measures. The early implicit aftereffect was simply the aftereffect observed on the first no-aiming, no-feedback probe cycle (Figure 3Q). The late implicit aftereffect was the average aftereffect observed on the last 15 cycles of this no-aiming, no-feedback period (Figure 3P). To measure explicit learning, we calculated the difference between the total amount of adaptation (mean reach angle over last 10 cycles of the rotation period) and the first cycle of the no-aiming, no-feedback period. We investigated the relationship between explicit adaptation and the early and late implicit aftereffects via linear regression in Figure 3P&Q, respectively. For the early implicit aftereffect, we measured the 95% CI for the slope and intercept. Note that explicit learning measures are also reported in Figure 3N (E3, black) and late implicit learning measures are reported in Figure 3O (No Lim.).

In addition, we also analyzed the relationship between total adaptation and implicit and explicit adaptation in the No PT Limit group. As described in the main text, the competition theory predicted that total adaptation and explicit strategy should have a positive relationship, whereas total adaptation and implicit learning should have a negative relationship (see Appendix 7). In Figure 5G, we show the relationship between total adaptation and the explicit learning measure. In Figure 5H, we show the relationship between total adaptation and the late implicit learning measure. The brown lines denote a linear regression across individual participants.

Finally, we also considered the No PT Limit data in our generalization analyses in Figure 4A&C. This process was the same as for Experiment 2 as shown in Figure 4A&B. See Appendix 6.

Next, we also tested a Limit PT group in Exp. 3. Here, we attempted to suppress explicit strategies by limiting movement preparation time. To determine the limiting preparation time, we used an adaptive algorithm during the baseline period that decreased or increased the preparation time limit in response to correct or incorrect reach responses (i.e. reaches to the correct or incorrect target). This limit was capped at 350ms, but this upper bound did not include screen delay. We used audiovisual feedback throughout the experiment to enforce the preparation time limit. If the reaction time of the participant exceeded the desired upper bound, a low-pitched tone was played, the screen briefly timed out, and a message instructing the participant to “react faster” was shown. This condition produced the preparation times shown in Figure 3M. Apart from this, the experiment protocol was the same as in the No PT Limit group.

To test whether limiting preparation time was successful in inhibiting explicit strategy, we calculated explicit strategy as in the No PT Limit group. Explicit strategies were dramatically inhibited by limiting preparation time (Figure 3E&O, red). Next, we measured implicit learning properties in the Limit PT condition and used these, with the competition theory, to predict the implicit-explicit relationship in the No PT Limit group. For this, we used the same method described above for Experiment 2 (also see Measuring properties of implicit learning). Using the Limit PT data, the competition theory predicted the line shown in blue in Figure 3Q. The black data points show the implicit and explicit learning measures in the No PT Limit group. Also note that, consistent with the competition theory, limiting preparation time led to an increase in implicit learning (Figure 3O, PT Limit).

As stated above, in the No PT Limit and Limit PT groups, participants were instructed to stop re-aiming during the no feedback period, and to move their hand straight to the target (Figure 3I&L, no aiming). We used the voluntary change in reach angle to estimate explicit strategy. However, the instruction period lasted about 30 s, which may have caused decay in temporally labile implicit learning (Neville and Cressman, 2018; Maresch et al., 2021; Hadjiosif and Smith, 2015). To measure how much implicit learning had decayed over this time delay, we varied the instruction condition in a decay-only group (n = 12). The decay-only group adapted using the same restricted reaction time paradigm as the Limit PT group. However, prior to the no feedback period, participants were told that the disturbance between the cursor and their movement would still be present when they returned to the experiment, but that they would no longer be able to see the cursor. Still, they were told to imagine this disturbance and to try to move the imagined cursor to the target. Changes in reach angle in this group would therefore be due solely to decay in implicit learning (Figure 8—figure supplement 1). We compared the behavior in the decay-only group to the Limit PT group in Figure 8—figure supplement 1.

Finally, we used a separate procedure to estimate screen delay. To do this, participants were told to tap a circle that flashed on and off in a regular, repeating cycle. Participants were told to predict the appearance of the circle, and to tap exactly as the circle appeared. Because the stimulus was predictable, the difference between the appearance time, and the participant’s button press, revealed the system’s visual delay. The average visual delay we measured was 154ms. This average value was subtracted out in the preparation times reported in Figure 3J&M, as well as Figure 8—figure supplement 1.

Day et al., 2016

A recent study by Day et al. measured implicit generalization. Participants were exposed to a 45° rotation at a single target. On each trial, they reported their aiming direction, using a ring of visual landmarks. This study measured implicit generalization by instructing participants to aim towards untrained targets. We reproduce this curve in Figure 4A (Day 1T). We only show the curve starting at the average aiming direction (0° on the x-axis), towards the training target direction (i.e. in the direction participants will change their aim when instructed to aim to the primary target). Note in Figure 4B and C, only the initial two points along the curve are shown.

Last, we also compared implicit learning measured across two groups reported in their Figure 2. In the ‘target’ group in Figure 4—figure supplement 2A, implicit aftereffects were periodically probed at the trained target location, by asking subjects to reach to the target without aiming. In the ‘aim’ group, implicit aftereffects were probed at a target location 30° away from the trained target, consistent with the direction of the most frequently reported aim. In Figure 4—figure supplement 2A, we show the implicit aftereffect measured on the first aftereffect trial at the end of the experiment. In Figure 4—figure supplement 2C we again show the implicit aftereffect measured at the trained target location in the ‘probe’ condition. The ‘report’ condition shows the amount of implicit learning estimated by subtracting the reported explicit strategy from the reported reach angle on the last cycle of the rotation. Note that all data were extracted using the primary source’s figures with MATLAB’s GRABIT utility.

Krakauer et al., 2000

Figure 4A reproduces generalization curves measured by Krakauer et al. We extracted the curves shown in Figure 7B of Krakauer et al., 2000 using GRABIT in MATLAB R2018a. To demonstrate how generalization curves are altered by the number of adaptation targets, we show the one-target (1T), two-target (2T), four-target (4T), and eight-target (8T) curves reported in Krakauer et al., 2000. In this study, participants moved a stylus across a digitized tablet and adapted to a 30° rotation.

McDougle et al., 2017

In Figure 4—figure supplement 2B, we show data collected by McDougle et al., 2017, reported in Figure 3A of the original manuscript. Here, participants were exposed to a 45° rotation while reaching to a single target. At the end of the experiment, participants were exposed to an aftereffect block where they reached 3 times to 16 different targets spaced in varying increments around the unit circle. In this aftereffect block feedback was removed and participants were told to move straight to the target without re-aiming. This aftereffect block was used to construct a generalization curve. In Figure 4—figure supplement 2B we show data only from two relevant locations on this curve. The ‘target’ condition represents aftereffects probed at the training target. The ‘aim’ condition shows the aftereffect measured at 22.5° away from the primary target, which was the target closest to the mean reported explicit re-aiming strategy of 26.2°.

We also use the study’s implicit generalization curve (their Figure 3A) in our SPE generalization model analysis. This curve is reproduced in Figure 4. We extracted only one side: the one pointing along the vector which connected the aiming direction and the adaptation target. We also normalized the curve by dividing by the maximum implicit learning they measured along the aiming direction. These data were extensively used in our generalization analysis in Appendix 6. All relevant details are provided there. We selected this study because implicit and explicit learning were dissociated and because CW and CCW were counterbalanced across participants (alleviating potential position-based biases). Note that all data were extracted using the primary source’s figures with MATLAB’s GRABIT utility.

Maresch et al., 2021

To evaluate the competition and independence models, we analyzed how implicit and explicit learning varied across individual participants in a study conducted by Maresch and colleagues (Maresch et al., 2021). In this analysis, we collapsed across participants in the CR, IR-E, and IR-EI groups (n = 40 total). Note that we did not include participants in the IR-I group, because implicit learning was only measured at one timepoint, unlike the three other groups. In this task, participants reached to eight targets (45° between each target) while holding a robotic manipulandum. Participants were exposed to a 60° rotation. Implicit learning and explicit strategy were probed in various ways throughout the experiment. Here, we used the authors’ exclusion-based implicit and explicit learning measures. In other words, implicit learning was measured by telling subjects to stop aiming. Explicit strategy was estimated as the voluntary decrease in reach angle that occurred when participants were told not to aim (the difference between total adaptation and implicit learning). To calculate total adaptation, we averaged the reach angle over the 40 terminal rotation trials. We analyzed subject-to-subject pairwise relationships between implicit learning, explicit strategy, and total adaptation in Figure 5—figure supplement 1A,D&G. To analyze each pairwise relationship, we used linear regressions.

Lastly, in Figure 5—figure supplement 3 we show data collected by Maresch et al., 2021, reported in Figure 4b of the original manuscript. This study calculated implicit learning directly with exclusion trials and indirectly with aim reports. In Figure 5—figure supplement 3D we show data from the IR-E group. This group was comparable to our data because aim was reported intermittently (4 times every 80 trials), meaning that on most trials, aiming targets would not cause adaptation (only the primary target). In addition, there were eight adaptation targets, which will widen implicit generalization. The probe condition in Figure 5—figure supplement 3D corresponds to the total implicit learning measured at the end of adaptation by telling participants to reach without re-aiming. The ‘report’ condition corresponds to total implicit learning estimated at the end of adaptation by subtracting the reported aim direction from the measured reach angle.

Haith et al., 2015

To investigate savings, Haith et al., 2015 used a forced preparation time task. Briefly, participants (n = 14) performed reaching movements to two targets, T1 and T2, under a controlled preparation time scenario. To control movement preparation time, four audio tones were played (at 500ms intervals) and participants were instructed to reach coincident with the 4th tone. On high preparation time trials (High PT), target T1 was shown during the entire tone sequence. On low preparation time trials (Low PT), T2 was initially shown, but was then switched to target T1 approximately 300ms prior to the 4th tone. High PT trials were more probable (80%) than Low PT trials (20%).

After a baseline period (100 trials for each target), a 30° visuomotor rotation was introduced for target T1 only. After 100 rotation trials (Exposure 1), the rotation was turned off for 20 trials. After a 24 hr break, participants then returned to the lab. On Day 2, participants performed 10 additional reaching movements without a rotation, followed by a second 30° rotation (Target T1 only) of 100 trials (Exposure 2).

We quantified the amount of savings expressed upon re-exposure to the perturbation, on High PT and Low PT trials. We measured savings using two metrics. First, we measured the rate of learning during each exposure to the perturbation using an exponential fit. We fit a two-parameter exponential function to both Low PT and High PT trials during the first and second exposure (we constrained the third parameter so that the exponential began at each participant’s measured baseline reach angle). We compared the exponential learning rate across High PT trials, Low PT trials, and Exposures 1 and 2 with a two-way repeated-measures ANOVA (two within-subject factors: PT and exposure number), followed by one-way repeated-measures ANOVAs to test simple main effects (Figure 6B, right).
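A minimal version of the constrained exponential fit; the parameterization y(n) = y0 + A·(1 − exp(−r·n)) and the grid search over the rate (in place of a nonlinear solver) are our choices:

```python
import numpy as np

def fit_exponential(y, y0):
    """Fit y(n) = y0 + A*(1 - exp(-r*n)) to a learning curve y, constraining
    the curve to begin at the baseline reach angle y0. Returns the learning
    rate r and amplitude A that minimize the squared error."""
    n = np.arange(len(y))
    best_sse, best_r, best_A = np.inf, None, None
    for r in np.linspace(0.001, 1.0, 1000):
        basis = 1.0 - np.exp(-r * n)
        # For a fixed rate, the amplitude has a closed-form least-squares solution.
        A = np.dot(basis, y - y0) / np.dot(basis, basis)
        sse = np.sum((y - (y0 + A * basis)) ** 2)
        if sse < best_sse:
            best_sse, best_r, best_A = sse, r, A
    return best_r, best_A
```

A larger fitted rate r on the second exposure relative to the first indicates faster relearning, that is, savings.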

We also quantified savings in a manner similar to that reported by Haith et al., 2015; we calculated the difference between the reach angles before and after the introduction of the perturbation, during each exposure (Figure 6C, 1st and 2nd columns). For High PT trials, we then computed the mean reach difference over the three trials preceding, and three trials following perturbation onset. Given their reduced frequency, for Low PT trials, we focused solely on the trial before and trial after perturbation onset. We used the same statistical testing procedure (two-way rm-ANOVA with follow-up simple main effects) to test for savings in the pre-perturbation and post-perturbation differences (Figure 6C, right).

Finally, we also used a state-space model of learning to measure properties of implicit and explicit learning during each exposure. We modeled implicit learning according to Equation 3 and explicit learning according to Equation 7. In our competition theory, we used target error as the error in both the implicit and explicit state-space equations. In our SPE model, we used target error as the explicit system’s error, and SPE as the implicit system’s error.

The total reach angle was set equal to the sum of implicit and explicit learning. Each system possessed a retention factor and an error sensitivity. Here, we asked how implicit and explicit error sensitivity might have changed from Exposure 1 to Exposure 2, noting that savings is related to changes in error sensitivity (Coltman et al., 2019; Mawase et al., 2014; Lerner et al., 2020; Albert et al., 2021; Herzfeld et al., 2014). We therefore assumed that the implicit and explicit retention factors were constant across perturbations but allowed a separate implicit and explicit error sensitivity during Exposures 1 and 2. In total, our modeling approach included six free parameters. We fit this model to the measured behavior by minimizing the following cost function using fmincon in MATLAB R2018a:

$$\theta_{\mathrm{fit}} = \underset{\theta}{\operatorname{argmin}} \sum_{n=1}^{N} \left[ \left( y_1(n) - \hat{y}_1(n) \right)^2 + \left( y_2(n) - \hat{y}_2(n) \right)^2 \right] \qquad (10)$$

Here y1 and y2 represent the reach angles during the 1st and 2nd rotation. These reach angles are composed of High PT and Low PT trials. On Low PT trials, the reach angle is equal to the implicit process. On High PT trials, the reach angle is equal to the sum of the implicit adaptive process and the explicit adaptive process.
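The trial-by-trial simulation can be sketched as below. This is our minimal reading of the competition model (both states driven by the target error experienced on each trial), not the authors' code; in practice, the parameter values would come from the fit:

```python
import numpy as np

def simulate_reach(rot, high_pt, a_i, a_e, b_i, b_e):
    """Simulate implicit (x_i) and explicit (x_e) states that both learn from
    the same target error. The explicit state contributes to the reach only
    on High PT trials; on Low PT trials, the reach equals the implicit state.

    rot: perturbation to compensate on each trial (deg)
    high_pt: boolean array, True on High PT trials
    """
    x_i = x_e = 0.0
    reach = np.zeros(len(rot))
    for n in range(len(rot)):
        reach[n] = x_i + (x_e if high_pt[n] else 0.0)
        err = rot[n] - reach[n]        # target error experienced on this trial
        x_i = a_i * x_i + b_i * err    # implicit state update
        x_e = a_e * x_e + b_e * err    # explicit state update
    return reach
```

For the SPE variant, the implicit update would instead use a sensory prediction error term while the explicit update retains the target error.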

We fit this model to individual participant behavior, in the case where implicit learning was driven by target errors (Equation 1), and also in the alternate case where it was driven by SPEs (Equation 2). The implicit and explicit model simulations in Figure 6D (columns 1 and 2) represent the competition theory (target error learning). For the SPE model, these states are not shown, but model parameters are reported in Figure 6E.

We used a two-way repeated-measures ANOVA to test whether error sensitivity differed across implicit and explicit learning (within-subject factor) and across exposures (within-subject factor). We used follow-up one-way repeated measures ANOVA to test for differences across exposures (separately for implicit and explicit learning) for the SPE model, after detecting a statistically significant interaction effect.

Finally, we also fit the target-error (Equation 1) model to the mean behavior across all participants in Exposure 1 and Exposure 2. We obtained the parameter set: ai = 0.9829, ae = 0.9278, bi,1 = 0.0629, bi,2 = 0.089, be,1 = 0.0632, be,2 = 0.1078. Note that the subscripts 1 and 2 denote error sensitivity during Exposure 1 and 2, respectively. These parameters were used for our simulations in Figure 7 (see Competition Map).

Experiment 4

The competition theory (Figure 7) predicted that more consistently suppressing explicit strategy, relative to the conditions used by Haith et al., 2015, should reveal savings in the implicit system. That is, Haith et al., 2015 inhibited strategy only on 20% of all trials. Strategies were able to compete with the implicit system on the remaining 80% of trials. To test this prediction, we inhibited strategy on every trial in Exp. 4. To inhibit strategies, we limited reaction time using the procedure described above for Experiments 2 and 3. In Exp. 3, we observed that limiting movement preparation time drastically suppressed explicit re-aiming (Figure 3N). Limiting preparation time in Exp. 4 was effective in reducing reaction times (Figure 8B, top row), even lower than the 300ms threshold used by Haith et al., 2015.

Experiment 4 used the 4-target protocol reported in Visuomotor rotation. Apart from that, its trial structure was similar to that of Haith et al., 2015. After a familiarization period, subjects completed a baseline period of 10 epochs (one epoch = 4 trials for each target). At that point, we imposed a 30° visuomotor rotation for 60 epochs (Exposure 1). At the end of this first exposure, participants completed a washout period with no perturbation that lasted for 70 epochs. At the end of the washout period, subjects were once again exposed to a 30° visuomotor rotation for 60 epochs (Exposure 2).

We quantified savings in a manner consistent with Haith et al., 2015. First, we fit a two-parameter exponential function to the reach angle during Exposures 1 and 2 (the third parameter was constrained so that the exponential curve started at the reach angle measured prior to perturbation onset). Second, we also tested for differences in the initial response to the perturbation across each exposure. To do this, we calculated the difference between reach angles during Exposures 1 and 2 (Figure 8A&B, bottom row). We then calculated the difference in reach angle between the five epochs preceding and five epochs following rotation onset. Differences between these two savings indicators (rate and early learning) were tested with a mixed ANOVA, to determine how adaptation differed across each perturbation exposure (within-subject factor) in Exp. 4 and Haith et al. (between-subject factor). Statistically significant interaction effects were followed by one-way repeated-measures ANOVA (testing the simple main effect of exposure number). Results are shown in Figure 8C.

Experiment 5

Lerner et al., 2020 demonstrated that anterograde interference slows the rate of learning after 5 min (and also 1 hr), but dissipates over time and is nearly gone after 24 hr. Here, we wondered whether this reduction in learning rate could be driven, at least in part, by impairments in implicit learning. Because Lerner et al., 2020 did not constrain preparation time, one would expect that participants used both implicit and explicit learning processes. In Experiments 2–4, we isolated the implicit component of adaptation by limiting reaction time. We used the same technique to limit reaction time in Experiment 5. The experiment paradigm is described in the Visuomotor rotation section above, except that we used eight adaptation targets rather than four, to match the protocol used by Lerner et al., 2020.

The perturbation schedule is shown in Figure 9A&B at top. We recruited two groups of participants: a 5 min group (n = 9) and a 24 hr group (n = 11). After familiarization, all participants were exposed to a baseline period of null trials lasting five epochs (one epoch = 8 trials). Next, participants were exposed to a 30° visuomotor rotation for 80 epochs (Exposure A). At this point, the first session ended. After a break, participants returned to the task. For the 5 min group, the second session occurred on the same day. For the 24 hr group, participants returned the following day for the second session. At the start of the second session, participants were exposed to a 30° visuomotor rotation (Exposure B) whose orientation was opposite to that of Exposure A. This rotation lasted for 80 epochs.

We analyzed the rate of learning by fitting a two-parameter exponential function to the learning curve during Exposures A and B (the third parameter was used to constrain the exponential curve to start from the behavior on the first epoch of the rotation). For each participant, we computed an interference metric by dividing the exponential rate of learning during Exposure B, by that measured during Exposure A (Figure 9C, blue). We tested how interference was impacted by passage of time between Exposures A and B (5 min or 24 hr) as well as by the preparation time condition (no limit in Lerner and Albert et al., limit in Exp. 5) using a two-way ANOVA. In addition, we calculated each exponential’s x-intercept (i.e. zero-crossing), which we used in the control analysis described below.
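In code, the interference metric and the x-intercept of the fitted exponential could be computed as below; the exponential form y(n) = y_ss + (y0 − y_ss)·exp(−rate·n) is our assumption about the parameterization:

```python
import numpy as np

def interference_metric(rate_A, rate_B):
    """Ratio of exponential learning rates: Exposure B over Exposure A.
    Values below 1 indicate anterograde interference."""
    return rate_B / rate_A

def x_intercept(y0, y_ss, rate):
    """Epoch at which y(n) = y_ss + (y0 - y_ss) * exp(-rate * n)
    crosses zero; requires y0 and y_ss to have opposite signs."""
    return np.log((y_ss - y0) / y_ss) / rate
```

During Exposure B the reach angle starts on the opposite side of zero (an aftereffect of the prior rotation), so the error only matches the initial-exposure condition after this zero-crossing.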

One potential issue with this technique is that it does not consider differences in the initial errors experienced during re-exposure to the rotation (Figure 9A&B, bottom row), which could alter sensitivity to error (Albert et al., 2021; Kim et al., 2018; Marko et al., 2012). To examine this, we recalculated the learning rate during the second rotation exposure using only the trials after the zero-crossing in reach angle (i.e. the point at which the error again reached 30°, as in the initial exposure). To estimate this zero-crossing point, we used the exponential model's x-intercept as described above. We then used a two-way ANOVA (same as above) to test how this alternate interference metric was altered by the passage of time (between exposures) and by preparation time.

Lerner et al., 2020

Recently, Lerner et al., 2020 demonstrated that slowing of learning in anterograde interference paradigms is caused by reductions in sensitivity to error. Here, we re-analyze some of these data.

Lerner et al., 2020 studied how learning one visuomotor rotation altered adaptation to an opposing rotation when these exposures were separated by time periods ranging from 5 min to 24 hr. Here, we focused solely on the 5 min group (n = 16) and the 24 hr group (n = 18). A full methodological description of this experiment is provided in the earlier manuscript. Briefly, participants gripped a joystick with the thumb and index finger which controlled an on-screen cursor. Their arm was obscured from view using a screen. Targets were presented in eight different positions equally spaced at 45° intervals around a computer monitor. Each of these eight targets was visited once (random order) in epochs of eight trials. On each trial, participants were instructed to shoot the cursor through the target.

All experiment groups started with a null period of 11 epochs (one epoch = 8 trials). This was followed by a 30° visuomotor rotation for 66 epochs (Exposure A). At this point, the first session ended. After a break, participants returned to the task. For the 5 min group, the second session occurred on the same day. For the 24 hr group, participants returned the following day for the second session. At the start of the second session, participants were immediately exposed to a 30° visuomotor rotation (Exposure B) whose orientation was opposite to that of Exposure A. This rotation lasted for 66 epochs. Short set breaks were taken every 11 epochs during Exposures A and B.

Here, as in the earlier work (Lerner et al., 2020), we analyzed the rate of learning by fitting a two-parameter exponential function to the learning curve during Exposures A and B (the third parameter was used to constrain the exponential curve to start from the behavior on the first epoch of the rotation). For each participant we computed an interference metric by dividing the exponential rate of learning during Exposure B, by that measured during Exposure A (Figure 9C, green). In addition, we also analyzed the reaction time of the participants during Exposure B. The mean reaction time over the first perturbation block is shown in Figure 9A&B (middle, green traces).

Mazzoni and Krakauer, 2006

In this study, subjects sat in a chair with their arm supported on a tripod. An infrared marker was attached to a ring placed on the participant’s index finger. The hand was held closed with surgical tape. Participants moved an on-screen cursor by rotating their hand around their wrist. These rotations were tracked with the infrared marker. On each trial, participants were instructed to make straight out-and-back movements of a cursor through 1 of 8 targets, spaced evenly in 45° intervals. A 2.2 cm marker translation was required to reach each target. Note that all eight targets remained visible throughout the task.

Two groups of participants were tested with a 45° visuomotor rotation. In the no-strategy group, participants adapted as usual, without any instructions. After an initial null period, the rotation was turned on (Figure 10A, blue, adaptation). After about 60 trials of adaptation, the rotation was turned off and participants performed another 60 washout trials (Figure 10A, blue, washout). The break between the adaptation and washout periods in Figure 10A, no-strategy, is simply for alignment purposes.

The strategy group followed a different protocol. After the null period, participants reached for two movements under the rotation (Figure 10A, 2 cycles no instruction, red). At this point, the subjects were told that they made two errors, and that they could counter the error by reaching to the neighboring clockwise target (all targets always remained onscreen). After the instruction, participants immediately reduced their error to zero (point labeled instruction in red, Figure 10A). They continued to aim to the neighboring target throughout the adaptation period. Note that the directional errors became negative. This convention indicates overcompensation for the rotation; that is, participants altered their hand angle by more than their strategic aim of 45°. Toward the end of the adaptation period, participants were told to stop re-aiming, and direct their movement back to the original target (Figure 10A, do not aim, rotation on). Then after several movements, the rotation was turned off as participants continued to aim for the original target during the washout period.

In Figure 10A we show the error between the primary target (target 1) and cursor during the entire experiment. In Figure 10B, we show the error between the aimed target (target 2) and cursor during the adaptation period. Note that the aimed and primary targets are related by 45° when the strategy group is re-aiming. We observed that initial adaptation rates (over the first 24 movements, gray area in Figure 10B) were similar, but the no-strategy group ultimately achieved greater implicit adaptation. These data were all obtained by using the GRABIT routine in MATLAB 2018a to extract the mean (and standard error of the mean) performance in each group from the figures shown in the primary article.

We fit 1 of 3 models to the direction error during the adaptation period shown in Figure 10B. In all cases, we modeled explicit re-aiming in the strategy group as an aim sequence that started at zero during the initial two movements, and then 45° for the rest of the adaptation period (i.e. after the instruction to re-aim). In the no-strategy group, we modeled explicit learning as an aim sequence that remained at zero throughout the adaptation period.

In Figure 10D, we modeled implicit learning based on the state-space model in Equation 3 and the target error term defined in Equation 1. This target error was defined as the difference between the primary target (i.e. the target associated with task outcome) and the cursor. In Figure 10E, we modeled implicit learning based on the state-space model in Equation 3 and the aim-cursor error defined in Equation 2. This aim-cursor error was defined as the difference between the aimed target (either 0° or 45°) and the cursor. Figure 10F shows our third and final model. In this model, implicit learning in the strategy group was modeled using the dual-error system shown in Equation 6. That is, there were two implicit modules, one which responded to the target errors as in Figure 10D, and the other which responded to aim-cursor errors as in Figure 10E. The evolution of these errors is shown in Figure 10G. In the no-strategy group, we modeled implicit learning based on the error between the primary target and cursor alone.

Each model in Figure 10D–F was fit in an identical manner. We fit the implicit retention factor and implicit error sensitivity to minimize squared error according to:

$$\theta_{\mathrm{fit}} = \underset{\theta}{\arg\min} \sum_{n=1}^{N} \left( y_{\text{strategy}}(n) - \hat{y}_{\text{strategy}}(n) \right)^{2} + \left( y_{\text{no-strategy}}(n) - \hat{y}_{\text{no-strategy}}(n) \right)^{2} \qquad (11)$$

In other words, we minimized the sum of squared error between our model fit and the observed behavior across the strategy and no-strategy groups in Figure 10B. Therefore, we constrained each group to have the same implicit learning parameters. In the case of our dual-error model in Figure 10F, we assumed that each implicit module also possessed the same retention and error sensitivity. In sum, all model fits had two free parameters (error sensitivity and retention) which were assumed to be identical independent of instruction. This fit was performed using fmincon in MATLAB R2018a. The predicted behavior is shown in Figure 10D–F at bottom. For our best model (Figure 10F), the model behavior is also overlaid in Figure 10B.
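A minimal sketch of this shared-parameter fit (here with scipy.optimize.minimize standing in for MATLAB's fmincon). The state update follows Equation 3 driven by the target-error term of Equation 1; for simplicity, the sketch compares the simulated implicit state directly to the measured implicit learning curves, with aim sequences supplied as in the text (0° throughout for the no-strategy group; 0° for two trials, then 45°, for the strategy group):

```python
import numpy as np
from scipy.optimize import minimize

def simulate_implicit(a, b, aim, rotation=45.0):
    """Implicit state driven by target error: e(n) = rotation - x(n) - aim(n)."""
    x = np.zeros(len(aim) + 1)
    for n in range(len(aim)):
        x[n + 1] = a * x[n] + b * (rotation - x[n] - aim[n])
    return x[1:]

def fit_shared_params(aim_strategy, y_strategy, aim_none, y_none):
    """Equation 11: a single (a, b) pair minimizes the summed squared
    error across the strategy and no-strategy groups."""
    def sse(theta):
        a, b = theta
        err_s = y_strategy - simulate_implicit(a, b, aim_strategy)
        err_n = y_none - simulate_implicit(a, b, aim_none)
        return np.sum(err_s ** 2) + np.sum(err_n ** 2)

    res = minimize(sse, x0=[0.95, 0.1],
                   bounds=[(0.5, 1.0), (1e-3, 1.0)])
    return res.x  # fitted (retention, error sensitivity)
```

Constraining both groups to one (a, b) pair is the key feature: any difference in predicted behavior then comes from the aim sequences alone.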

Taylor and Ivry, 2011

In Figure 10I, we show data collected and originally reported by Taylor and Ivry, 2011. In this experiment, participants moved their arm at least 10 cm toward 1 of 8 targets that were pseudorandomly arranged in cycles of eight trials. Only endpoint feedback of the cursor position was provided. The hand was slid along the surface of a table while the position of the index finger was tracked with a sensor. After an initial familiarization block (five cycles), participants were trained how to explicitly rotate their reach angle clockwise by 45°. That is, on each trial they were shown veridical feedback of their hand position, but were told to reach to a neighboring target that was 45° away from the primary illuminated target. After this training and another null period, the adaptation period started where the cursor position was rotated by 45° in the counterclockwise direction for 40 cycles. The first two movements in the rotation exhibited large errors (Figure 10I, 2 movements no instruction). As in Mazzoni and Krakauer, 2006, the participants were then instructed that they could minimize their error by adopting the aiming strategy they learned at the start of the experiment. Using this strategy, participants immediately reduced their direction error to zero.

Here, we report data from two critical groups in this experiment. In the ‘instruction with target’ group (Figure 10I, black, n = 10) participants were shown the neighboring targets during the adaptation period to assist their re-aiming. However, in the ‘instruction without target’ group (Figure 10I, yellow, n = 10) participants were only shown the primary target; the neighboring targets did not appear on the screen to help guide re-aiming. Only participants in the ‘instruction with target’ group exhibited the drift reported by Mazzoni and Krakauer, 2006. However, both groups exhibited an implicit aftereffect (Figure 10I, aftereffect; first cycle of washout period as reported in Figure 4C of the original manuscript Taylor and Ivry, 2011).

Data were extracted from the primary figures in Taylor and Ivry, 2011 using Adobe Illustrator CS6. We used the means and standard deviations for our statistical tests on the implicit aftereffect in Figure 10I.

Measuring properties of implicit learning

Many of our model’s predictions depended on estimates of the implicit retention factor and error sensitivity. We obtained these using the Limit PT groups in Experiments 2 and 3. To calculate the retention factor for each participant, we focused on the no feedback period at the end of Experiment 2 (Figure 3E, no feedback) and the no aiming period at the end of Experiment 3 (Figure 3L, no aiming). During these periods, trial errors were hidden, thus causing decay of the learned behavior. The rate of this decay is governed by the implicit retention factor according to:

$$y(n) = a_{i}^{\,n}\, y_{ss} \qquad (12)$$

Here, y(n) refers to the reach angle on trial n of the no feedback period, and yss corresponds to the asymptotic behavior prior to the no feedback period. We used fmincon in MATLAB R2018a to identify the retention factor which minimized the difference between the decay predicted by Equation 12 and that measured during the no feedback period. For Experiment 2, we obtained an epoch-by-epoch retention factor of 0.943 ± 0.011 (mean ± SEM). Note that an epoch consisted of four trials, so this corresponded to a trial-by-trial retention factor of 0.985. When modeling Neville and Cressman, 2018 (Figure 1), we cubed this trial-by-trial term because each cycle consisted of 3 different targets (final retention factor of 0.9565). For Experiment 3, we obtained an epoch-by-epoch retention factor of 0.899 (trial-by-trial: 0.9738).
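The retention fit can be sketched as follows; a simple grid search stands in for fmincon here, since Equation 12 has a single free parameter:

```python
import numpy as np

def fit_retention(y_decay, y_ss):
    """Find the retention factor a_i minimizing the squared error
    between Equation 12, y(n) = a_i**n * y_ss, and the measured
    decay during the no feedback (or no aiming) period."""
    n = np.arange(1, len(y_decay) + 1)
    grid = np.linspace(0.5, 1.0, 5001)  # 1e-4 resolution
    sse = [np.sum((y_decay - a ** n * y_ss) ** 2) for a in grid]
    return grid[int(np.argmin(sse))]
```

Converting an epoch-by-epoch factor to a trial-by-trial factor is a root: for four-trial epochs, 0.943 ** (1/4) ≈ 0.985, as in the text.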

Next, we measured implicit error sensitivity in the Limit PT group during rotation period trials. To measure implicit error sensitivity on each trial, we used its empirical definition:

$$b(n_{1}) = \frac{y(n_{2}) - a^{\,n_{2}-n_{1}}\, y(n_{1})}{e(n_{1})} \qquad (13)$$

Equation 13 determines the sensitivity to an error experienced on trial n1 when the participant visited a particular target T. This error sensitivity is equal to the change in behavior between two consecutive visits to target T, on trials n1 and n2, divided by the error that had been experienced on trial n1. In the numerator, we account for decay in behavior by multiplying the behavior on trial n1 by a decay factor that accounted for the number of intervening trials between trials n1 and n2. For each target, we used the retention factor estimated for that target with Equation 12.

Using this procedure, we calculated implicit error sensitivity as a function of trial in Experiment 2. To remove any potential outliers, we identified error sensitivity estimates that deviated from the population median by over three median absolute deviations within windows of 10 epochs. As reported by Albert et al., 2021, implicit error sensitivity increased over trials. Equations (4) and (5) require the steady-state implicit error sensitivity observed during asymptotic performance. To estimate this value, we averaged our trial-by-trial error sensitivity measurements over the last 5 epochs of the perturbation. This yielded an implicit error sensitivity of 0.346 ± 0.071 (mean ± SEM).
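Equation 13 and the outlier step might look like this in code; the windowing details (a window centered on each estimate) are our assumption:

```python
import numpy as np

def error_sensitivity(y, e, visits, a):
    """Equation 13 applied to consecutive visits to one target.

    y: reach angle per trial; e: error per trial; visits: trial indices
    at which this target was visited; a: trial-by-trial retention factor.
    """
    b = []
    for n1, n2 in zip(visits[:-1], visits[1:]):
        b.append((y[n2] - a ** (n2 - n1) * y[n1]) / e[n1])
    return np.array(b)

def remove_outliers(b, window=10, n_mad=3.0):
    """Drop estimates deviating from the local median by more than
    n_mad median absolute deviations."""
    keep = np.ones(len(b), dtype=bool)
    for i in range(len(b)):
        lo, hi = max(0, i - window // 2), min(len(b), i + window // 2)
        med = np.median(b[lo:hi])
        mad = np.median(np.abs(b[lo:hi] - med))
        if mad > 0 and abs(b[i] - med) > n_mad * mad:
            keep[i] = False
    return b[keep]
```

With behavior generated exactly by the state-space model, Equation 13 recovers the true error sensitivity on every trial.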

To corroborate this value, we compared our estimate to data reported in Kim et al., 2018. There, error sensitivity was reported as a function of error size across various experiments in Figure 3a. These data are reproduced in Figure 3—figure supplement 1C. Note that error sensitivity increases as errors get smaller. For our analyses, we required steady-state error sensitivity, which is the error sensitivity reached at the end of the training period. Figure 3—figure supplement 1B shows how error in the PT-Limit group changed with adaptation. The terminal error (horizontal black line) corresponding to the steady-state condition was equal to about 7.6° (Figure 3—figure supplement 1B). For this error, error sensitivity fell somewhere between 0.25 and 0.35 (see Figure 3—figure supplement 1C) according to Experiments 1 and 2 reported by Kim et al., 2018. Thus, our value of 0.346 was in agreement with these data.

Finally, we conducted a similar analysis in Experiment 3. However, trial-by-trial behavior was more variable and overall adaptation was lower in this laptop-based experiment. Thus, to obtain a more stable steady-state implicit error sensitivity estimate, we averaged error sensitivity over the asymptotic period apparent in Figure 3—figure supplement 1D (cycles 37–60). The average error sensitivity was approximately 0.193 (Figure 3—figure supplement 1D). To corroborate this value, we calculated the terminal error in the Limit PT group. This value was approximately 13.1° (Figure 3—figure supplement 1E). This error corresponded to an error sensitivity between about 0.13 and 0.22 (Figure 3—figure supplement 1F) according to Kim et al., 2018. Thus, our Limit PT error sensitivity estimate 0.193 was within this range.

Acknowledgements

This work was supported by grants from the National Institutes of Health (R01NS078311, F32NS095706), and the National Science Foundation (CNS-1714623).

Appendix 1

In the main text, the competition and independence models predict steady-state implicit learning, xiss. But each theory applies to the entire timecourse, not solely asymptotic learning. Here, we will derive general expressions that apply on each trial k. We will then show that the scaling, saturation, and non-monotonic phenotypes in Figure 1 occur throughout the entire implicit learning timecourse (in the competition model).

1.1 General model derivation

Consider the state-space model where the implicit system adapts to target error driven by the rotation r, as in the competition model (this is the competition model simulated in Figures 6 and 7):

$$x_{i}(n) = a_{i}\, x_{i}(n-1) + b_{i} \left( r - x_{i}(n-1) - x_{e}(n-1) \right) \qquad (A1.1)$$

Recall that ai and bi are implicit retention and error sensitivity. The xi(n) and xe(n) terms represent implicit and explicit learning on a given trial. This equation can be rewritten recursively to represent xi(n) with respect to all prior trials:

$$x_{i}(n) = (a_{i} - b_{i})^{n-1}\, x_{i}(1) + b_{i} \sum_{k=1}^{n-1} (a_{i} - b_{i})^{n-k-1} \left( r - x_{e}(k) \right) \qquad (A1.2)$$

In the case where implicit learning starts at zero (naïve learner), this equation simplifies to:

$$x_{i}(n) = b_{i} \sum_{k=1}^{n-1} (a_{i} - b_{i})^{n-k-1} \left( r - x_{e}(k) \right) \qquad (A1.3)$$

Equation A1.3 shows how the implicit system on trial n is driven by the explicit system on all prior trials, the rotation, and the implicit error sensitivity and retention factor. An excellent approximation to this equation can be obtained by replacing xe(k) with the average xe across all prior trials. This approximation's accuracy can be seen in the red and cyan lines in Figure 1—figure supplement 3A (cyan shows true implicit learning in Equation A1.3; red shows the approximation in A1.4 below). This approximation yields:

$$x_{i}(n) \approx b_{i}\, \frac{1 - (a_{i} - b_{i})^{n-1}}{1 - (a_{i} - b_{i})} \left( r - x_{e}^{avg} \right) \qquad (A1.4)$$

Equation A1.4 is analogous to the competition model in Equation 4 in the main text. It states that implicit learning on a given trial n is approximately proportional to (r – xeavg), the difference between the rotation and the average explicit strategy used by the participant. Note that in the limit as n goes to infinity (i.e., steady-state), we obtain the competition model in Equation 4. The implication of Equation A1.4 is that a linear competition between implicit learning and explicit learning can be observed throughout the adaptation process, not only in the asymptotic learning phase. Thus, the scaling, saturation, and non-monotonic phenotypes we describe in Figure 1, can be observed throughout the adaptation process.
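The recursion in A1.1 and the approximation in A1.4 can be compared directly in a short simulation (the parameter values below are illustrative, not fitted values):

```python
import numpy as np

def implicit_exact(a_i, b_i, r, x_e):
    """Iterate Equation A1.1; x[m] is x_i(m+1) for a naive learner
    (x_i(1) = 0), with x_e[n-1] the explicit strategy on trial n."""
    x = np.zeros(len(x_e))
    for n in range(1, len(x_e)):
        x[n] = a_i * x[n - 1] + b_i * (r - x[n - 1] - x_e[n - 1])
    return x

def implicit_approx(a_i, b_i, r, xe_avg, n):
    """Equation A1.4 with x_e(k) replaced by its average."""
    c = a_i - b_i
    return b_i * (1 - c ** (n - 1)) / (1 - c) * (r - xe_avg)
```

For a constant explicit strategy the approximation is exact, and as n grows both expressions converge to the competition model, b_i/(1 − a_i + b_i) · (r − x_e).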

Finally, note that the derivations above apply for the independence (SPE) model in Equation 5. For this model, simply set all xe terms in A1.1-A1.4 to zero.

1.2 Scaling, saturation, and non-monotonic phenotypes across the entire implicit learning timecourse

Here we illustrate the scaling, saturation, and non-monotonic implicit learning phenotypes produced by the competition model. In Figure 1—figure supplement 3A, we simulate the implicit and explicit response to a 30° rotation using A1.1 and its explicit learning analogue. This means that the implicit and explicit systems are driven by target error. Note that the red line shows the implicit approximation in A1.4 above, the blue line shows exact implicit learning in A1.3 above, and the magenta line shows explicit strategy. In this simulation, we used the ai, bi, and ae parameters identified in our model fit to Haith et al., 2015, as in the competition map shown in Figure 7.

The scaling, saturation, and nonmonotonic phenotypes in Figure 1 are due to how the explicit system responds to changes in rotation size. When strategy increases more slowly than the rotation, implicit learning will increase: the scaling phenotype. When it increases at the same rate as the rotation, implicit learning will stay the same: the saturation phenotype. When it increases more rapidly than the rotation, implicit learning will decrease as the rotation increases: the nonmonotonic phenotype. All these scenarios are depicted in Figure 1—figure supplement 1.

Thus, to produce these phenotypes, we must change the way explicit strategy ‘responds’ to error: least vigorously in the scaling phenotype, and most vigorously in the nonmonotonic phenotype. To do this, we simulated implicit and explicit responses to 30° and 45° rotations and tuned explicit error sensitivity: be. For all 30° simulations, be remained 0.15. To obtain the scaling implicit phenotype, explicit error sensitivity remained 0.15 in the 45° simulation. To obtain the saturation implicit phenotype, we increased be to 0.435 in the 45° condition (i.e., the explicit system became more reactive to the higher rotation). Lastly, to obtain the nonmonotonic phenotype, we increased the explicit error sensitivity dramatically, to 0.93. In Figure 1—figure supplement 3B and C, we calculate implicit and explicit learning at various points across these simulations: from left to right, 5, 10, 20, 40, and 150 rotation cycles. The explicit responses are shown in inset B. Implicit responses are shown in inset A.
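The scaling and nonmonotonic phenotypes can be reproduced with a short simulation of A1.1 and its explicit analogue. The retention and error-sensitivity values below are illustrative placeholders, not the parameters fitted to Haith et al., 2015, so the exact be at which saturation occurs will differ from the values quoted in the text:

```python
import numpy as np

def simulate_competition(a_i, b_i, a_e, b_e, r, n_trials=300):
    """Coupled implicit and explicit states, both driven by the same
    target error (Equation A1.1 and its explicit analogue)."""
    x_i = x_e = 0.0
    for _ in range(n_trials):
        err = r - x_i - x_e
        x_i, x_e = a_i * x_i + b_i * err, a_e * x_e + b_e * err
    return x_i, x_e

# Same explicit error sensitivity at 30 and 45 degrees -> scaling.
xi_30, _ = simulate_competition(0.98, 0.2, 0.99, 0.15, 30.0)
xi_45_low, _ = simulate_competition(0.98, 0.2, 0.99, 0.15, 45.0)
# Far more vigorous strategy at 45 degrees -> nonmonotonic phenotype.
xi_45_high, _ = simulate_competition(0.98, 0.2, 0.99, 0.93, 45.0)
```

With a fixed be, implicit learning grows with the rotation; with a sufficiently larger be at 45°, implicit learning falls below its 30° level despite the larger rotation.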

There are several critical things to note. First, at all time points, changes in explicit error sensitivity produce three distinct levels: low explicit learning when be = 0.15, medium explicit learning when be = 0.435, and high explicit learning when be = 0.93. These changes have dramatic effects on the implicit system. For the low explicit strategy level, the implicit system scales when the rotation increases from 30° to 45°. For the medium explicit strategy level, the implicit system remains the same when the rotation increases from 30° to 45°. For the high explicit strategy level, implicit learning decreases when the rotation increases from 30° to 45°. Most critically, all three phenotypes occur at all phases of the implicit learning time course: as early as cycle 5, and as late as cycle 150 (compare each set of bars in Figure 1—figure supplement 3B). The only thing that changes is the total difference (i.e. effect size) between 30° and 45° implicit learning, which is smallest and hardest to detect early in learning (e.g. cycle 5), and largest (easiest to detect) when implicit learning reaches its steady-state (e.g. cycle 150).

What this means is that the competition model does not solely pertain to asymptotic learning. The same phenomena that occur due to implicit-explicit competition, appear throughout all phases in the learning process. To simplify matters we have chosen in our main text to focus on steady-state learning, where the mathematical relationship between steady-state implicit and explicit learning converges and is easy to test (i.e. the competition equation).

Appendix 2

In Figure 1H–L, we applied the competition and independence theories to the stepwise group in Exp. 1. While these models do not solely apply to asymptotic learning (see Appendix 1), here we analyze whether the exposures during B1 (15°), B2 (30°), B3 (45°), and B4 (60°), lasted long enough to achieve implicit steady-state adaptation. That is, the scaling phenotype we noted in the implicit response could be influenced by two factors: (1) rotation size, and (2) exposure time.

In the stepwise condition in Exp. 1 we observed that implicit adaptation increased across the 15°, 30°, 45°, and 60° rotation blocks (Figure 1—figure supplement 4A; rm-ANOVA, F(3,38)=99.9, p < 0.001, ηp2=0.735). Were these increases due to changes in the rotation’s magnitude, or to additional accumulation in the implicit response over time that had not yet saturated? Comparison with similar data sets suggests that implicit learning likely saturated during each block in the Exp. 1 stepwise group. Here, we will describe each data set in turn.

Most importantly, the abrupt rotation group provides a way to verify the timecourse of implicit learning in Exp. 1. Total implicit learning was assayed four times in the abrupt group. Each probe overlapped with a stepwise rotation period. For example, the initial probe in the abrupt condition occurred during B1, when implicit learning was measured in the 15° stepwise rotation. Similarly, the B2, B3, and B4 implicit measurements in the abrupt group overlapped with the 30°, 45°, and 60° rotation periods in the stepwise group. Implicit learning measures across all four abrupt periods are shown in Figure 1—figure supplement 4B. We tested whether total implicit learning varied across the four blocks in the abrupt condition. We did not detect a statistically significant effect of block number on implicit learning (rm-ANOVA, F(3,105)=2.21, p = 0.091, ηp2=0.059). The same was true when we compared solely the first and last blocks (paired t-test, t(35)=1.53, p = 0.134). This indicated that in the 60° rotation condition there was little to no change in implicit learning following B1. This suggests that a single rotation block was sufficient to achieve steady-state adaptation even in the largest rotation condition tested: 60°.

Does this generalize to smaller rotation sizes? While we did not measure implicit learning across multiple blocks in the 15°, 30°, and 45° stepwise conditions, we used a very similar experimental protocol in Salomonczyk et al., 2011: three learning targets, separated by 45°, in an upper triangular wedge (the exact same conditions as in Exp. 1). Note that these data provide another example where the implicit response scales with rotation size during a stepwise rotation sequence (Figure 1—figure supplement 4C). That is, when we increased the rotation in a stepwise manner across three blocks (30°, then 50°, and lastly 70°), we observed strong increases in asymptotic implicit learning (p < 0.001). In this study, we exposed another participant group to a 30° rotation and measured implicit learning across three consecutive blocks (Figure 1—figure supplement 4D). Critically, we did not detect any change in implicit learning across the three learning periods (all contrasts, p > 0.05). This matched our 60° rotation analysis in the abrupt group in Exp. 1. Thus, for both 30° and 60° rotations, a single block provides enough time to reach steady-state learning.

Lastly, we did not measure extended exposures to a 15° or 45° rotation, but Neville and Cressman, 2018 tested prolonged exposure to 20° and 40° rotations in a similar paradigm (three targets, separated by 45°, in an upper triangular wedge). No-instruction group implicit learning is shown in Figure 1—figure supplement 4E and F. We do not have access to the raw data, and so cannot run a repeated-measures ANOVA to test whether these groups exhibited a block-by-block change in implicit learning. However, comparing the first and third rotation blocks, we can calculate a maximum 0.7°–1.4° change in implicit learning. This represents a change of 7.6–16.5%, which is dwarfed by the 313% increase in implicit learning exhibited from the 15° to the 60° blocks in the Exp. 1 stepwise group.

In sum, the 60° implicit learning timecourse we measured in the abrupt group, the 30° implicit learning timecourse measured in Salomonczyk et al., 2011, and the 20° and 40° groups in Neville and Cressman, 2018, suggest that the implicit learning system exhibits little to no increase beyond the initial learning block. Each study was highly comparable, testing participants with three learning targets located in an upper triangular wedge spanning 45–135° in the workspace. We estimate that any changes in implicit learning due to exposure time alone were limited to about 1.4°, which is an order of magnitude smaller than that observed across the stepwise blocks in Exp. 1 (14.5°). Therefore, our Exp. 1 analyses in Figures 1, 2 and 4 likely reflect steady-state implicit learning, or very nearly so.

Appendix 3

In Section 1.1, we note a potential concern in our Figure 1 analysis, where our implicit and explicit learning measures were not independent; explicit strategy was estimated as total adaptation minus implicit learning: xess = xTss - xiss. Thus, when total adaptation is constant, implicit and explicit learning will exhibit a negative correlation simply due to the dependencies in our implicit and explicit learning measures. At a glance, this negative correlation may appear similar to the negative correlation between implicit and explicit learning embedded within the competition equation: xiss = pi(r - xess). Thus, it is reasonable to question whether the correspondence between our empirical data and the competition model in Figure 1 is unfairly biased by the intrinsic implicit-explicit dependencies in our empirical measures.

In short, suppose that two conditions, A and B, exhibit an increase in implicit learning. Using our explicit learning measures, we would expect a decrease in explicit strategy. This negative correspondence between implicit learning and explicit strategy may appear trivially predicted by the competition model. This, however, is not the case. For this trivial relationship to occur, total adaptation must be the same in A and B. In Figure 1, we calculate implicit and explicit learning measures across changes in rotation size, over which total adaptation varies in proportion to rotation magnitude. Thus, our empirical implicit and explicit learning measures will not necessarily match the competition model.

To explain this, let us consider a hypothetical scenario similar to the data recorded by Neville and Cressman. Suppose total adaptation and implicit learning are measured over three rotation sizes (20°, 40°, and 60°): total learning (19°, 36°, 53°), implicit learning (10°, 10°, 10°). As expected, total adaptation increases with rotation size. Implicit learning remained constant, as in the Neville and Cressman data set. Next, suppose we estimate explicit strategy by subtracting implicit learning from total adaptation. This will yield the explicit strategy estimates: (9°, 26°, 43°). Thus, in this example, our implicit and explicit learning measures are dependent. How well does the competition model match these data?

To answer this question, we can calculate the implicit learning gain, pi, for each rotation size: pi = xiss/(r – xess). For the example above, the implicit learning gain equals (0.909, 0.714, 0.588). In other words, as the rotation increases, the implicit learning gain decreases by approximately 35.3%. These variations in pi will not produce a good match between data and model. To show this, we can estimate the single pi that minimizes the squared error between measured implicit learning and model-predicted implicit learning. In this example, the optimal pi value is 0.693. Using this gain, the model predicts (7.62°, 9.70°, 11.78°) of implicit learning across the three conditions. Thus, even though implicit learning remained constant in the ‘real’ data, the competition equation predicts a 54.5% increase in implicit learning across the three rotation sizes.
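The arithmetic in this example can be reproduced in a few lines (the rotation sizes and learning values below are the hypothetical ones stated above; this snippet is illustrative, not part of the original analysis):

```python
import numpy as np

# Hypothetical data (stated above): constant implicit learning across rotations
r = np.array([20.0, 40.0, 60.0])      # rotation sizes (deg)
x_i = np.array([10.0, 10.0, 10.0])    # measured implicit learning (deg)
x_T = np.array([19.0, 36.0, 53.0])    # total adaptation (deg)
x_e = x_T - x_i                        # estimated explicit strategy: (9, 26, 43)

# Per-condition implicit learning gain, p_i = x_i / (r - x_e)
drive = r - x_e                        # residual target error driving implicit learning
p_i_each = x_i / drive                 # -> (0.909, 0.714, 0.588)

# Single least-squares gain: minimize sum of (x_i - p_i * drive)^2
p_i_opt = np.sum(x_i * drive) / np.sum(drive ** 2)   # -> 0.693

# Model prediction with the one shared gain
x_i_pred = p_i_opt * drive             # -> (7.62, 9.70, 11.78)
```

Note that the least-squares gain has a closed form here because the model is linear in pi, so no iterative optimizer is needed.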

What does this mean? This hypothetical example demonstrates that calculating explicit learning by subtracting implicit learning from total adaptation will not automatically yield a good match between the empirical data and the competition model. Rather, the competition model will match the data only when the conditions tested in an experiment abide by the principle that pi is constant (or at least very similar) across experimental conditions (e.g. rotation sizes). The three data sets we analyze in Figure 1 must obey this property, and thus are intrinsically compatible with Equation 4.

Nevertheless, there is another way to corroborate the relationship between the model and data in Figure 1 that does not require using the implicit and explicit learning measures at the same time. By noting that xess = xTss – xiss, we can substitute this expression into Equation 4 to obtain a relationship between implicit learning and total adaptation: xiss = pi(1 – pi)–1(r – xTss). This equation is an alternate way to express the competition model that can be compared with the empirical data. Critically, here we compare xiss and xTss, which were directly measured on separate trials (they are not dependent in a statistical sense).
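As a quick consistency check (not part of the original analysis), one can verify numerically that the two forms of the competition model are algebraically identical for any choice of pi, rotation size, and explicit strategy:

```python
import numpy as np

p_i, r = 0.7, 30.0                    # arbitrary gain and rotation size
x_e = np.linspace(0.0, 20.0, 5)       # a range of explicit strategies (deg)

x_i = p_i * (r - x_e)                 # Equation 4: competition model
x_T = x_i + x_e                       # total adaptation = implicit + explicit
x_i_alt = p_i / (1.0 - p_i) * (r - x_T)   # substituted form using total adaptation

assert np.allclose(x_i, x_i_alt)      # the two forms agree exactly
```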

A simple thought experiment shows that when data do not agree with the competition model, xiss and xess can still show a correlation, but xiss and xTss will not (in general). Suppose three participants have 10°, 15°, and 5° of implicit learning, respectively, and all have 20° of total learning. Subtracting implicit learning from total learning yields estimated explicit strategies of 10°, 5°, and 15°, respectively. The competition model predicts that both the implicit-explicit and implicit-total pairs should show negative correlations. For these participants, however, the implicit-explicit correlation is –1, but the implicit-total correlation is 0, inconsistent with competition. That is, the implicit and explicit measures correlate spuriously, but implicit learning and total adaptation were sampled independently, yielding a robust way to test the competition theory.
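The correlations in this thought experiment can be computed directly (values as stated above; because total adaptation has zero variance across these participants, we report the implicit-total covariance rather than a correlation coefficient, which is undefined here):

```python
import numpy as np

x_i = np.array([10.0, 15.0, 5.0])     # implicit learning, three participants (deg)
x_T = np.array([20.0, 20.0, 20.0])    # identical total adaptation (deg)
x_e = x_T - x_i                       # estimated explicit strategy: (10, 5, 15)

# Implicit vs explicit: perfectly anti-correlated by construction
r_ie = np.corrcoef(x_i, x_e)[0, 1]    # -> -1.0

# Implicit vs total: total adaptation has zero variance, so the
# implicit-total covariance is exactly 0 (no linear relationship)
cov_iT = np.cov(x_i, x_T)[0, 1]       # -> 0.0
```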

Thus, in the main text we repeated our competition model analysis of Figure 1 using this alternate equation. We used the total adaptation measured in Exp. 1, Neville and Cressman, and Tsay et al., 2021a to predict implicit learning. To do this, we identified a pi value in each data set that minimized the squared error between measured and model-predicted implicit learning (in Neville and Cressman, the pi value was calculated across six conditions: three rotation sizes and two instruction types; in Exp. 1, across five conditions: all four rotation sizes in the stepwise group, as well as the 60° abrupt condition; in Tsay et al., across all four rotation sizes).

Our results are shown in Figure 1—figure supplement 2. The competition model predicted nearly identical implicit learning patterns when total adaptation was used as the independent variable (‘model-2’) and when explicit adaptation was used as the independent variable (‘model-1’). This control analysis shows that the close correspondence between measured and model-predicted behavior in Figure 1 is not due to the way we empirically operationalized our learning measures. Rather, the properties embedded in the competition model (with a constant implicit learning gain) were intrinsically present within the data.

Appendix 4

At various points in our work, we analyze how well the competition model, or independence model, can predict changes in implicit learning across various conditions. These models have one free parameter, the implicit learning gain: pi = bi(1 – ai + bi)–1. In each case, we assumed that the implicit learning gain is the same across experimental groups; that is, only one gain is selected and applied to all experimental conditions. We made this choice to minimize the number of free variables; allowing pi to vary across groups would permit arbitrarily precise matches to the data. Holding pi constant across conditions instead demonstrates that the same equation applies across conditions, limiting overfitting and increasing confidence in the model.

This assumption, however, may seem inappropriate given that the implicit learning gain depends on error sensitivity, which varies with conditions such as error size (Albert et al., 2021; Kim et al., 2018; Marko et al., 2012). It is important to note, however, that the implicit learning gain responds weakly to changes in error sensitivity. This insensitivity is due to the appearance of error sensitivity (bi) in both the gain’s numerator and denominator: pi = bi(1 – ai + bi)–1. For example, suppose that participants in Condition 1 have bi = 0.2, but in Condition 2 bi = 0.3, a 50% increase. For an implicit retention factor of 0.9565 (see Materials and methods: Measuring properties of implicit learning), the implicit learning gain would be pi = 0.821 in Condition 1 versus pi = 0.873 in Condition 2. Thus, even though implicit error sensitivity was 50% larger in Condition 2, the implicit learning gain would change by only 6.3%. In the more extreme case where implicit error sensitivity doubles (to 0.4) in Condition 2, there would still be only a 9.8% change in the implicit learning gain.
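These gain calculations can be reproduced in a few lines (retention factor and error sensitivities as stated in the text):

```python
# Implicit learning gain as a function of error sensitivity and retention
def implicit_gain(b_i, a_i=0.9565):
    return b_i / (1.0 - a_i + b_i)

p_1 = implicit_gain(0.2)   # Condition 1 -> ~0.821
p_2 = implicit_gain(0.3)   # Condition 2 (50% larger b_i) -> ~0.873
p_3 = implicit_gain(0.4)   # doubled b_i -> ~0.902

# Percent change in gain relative to Condition 1
change_50 = 100.0 * (p_2 / p_1 - 1.0)    # -> ~6.3%
change_100 = 100.0 * (p_3 / p_1 - 1.0)   # -> ~9.8%
```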

For these reasons, there are no physiologic changes in bi that could create the 46.2% increase in implicit learning in Figure 2C (ratio of no-instruction to instruction groups) or the 82.3% increase in learning in Figure 2G (ratio of stepwise to abrupt). To show this, we conducted sensitivity analyses. Imagine that implicit error sensitivity varies across two conditions: a reference condition and a test condition. We set the reference implicit error sensitivity to 0.1, 0.1625, 0.225, 0.2875, or 0.35. We chose these values because steady-state implicit error sensitivity determines steady-state implicit learning, which varied between 0.19 and 0.35 in Exps. 2 and 3 (see Figure 3—figure supplement 1). We then calculated how much pi (total learning) changes when this reference error sensitivity increases to a test error sensitivity; the test error sensitivity was capped at the physiologic, yet exceedingly unlikely, upper bound of 1. Because pi also depends on implicit retention, we also tested several possible retention values between 0.95 and 0.99. The results are depicted in Figure 2—figure supplement 2. Each column denotes the same analysis with a different retention factor. Each line denotes a different reference error sensitivity. The x-axis continuously varies the test error sensitivity.

For example, consider the point highlighted by the black arrow in the second column. This point shows that total implicit learning will increase by about 20% (y-axis) in a scenario where implicit retention = 0.96 and implicit error sensitivity increases from 0.1625 (i.e., the red line) in the reference condition to 0.8 (i.e., the x-axis value) in the test condition. This example is illustrative: here bi increases from 0.1625 to 0.8, a 392% increase, yet total implicit learning increases only about 20%. In sum, even extreme variations in implicit error sensitivity produce only small changes in total implicit learning.
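The numbers behind this illustrative point can be verified directly (retention and error sensitivity values as stated above):

```python
def implicit_gain(b_i, a_i):
    return b_i / (1.0 - a_i + b_i)

a_i = 0.96                              # retention factor in the second column
p_ref = implicit_gain(0.1625, a_i)      # reference condition (red line)
p_test = implicit_gain(0.8, a_i)        # test condition (x-axis value)

gain_change = 100.0 * (p_test / p_ref - 1.0)       # -> ~19% ("about 20%")
sensitivity_change = 100.0 * (0.8 / 0.1625 - 1.0)  # -> ~392%
```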

In Figure 2—figure supplement 2, we indicated the 46.2% and 82.3% changes from Figure 2C&G with dashed horizontal lines. Note that no curve crosses either level, even though the curves represent several-hundred-percent changes in implicit error sensitivity. Thus, variations in implicit error sensitivity could not have generated the changes in total implicit learning we examined in Figure 2.

On the other hand, consider the competition model, xiss = pi(r – xess). Distributing the pi term yields: xiss = pir – pixess. Note that pi varied between about 0.6 and 0.8 in our studies, meaning that implicit learning is very sensitive to competition: the implicit system will decrease by approximately 0.6–0.8° for each 1° increase in explicit strategy. For these reasons, the competition model far more readily produces large fluctuations in implicit learning, such as those observed in Figure 2.

Appendix 5

In Experiment 1 of the main text, we analyzed how abrupt versus gradual perturbation onset alters the steady-state distribution of implicit and explicit learning. In this appendix, we conduct a similar investigation using data collected by Saijo and Gomi, 2010. In the first section, we describe the results of our analysis. In the second, we describe the experimental paradigm and detail the methods used in our analysis.

5.1 Suppressing explicit strategy enhances procedural learning

In the main text (Figure 2D–G), we observed that gradual perturbations reduce explicit re-aiming relative to abrupt rotations. The competition theory predicts that these reductions in aiming will lead to increases in implicit adaptation. That is, suppose participants adapt with an explicit strategy (Figure 2—figure supplement 3B, aim, solid magenta line), but this strategy is then suppressed (Figure 2—figure supplement 3B, aim, dashed magenta line). Reductions in explicit strategy will increase the residual steady-state target error that drives implicit adaptation. Thus, Equation 4 predicts that suppressing explicit aiming will increase implicit learning (Figure 2—figure supplement 3B, H2, right, compare dashed blue and solid black implicit lines). However, a model where the implicit system responds to SPEs (Equation 5) does not predict any change in implicit learning.

We corroborated these predictions in Experiment 1 (Figure 2D–G). Here, we describe similar data collected by Saijo and Gomi, 2010. Participants were exposed to an abrupt (Figure 2—figure supplement 3A, abrupt) or gradual (Figure 2—figure supplement 3A, gradual) rotation. The abrupt perturbation was immediately set to 60°, whereas the gradual perturbation reached this magnitude over time, in six 10° steps; each step persisted for 36 trials (three cycles of 12 adaptation targets). Participants in the abrupt condition adapted rapidly to the perturbation, decreasing their target error to about 5° over roughly 10 perturbation cycles (Figure 2—figure supplement 3C, abrupt). In the gradual group, target errors remained small throughout training. Curiously, total adaptation was smaller in the gradual condition; these participants exhibited a terminal error nearly three times greater than that in the abrupt condition (Figure 2—figure supplement 3C, gradual).

At this point, the perturbation was abruptly removed, revealing large aftereffects in each group. Paradoxically, even though participants in the gradual group had adapted less completely to the rotation, they exhibited larger aftereffects (Figure 2—figure supplement 3F, data), which remained elevated throughout the entire washout period (Figure 2—figure supplement 3C, aftereffect).

Here, we demonstrate that these patterns are consistent with the competition theory, provided we make several assumptions. First, we assume that the initial washout cycle is driven by implicit learning. While this is likely not entirely true, we should note that when Morehead et al., 2015 measured aim reports during washout with a 45° rotation, participants appeared to immediately “turn off” their explicit strategy. Nevertheless, this assumption likely introduces some error in our modeling approach. Second, we assume that participants in the abrupt group used larger strategies than participants in the gradual group. This assumption seems reasonable, considering that we also measured larger strategies in the abrupt group in Experiment 1, as did Yin and Wei, 2020. Furthermore, as shown in Figure 2 in the primary manuscript (Saijo and Gomi, 2010), subjects in the abrupt group exhibited a sharp increase in reaction time upon rotation onset, consistent with explicit strategy use (Fernandez-Ruiz et al., 2011; McDougle and Taylor, 2019).

With these assumptions, we investigated how well the competition and independence models predicted the observed data. To simulate these models, we estimated the explicit strategies in each group. Neville and Cressman, 2018, measured the explicit response to a 60° rotation, demonstrating that participants re-aimed their hand by approximately 35°, consistently, over the adaptation period (see yellow points in Figure 2—figure supplement 3D and E, explicit aim). This estimate agreed well with the data; participants in the abrupt condition adapted 55° and exhibited an aftereffect of approximately 20° (Figure 2—figure supplement 3F, data, abrupt), suggesting about 35° of re-aiming. In the gradual group, we assumed that little to no re-aiming occurred. This also seemed consistent with the data; participants in the gradual group adapted approximately 40° and exhibited an aftereffect of approximately 38° (Figure 2—figure supplement 3F, data, gradual), suggesting <5° of re-aiming. Using these estimates, we constructed hypothetical explicit learning timecourses, as shown in Figure 2—figure supplement 3D and E, explicit aim.

We next used the state-space model to simulate the implicit learning timecourse, in cases where the implicit system learned solely due to SPE (Figure 2—figure supplement 3D, implicit angle) or solely due to target error (Figure 2—figure supplement 3E, implicit angle), under the assumption that participants in both the abrupt and gradual groups had the same implicit error sensitivity (bi) and retention factor (ai). The parameter sets that yielded the closest match to the measured behavior (Figure 2—figure supplement 3C) are shown in Figure 2—figure supplement 3D and E. In both cases, the models predicted that abrupt learning would be more complete than gradual learning (i.e. steady-state error is smaller in the abrupt condition).

However, the implicit states predicted by SPE learning and target error learning possessed a critical difference. According to Equation 4, the target error model predicted that the total extent of implicit learning would be suppressed by explicit strategy in the abrupt condition, yielding a smaller aftereffect (Figure 2—figure supplement 3E, implicit angle). However, according to Equation 5, the SPE model predicted that implicit learning should reach the same level in both conditions, yielding identical aftereffects (Figure 2—figure supplement 3D, implicit angle).

In summary, the differences in aftereffects across the abrupt and gradual conditions (Figure 2—figure supplement 3F, data) were accurately predicted by the competition theory (Figure 2—figure supplement 3F, competition), but not the independence equation (Figure 2—figure supplement 3F, indep.). Suppressing explicit strategy revealed competition between implicit and explicit systems which suggested that the implicit system predominantly responded to target error. Furthermore, it is interesting to note that these data were consistent with our observation that steady-state adaptation is greater when explicit learning is large and implicit adaptation is small (Figure 5G&H). These trends are consistent with competition (Figure 5D–F).

5.2 Methods used to analyze Saijo and Gomi, 2010

To understand how suppressing explicit strategy might alter implicit learning, we considered data collected by Saijo and Gomi, 2010. In one of their experiments, the authors tested how perturbation onset altered the adaptation process. Subjects were divided into an abrupt (n = 9) or gradual (n = 9) group, and reached to one of 12 targets, ordered pseudorandomly within each cycle of 12 trials. After a baseline period of 8 cycles, a visuomotor rotation was introduced. The perturbation period lasted 32 cycles, after which the perturbation was removed for 6 washout cycles. Participants were exposed either to an abrupt rotation, where the perturbation magnitude suddenly changed from 0° to 60°, or to a gradual condition, where the perturbation magnitude increased in smaller increments (10° increments that lasted three cycles each; Figure 2—figure supplement 3A).

Here, we considered why participants in the abrupt perturbation condition achieved greater adaptation during the rotation period (smaller error in Figure 2—figure supplement 3C) but exhibited a smaller aftereffect when the perturbation was removed. Our theory suggested that this may be due to competition. If the gradual condition suppressed explicit awareness of the rotation (Yin and Wei, 2020), then Equation 4 would predict increases in implicit learning which were observed in the aftereffects measured during the washout period (where explicit strategies were disengaged). However, the SPE model (Equation 5) would predict the same amount of implicit adaptation: the same aftereffect in each condition.

To test these hypotheses, we simulated implicit adaptation using the state-space model in Equation 3. In Figure 2—figure supplement 3D, we used an SPE as the error term in Equation 3. In Figure 2—figure supplement 3E, we used the target error as the error term in Equation 3. We assumed that the total reach angle was determined by the sum of implicit and explicit learning. However, the authors did not directly measure explicit strategies. Fortunately, Neville and Cressman, 2018 measured explicit strategies using inclusion and exclusion trials during a 60° abrupt rotation (yellow points, explicit aim in Figure 2—figure supplement 3D and E).

We used these measurements in our abrupt simulations. Neville and Cressman observed that explicit strategies rapidly reached 35.5° and remained stable during adaptation. To approximate these data, we simulated the abrupt explicit strategy using the exponential curve xe(t) = 35.5 – 10e^(–2t) (Figure 2—figure supplement 3D and E, explicit aim, black line). Note that the precise shape of this exponential curve is inconsequential to our analysis, apart from its saturation level. Outside of the rotation period, we assumed explicit strategy was zero. This is consistent with data from Morehead et al., 2015 showing almost immediate disengagement of the aiming strategy during washout (though this assumption likely introduced some error into our modeling approach). For the gradual condition, we assumed explicit strategy was zero throughout the entire experiment (Figure 2—figure supplement 3D, explicit aim, gradual), as the participants remained largely unaware of the rotation. This seemed consistent with the data; gradual participants adapted approximately 40° and exhibited an aftereffect of about 38°, indicating a re-aiming angle of less than 5°. Note that our primary results (Figure 2—figure supplement 3F) were unchanged in a sensitivity test where we assumed 10° of re-aiming in the gradual group (Figure 2—figure supplement 4).
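The assumed explicit timecourses can be generated in a few lines (the curve parameters are those stated above; the number of cycles here is arbitrary):

```python
import numpy as np

t = np.arange(0.0, 33.0)                       # rotation cycles (count arbitrary)
x_e_abrupt = 35.5 - 10.0 * np.exp(-2.0 * t)    # starts at 25.5 deg, saturates at 35.5 deg
x_e_gradual = np.zeros_like(t)                 # assumed: no re-aiming in gradual group
```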

Thus, our simulations included two free parameters: error sensitivity (bi) and retention factor (ai) for the implicit system. In each simulation, we assumed that these parameters were identical across the gradual and abrupt groups. To fit these parameters, we minimized the following cost function:

\theta_{fit}=\arg\min_{\theta}\sum_{n}\left(e_{abrupt}(n)-\hat{e}_{abrupt}(n)\right)^{2}+\left(e_{gradual}(n)-\hat{e}_{gradual}(n)\right)^{2} \tag{A5.1}

Equation A5.1 is the sum of squared errors between the directional errors predicted by the model (Figure 2—figure supplement 3D and E, directional error) and observed in the data (Figure 2—figure supplement 3C) across all trials in the abrupt and gradual conditions. Note that each simulation incorporated variability. We simulated noisy directional errors using the standard errors shown in the data in Figure 2—figure supplement 3C. In the explicit state, we added variability to each trial using the standard error in explicit strategy reported by Neville and Cressman, 2018. For the implicit state, we used 20% of the explicit variability, given that aiming strategies are more variable than implicit corrections (Miyamoto et al., 2020). We repeated these simulations 20,000 times, each time resampling our noise sources and then fitting our parameter set (ai and bi) by minimizing Equation A5.1 with fmincon in MATLAB R2018a. The mean implicit curves for the SPE learning and target error learning models are shown in Figure 2—figure supplement 3D and E, respectively (implicit angle; mean ± SD). Critically, in each simulation we measured the aftereffect that occurred on the first cycle of the washout period (Figure 2—figure supplement 3D and E, aftereffect). The mean and standard deviation of these aftereffects are reported in Figure 2—figure supplement 3F.
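The core logic of this comparison can be sketched with a minimal, noise-free state-space simulation. The retention and error sensitivity values below are hypothetical placeholders, not the fitted values; the actual analysis fit ai and bi with fmincon under resampled noise as described above:

```python
def simulate_implicit(error_type, a_i=0.98, b_i=0.3, n_trials=120):
    """State-space simulation of implicit adaptation to a 60 deg rotation,
    with a fixed explicit strategy, under two candidate error signals."""
    r, x_e, x_i = 60.0, 35.5, 0.0      # rotation, steady explicit aim, implicit state
    for _ in range(n_trials):
        if error_type == "target":     # target error: includes explicit strategy
            e = r - (x_i + x_e)
        else:                          # SPE: independent of explicit strategy
            e = r - x_i
        x_i = a_i * x_i + b_i * e      # Equation 3 update
    return x_i                         # implicit state = aftereffect at washout onset

x_i_target = simulate_implicit("target")   # competition: aiming suppresses implicit learning
x_i_spe = simulate_implicit("spe")         # independence: implicit learning unaffected
```

With these placeholder parameters, the target-error model settles at bi(r – xe)/(1 – ai + bi) ≈ 23.0°, while the SPE model settles at bir/(1 – ai + bi) ≈ 56.3°, reproducing the qualitative prediction that only the competition account yields a smaller aftereffect when explicit strategy is large.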

Finally, note that we obtained the directional errors in Figure 2—figure supplement 3C directly from the primary figure in the original manuscript (using the GRABIT routine in MATLAB R2018a). Please also note that in the actual experiment, on some trials (7.1% of all trials), the perturbation was introduced midway through the reach to test feedback corrections at a single target location (the 0° target). These trials were not relevant to our current analysis; otherwise, the visuomotor rotation was applied during the entire movement. Also note that because the authors analyzed feedback responses, participants made 15 cm movements with a 0.6 s movement duration at baseline. Here, we only wanted to consider the feedforward adaptive component. Fortunately, the authors reported initial movement errors 100 ms after movement onset, which could not have been altered by feedback. Therefore, we used these early measures of adaptation in the current study.

Appendix 6

Adapted movement patterns exhibit generalization: a decay in adaptation measured when participants reach towards new areas of the workspace (Hwang and Shadmehr, 2005; Krakauer et al., 2000; Fernandes et al., 2012). Recent studies have observed that this generalization is centered where participants aim their movement, as opposed to the visual target (Day et al., 2016; McDougle et al., 2017). In Exps. 1–3, we measured implicit adaptation by instructing participants to aim directly at the target without an explicit strategy. Had implicit learning truly been ‘centered’ at the aiming location and not the target, these no-aiming trials may have underestimated total implicit learning, and this discrepancy would increase with re-aiming. Thus, it may appear that individuals who aim more possess less implicit learning. These generalization properties could contribute to the negative subject-to-subject relationships we observed in Figure 3. In other words, could an SPE model with plan-based generalization produce the implicit-explicit correlations that we observed in Figure 3? Our main text explores this possibility in Figure 4. Here, we expand on these analyses to provide additional intuition, derivations, and technical background.

6.1 Comparing our data to past generalization curves

We compared the data in Exps. 2 and 3 (Figure 4A–C) with generalization curves measured by Krakauer et al., 2000, Day et al., 2016, and McDougle et al., 2017. This last study is most important because aiming was controlled during the generalization measurements and CW and CCW rotations were counterbalanced. In Figure 4A we normalized our data so that true implicit learning equals measured implicit learning (100% generalization) when the re-aiming angle is 0°. To estimate total implicit learning (the value we normalized to), we used the y-intercepts of the regressions in Figure 3 (25.5° and 19.7° in Exps. 2 & 3), which extrapolate total implicit learning when explicit strategy is zero. Note that dimensioned data (in degrees) are shown in Figure 4B&C. As described in our Results, these comparisons showed that implicit learning in Exps. 2 and 3 (Figure 4A) declined nearly 300% faster than the generalization curves predicted. For example, given the explicit strategies used in the No PT Limit groups in Exps. 2 and 3, McDougle et al. would predict about a 5° reduction in implicit learning, whereas the actual data varied by about 15–20°. SPE generalization was too small in magnitude to match our data.

This analysis, however, has a critical limitation. If our implicit learning measures are altered by generalization, the explicit strategy we estimated will be affected too. That is, in Experiments 1–3, explicit strategy was estimated as total adaptation minus implicit learning. Had generalization reduced the implicit measures, it would have falsely inflated our explicit measures. While it is tempting to compare our data in Figure 2 or Figure 4A with past generalization curves, this should not be done without correcting the explicit strategy measures. Such corrections create a substantial narrowing of the generalization curve. In Figure 4C we show corrected implicit-explicit generalization curves that best match the data in Exps. 2 and 3; this process is described in Appendix 6.2. These curves had σ = 5.16° and 5.76° in Exps. 2 and 3, respectively, which is about 80–85% narrower than that observed in McDougle et al., 2017 (σ = 37.76°). Altogether, obtaining the implicit-explicit relationships we report in Exps. 2 and 3 would require unphysiological generalization curves that are an order of magnitude narrower than any study has reported to date.

6.2 Deriving SPE generalization models

The competition model (Equation 4) can be represented as xiss = pi(r – xess), where pi is an implicit learning gain (between 0 and 1). Multiplying out pi yields xiss = pir – pixess. Assuming pi is roughly constant (see Appendix 4), this expression predicts that implicit learning will vary with explicit strategy according to a line with slope –pi and bias pir. In other words, while the bias in implicit learning will increase with rotation size, the slope that relates implicit and explicit learning should not be altered by the rotation’s magnitude.

Let us now derive an SPE generalization model. In this model, implicit learning is driven by SPEs and also exhibits plan-based generalization when it is measured. To derive this model, we begin with Equation 5, the independence model: xiss = pir. This encodes SPE learning. Next, we add generalization: measured implicit learning (what we obtain when participants are instructed to stop aiming) is related to steady-state implicit learning by the generalization curve g, which varies with steady-state explicit strategy, xess. Thus, we have ximeasured = xissg(xess). Normally, this generalization curve would be modeled via a nonlinear cosine or normal tuning function. However, our data suggested that implicit and explicit learning varied linearly. Thus, we considered two model classes: (1) g(xess) is linear and (2) g(xess) is Gaussian. For the linear model, g(xess) = 1 + mxess, and for the normal model, g(xess) = exp(–(xess/σ)2). Combining these equations yields ximeasured = pir(1 + mxess) for the linear model and ximeasured = pir exp(–(xess/σ)2) for the normal model. Here m is the generalization line’s slope (negative), and σ is the normal distribution’s standard deviation.
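The two measurement models can be written compactly as follows (a sketch using the functional forms above; the example values in the comments are hypothetical, chosen only to illustrate the shrinkage):

```python
import numpy as np

def measured_implicit_linear(r, x_e, p_i, m):
    # SPE learning (x_i_ss = p_i * r) shrunk by linear plan-based generalization
    return p_i * r * (1.0 + m * x_e)          # m < 0: measurement shrinks with aiming

def measured_implicit_gauss(r, x_e, p_i, sigma):
    # SPE learning shrunk by Gaussian plan-based generalization
    return p_i * r * np.exp(-(x_e / sigma) ** 2)

# Hypothetical example: p_i = 0.8, r = 30 deg, 20 deg of re-aiming
lin = measured_implicit_linear(30.0, 20.0, 0.8, -0.011)   # -> 18.72 deg
gau = measured_implicit_gauss(30.0, 20.0, 0.8, 37.76)     # -> ~18.1 deg
```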

For the linear model, this indicates that measured implicit learning and explicit strategy will vary according to a line with slope pirm and bias pir. Thus, the rotation size will alter the slope relating implicit and explicit learning. In the Gaussian generalization model, nonlinearity also contributes to this variation. In the competition model, by contrast, the slope does not vary with rotation size (see the equation above). Thus, a key way to compare competition with generalization is to examine whether the implicit-explicit slope changes with rotation size. This is discussed in more detail in Appendix 6.3.

Critically, xess in the SPE generalization model is equal to the total steady-state explicit strategy. In Exps. 1–3, our Tsay et al. analysis, and our Neville and Cressman analysis, we did not measure total explicit strategy. Rather, our explicit strategies were estimated by subtracting implicit learning from total adaptation. Thus, we cannot fit the SPE generalization model to our data without correcting our explicit strategy measures, as they will overapproximate the true explicit strategy. To say this another way, suppose that the implicit learning we measured, ximeasured, is less than total implicit learning, xiss. The explicit strategy we calculated was total adaptation, xTss, minus ximeasured; it should instead be xTss – xiss. We therefore need to correct the explicit measures in the SPE generalization model.

To do this, start with a Gaussian generalization model:

x_{i}^{measured}=x_{i}^{ss}\exp\left(-0.5\left(x_{e}^{ss}/\sigma\right)^{2}\right) \tag{A6.1}

Generalization causes a discrepancy between measured implicit learning and total implicit learning. The total amount that implicit learning is underapproximated, xiss - ximeasured, is equal and opposite to the total amount that explicit learning is overapproximated. Thus, we have:

x_{e}^{measured}=x_{e}^{ss}+x_{i}^{ss}-x_{i}^{measured} \tag{A6.2}

Equation A6.2 can be rearranged:

x_{e}^{ss}=x_{e}^{measured}-x_{i}^{ss}+x_{i}^{measured} \tag{A6.3}

Combining Equations A6.1 and A6.3 yields the expression:

x_{e}^{ss}=x_{e}^{measured}-x_{i}^{ss}+x_{i}^{ss}\exp\left(-0.5\left(x_{e}^{ss}/\sigma\right)^{2}\right) \tag{A6.4}

Equations A6.1 and A6.4 together express and constrain the relationship between (1) total implicit learning, (2) total explicit learning, (3) measured implicit learning, and (4) measured explicit learning. The same process can be used to correct the linear SPE generalization model. Here, we begin with the linear implicit generalization equation:

x_{i}^{measured}=x_{i}^{ss}\left(1+m\,x_{e}^{ss}\right) \tag{A6.5}

As above, the discrepancy xiss - ximeasured will be equal and opposite to the discrepancy between xess and xemeasured. Thus,

x_{e}^{measured}=x_{e}^{ss}+x_{i}^{ss}-x_{i}^{measured} \tag{A6.6}

Combining these equations together yields

x_{i}^{measured}=x_{i}^{ss}\left(1+\frac{m\,x_{e}^{measured}}{1-m\,x_{i}^{ss}}\right) \tag{A6.7}
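Equation A6.7 can be checked numerically: starting from hypothetical true implicit and explicit values, the measured quantities generated by Equations A6.5 and A6.6 satisfy A6.7 exactly (all values below are illustrative):

```python
# Hypothetical true implicit/explicit values and generalization slope
x_i_ss, x_e_ss, m = 24.0, 15.0, -0.011

x_i_meas = x_i_ss * (1.0 + m * x_e_ss)                          # Equation A6.5
x_e_meas = x_e_ss + x_i_ss - x_i_meas                           # Equation A6.6
x_i_pred = x_i_ss * (1.0 + m * x_e_meas / (1.0 - m * x_i_ss))   # Equation A6.7

assert abs(x_i_meas - x_i_pred) < 1e-9   # A6.7 reproduces the measured value
```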

We fit the SPE generalization models to the data, noting that xiss = pir in the generalization model. These models have two unknown parameters: pi and m in the linear model, and pi and σ in the normal model. In some cases (e.g. Figure 4D–F), m and σ were set equal to the implicit generalization properties measured by McDougle et al., 2017 (more on this below). When these parameters were set, we used fmincon to identify the optimal pi that minimized the squared error between measured implicit learning and the implicit learning predicted by Equation A6.1 (Gaussian) or Equation A6.5 (linear) above. Note that in the Gaussian model, we also constrained the model’s solutions to satisfy Equation A6.4.

In other cases, we did not constrain m or σ. Instead, we used fmincon (MATLAB R2021a) to identify the optimal pi and m (linear), or pi and σ (Gaussian), that minimized the squared error as described above.

6.3 Comparing the competition model to SPE generalization models

In Figure 4D–F, we compare the competition model to the SPE generalization models above. We did this in multiple ways. Note that the competition model possesses one unknown parameter, pi. The SPE models possess two: pi and m in the linear model, and pi and σ in the normal model. In one analysis, we used the data collected by McDougle et al., 2017, to estimate the generalization parameters m and σ. To estimate m, we used the initial two points on the generalization curve in Figure 4A, which yielded m = –0.011. For the σ parameter, we used the value calculated by McDougle et al. for their data: σ = 37.76°. These parameters were used for the SPE gen. linear and SPE gen. normal models in Figure 4D–F.

In Figure 4D–F, we fit all three models to the stepwise group's implicit and explicit learning measures in B4 of Exp. 1 (the 60° rotation period). The fitting procedure is described in Appendix 6.2 above. This revealed the optimal p_i that best matched the implicit learning measures. Next, we used these parameters to predict the implicit-explicit relationship across the held-out rotation sizes (i.e., the B1, B2, and B3 periods). Each model's curve is shown in Figure 4D.

To determine how well each model generalized to the held-out 15°, 30°, and 45° rotations, we calculated the RMSE between the model and measured data. We compared this error using an rm-ANOVA. Interestingly, the Gaussian model had slightly worse predictive power than the linear model (Figure 4E, p < 0.01). This was likely because the underlying data did not appear to be normally distributed.

Issues with the linear and normal SPE generalization functions were due to an intrinsic property of the SPE generalization model: the relationship between implicit and explicit learning should vary as implicit learning increases. To understand this, suppose two participants have an explicit strategy of 20°, which hypothetically yields 80% generalization. If the first participant has 15° of implicit learning, they will exhibit a 15° × 0.2 = 3° decrease in measured implicit learning. If the second participant has 7.5° of implicit learning, they will exhibit only a 1.5° decrease. In other words, as total implicit learning changes, the gain that relates implicit learning and explicit learning will also vary. Because implicit learning increased with rotation size in Exp. 1 (stepwise), the generalization curve will differ in slope across each period. For the Gaussian generalization model, there is an additional factor that alters the gain: sampling across the nonlinear distribution. As the rotation gets larger, explicit strategies increase, which systematically changes where the normal distribution is sampled, yielding variable implicit-explicit relationships when assessed via a linear regression within the B1, B2, B3, and B4 periods.
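The two-participant example above can be verified directly (the 20° strategy and 80% generalization are the text's hypothetical values, not fitted quantities):

```python
# A hypothetical 20 deg explicit strategy yields 80% generalization,
# i.e., 20% of total implicit learning is lost at the target.
generalization = 0.8

for total_implicit in (15.0, 7.5):
    reduction = total_implicit * (1 - generalization)
    measured = total_implicit - reduction
    print(f"{total_implicit} deg total implicit -> "
          f"{reduction:.1f} deg reduction, {measured:.1f} deg measured")
```

The absolute reduction (3.0° vs. 1.5°) scales with total implicit learning, which is why the implicit-explicit gain cannot remain constant in this model.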

These variations in slope/gain did not match the measured data (Figure 4F). Here, we fit a separate linear regression to each learning period in the stepwise group and calculated the regression slope as well as the 95% CI. Remarkably, the measured implicit-explicit slope appeared to be constant across all rotation sizes. This invariance was consistent with the competition theory (Figure 4F, competition), which possesses an implicit gain p_i that remains constant across rotations (like the data). It was not consistent with either generalization model, where slope varied across rotation sizes. Note, the error bars on model predictions in Figure 4F were estimated with bootstrapping; we sampled participants with replacement, fit the models to collapsed participant behavior, and calculated their slope. For the Gaussian models, there is no single slope (the curve is nonlinear), so we calculated the slope in the region bounded by the explicit strategies seen in the measured data over each rotation period. Also note that any negative explicit strategies (only three points in the 15° period and one in the 45° period) were ignored during this calculation.

We also compared the models using AIC. For this analysis, we used the stepwise participants in Exp. 1. This is the only experiment where the model can be fit to individual participants, because implicit and explicit learning were assayed across the four rotation sizes. Thus, we fit all the models in Figure 4D–F to individual participants. We used the same process described above, where m and σ were set to the McDougle et al. values in the SPE generalization models. The results are shown in Figure 4G. As expected, AIC strongly favored the competition model. At left, we show the generalization models' AIC values relative to that of the competition model (positive values mean competition is more likely to explain the data). At right, we show how many participants are best described by each of the three models.

A potential issue is that generalization properties differed between McDougle et al. and our data. Perhaps the SPE generalization model would have exhibited better performance with another m or σ value. To assess this possibility, we conducted a sensitivity analysis. We repeated the entire analysis in Figure 4D–G described above, but varied m and σ across a wide range. For the range’s lower bound we reduced the McDougle et al. generalization parameters by 50%. For the upper bound we doubled the values. The sensitivity analysis results are shown in Figure 4H. The left inset shows the prediction error at each generalization width, similar to Figure 4E. The right inset counts the total participants best explained by each model, according to AIC as in Figure 4G. Across the entire range, the competition model had smaller error and better explained the data.

In sum, data in Exps. 1–3 were poorly explained by an SPE model extended with generalization. While plan-based generalization may promote negative implicit-explicit correlations, its contribution is small relative to the competition theory.

6.4 Abrupt and gradual rotations in Exp. 1

In Exp. 1 we observed that perturbing individuals in a stepwise manner led to an increase in implicit learning and a reduction in explicit strategy. These observations qualitatively and quantitatively matched the competition model (Figure 2D–G). But there is an alternate possibility. Suppose that the implicit system is driven by SPE as in the independence model but exhibits plan-based generalization. In Appendix 6.2 we derive this SPE generalization model. This model could predict a reduction in implicit learning via two steps: (1) both abrupt and stepwise groups have equal implicit learning, but the abrupt rotation leads to greater re-aiming; (2) more aiming in the abrupt rotation decreases the implicit learning measured at the target due to plan-based generalization. This hypothesis could be summarized with an SPE generalization model in which implicit learning measured at the target relates to total implicit learning via x_i^measured = x_i^ss exp(-0.5 (x_e^ss/σ)²). Could this model lead to the observed data?

Initially, let us assume that σ = 37.76° as measured by McDougle et al., 2017. To estimate the change in implicit learning between the abrupt and stepwise groups, we can calculate the reduction in implicit aftereffect expected given a normal distribution with σ = 37.76°, for the 39.5° and 29.9° explicit strategies measured in the abrupt and stepwise groups. Aiming straight at the target in the abrupt group would leave 57.86% of the implicit aftereffect. In the stepwise group, it would leave 73.09%. Altogether, the model predicts that stepwise implicit learning should exceed abrupt implicit learning by 100 × (73.09/57.86 - 1), or 26.3%. In contrast, the measured abrupt and stepwise implicit aftereffects were 11.72° and 21.36°, respectively: an 82.3% increase in implicit learning.
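These percentages follow directly from the Gaussian generalization formula; a quick check using the group-level values quoted above:

```python
import math

sigma = 37.76  # generalization width from McDougle et al., 2017 (deg)

def remaining_fraction(x_e, sigma):
    """Fraction of implicit learning still expressed at the target
    when the aim point sits x_e degrees away (Gaussian generalization)."""
    return math.exp(-0.5 * (x_e / sigma) ** 2)

abrupt = remaining_fraction(39.5, sigma)    # abrupt-group explicit strategy
stepwise = remaining_fraction(29.9, sigma)  # stepwise-group explicit strategy

predicted_increase = 100 * (stepwise / abrupt - 1)
observed_increase = 100 * (21.36 / 11.72 - 1)
print(f"remaining aftereffect: {abrupt:.2%} (abrupt), {stepwise:.2%} (stepwise)")
print(f"predicted increase: {predicted_increase:.1f}%")  # ~26.3%
print(f"observed increase:  {observed_increase:.1f}%")   # ~82.3%
```

The threefold gap between predicted and observed increases is the discrepancy discussed in the text.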

In sum, similar to our analysis of Exps. 2 and 3 in Figure 5, while generalization will produce a negative implicit-explicit relationship, the implicit learning variations we observed in Exp. 1 were much larger than predicted by generalization alone. Here, the 82.3% increase in implicit learning is more than threefold larger than the 26.3% increase predicted by the implicit generalization properties measured by McDougle et al., 2017. Suppose σ = 37.76° does not accurately represent our data. To match the measured data, σ would need to be smaller, to narrow the generalization curve. This is unlikely, given that Exp. 1 used three targets whereas McDougle et al. used one. Additional targets do not narrow the generalization curve; they widen it (Krakauer et al., 2000). Still, let us proceed. Rather than assume that σ = 37.76°, we can fit a normal distribution to the measured data. In the abrupt group, implicit and explicit learning were 11.72° and 39.5°, respectively. In the stepwise group, implicit and explicit learning were 21.36° and 29.9°, respectively. Fitting a normal curve to these data yields the curve shown in Figure 4—figure supplement 1A. The optimal σ is 23.6°, and total implicit learning would need to be 47.8°.

While implicit learning of 47.8°, about 90% of the total adapted response, appears high, there is a more important issue: these values create an unphysical scenario. For abrupt learning, an x_i^ss of 47.8° and an x_e^ss of 39.5° would indicate that total adaptation should equal 47.8° + 39.5° = 87.3° (Figure 4—figure supplement 1B). This is larger than the rotation itself and is thus unphysical. In the stepwise group as well, total predicted learning would be about 77.7°, which also exceeds the rotation size.
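The unphysical total reduces to a one-line check (numbers taken from the abrupt-group example above):

```python
x_i_total = 47.8     # fitted total implicit learning (deg)
x_e_measured = 39.5  # explicit strategy measured in the abrupt group (deg)
rotation = 60.0      # rotation size in Exp. 1 (deg)

# Treating the measured strategy as the true strategy implies a total
# adaptation that exceeds the perturbation itself
implied_total = x_i_total + x_e_measured
print(f"implied total adaptation: {implied_total:.1f} deg")  # 87.3 deg
print(f"exceeds the rotation: {implied_total > rotation}")
```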

There is a deeper issue here, as described in Appendix 6.2. As the generalization curve narrows (e.g., σ = 23.6° vs. 37.76°), not only does implicit learning measured at the target drastically underestimate total implicit learning at the aim location, but the explicit strategy we estimated via x_e^ss = x_T^ss - x_i^ss will substantially overestimate the true explicit strategy, leading to unphysical systems. To understand this, suppose x_i^ss is larger than x_i^measured. Explicit strategy in Exp. 1 was estimated as x_e^measured = x_T^ss - x_i^measured. When this measured strategy is taken as the actual explicit strategy in the generalization curve, total learning in the model equals x_e^measured plus total implicit learning: (x_T^ss - x_i^measured) + x_i^ss. Herein lies the contradiction. Because x_i^ss is larger than x_i^measured, using the estimated explicit strategy as the actual explicit strategy makes total learning in the model automatically larger than actual total learning. As described above, this problem can progress so far as to predict that total learning is larger than the rotation.

The key idea is that both implicit and explicit learning need to be corrected by the generalization curve in our data. This correction is outlined in Appendix 6.2. Using Equations A6.1 and A6.4, we identified the σ and x_i^ss that minimized the squared error between the x_i^measured predicted by an SPE generalization model and the measured stepwise and abrupt implicit values. The model revealed that the optimal σ and x_i^ss were 3.87° and 45.69°, which produced the curve shown in Figure 4—figure supplement 1C (corrected model). This curve shows how measured implicit learning and explicit learning will interact. It is not the true implicit-explicit generalization curve; that curve is shown in Figure 4—figure supplement 1D (corrected model). The generalization curve required by the model was implausible: it had a width of σ = 3.87°, compared to the 37.76° measured by McDougle et al. This is why the model's distribution in Figure 4—figure supplement 1D is so narrow.

The relationship between Figure 4—figure supplement 1C and D may not be intuitive. To explain how these curves are entwined, consider the stepwise learning point in inset C. This point lies roughly at 20° implicit learning and 30° explicit strategy. This explicit strategy is the estimated strategy calculated in Exp. 1: total adaptation minus measured implicit learning. Note that total implicit learning is about 45°. Thus, measured implicit learning is about 45° - 20° = 25° smaller than total implicit learning. This means that our estimated explicit strategy of 30° is about 25° too large; the actual strategy is much smaller: 30° - 25° = 5°. These corrections reveal the mapping between insets C and D. The point x_i = 20°, x_e = 30° in inset C will lie approximately at x_i = 20/45 × 100 = 44.4% and x_e = 5° in inset D.

In sum, we conclude that our abrupt vs. stepwise analysis in Figure 2D–G does not match implicit generalization. We began with the (ultimately incorrect) assumption that the implicit learning measures in Exp. 1 represent generalized learning, but that explicit strategies represent total re-aiming. In this model, the change in implicit learning in the actual data was three times larger than that predicted by the generalization measured in McDougle et al. Moreover, correcting only the implicit measures with generalization, but not the explicit measures, produced a situation where total adaptation would have exceeded the rotation's magnitude. This is because explicit strategies are estimated as total adaptation minus implicit learning. When we corrected the SPE generalization model so that both the measured implicit and explicit learning were corrected by a generalization curve, the model required that plan-based generalization resemble a Gaussian with σ = 3.87°, an unphysiological scenario. The generalization model is not a viable alternative to the competition theory.

6.5 Instructions and variations in rotation size

In Neville and Cressman, 2018, implicit learning did not vary across 20°, 40°, and 60° rotations. Saturated responses like this resemble implicit learning properties exhibited in invariant error-clamp (Morehead et al., 2017; Kim et al., 2018) paradigms. In such experiments, the implicit system appears to reach a ceiling that does not depend on the rotation’s magnitude (at least when rotations are less than 90°). Does this same phenotype cause the saturation we explored using the competition model in Neville and Cressman, 2018?

In isolation this might appear plausible, but there is one issue: the response to instruction. The authors also tested how implicit learning and explicit strategies responded to instructions. Coaching participants increased explicit strategy but decreased implicit learning. This variation in implicit learning would not be explained by invariant error-clamp implicit learning properties, which predict that implicit learning should always saturate at the same level. One idea that could potentially rescue this alternate hypothesis is generalization; perhaps implicit learning truly is the same across the instruction and no-instruction groups but only appears variable because instructed participants used larger strategies. This idea, however, would directly contradict the implicit response to rotation size. If all groups had the same implicit learning, the greater explicit strategies in the 40° and 60° rotations should produce a reduction in measured implicit learning due to plan-based generalization.

In sum, an SPE learning model with a ceiling on implicit learning would require complete (100%) generalization to show the saturation phenotype in Figure 1D. However, with complete implicit generalization there would be no way to capture the reduction in implicit learning seen in the instruction group, a contradiction. Conversely, the reduction in implicit learning in the instruction group would require implicit generalization. But then variations in explicit strategy across the 20°, 40°, and 60° rotations would alter implicit learning, violating the saturated implicit learning phenotype in the data. Thus, there is no way that these data can be described by an upper ceiling on implicit learning as in invariant error-clamp studies (Morehead et al., 2017; Kim et al., 2018).

As discussed in our Results, this is not true in the competition model. The exact same competition model parameters (i.e. implicit learning gain pi) parsimoniously explained implicit responses to rotation size in Figure 1G and instruction in Figure 2C.

There is one last possibility to consider. Perhaps plan-based generalization alone could cause the decrease in implicit learning due to instruction, and the saturation in implicit learning across rotation size. In an SPE generalization model, the instruction and no-instruction groups could reach the same implicit learning level but show differences in implicit learning measured at the target due to variations in explicit strategy. In addition, implicit learning should scale according to pi as the rotation increases. Perhaps true implicit learning does vary across the 20°, 40°, and 60° rotation periods, but appears saturated because as rotation size increases so do strategies, reducing the implicit learning measured at the target due to generalization. We evaluated both these possibilities.

Let us begin with the response to rotation size. In Figure 4—figure supplement 1E and F, we fit a Gaussian SPE generalization model to the implicit and explicit responses measured in the no-instruction group. As described in Appendix 6.2, inset E shows uncorrected explicit strategy estimates: total adaptation minus implicit learning. Inset F shows the true implicit-explicit plan-based generalization curve that produces the data in inset E. These curves were produced by an implicit learning gain p_i = 0.56 and σ = 11.2°. This shows that a generalization curve could yield a saturation phenotype, as in inset E: the implicit curve (p_i = 0.56 and σ = 11.2°) scales with the rotation size r as predicted by the SPE model, but increases in implicit learning due to rotation size are counterbalanced by increases in explicit strategy, which generalize less at the target. However, while such a model produces a saturation phenotype, the generalization curve's width shown in inset F is not physiological. To produce the measured responses, the curve's width (11.2°) would need to be 70% narrower than the generalization measured by McDougle et al. (σ = 37.76°). This extreme narrowing has not been observed in past studies. Moreover, the notion that generalization in Neville and Cressman would be narrower than that in McDougle et al. is inconsistent with known implicit generalization properties (Krakauer et al., 2000). As shown in the Krakauer et al. generalization curves in Figure 4A, increasing the number of training targets in Neville and Cressman (three targets) would widen the generalization curve relative to McDougle et al., which used only one training target.

Next, we repeated the analyses described above on the implicit-explicit responses to instruction in Figure 4—figure supplement 1G, H. The best generalization model (p_i = 0.5 and σ = 9.8°) could produce changes in generalized implicit learning that were consistent with the data, as shown in inset G. However, as with the response to rotation size described above, the required generalization properties were not physiologically consistent with past measurements, as shown in inset H. The generalization curve would need to be about 74% narrower than that measured by McDougle et al. Thus, again, while in principle generalization could produce changes in implicit learning, it would require implausible implicit learning properties.

6.6 Nonmonotonic implicit learning in Tsay et al., 2021a

In Tsay et al., 2021a, participants exhibited a non-monotonic implicit response to 15°, 30°, 60°, and 90° rotations, as shown in Figure 1N. In the main text, we explain how this phenotype could be explained by the competition model. Namely, variations in strategy could lead to changes in the residual target error that drives implicit learning in the competition model. Could another model also produce these data?

We considered that the decrease in implicit learning observed in the 90° rotation group resembles a pattern shown by the implicit process in invariant error-clamp paradigms (Morehead et al., 2017; Kim et al., 2018); rotations larger than 90° cause a drop in implicit learning. Morehead et al., 2017 suggested that a similar drop occurs in response to standard rotations, at least when participants are told to ignore the cursor and aim to the target (i.e. they did not use explicit strategies). Thus, might it be that reductions in implicit learning in response to large rotations are an intrinsic property of the implicit system, rather than a phenomenon caused by error competition? The experiments conducted by Morehead et al. have another similarity to the Tsay et al. observations. In particular, in standard rotation conditions (plus a no-aiming instruction), Morehead et al. observed that a 7.5° rotation caused a reduction in implicit learning relative to the larger rotation sizes tested. This decrease in implicit learning cannot be due to error competition, because participants did not aim in this study. In sum, could it be that both the increase in implicit learning between 15° and 30° in Tsay et al. and the drop in implicit learning between 60° and 90° are caused by the implicit system's intrinsic learning properties, rather than by competition with explicit strategy?

This is unlikely. First, in Morehead et al., subjects in the 7.5° standard rotation condition achieved complete adaptation: a total reach angle of 7.5°. This level was smaller than that achieved in a 7.5° invariant error-clamp. To explain these results, Morehead et al. suggested that implicit learning stopped in the standard rotation condition because the error was completely canceled (i.e. both the rotation and total implicit learning were 7.5°, creating a 0° error). In the error-clamp condition, the error never decreased and continued to drive implicit learning to its saturation point.

The data in Tsay et al., however, cannot be explained by the error cancellation mechanism. In the 15° rotation in Tsay et al., implicit learning reached only 7.6° and thus did not completely cancel the error. Morehead et al. would have predicted that implicit learning should continue until it reached 15° and canceled the error. Thus, unlike Morehead et al., the increase in implicit learning between the 15° and 30° rotations in Tsay et al. cannot be explained by an error cancellation mechanism. This increase clearly violates invariant error-clamp learning properties, where implicit learning reaches the same saturation point across error sizes less than 95° unless the error is canceled. The same argument applies to the scaling phenotype in Exp. 1, as well as the scaling phenotype observed earlier by Salomonczyk et al., 2011.

Next, consider the decrement in implicit learning shown in Figure 1N with the 90° rotation. It remains possible that this decrease is due to a rotation-insensitivity that is intrinsic to the implicit process (rather than error competition). However, it is error that drives learning, not rotations. While the large rotations used by Morehead et al. resemble the 90° group in Tsay et al., target errors were totally mismatched in these two studies. In Morehead et al., participants in both the error-clamp and standard rotation groups were told not to aim and to ignore the cursor. Because there was no strategy, the implicit learning curve reached approximately 10°, leaving an 85° target error. Past studies have shown that error sensitivity will be exceedingly small in response to such extreme errors (Kim et al., 2018; Marko et al., 2012; Wei and Körding, 2009). In our view, this insensitivity to extremely large errors likely led to the attenuation in implicit learning observed in Morehead et al. Instructions to ‘ignore the cursor’ may further exacerbate reductions in sensitivity to these large errors.

However, in Tsay et al., subjects were allowed to aim. Total learning reached about 85°, leaving a 5° target error: an error much more inclined to drive implicit learning. Comparing steady-state adaptation to this 5° residual error with the 85° residual error in Morehead et al. is not reasonable in our view.

In sum, the increase in implicit learning in the 15° and 30° groups could only be described by error competition, not an error cancellation mechanism as in Morehead et al. Second, the residual target errors experienced in Morehead et al.'s 95° rotation group were about 80° larger than those in the 90° rotation group in Tsay et al. For these reasons, attenuation in implicit learning in these two studies was likely caused by differing mechanisms: a drastic reduction in target errors (the competition hypothesis) in Tsay et al., and an unresponsiveness to extreme target error in Morehead et al. (which could have been exacerbated by telling participants to ignore the cursor). For the learning patterns in Tsay et al., the competition model seems the most parsimonious choice, not only given its quantitative match to the data (Figure 1Q and Figure 1—figure supplement 2), but also because it alone (not the error-clamp learning properties in Morehead et al.) can explain implicit responses across the many other cases described in Figures 1 and 2: abrupt and stepwise responses in Exp. 1 (as well as Salomonczyk et al.), rotation responses between 15° and 60° in Tsay et al., and implicit behavior in Neville and Cressman. This is not to mention that the implicit learning properties in Morehead et al. provide no clear way to interpret the pairwise relationships between implicit learning, explicit strategy, and total learning detailed at length in Figures 3–5 at the individual-participant level.

Appendix 7

In our Results, we considered how an SPE model might also predict negative correlations between implicit learning and explicit strategy. Suppose that implicit learning is driven by SPEs and is not altered by explicit strategy. However, a subject with a better implicit learning system (e.g. a higher implicit error sensitivity) will require less explicit re-aiming to reach a desired adaptation level. In other words, individuals with large SPE-driven implicit learning may use less explicit strategy relative to those with less. Like the competition theory, this scenario would also yield a negative relationship between implicit and explicit learning, due to the way explicit strategies respond to variation in the implicit system. We will now show the diverging predictions this model makes relative to the competition theory.

7.1 Competition model predictions: implicit learning responds to variations in explicit strategy

Here, we will start with the competition theory. In this model, the implicit system responds to variation in explicit strategy according to x_i^ss = p_i(r - x_e^ss). Clearly, this predicts a negative relationship between implicit learning and explicit strategy (Figure 5D). Next, we note that total adaptation is given by x_T^ss = x_i^ss + x_e^ss. We can solve for x_e^ss and substitute this into the model, yielding the following relationship between implicit learning and total adaptation: x_i^ss = p_i(1 - p_i)^-1(r - x_T^ss). This is the expression tested in Appendix 3, where we analyzed our data in Figure 1 using steady-state implicit learning and total adaptation. We can rearrange this equation, solving for x_T^ss, to yield the dual expression x_T^ss = r + p_i^-1(p_i - 1)x_i^ss. This expression is written within the inset in Figure 5F. Both variants show that in a stable learning system (p_i < 1), implicit learning and total adaptation will exhibit a negative relationship. We can repeat this analysis, but this time solve for x_i^ss and substitute into Equation 4, to obtain a relation between explicit learning and total adaptation: x_T^ss = p_i r + (1 - p_i)x_e^ss. This expression is provided in Figure 5E. Again, because p_i < 1, this predicts a positive relation between explicit strategy and total adaptation.
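These rearrangements can be checked numerically. Here p_i is the gain fit reported in Appendix 7.3, and the 25° strategy is an arbitrary illustrative value:

```python
p_i, r = 0.669, 60.0   # implicit gain (Appendix 7.3 fit) and rotation size (deg)
x_e = 25.0             # an arbitrary explicit strategy (deg)

x_i = p_i * (r - x_e)  # competition model: implicit learns from residual error
x_T = x_i + x_e        # total adaptation

# The three rearrangements derived above must agree with the direct computation
assert abs(x_i - p_i / (1 - p_i) * (r - x_T)) < 1e-9   # implicit vs. total
assert abs(x_T - (r + (p_i - 1) / p_i * x_i)) < 1e-9   # total vs. implicit
assert abs(x_T - (p_i * r + (1 - p_i) * x_e)) < 1e-9   # total vs. explicit
print("all three rearrangements consistent")
```

With p_i < 1, the coefficient (p_i - 1)/p_i is negative (implicit vs. total slopes downward) while (1 - p_i) is positive (explicit vs. total slopes upward), matching the signs stated above.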

To summarize, the competition model makes three predictions about the pairwise relationships between implicit learning, explicit strategy, and total adaptation. First, as explicit strategies increase, total adaptation will tend to increase (i.e. a positive relation, as in Figure 5E). Second, as explicit strategy increases, the residual target error will decrease, leading to less implicit learning. This predicts that implicit learning will exhibit a negative correlation with both explicit strategy (Figure 5D) and total adaptation (Figure 5F).

7.2 SPE model predictions: explicit strategy responds to variations in implicit learning

Now, let us suppose the opposite scenario to the competition model. In an SPE model, implicit learning does not respond to explicit strategy. Suppose instead that implicit learning varies randomly across subjects (due to inter-subject variability in implicit learning properties, e.g., error sensitivity) and that explicit strategy responds to this variability in implicit learning. In this framework, competition occurs but with a reversed causal structure. Now, assuming x_i^ss is due to an independent SPE learning mechanism, this will yield a residual target error of r - x_i^ss. A negative relationship between implicit and explicit learning occurs in the event that explicit strategies respond in proportion to this residual target error: x_e^ss = p_e(r - x_i^ss), where p_e is an explicit learning gain. This equation is the same as the competition model in Equation 4, with the roles of x_e^ss and x_i^ss reversed. Thus, similar relationships between x_T^ss and each system occur. Assuming that p_e is less than 1 (i.e. the explicit system does not overcompensate for the remaining error, which would yield total learning greater than r), the relationship between total adaptation and implicit learning will now be positive, with x_T^ss = p_e r + (1 - p_e)x_i^ss, and the relationship between total adaptation and explicit learning will now be negative: x_T^ss = r + p_e^-1(p_e - 1)x_e^ss. These expressions are provided in Figure 5B and C.
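The same numerical check applies to the SPE model's rearrangements, with the roles reversed (p_e is the fit from Appendix 7.3; the 20° implicit value is arbitrary):

```python
p_e, r = 0.689, 60.0   # explicit gain (Appendix 7.3 fit) and rotation size (deg)
x_i = 20.0             # an arbitrary implicit learning level (deg)

x_e = p_e * (r - x_i)  # SPE model: strategy corrects the residual target error
x_T = x_i + x_e        # total adaptation

# Both rearrangements derived above must agree with the direct computation
assert abs(x_T - (p_e * r + (1 - p_e) * x_i)) < 1e-9  # positive implicit-total
assert abs(x_T - (r + (p_e - 1) / p_e * x_e)) < 1e-9  # negative explicit-total
print("SPE-model rearrangements consistent")
```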

To summarize, the SPE model makes three predictions about the pairwise relationships between implicit learning, explicit strategy, and total adaptation. First, as implicit learning increases, total adaptation will tend to increase (i.e., a positive relation, as in Figure 5C). Second, as implicit learning increases, there is a smaller target error for the explicit system to correct, leading to less explicit strategy. This predicts that explicit strategy will exhibit a negative correlation with both implicit learning (Figure 5A) and total adaptation (Figure 5B). This provides a way to compare the competition and SPE models.

7.3 Simulating variations in implicit and explicit learning across participants

We constructed Figure 5A–F to provide more intuition on how to compare the competition and SPE model predictions described above. In these toy simulations, we first fit p_i and p_e in the equations above to the implicit and explicit measures in the No PT Limit group in Exp. 3, yielding p_i = 0.669 and p_e = 0.689. These exact values are not important; the same qualitative behavior will occur provided they are between 0 and 1. We assumed that implicit learning varied across participants according to a normal distribution. For the distribution's mean, we used the average implicit learning measured in the No PT Limit group. For the distribution's standard deviation, we used 4°. Then, we calculated explicit learning according to x_e^ss = p_e(r - x_i^ss) for each participant. We then simulated 'measurements' of implicit and explicit learning by adding a normal random variable with mean zero and standard deviation 2° to these simulated implicit and explicit learning values. We simulated 250 participants in total.

Simulations for the competition theory were similar. Here, we simulated explicit strategies across participants according to a normal distribution. The mean was set equal to the average explicit strategy measured in the No PT Limit group. The standard deviation was set to 4°. To simulate implicit learning, we used the competition equation: x_i^ss = p_i(r - x_e^ss). We then added variability to these 'true' values to obtain noisy implicit and explicit measures across 250 participants.
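A minimal sketch of the competition-theory simulation follows. Two details are our assumptions for illustration: the 25° mean strategy, and giving total adaptation its own measurement noise rather than summing the noisy implicit and explicit measures:

```python
import random

random.seed(1)
n, r, p_i = 250, 60.0, 0.669   # participants, rotation (deg), implicit gain

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Competition theory: strategies vary across participants; the implicit
# system adapts to the residual target error each strategy leaves behind
x_e_true = [random.gauss(25.0, 4.0) for _ in range(n)]  # 25 deg mean: illustrative
x_i_true = [p_i * (r - xe) for xe in x_e_true]

# Noisy measurements (sd = 2 deg); total adaptation measured separately
x_i = [xi + random.gauss(0.0, 2.0) for xi in x_i_true]
x_e = [xe + random.gauss(0.0, 2.0) for xe in x_e_true]
x_T = [xi + xe + random.gauss(0.0, 2.0) for xi, xe in zip(x_i_true, x_e_true)]

print(f"implicit vs explicit: {corr(x_i, x_e):+.2f}")  # negative
print(f"explicit vs total:    {corr(x_e, x_T):+.2f}")  # positive
print(f"implicit vs total:    {corr(x_i, x_T):+.2f}")  # negative
```

Swapping the roles (drawing x_i_true from the normal distribution and setting x_e_true = p_e(r - x_i_true)) produces the SPE-model pattern, flipping the sign of both total-adaptation correlations.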

Results for these simulations are shown in Figure 5A–F. In Panels A-C, we show results for the 250 participants for the model where explicit systems respond to variability in an SPE-driven implicit system. In Panels D-F, we show simulations for the competition theory where implicit systems respond to variability in explicit strategy. In Panels A and D, we show the relationship between implicit and explicit learning. In Panels B and E, we show the relationship between total adaptation and explicit learning. In Panels C and F, we show the relationship between total adaptation and implicit learning. Red ellipses denote the 95% confidence ellipses for the 250 simulated participants.

7.4 Comparing pairwise implicit-explicit-total correlations between competition and SPE models

In this appendix, we show that both an SPE model and a target error learning model can exhibit negative participant-level correlations between implicit learning and explicit strategy. Their predictions diverge, however, on the relationships between total adaptation and each individual learning system. Target error learning predicts a negative implicit-total correlation and a positive explicit-total correlation. SPE learning predicts a positive implicit-total correlation and a negative explicit-total correlation. To test these predictions, we considered how total learning was related to implicit and explicit adaptation measured in the No PT Limit group in Exp. 3. Our observations closely agreed with the competition theory: greater explicit strategy was associated with greater total adaptation (Figure 5G, ρ = 0.84, p < 0.001), whereas greater implicit learning was associated with lower total adaptation (Figure 5H, ρ = −0.70, p < 0.001).

We repeated similar analyses across additional data sets that also measured implicit learning via exclusion (i.e. no aiming) trials: (1) the 60° rotation groups (combined across gradual and abrupt groups) in Experiment 1, (2) the 60° rotation groups reported by Maresch and colleagues (Maresch et al., 2021) (combined across the CR, IR-E, and IR-EI groups), and (3) the 60° rotation group described by Tsay and colleagues (Tsay et al., 2021a). We obtained the same result as in Experiment 3. Participants exhibited negative correlations between implicit learning and explicit strategy (Figure 5—figure supplement 1G-I), positive correlations between explicit strategy and total learning (Figure 5—figure supplement 1D-F) and negative correlations between implicit learning and total learning (Figure 5—figure supplement 1A-C). These additional studies also matched the competition theory’s predictions.

7.5 Critical exceptions to these predictions

The competition theory predicts that, on average, implicit learning will exhibit a negative correlation with total adaptation across individual participants. However, this prediction assumes that implicit learning is driven only by target errors, a condition we explore more completely in Part 3 of our Results. Second, it assumes that implicit learning properties (ai and bi, summarized with the gain pi above) are identical across participants, an unlikely possibility. Variation in the implicit learning gain (e.g., Participant A has an implicit system that is more sensitive to error) will promote a positive correlation between implicit and total adaptation, which will weaken the negative correlations described above. Two examples where this appears to occur are shown in Figure 5—figure supplement 2A. Inter-subject variability in the implicit learning gain can dominate inter-subject variability in explicit strategy, which would lead to a positive relationship between implicit learning and total adaptation. Note that the converse is not true in the independence model: SPE learning rules will always promote a positive relationship between implicit learning and total adaptation, and will not show a negative correlation, regardless of inter-subject variability in implicit and explicit learning gains. A more thorough discussion of these matters is provided in Appendix 8.

Appendix 8

In Appendix 7, we detailed how an SPE independence model and a target error competition theory predict that implicit and explicit learning should vary across participants. To review, the SPE model predicts (1) positive correlations between implicit learning and total adaptation, and (2) negative correlations between explicit strategy and total adaptation. On the other hand, the competition theory predicts (1) negative correlations between implicit learning and total adaptation, and (2) positive correlations between explicit strategy and total adaptation. We noted several datasets that supported the competition theory: our data in Experiment 3 (Figure 5G&H), experiments conducted by Maresch et al., 2021 (Figure 5—figure supplement 1A,D&G), our data in Experiment 1 (Figure 5—figure supplement 1B,E&H), and data collected by Tsay et al., 2021a (Figure 5—figure supplement 1C,F&I). Here, we detail a critical nuance in the competition theory’s predictions that may result in little to no correlation between implicit learning and total adaptation.

8.1 Subject-to-subject correlations in implicit learning within the competition theory

The competition theory (i.e. target error learning model) will not always produce a negative relationship between implicit learning and total adaptation. In Appendix 7.1, we explained that the competition theory, xiss = pi (r – xess), does on average predict a negative correlation between implicit learning and total adaptation. Let us consider again why this occurs. Suppose Participants A and B have identical implicit learning systems, but Participant A uses a greater explicit strategy. Overall, this means Participant A will adapt more to the perturbation. However, the greater strategy leaves a smaller driving force for the implicit system, yielding less implicit learning. Thus, total adaptation is positively correlated with explicit strategy, but negatively correlated with implicit learning.
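This sign logic can be checked with two lines of arithmetic; the gain pi = 0.8 and the two strategy values below are hypothetical:

```python
p_i, r = 0.8, 30.0            # hypothetical implicit gain and rotation (deg)

x_e_A, x_e_B = 20.0, 10.0     # Participant A uses more explicit strategy
x_i_A = p_i * (r - x_e_A)     # 0.8 * 10 = 8 deg of implicit learning
x_i_B = p_i * (r - x_e_B)     # 0.8 * 20 = 16 deg
total_A = x_i_A + x_e_A       # 28 deg
total_B = x_i_B + x_e_B       # 26 deg

# A adapts more in total, yet learns less implicitly than B
print(total_A > total_B, x_i_A < x_i_B)
```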

To restate this idea, in a target error model, between-subject variation in explicit strategy creates a negative relationship between implicit learning and total adaptation. However, these predictions rely on a key assumption: implicit learning properties must be the same across all participants to yield negative correlations. In other words, in our Participants A and B example, both participants were assumed to have the same pi parameter, a term that depends on implicit error sensitivity and retention. Between-subject variation in these implicit learning properties, however, will promote a positive relationship between total adaptation and implicit learning. Thus, it is entirely possible that the negative correlations promoted by between-subject explicit variability can be negated by the positive correlations promoted by between-subject implicit variability, yielding no correlation in some instances.

To illustrate this, consider the toy simulation in Figure 5—figure supplement 4A. At left, we simulate total implicit learning using the competition equation (pi = 0.8) across 35 participants adapting to a 30° rotation, whose explicit strategies vary according to a normal distribution (mean = 12°, S.D. = 4°). Note the strong negative relationship between implicit learning and total adaptation. At right, we show the same data (same explicit strategies) but introduce variability in implicit learning (pi in the model is varied according to a normal distribution with mean = 0.8 and S.D. = 0.1). Even though these data arise from the same competition equation, adding between-subject variation in implicit learning properties abolishes the statistically significant correlation between implicit learning and total adaptation (p = 0.199, R2 = 0.05).
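A minimal sketch of this toy simulation, using the same distributions as in the text (the random seed, and therefore the exact correlation values, are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 35, 30.0
x_e = rng.normal(12.0, 4.0, n)          # explicit strategies (deg)

# Fixed implicit gain: implicit and total are perfectly negatively related,
# since total = 0.8*r + 0.2*x_e while implicit = 0.8*(r - x_e)
x_i_fixed = 0.8 * (r - x_e)
corr_fixed = np.corrcoef(x_i_fixed, x_i_fixed + x_e)[0, 1]

# Between-subject variation in the implicit gain (SD = 0.1) erodes this
p_i = rng.normal(0.8, 0.1, n)
x_i_var = p_i * (r - x_e)
corr_var = np.corrcoef(x_i_var, x_i_var + x_e)[0, 1]

print(round(corr_fixed, 2), round(corr_var, 2))  # the correlation weakens
```

The fixed-gain correlation is exactly −1 here because, without measurement noise, implicit learning and total adaptation are both linear functions of the same explicit strategy; gain variability breaks that linear dependence.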

The competition equation predicts that the correlation between implicit learning and total adaptation is uniquely susceptible to contamination with between-subject implicit variability. That is, while the correlation between implicit learning and total adaptation (Figure 5—figure supplement 4A, right) was not statistically significant, the same simulated data exhibited a strong positive correlation between explicit strategy and total adaptation (Figure 5—figure supplement 4B right; p < 0.001, R2 = 0.42), and a strong negative correlation between implicit learning and explicit strategy (Figure 5—figure supplement 4C, right; p < 0.001, R2 = 0.77).

Thus, with implicit variability the competition theory can simultaneously exhibit no correlation between implicit learning and total adaptation, a strong positive correlation between explicit strategy and total adaptation, and a strong negative correlation between implicit learning and explicit strategy.

To conclude, correlative phenomena in the competition theory represent a balance between negative correlations induced by between-subject explicit variability, and positive correlations induced by between-subject implicit variability. Observing negative correlations is a probabilistic phenomenon. A given study can easily fail to yield a statistically significant correlation between total adaptation and implicit learning, yet still be governed by the competition equation. To maximize the probability of detecting a negative correlation between implicit learning and total adaptation, there are several critical factors that should be considered by the experimenter.

To describe these factors, we compare our data in Experiment 3 (Figure 5G&H) to experimental conditions where we detected no statistically significant correlation between implicit learning and total adaptation. These include the 30° rotation groups collected in Tsay et al., 2021a and Experiment 1 (Figure 5—figure supplement 2A, middle and right). These studies used similar experimental procedures, yet only our data in Exp. 3 yielded a statistically significant correlation between implicit learning and total adaptation. Here, we describe four key factors that may have played a role in these differences. For each factor, we will perform simulations using the competition equation. Factors 1 and 2 deal with statistical power. Factors 3 and 4 deal with how changes in explicit strategy use alter the ability to measure correlations between implicit learning and total adaptation.

8.2 Factor 1. Statistical power: total number of trials

Because correlations between implicit learning and total adaptation reflect a balance between two opposing variability sources, high statistical power will increase one’s ability to detect them in an experiment. One simple way to increase this power is to increase the total number of trials used to measure total adaptation and implicit learning. Each reaching movement is corrupted by motor variability; averaging over more trials lessens the effect of trial-to-trial reach variability on subject-to-subject correlations. This is especially important for the number of aftereffect trials used to measure implicit learning, which remains limited in many studies.
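The underlying statistics are just the √k law for a k-trial average. Assuming, for illustration, 12° of per-trial motor noise and a 15° true aftereffect (both hypothetical values):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, true_implicit = 12.0, 15.0   # hypothetical per-trial noise and true aftereffect (deg)

def estimate_sd(k, n_exp=20000):
    """SD of the k-trial mean aftereffect across many simulated experiments."""
    trials = rng.normal(true_implicit, sigma, (n_exp, k))
    return trials.mean(axis=1).std()

# precision of the aftereffect estimate improves as sigma / sqrt(k)
print(estimate_sd(18), estimate_sd(80))
```

Going from 18 aftereffect trials (as in Exp. 1) to 80 (as in Exp. 3) roughly halves the measurement noise on each participant’s implicit learning estimate.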

Consider the simulations in Figure 5—figure supplement 2B. These simulations show a power analysis in which we vary the total number of aftereffect trials, to estimate the probability that an experiment with 30 participants will yield a statistically significant correlation between implicit and total adaptation. Here, implicit learning is set by the competition equation. All simulation parameters are held constant (e.g. explicit parameter variability, implicit parameter variability, mean explicit strategy; see Appendix 8.8 below) except the total number of aftereffect trials used to calculate implicit learning. That is, we average over simulated trials to calculate total learning, explicit strategy, and implicit learning. Each simulated trial differs due to motor execution noise (i.e. varied according to a normal distribution). We repeat each simulation 40,000 times with 30 participants in each simulation, and calculate the total fraction of iterations in which there was a statistically significant negative correlation between total adaptation and implicit learning (Figure 5—figure supplement 2B, red, left), no statistically significant correlation between total adaptation and implicit learning (Figure 5—figure supplement 2B, black, left), or a positive statistically significant correlation between these two variables (Figure 5—figure supplement 2B, green, left).
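A compact version of this power analysis might look as follows. This is a sketch rather than the exact procedure: it uses the default parameter distributions listed in Appendix 8.8, assumes the standard state-space steady-state gain pi = bi/(1 − ai + bi), runs 500 rather than 40,000 iterations, and tests significance via the t statistic for a Pearson correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

def power(n_aftereffect_trials, n_sims=500, n_subj=30, r=30.0):
    """Fraction of simulated experiments that show a statistically
    significant negative implicit-total correlation (two-tailed p < 0.05)."""
    t_crit = 2.048                                        # t(28) critical value
    hits = 0
    for _ in range(n_sims):
        x_e = rng.normal(10.0, 6.0, n_subj)               # explicit strategy (deg)
        a = rng.uniform(0.90, 0.95, n_subj)               # implicit retention
        b = rng.uniform(0.20, 0.30, n_subj)               # implicit error sensitivity
        x_i = b / (1 - a + b) * (r - x_e)                 # competition equation
        sd = rng.normal(12.0, 6.0, n_subj).clip(min=1.0)  # per-subject motor noise
        # averaging k noisy trials shrinks measurement noise by sqrt(k)
        mi = x_i + rng.normal(0.0, sd / np.sqrt(n_aftereffect_trials))
        mt = x_i + x_e + rng.normal(0.0, sd / np.sqrt(40.0))
        rho = np.corrcoef(mi, mt)[0, 1]
        t = rho * np.sqrt((n_subj - 2) / (1 - rho ** 2))
        hits += t < -t_crit                               # significant and negative
    return hits / n_sims

p_lo, p_hi = power(5), power(80)
print(p_lo, p_hi)   # more aftereffect trials -> higher power
```

Sweeping a different quantity with the same scaffold — the mean of `sd` (Factor 2), or the mean or SD of `x_e` (Factors 3 and 4) — reproduces the other power analyses.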

This power analysis qualitatively demonstrates that increasing the number of aftereffect trials greatly improves one’s ability to detect a negative statistically significant correlation between total adaptation and implicit learning. We should note that our study (Experiment 3) is an outlier, in that we used a very large number of no feedback (and no aiming) trials to measure implicit learning: 80 trials. In cases where we did not detect a statistically significant correlation, the total aftereffect trial count was much smaller: Tsay et al., 2021a used only 20 trials to measure the implicit aftereffect and Exp. 1 used only 18 trials (Figure 5—figure supplement 2B, right). Thus, Exp. 3 was more likely to produce a negative correlation between implicit learning and total adaptation, given this experimental factor.

8.3 Factor 2. Statistical power: motor variability

The second factor that plays an important role in measuring subject-to-subject correlations is also related to statistical power: motor variability. Like trial count (Factor 1), the more variable a participant’s reaching movements are, the poorer one’s estimates of total learning and implicit learning. To show this, we repeated the power analysis described above (using the competition model), but this time held all simulation parameters constant except trial-to-trial variability in executing a movement. We sampled this motor execution noise parameter for each participant; some simulated subjects had higher trial-to-trial variability than others. We gradually increased the mean motor noise parameter across participants, as well as the variation in motor noise across participants. Results are shown in Figure 5—figure supplement 2C.

Motor execution noise plays a strong role in detecting statistically significant negative correlations between implicit learning and total adaptation (Figure 5—figure supplement 2C, left, red); as motor execution noise increases, the probability of detecting a statistically significant correlation falls sharply. Therefore, studies in which subjects have smaller trial-to-trial variability in reaching movements will be more likely to detect negative correlations between total adaptation and implicit learning. For example, we calculated the trial-by-trial variability in reach angle during the no feedback periods in our data (Exp. 3) as well as the 30° rotation datasets we described above (Tsay et al., 2021a and Experiment 1). We used this period so that trial-to-trial volatility in explicit strategy did not corrupt our estimate of motor variability (trial-to-trial variability is much larger during asymptotic behavior, when strategy fluctuations also contribute).

As shown in Figure 5—figure supplement 2C at right, participants in Experiment 3 exhibited smaller trial-by-trial reach variability (one-way ANOVA, F = 6.84, p = 0.002) than both the Tsay dataset (post-hoc test: p = 0.003) and Experiment 1 (post-hoc test: p = 0.015). Thus, Exp. 3 was more likely to produce a negative correlation between implicit learning and total adaptation, given this experimental factor. In addition, it should be noted that motor variability (Factor 2) will act synergistically with limited aftereffect trials (Factor 1) to impair one’s ability to obtain accurate implicit learning measures.

While it may be difficult to control motor noise, experimenters should consider the following parameters: (1) movement displacement, (2) the type of experimental apparatus (laptop vs. robot vs. tablet), (3) the speed of the reaching or shooting movements, and (4) target location. These experimental conditions will alter reaching variability and may improve one’s ability to detect negative correlations between total adaptation and implicit learning.

8.4 Factor 3: explicit strategy use

Factors 3 and 4 relate less to statistical power, and more to the variability sources that underlie subject-to-subject differences in implicit and explicit learning. A critical factor that determines one’s probability of detecting negative correlations between implicit learning and total adaptation is total explicit strategy use. To determine how overall strategy use affects the ability to obtain statistically significant correlations, we again used our power analyses. This time, we repeated our power analysis procedure but varied the mean of the normal distribution used to simulate variable explicit strategies; we gradually increased the mean strategy use across our simulations, in the case where participants adapted to a 30° rotation. All other simulation parameters remained constant across simulations (n = 30 in each simulation, 40,000 iterations for each explicit strategy level). The results are shown in Figure 5—figure supplement 2E, at left.

Strategy use strongly affects one’s ability to detect negative statistically significant correlations between implicit learning and total adaptation (Figure 5—figure supplement 2E, left, red). When participants use little explicit strategy on average, it is more difficult to obtain a statistically significant implicit-total adaptation correlation. In other words, at a given rotation size, studies in which participants use greater strategies are more likely to yield a negative relationship between implicit learning and total adaptation. Comparing participants in Experiment 3 to the Tsay dataset and Experiment 1 (Figure 5—figure supplement 2E, right), we noted that participants in the Tsay dataset used markedly less explicit strategy (one-way ANOVA, F = 11.09, p < 0.001; post-hoc tests had p < 0.001 for Experiment 3 vs. Tsay and Experiment 1 vs. Tsay). Thus, participants in the Tsay experiment were least likely to exhibit negative correlations between implicit learning and total adaptation, according to this experimental factor.

8.5 Factor 4: Between-subject variability in explicit strategy use

Recall that the relationship between implicit learning and total adaptation is a balance between variability sources: positive correlations induced by between-subjects implicit variability, and negative correlations induced by between-subjects explicit variability. Thus, more variability in explicit strategy increases how likely one is to detect a negative correlation between implicit and total adaptation. To demonstrate this, we performed a final power analysis. All parameters were constant across simulations, except variability in strategy use. For each simulation (n = 30) we sampled strategies from a normal distribution; we gradually increased the SD of this normal distribution across simulations (40,000 simulations for each level) while holding mean explicit strategy constant. The results are shown in Figure 5—figure supplement 2D, at left.

These simulations demonstrated two critical properties. First, as subject-to-subject variability in strategy use increases, so too does the likelihood of detecting a negative relationship between implicit learning and total adaptation (Figure 5—figure supplement 2D, left, red). Second, when between-subject explicit variability is very low, there is a chance of detecting positive correlations between total adaptation and implicit learning (Figure 5—figure supplement 2D, left, green), even in the competition theory. This key point should be kept in mind when experiments use conditions where strategy use is minimal across participants (e.g. exceedingly gradual rotations; very small rotations, etc.).

Along these lines, we should note that between-subject variability in explicit strategy use was greatest in Experiment 3. As compared to the Tsay dataset and Experiment 1, explicit variability was 32% and 72% greater in Experiment 3, respectively (Figure 5—figure supplement 2D, right). Therefore again, Experiment 3 was most likely to produce a negative correlation between implicit learning and total adaptation.

8.6 Unique susceptibility in the correlation between implicit learning and total adaptation

It is important to reiterate that with target error learning, negative correlations between implicit learning and total adaptation are uniquely challenging to detect. That is, there is more power to detect positive correlations between explicit strategy and total adaptation. For example, though we did not detect a negative relationship between implicit learning and total adaptation in the 30° conditions tested by Tsay et al., 2021a and in Experiment 1, we did detect a positive correlation between explicit strategy and total adaptation in these experiments (Figure 5—figure supplement 3A and B).

Figure 5—figure supplement 3 (panels C-F) again shows the power analyses on Factors 1-4 illustrated in Figure 5—figure supplement 2, but this time investigates the correlation between explicit strategy and total adaptation. Across all four factors, the power analyses demonstrated that experiments should yield a greater probability of detecting positive correlations between explicit strategy and total adaptation (Figure 5—figure supplement 3C-F, green curves at top) than negative correlations between implicit learning and total adaptation (Figure 5—figure supplement 3C-F, red curves at top). These data are recapitulated in the simulated R2 statistic across the two correlations (Figure 5—figure supplement 3C-F, bottom row); the correlation between total adaptation and explicit strategy was consistently stronger than the correlation between total adaptation and implicit learning.

8.7 Appendix 8 summary

Here we explained processes that impact the correlation between implicit learning and total adaptation in the competition theory. Between-subject variability in explicit strategy and in implicit learning properties promote negative and positive correlations between implicit learning and total adaptation, respectively. These opposing factors make it possible that correlations between implicit learning and total adaptation may be weak or absent in an experiment. We explored four key experimental factors that researchers should consider in their data sets to maximize the chance of detecting negative correlations between implicit learning and total adaptation. However, this is by no means a complete list. For example, greater SPE learning will drastically undermine the negative correlations between implicit learning and total adaptation produced by target error learning. Thus, we expect that conditions which use multiple visual landmarks (e.g., aiming targets) are unlikely to show negative correlations between implicit learning and total adaptation.

8.8 Appendix 8 methods

Here we analyzed data collected in Experiment 1, Experiment 3, and Tsay et al. (2021). Implicit and explicit learning measures were calculated as reported in the Methods section in our main text. These implicit and explicit learning measures were used to calculate the correlations shown in Figure 5—figure supplement 2A and Figure 5—figure supplement 3A&B. In addition, the explicit measures were used to calculate the strategy use in Figure 5—figure supplement 2E. Each dot in the right-most inset represents an individual participant. Variations in explicit strategy across experiments were assessed with a one-way ANOVA, with Bonferroni-corrected post-hoc tests. In addition, Figure 5—figure supplement 2D (at right) shows the std. dev. in explicit strategy across participants within the three experimental conditions. In Figure 5—figure supplement 2C, we estimated motor variability within individual participants. To do this we calculated the standard deviation in the reach angle across trials in the no-aiming period at the end of each experiment. We chose this period to prevent volatility in explicit strategy from inflating our motor variability measure. Each dot in the right-most inset shows the reach angle standard deviation for a single participant. We assessed differences in motor variability across the three experiments using a one-way ANOVA, with Bonferroni-corrected post-hoc tests.

In Figure 5—figure supplement 4, we provide toy simulations to illustrate how variation in implicit learning properties alters pairwise relationships between implicit learning, explicit strategy, and total adaptation. For the left-most inset in panels A, B, and C, we simulated a condition with no variability in implicit learning properties. That is, we used the competition equation to simulate implicit learning, but held ai and bi constant across all participants (each individual dot in the panel). We chose ai and bi so that the implicit learning gain, pi, was equal to 0.8. We simulated 35 participants adapting to a 30° rotation. Explicit strategy was sampled for each participant using a normal distribution with a mean of 12° and a standard deviation of 4°. The right-most insets in panels A, B, and C use the exact same explicit strategies. However, here we allowed pi (i.e., implicit learning properties) to vary across participants. To simulate this variation, we sampled pi according to a normal distribution with a mean of 0.8 and a standard deviation of 0.1.

Finally, Figure 5—figure supplement 2 and Figure 5—figure supplement 3 show four power analyses. The power analyses were the same across these two figures; Figure 5—figure supplement 2 focuses on how implicit learning relates to total adaptation, whereas Figure 5—figure supplement 3 considers how explicit strategy relates to total adaptation. These power analyses have several parameters. First, implicit retention was uniformly sampled between 0.9 and 0.95, and implicit error sensitivity was uniformly sampled between 0.2 and 0.3. The rotation size was always 30°. Other simulation parameters varied across each power analysis. For each power analysis, there was one parameter that varied across simulations, while all other parameters were fixed to default values. The default values were as follows. Explicit learning was sampled for each participant using a normal distribution with a mean of 10° and a standard deviation of 6°. The total number of trials used to measure total adaptation, implicit learning, and explicit learning was equal to 40. Motor variability had a mean of 12° across participants, with a standard deviation of 6°.

Power analyses in Figure 5—figure supplement 2B and Figure 5—figure supplement 3C used the default parameter values but varied the total number of probe trials used to measure implicit and explicit learning between 1 and 80. Power analyses in Figure 5—figure supplement 2C and Figure 5—figure supplement 3D used the default parameter values but varied the average motor variability between 5° and 20°, and the standard deviation in motor variability between 2° and 10°. As mean motor variability increased across simulations, so did the subject-level standard deviation. Power analyses in Figure 5—figure supplement 2D and Figure 5—figure supplement 3E used the default parameter values but varied the between-participant standard deviation in strategy use between 0.1° and 8°. Finally, power analyses in Figure 5—figure supplement 2E and Figure 5—figure supplement 3F used the default parameter values but varied average strategy use between 0° and 20°.

In these power analyses, the parameter of interest was varied linearly between its two extreme values. For each value we conducted 40,000 simulations, each time sampling random variables for 30 participants according to the distributions noted above. Across these simulations we calculated the probability that a negative statistically significant relationship occurred between implicit learning and total adaptation (red lines in Figure 5—figure supplement 2 and Figure 5—figure supplement 3), a positive statistically significant relationship occurred between implicit learning and total adaptation (green lines in Figure 5—figure supplement 2), no statistically significant relationship occurred between implicit learning and total adaptation (black lines in Figure 5—figure supplement 2), or a positive statistically significant relationship occurred between explicit learning and total adaptation (green lines in Figure 5—figure supplement 3). Statistically significant relationships were detected using a linear regression across the 30 participants in each simulation (p < 0.05). The bottom row in Figure 5—figure supplement 3 shows the average R2 statistic for these linear regressions.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Scott T Albert, Email: scottalbert1@gmail.com.

Kunlin Wei, Peking University, China.

Michael J Frank, Brown University, United States.

Funding Information

This paper was supported by the following grants:

  • National Institute of Neurological Disorders and Stroke F32NS095706 to Scott T Albert.

  • National Science Foundation CNS-1714623 to Reza Shadmehr.

  • National Institute of Neurological Disorders and Stroke R01NS078311 to Reza Shadmehr.

Additional information

Competing interests

No competing interests declared.


Author contributions

Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Software, Writing – original draft, Writing – review and editing.

Conceptualization, Data curation, Investigation, Methodology, Software, Writing – review and editing.

Conceptualization, Data curation, Investigation, Methodology, Resources, Writing – review and editing.

Conceptualization, Data curation, Investigation, Methodology, Resources.

Conceptualization, Data curation, Investigation, Methodology, Project administration, Resources.

Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Writing – review and editing.

Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Supervision, Writing – review and editing.

Conceptualization, Data curation, Investigation, Methodology, Resources, Supervision, Writing – review and editing.

Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing – review and editing.

Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Writing – original draft, Writing – review and editing.

Ethics

Human subjects: Informed consent was obtained from all study participants. All human subjects work was approved by the Johns Hopkins School of Medicine Institutional Review Board (protocol number NA_00037510) or the York Human Participants Review Sub-committee.

Additional files

Transparent reporting form

Data availability

Source data files generated or analyzed during this study, as well as the associated analysis code, are included as supplements to Figures 1-10, as well as their associated Figure Supplements, and have also been deposited in OSF under accession code MZS6A.

The following dataset was generated:

Albert ST, Jang J, Modchalingam S, Hart M, Henriques D, Lerner G, Della-Maggiore V, Haith AM, Krakauer JW, Shadmehr R. 2022. Competition between parallel sensorimotor learning systems. Open Science Framework. 10.17605/OSF.IO/MZS6A

  13. Day KA, Roemmich RT, Taylor JA, Bastian AJ. Visuomotor learning generalizes around the intended movement. ENeuro. 2016;3:ENEURO.0005-16.2016. doi: 10.1523/ENEURO.0005-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. de Brouwer AJ, Albaghdadi M, Flanagan JR, Gallivan JP. Using gaze behavior to parcellate the explicit and implicit contributions to visuomotor learning. Journal of Neurophysiology. 2018;120:1602–1615. doi: 10.1152/jn.00113.2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Donchin O, Rabe K, Diedrichsen J, Lally N, Schoch B, Gizewski ER, Timmann D. Cerebellar regions involved in adaptation to force field and visuomotor perturbation. Journal of Neurophysiology. 2012;107:134–147. doi: 10.1152/jn.00007.2011. [DOI] [PubMed] [Google Scholar]
  16. Ebbinghaus H. Über Das Gedächtnis. Duncker and Humblot; 1885. [Google Scholar]
  17. Fernandes HL, Stevenson IH, Kording KP. Generalization of stochastic visuomotor rotations. PLOS ONE. 2012;7:e43016. doi: 10.1371/journal.pone.0043016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Fernandez-Ruiz J, Wong W, Armstrong IT, Flanagan JR. Relation between reaction time and reach errors during visuomotor adaptation. Behavioural Brain Research. 2011;219:8–14. doi: 10.1016/j.bbr.2010.11.060. [DOI] [PubMed] [Google Scholar]
  19. Gabrieli JD, Corkin S, Mickel SF, Growdon JH. Intact acquisition and long-term retention of mirror-tracing skill in Alzheimer’s disease and in global amnesia. Behavioral Neuroscience. 1993;107:899–910. doi: 10.1037//0735-7044.107.6.899. [DOI] [PubMed] [Google Scholar]
  20. Hadjiosif AM, Smith MA. Savings is restricted to the temporally labile component of motor adaptation. Translational and Computational Motor Control. 2015. [Google Scholar]
  21. Haith AM, Huberdeau DM, Krakauer JW. The influence of movement preparation time on the expression of visuomotor learning and savings. The Journal of Neuroscience. 2015;35:5109–5117. doi: 10.1523/JNEUROSCI.3869-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Hanajima R, Shadmehr R, Ohminami S, Tsutsumi R, Shirota Y, Shimizu T, Tanaka N, Terao Y, Tsuji S, Ugawa Y, Uchimura M, Inoue M, Kitazawa S. Modulation of error-sensitivity during a prism adaptation task in people with cerebellar degeneration. Journal of Neurophysiology. 2015;114:2460–2471. doi: 10.1152/jn.00145.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Heald JB, Lengyel M, Wolpert DM. Contextual inference underlies the learning of sensorimotor repertoires. Nature. 2021;600:489–493. doi: 10.1038/s41586-021-04129-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Heffley W, Song EY, Xu Z, Taylor BN, Hughes MA, McKinney A, Joshua M, Hull C. Coordinated cerebellar climbing fiber activity signals learned sensorimotor predictions. Nature Neuroscience. 2018;21:1431–1441. doi: 10.1038/s41593-018-0228-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Herzfeld DJ, Vaswani PA, Marko MK, Shadmehr R. A memory of errors in sensorimotor learning. Science. 2014;345:1349–1353. doi: 10.1126/science.1253138. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Herzfeld DJ, Kojima Y, Soetedjo R, Shadmehr R. Encoding of action by the Purkinje cells of the cerebellum. Nature. 2015;526:439–442. doi: 10.1038/nature15693. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Herzfeld DJ, Kojima Y, Soetedjo R, Shadmehr R. Encoding of error and learning to correct that error by the Purkinje cells of the cerebellum. Nature Neuroscience. 2018;21:736–743. doi: 10.1038/s41593-018-0136-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Hosseini EA, Nguyen KP, Joiner WM. The decay of motor adaptation to novel movement dynamics reveals an asymmetry in the stability of motion state-dependent learning. PLOS Computational Biology. 2017;13:e1005492. doi: 10.1371/journal.pcbi.1005492. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Huang VS, Haith A, Mazzoni P, Krakauer JW. Rethinking motor learning and savings in adaptation paradigms: model-free memory for successful actions combines with internal models. Neuron. 2011;70:787–801. doi: 10.1016/j.neuron.2011.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Huberdeau DM, Haith AM, Krakauer JW. Formation of a long-term memory for visuomotor adaptation following only a few trials of practice. Journal of Neurophysiology. 2015;114:969–977. doi: 10.1152/jn.00369.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Huberdeau DM, Krakauer JW, Haith AM. Practice induces a qualitative change in the memory representation for visuomotor learning. Journal of Neurophysiology. 2019;122:1050–1059. doi: 10.1152/jn.00830.2018. [DOI] [PubMed] [Google Scholar]
  32. Hwang EJ, Shadmehr R. Internal models of limb dynamics and the encoding of limb state. Journal of Neural Engineering. 2005;2:S266–S278. doi: 10.1088/1741-2560/2/3/S09. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Hwang EJ, Smith MA, Shadmehr R. Dissociable effects of the implicit and explicit memory systems on learning control of reaching. Experimental Brain Research. 2006;173:425–437. doi: 10.1007/s00221-006-0391-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Izawa J, Criscimagna-Hemminger SE, Shadmehr R. Cerebellar contributions to reach adaptation and learning sensory consequences of action. The Journal of Neuroscience. 2012;32:4230–4239. doi: 10.1523/JNEUROSCI.6353-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Joiner WM, Sing GC, Smith MA. Temporal specificity of the initial adaptive response in motor adaptation. PLOS Computational Biology. 2017;13:e1005438. doi: 10.1371/journal.pcbi.1005438. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Kagerer FA, Contreras-Vidal JL, Stelmach GE. Adaptation to gradual as compared with sudden visuo-motor distortions. Experimental Brain Research. 1997;115:557–561. doi: 10.1007/pl00005727. [DOI] [PubMed] [Google Scholar]
  37. Kawato M. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology. 1999;9:718–727. doi: 10.1016/s0959-4388(99)00028-8. [DOI] [PubMed] [Google Scholar]
  38. Kim HE, Morehead JR, Parvin DE, Moazzezi R, Ivry RB. Invariant errors reveal limitations in motor correction rather than constraints on error sensitivity. Communications Biology. 2018;1:19. doi: 10.1038/s42003-018-0021-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Kim HE, Parvin DE, Ivry RB. The influence of task outcome on implicit motor learning. eLife. 2019;8:e39882. doi: 10.7554/eLife.39882. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Kitago T, Ryan SL, Mazzoni P, Krakauer JW, Haith AM. Unlearning versus savings in visuomotor adaptation: comparing effects of washout, passage of time, and removal of errors on motor memory. Frontiers in Human Neuroscience. 2013;7:307. doi: 10.3389/fnhum.2013.00307. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Kojima Y, Iwamoto Y, Yoshida K. Memory of learning facilitates saccadic adaptation in the monkey. The Journal of Neuroscience. 2004;24:7531–7539. doi: 10.1523/JNEUROSCI.1741-04.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Kojima Y, Soetedjo R. Elimination of the error signal in the superior colliculus impairs saccade motor learning. PNAS. 2018;115:E8987–E8995. doi: 10.1073/pnas.1806215115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Körding KP, Wolpert DM. The loss function of sensorimotor learning. PNAS. 2004;101:9839–9842. doi: 10.1073/pnas.0308394101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Kording KP, Tenenbaum JB, Shadmehr R. The dynamics of memory as a consequence of optimal adaptation to a changing body. Nature Neuroscience. 2007;10:779–786. doi: 10.1038/nn1901. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Kostadinov D, Beau M, Blanco-Pozo M, Häusser M. Predictive and reactive reward signals conveyed by climbing fiber inputs to cerebellar Purkinje cells. Nature Neuroscience. 2019;22:950–962. doi: 10.1038/s41593-019-0381-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Krakauer JW, Pine ZM, Ghilardi MF, Ghez C. Learning of visuomotor transformations for vectorial planning of reaching trajectories. The Journal of Neuroscience. 2000;20:8916–8924. doi: 10.1523/JNEUROSCI.20-23-08916.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Langsdorf L, Maresch J, Hegele M, McDougle SD, Schween R. Prolonged response time helps eliminate residual errors in visuomotor adaptation. Psychonomic Bulletin & Review. 2021;28:834–844. doi: 10.3758/s13423-020-01865-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Leow L-A, de Rugy A, Marinovic W, Riek S, Carroll TJ. Savings for visuomotor adaptation require prior history of error, not prior repetition of successful actions. Journal of Neurophysiology. 2016;116:1603–1614. doi: 10.1152/jn.01055.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Leow LA, Gunn R, Marinovic W, Carroll TJ. Estimating the implicit component of visuomotor rotation learning by constraining movement preparation time. Journal of Neurophysiology. 2017;118:666–676. doi: 10.1152/jn.00834.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Leow L-A, Marinovic W, de Rugy A, Carroll TJ. Task errors contribute to implicit aftereffects in sensorimotor adaptation. The European Journal of Neuroscience. 2018;48:3397–3409. doi: 10.1111/ejn.14213. [DOI] [PubMed] [Google Scholar]
  51. Leow L-A, Marinovic W, de Rugy A, Carroll TJ. Task errors drive memories that improve sensorimotor adaptation. The Journal of Neuroscience. 2020;40:3075–3088. doi: 10.1523/JNEUROSCI.1506-19.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Lerner G, Albert S, Caffaro PA, Villalta JI, Jacobacci F, Shadmehr R, Della-Maggiore V. The origins of anterograde interference in visuomotor adaptation. Cerebral Cortex. 2020;30:4000–4010. doi: 10.1093/cercor/bhaa016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. MacLeod CM. Forgotten but not gone: savings for pictures and words in long-term memory. Journal of Experimental Psychology. Learning, Memory, and Cognition. 1988;14:195–212. doi: 10.1037//0278-7393.14.2.195. [DOI] [PubMed] [Google Scholar]
  54. Maresch J, Werner S, Donchin O. Methods matter: Your measures of explicit and implicit processes in visuomotor adaptation affect your results. The European Journal of Neuroscience. 2021;53:504–518. doi: 10.1111/ejn.14945. [DOI] [PubMed] [Google Scholar]
  55. Marko MK, Haith AM, Harran MD, Shadmehr R. Sensitivity to prediction error in reach adaptation. Journal of Neurophysiology. 2012;108:1752–1763. doi: 10.1152/jn.00177.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Martin TA, Keating JG, Goodkin HP, Bastian AJ, Thach WT. Throwing while looking through prisms. I. Focal olivocerebellar lesions impair adaptation. Brain. 1996;119 (Pt 4):1183–1198. doi: 10.1093/brain/119.4.1183. [DOI] [PubMed] [Google Scholar]
  57. Maschke M, Gomez CM, Ebner TJ, Konczak J. Hereditary cerebellar ataxia progressively impairs force adaptation during goal-directed arm movements. Journal of Neurophysiology. 2004;91:230–238. doi: 10.1152/jn.00557.2003. [DOI] [PubMed] [Google Scholar]
  58. Mawase F, Shmuelof L, Bar-Haim S, Karniel A. Savings in locomotor adaptation explained by changes in learning parameters following initial adaptation. Journal of Neurophysiology. 2014;111:1444–1454. doi: 10.1152/jn.00734.2013. [DOI] [PubMed] [Google Scholar]
  59. Mazzoni P, Krakauer JW. An implicit plan overrides an explicit strategy during visuomotor adaptation. The Journal of Neuroscience. 2006;26:3642–3645. doi: 10.1523/JNEUROSCI.5317-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. McDougle SD, Bond KM, Taylor JA. Explicit and implicit processes constitute the fast and slow processes of sensorimotor learning. The Journal of Neuroscience. 2015;35:9568–9579. doi: 10.1523/JNEUROSCI.5061-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. McDougle SD, Bond KM, Taylor JA. Implications of plan-based generalization in sensorimotor adaptation. Journal of Neurophysiology. 2017;118:383–393. doi: 10.1152/jn.00974.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. McDougle SD, Taylor JA. Dissociable cognitive strategies for sensorimotor learning. Nature Communications. 2019;10:40. doi: 10.1038/s41467-018-07941-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Medina JF, Garcia KS, Mauk MD. A mechanism for savings in the cerebellum. The Journal of Neuroscience. 2001;21:4081–4089. doi: 10.1523/JNEUROSCI.21-11-04081.2001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Medina JF. Teaching the cerebellum about reward. Nature Neuroscience. 2019;22:846–848. doi: 10.1038/s41593-019-0409-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Miall RC, Jenkinson N, Kulkarni K. Adaptation to rotated visual feedback: a re-examination of motor interference. Experimental Brain Research. 2004;154:201–210. doi: 10.1007/s00221-003-1630-2. [DOI] [PubMed] [Google Scholar]
  66. Milner B. Les Troubles de la Mémoire Accompagnant des Lésions Hippocampiques Bilatérales. Physiologie de l'hippocampe; 1962. [Google Scholar]
  67. Miyamoto YR, Wang S, Smith MA. Implicit adaptation compensates for erratic explicit strategy in human motor learning. Nature Neuroscience. 2020;23:443–455. doi: 10.1038/s41593-020-0600-3. [DOI] [PubMed] [Google Scholar]
  68. Morehead JR, Qasim SE, Crossley MJ, Ivry R. Savings upon re-aiming in visuomotor adaptation. The Journal of Neuroscience. 2015;35:14386–14396. doi: 10.1523/JNEUROSCI.1046-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Morehead JR, Taylor JA, Parvin DE, Ivry RB. Characteristics of implicit sensorimotor adaptation revealed by task-irrelevant clamped feedback. Journal of Cognitive Neuroscience. 2017;29:1061–1074. doi: 10.1162/jocn_a_01108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Morehead JR, Orban de Xivry JJ. A synthesis of the many errors and learning processes of visuomotor adaptation. bioRxiv. 2021 doi: 10.1101/2021.03.14.435278. [DOI]
  71. Morton SM, Bastian AJ. Cerebellar contributions to locomotor adaptations during splitbelt treadmill walking. The Journal of Neuroscience. 2006;26:9107–9116. doi: 10.1523/JNEUROSCI.2622-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Neville KM, Cressman EK. The influence of awareness on explicit and implicit contributions to visuomotor adaptation over time. Experimental Brain Research. 2018;236:2047–2059. doi: 10.1007/s00221-018-5282-7. [DOI] [PubMed] [Google Scholar]
  73. Saijo N, Gomi H. Multiple motor learning strategies in visuomotor rotation. PLOS ONE. 2010;5:e9399. doi: 10.1371/journal.pone.0009399. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Salomonczyk D, Cressman EK, Henriques DYP. Proprioceptive recalibration following prolonged training and increasing distortions in visuomotor adaptation. Neuropsychologia. 2011;49:3053–3062. doi: 10.1016/j.neuropsychologia.2011.07.006. [DOI] [PubMed] [Google Scholar]
  75. Sedaghat-Nejad E, Shadmehr R. The cost of correcting for error during sensorimotor adaptation. PNAS. 2021;118:e2101717118. doi: 10.1073/pnas.2101717118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Shadmehr R, Brandt J, Corkin S. Time-dependent motor memory processes in amnesic subjects. Journal of Neurophysiology. 1998;80:1590–1597. doi: 10.1152/jn.1998.80.3.1590. [DOI] [PubMed] [Google Scholar]
  77. Shadmehr R, Smith MA, Krakauer JW. Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience. 2010;33:89–108. doi: 10.1146/annurev-neuro-060909-153135. [DOI] [PubMed] [Google Scholar]
  78. Sing GC, Smith MA. Reduction in learning rates associated with anterograde interference results from interactions between different timescales in motor adaptation. PLOS Computational Biology. 2010;6:e1000893. doi: 10.1371/journal.pcbi.1000893. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Smith MA, Shadmehr R. Intact ability to learn internal models of arm dynamics in Huntington’s disease but not cerebellar degeneration. Journal of Neurophysiology. 2005;93:2809–2821. doi: 10.1152/jn.00943.2004. [DOI] [PubMed] [Google Scholar]
  80. Smith MA, Ghazizadeh A, Shadmehr R. Interacting adaptive processes with different timescales underlie short-term motor learning. PLOS Biology. 2006;4:e179. doi: 10.1371/journal.pbio.0040179. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Tanaka H, Sejnowski TJ, Krakauer JW. Adaptation to visuomotor rotation through interaction between posterior parietal and motor cortical areas. Journal of Neurophysiology. 2009;102:2921–2932. doi: 10.1152/jn.90834.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Taylor JA, Ivry RB. Flexible cognitive strategies during motor learning. PLOS Computational Biology. 2011;7:e1001096. doi: 10.1371/journal.pcbi.1001096. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Taylor JA, Krakauer JW, Ivry RB. Explicit and implicit contributions to learning in a sensorimotor adaptation task. The Journal of Neuroscience. 2014;34:3023–3032. doi: 10.1523/JNEUROSCI.3619-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Thoroughman KA, Shadmehr R. Learning of action through adaptive combination of motor primitives. Nature. 2000;407:742–747. doi: 10.1038/35037588. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Tsay JS, Ivry RB, Lee A, Avraham G. Moving outside the lab: The viability of conducting sensorimotor learning studies online. Neurons, Behavior, Data Analysis, and Theory. 2021a;5. doi: 10.51628/001c.26985. [DOI] [Google Scholar]
  86. Tsay JS, Haith AM, Ivry RB, Kim HE. Interactions between sensory prediction error and task error during implicit motor learning. bioRxiv. 2021b doi: 10.1101/2021.06.20.449180. [DOI] [PMC free article] [PubMed]
  87. Tsay JS, Kim HE, Haith AM, Ivry RB. Proprioceptive re-alignment drives implicit sensorimotor adaptation. bioRxiv. 2021c doi: 10.1101/2021.12.21.473747. [DOI] [PMC free article] [PubMed]
  88. Tsay JS, Kim HE, Parvin DE, Stover AR, Ivry RB. Individual differences in proprioception predict the extent of implicit sensorimotor adaptation. Journal of Neurophysiology. 2021d;125:1307–1321. doi: 10.1152/jn.00585.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Tseng YW, Diedrichsen J, Krakauer JW, Shadmehr R, Bastian AJ. Sensory prediction errors drive cerebellum-dependent adaptation of reaching. Journal of Neurophysiology. 2007;98:54–62. doi: 10.1152/jn.00266.2007. [DOI] [PubMed] [Google Scholar]
  90. Vaswani PA, Shmuelof L, Haith AM, Delnicki RJ, Huang VS, Mazzoni P, Shadmehr R, Krakauer JW. Persistent residual errors in motor adaptation tasks: reversion to baseline and exploratory escape. The Journal of Neuroscience. 2015;35:6969–6977. doi: 10.1523/JNEUROSCI.2656-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Wagner MJ, Kim TH, Savall J, Schnitzer MJ, Luo L. Cerebellar granule cells encode the expectation of reward. Nature. 2017;544:96–100. doi: 10.1038/nature21726. [DOI] [PMC free article] [PubMed] [Google Scholar]
  92. Wei K, Körding K. Relevance of error: what drives motor adaptation? Journal of Neurophysiology. 2009;101:655–664. doi: 10.1152/jn.90545.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Wilterson SA, Taylor JA. Implicit visuomotor adaptation remains limited after several days of training. ENeuro. 2021;8:ENEURO.0312-20.2021. doi: 10.1523/ENEURO.0312-20.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Wong AL, Shelhamer M. Using prediction errors to drive saccade adaptation: the implicit double-step task. Experimental Brain Research. 2012;222:55–64. doi: 10.1007/s00221-012-3195-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. Wong AL, Marvel CL, Taylor JA, Krakauer JW. Can patients with cerebellar disease switch learning mechanisms to reduce their adaptation deficits? Brain. 2019;142:662–673. doi: 10.1093/brain/awy334. [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Yin C, Wei K. Savings in sensorimotor adaptation without an explicit strategy. Journal of Neurophysiology. 2020;123:1180–1192. doi: 10.1152/jn.00524.2019. [DOI] [PubMed] [Google Scholar]
  97. Zarahn E, Weston GD, Liang J, Mazzoni P, Krakauer JW. Explaining savings for visuomotor adaptation: linear time-invariant state-space models are not sufficient. Journal of Neurophysiology. 2008;100:2537–2548. doi: 10.1152/jn.90529.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Zhou W, Fitzgerald J, Colucci-Chang K, Murthy KG, Joiner WM. The temporal stability of visuomotor adaptation generalization. Journal of Neurophysiology. 2017;118:2435–2447. doi: 10.1152/jn.00822.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]

Editor's evaluation

Kunlin Wei 1

The interaction between implicit and explicit processes is central to motor learning. The present study builds upon diverse and sometimes seemingly conflicting data sets to propose a computational model delineating a competitive relationship between the explicit and implicit learning processes during motor adaptation. The model provides a number of conceptual insights into the nature of error-based learning, not just for researchers in sensorimotor learning but also for those studying human learning in general.

Decision letter

Editor: Kunlin Wei1
Reviewed by: Kunlin Wei2, Timothy J Carroll3

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

[Editors’ note: after the initial submission, the editors requested that the authors prepare an action plan. This action plan was to detail how the reviewer concerns could be addressed in a revised submission. What follows is the initial editorial request for an action plan.]

Thank you for sending your article entitled "Competition between parallel sensorimotor learning systems" for peer review at eLife. Your article is being evaluated by 3 peer reviewers, and the evaluation is being overseen by a Reviewing Editor and Michael Frank as the Senior Editor. The reviewers have opted to remain anonymous.

Given the list of essential revisions, including new experiments, the editors and reviewers invite you to respond as soon as you can with an action plan for the completion of the additional work. We expect a revision plan that under normal circumstances could be accomplished within two months, although we understand that at present revisions will in reality take longer. We plan to share your responses with the reviewers and then advise further with a formal decision. After full discussion, all reviewers agree that this work, if accepted, would have a significant impact on the field. However, the reviewers raised a list of major concerns about the proposed model and its data handling. In particular, the lack of strong alternative models, a possible lack of explanatory power for outstanding findings in the field, and a strong reliance on modeling assumptions make it hard to accept the paper/model as it currently stands. We hope the authors can adequately address the following major concerns. [Editors’ note: the major concerns referenced here are detailed later in this decision narrative.]

[Editors’ note: the authors provided an initial action plan in response to reviewer comments. Following the initial action plan, the authors were provided additional reviewer comments. These concerns were addressed in a second action plan. Additional reviewer comments were provided after the second action plan. During the composition of a third action plan, a decision was made to reject the manuscript.]

[Editors’ note: the authors submitted for reconsideration following the decision after peer review. What follows is the decision letter after the first round of review.]

Thank you for resubmitting your work entitled "Competition between parallel sensorimotor learning systems" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Michael J Frank as the Senior Editor. The reviewers have opted to remain anonymous.

All the reviewers concluded that the study is not ready for publication in eLife. The primary concern is that direct evidence supporting the competition model is weak or simply lacking. In the previous manuscript, the only solid, quantitative evidence is the negative correlation between explicit and implicit learning (Figure 3H). But as we mentioned in the last decision letter, this negative correlation can also be obtained by the alternative (and mainstream) model based on both target error and sensory prediction error. In the revision plan, you proposed a new and essential piece of evidence to differentiate these two models. Still, this new result (i.e., the negative correlation between total learning and implicit learning) is at odds with multiple published datasets. We find that this will continue to be a problem even if we provide you more time to revise the paper or to continue collecting the data listed in the revision plan. The submission has been under discussion for a prolonged period, as the reviewers hoped to see solid evidence for the conceptual step-up presented in the manuscript. However, we regret that the study is not ready for publication as it currently stands.

[Editors’ note: the authors appealed the editorial decision. The appeal was granted and the authors were invited to submit a revised manuscript and response to reviewer comments. What follows are the reviewer comments provided after the initial submission of the manuscript.]

Concerns:

1. There is no clear testing of alternative models. After Figures 1 and 2, the paper proceeds with the competition model only. The independent model will never be able to explain differences in implicit learning for a given visuomotor rotation (VMR) size, so it is not a credible alternative to the competition model in its current form. It seems possible that extensions of that model based on the generalization of implicit learning (Day et al. 2016, McDougle et al. 2017) or on task-dependent differences in error sensitivity could explain most of the prominent findings reported here. We hope the authors can show more model-comparison results to support their model.

2. The measurement of implicit learning is compromised by not asking participants to stop aiming upon entering the washout period. This occurred in three places in the study: once in Experiment 2, once in the reused data from Fernandez-Ruiz et al., 2011, and once in the reused data from Mazzoni and Krakauer, 2006 (no-instruction group). We would like to know how the conclusions are affected by this mis-estimation of implicit learning.

3. All the new experiments limit preparation time (PT) to eliminate explicit learning. In fact, the results rest on the assumption that shortened PT leads to implicit-only learning (e.g., Figure 4). Is the manipulation as effective as claimed? This paradigm needs to be validated with independent measures, e.g., by direct measurement of implicit learning and/or explicit strategy.

4. Related to point 3, we note that the aiming report produced a different implicit learning estimate than the direct probe in Experiment 1 (Figure S5E). However, Figure 3 reports Experiment 1 data only with the probe estimate. What would the data look like with report-estimated implicit learning? Would they show the same negative correlation and comparably good model fits (Figure 3H)?

5. Figure 3H is one of the rare pieces of direct evidence supporting the model, but we find that the estimated slope parameter is based on a B value of 0.35, which is unusually high. How was it obtained? Why is it larger than most previous studies have suggested? In fact, this value is even larger than the learning rate estimated in Experiment 3 (Figure 6C). Is it related to the small sample size (see below)?

6. The direct evidence supporting competition between implicit and explicit learning is still weak. Most data presented here concern the steady-state learning amplitude; the authors suggest that steady-state implicit learning is proportional to the size of the rotation (Equation 5). But this directly contradicts the experimental finding that implicit learning saturates quickly with rotation size (Morehead et al., 2017; Kim et al., 2018). The proposed model dismisses this prominent finding. On the other hand, one classical finding for conventional VMR is that explicit learning gradually decreases after its initial spike, while implicit learning gradually increases. The competition model appears unable to account for this stereotypical interaction. How can this simple pattern be accounted for with the new model or its possible extensions?
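For reference, the proportionality the reviewers question (Equation 5 in the manuscript) follows directly from a shared-error state-space model. The sketch below uses generic retention (a) and error-sensitivity (b) parameters with subscripts i (implicit) and e (explicit); this notation is illustrative and not necessarily the paper's exact formulation.

```latex
\begin{aligned}
e^{(n)} &= r - x_i^{(n)} - x_e^{(n)} && \text{(shared target error under rotation } r\text{)}\\
x_i^{(n+1)} &= a_i\, x_i^{(n)} + b_i\, e^{(n)}, \qquad
x_e^{(n+1)} = a_e\, x_e^{(n)} + b_e\, e^{(n)}\\
\text{At steady state } & (x^{(n+1)} = x^{(n)}):\quad
x_i^{ss} = \frac{b_i}{1-a_i}\, e^{ss}, \qquad
x_e^{ss} = \frac{b_e}{1-a_e}\, e^{ss}\\
\Rightarrow\quad x_i^{ss} &= \frac{b_i/(1-a_i)}{1 + b_i/(1-a_i) + b_e/(1-a_e)}\; r
\end{aligned}
```

The last line makes the reviewers' point concrete: under this model the implicit asymptote is strictly linear in the rotation size r, which is hard to reconcile with reported saturation of implicit learning at large rotations.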

7. General comments about the data:

Experiments 3 and 4 (Figures 6 and 7) do not contribute to the theory of competition between implicit and explicit learning. The authors appear to use them to show that implicit learning properties (error sensitivity) can be modified, but this conclusion critically depends on the assumption that the limit-PT paradigm produces implicit-only learning, which has yet to be validated.

Figure 4 is not supporting evidence for the competition model, since the obtained parameters rest on the assumptions that the competition model holds and that the limit-PT condition involves "only" implicit learning.

8. Concerns about data handling:

All the experiments have a limited number of participants. This is problematic for Experiments 1 and 2, which include correlation analyses. Note that Figure 3H contains a critical, quantitative prediction of the model, but it is based on a correlation analysis of n = 9. This is an unacceptably small sample. Experiment 3 had only 10 participants, but its savings result (argued to be driven purely by implicit learning) is novel and the first of its kind.

Experiment 2: This experiment tried to decouple implicit and explicit learning, but it remains problematic. If the authors believe that the amount of implicit adaptation follows a state-space model, then the measure of early implicit learning is correlated with the amount of late implicit learning, because Implicit_late = (a_i)^N * Implicit_early, where N is the number of trials between the early and late measurements. Therefore, the two measures are not properly decoupled. To decouple them, the authors should use two separate ways of measuring implicit and explicit learning. To measure explicit learning, they could use aim reports (Taylor, Krakauer and Ivry 2011), and to measure implicit learning independently, they could use the aftereffect obtained by asking the participants to stop aiming. They actually did so in Experiment 1 but did not use these values.
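The coupling described here can be illustrated with a minimal simulation. This is a sketch under the stated state-space assumptions; the retention factor, trial count, and subject distribution are hypothetical values, not the paper's fits:

```python
import numpy as np

rng = np.random.default_rng(0)
a_i = 0.94          # hypothetical implicit retention factor
n_between = 60      # hypothetical number of trials between "early" and "late"

# Simulate an "early" implicit level for many virtual subjects.
implicit_early = rng.normal(10.0, 2.0, size=1000)

# Under pure state-space decay with no further learning,
# Implicit_late = (a_i)^N * Implicit_early: late is a scaled copy of early.
implicit_late = a_i ** n_between * implicit_early

r = np.corrcoef(implicit_early, implicit_late)[0, 1]
print(round(r, 3))  # deterministic scaling -> correlation of 1.0
```

Because the late measure is a deterministic rescaling of the early measure, the two are perfectly correlated by construction, which is why they cannot serve as independent readouts.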

Figure 6: To evaluate the savings effect across conditions and experiments, the interaction effect of an ANOVA would be desirable.

Figure 8: The hypothesis that participants did use an explicit strategy could explain why the difference between the two groups rapidly vanishes. Also, it is unclear whether the data from Taylor and Ivry (2011) favor one of the models, as the separate and shared error models are not compared.

Reviewer #1:

The present study by Albert and colleagues investigates how implicit learning and explicit learning interact during motor adaptation. This topic has been under heated debate in the sensorimotor adaptation field in recent years, spurring diverse behavioral findings that warrant a unified computational model. The present study aims to fulfill this goal. It proposes that both implicit and explicit adaptation processes are driven by a common error source (i.e., target error). This competition leads to different behavioral patterns across diverse task paradigms.

I find this study a timely and essential work for the field. It makes two novel contributions. First, when dissecting the contributions of explicit and implicit learning, the current study highlights the importance of distinguishing the apparent size of learning from covert changes in learning parameters. For example, the overt size of implicit learning could decrease while the underlying learning parameters (e.g., implicit error sensitivity) remain the same or even increase. Many researchers have long overlooked this dissociation. Second, the current study also emphasizes the role of target error in typical perturbation paradigms. This is an excellent wake-up call, since the predominant view now is that sensory prediction error drives implicit learning and target error drives explicit learning.

Given that the paper aims to use a unified model to explain different phenomena, it mixes results from previous work with four new experiments. The paper's presentation could be improved by reducing jargon and making straightforward claims about what the new experiments show and what the previous studies show. I will give a list of things to work on later (minor concerns).

My major concern is whether the properties of the implicit learning process (e.g., error sensitivity and retention) can be shown to change without making model assumptions. Even though the authors claim that the error sensitivity of implicit learning can be changed, and that this change subsequently leads to savings and interference (e.g., Figures 6 and 7), I find that the evidence is still indirect, contingent on model assumptions. Here is a list of results that concern error sensitivity changes.

1. Figures 1 and 2: Instruction is assumed to leave implicit learning parameters unchanged.

2. Figure 4: It appears that implicit error sensitivity increases during relearning. However, Figure 4D cannot be taken as supporting evidence. How the model is constructed (both implicit and explicit learning are driven by target error) and what assumptions are made (low RT = implicit learning; high RT = explicit + implicit) dictate that implicit learning's error sensitivity must increase. In other words, the change in error sensitivity results from model assumptions; whether implicit error sensitivity changes by itself cannot be independently verified by the data.

3. Figure 6: With limited PT, savings still exist alongside an increased implicit error sensitivity. Again, this result relies on the assumption that limited PT leads to implicit-only learning. Only under this assumption can the error sensitivity be calculated in this way.

4. Figure 7: With limited PT, anterograde interference persists, appearing to show a reduced implicit error sensitivity. Again, this is based on the assumption that the limited-PT condition leads to implicit-only learning.

Across all the manipulations, I still cannot find an independent piece of evidence that task manipulations can modify the learning parameters of implicit learning without resorting to model assumptions. I would like to know the authors' take on this critical issue. Is it related to how error sensitivity is defined?

Reviewer #2:

The paper provides a thorough computational test of the idea that implicit adaptation to rotation of visual feedback of movement is driven by the same errors that lead to explicit adaptation (or strategic re-aiming movement direction), and that these implicit and explicit adaptive processes are therefore in competition with one another. The results are incompatible with previous suggestions that explicit adaptation is driven by task errors (i.e. discrepancy between cursor direction and target direction), and implicit adaptation is driven by sensory prediction errors (i.e. discrepancy between cursor direction and intended movement direction). The paper begins by describing these alternative ideas via state-space models of trial by trial adaptation, and then tests the models by fitting them both to published data and to new data collected to test model predictions.

The competitive model accounts for the balance of implicit-explicit adaptation observed previously when participants were instructed how to counter the visual rotation to augment explicit learning across multiple visual rotation sizes (20, 40 and 60 degrees; Neville and Cressman, 2018). It also fits previous data in which the rotation was introduced gradually to augment implicit learning (Saijo and Gomi, 2010). Conversely, a model based on independent adaptation to target errors and sensory prediction errors could not reproduce the previous results. The competitive model also accounts for individual participant differences in implicit-explicit adaptation. Previous work showed that people who increased their movement preparation time when faced with rotated feedback had smaller implicit reach aftereffects, suggesting that greater explicit adaptation led to smaller implicit learning (Fernandez-Ruiz et al., 2011). Here the authors replicated this general effect, but also measured both implicit and explicit learning under experimental manipulation of preparation time. Their model predicted the observed inverse relationship between implicit and explicit adaptation across participants.

The authors then turned their attention to the issue of persistent sensorimotor memory – both in terms of savings (or benefit from previous exposure to a given visual rotation) and interference (or impaired performance due to exposure to an opposite visual rotation). This is a topic that has a long history of controversy, and there are conflicting reports about whether or not savings and interference rely predominantly on implicit or explicit learning. The competition model was able to account for some of these unresolved issues, by revealing the potential for a paradoxical situation in which the error sensitivity of an implicit learning process could increase without observable increase in implicit learning rate (or even a reduction in implicit learning rate). The authors showed that this paradox was likely at play in a previous paper that concluded that saving is entirely explicit (Haith et al., 2015), and then ran a new experiment with reduced preparation time to confirm some recent reports that implicit learning can result in savings. They used a similar approach to show that long-lasting interference can be induced for implicit learning.

Finally, the authors considered data that have long been cited as evidence that implicit adaptation is obligatory and driven by sensory prediction error (Mazzoni and Krakauer, 2006). In this previous paper, participants were instructed to aim at a secondary target when the visual feedback was rotated, so that the cursor error toward the primary target would immediately be cancelled. Surprisingly, the participants' reach directions drifted even further away from the primary target over time, presumably in an attempt to correct the discrepancy between their intended movement direction and the observed movement direction. However, a competition model involving simultaneous adaptation to target errors from the primary target and opposing target errors from the secondary target offers an alternative explanation. The current authors show that such a model can capture the implicit "drift" in reach direction, suggesting that two competing target errors, rather than a sensory prediction error, can account for the data. This conclusion is consistent with more recent work by Taylor and Ivry (2011), which showed that reach directions did not drift in the absence of a second target, when participants were coached to immediately cancel rotation errors in advance. The reason for the small implicit aftereffect reported by Taylor and Ivry (2011) is open to interpretation. The authors of the current paper suggest that adaptation to sensory prediction errors could underlie the effect, but an alternative possibility is that there is an adaptive mechanism based on reinforcement of successful action.

Overall, this paper provides important new insights into sensorimotor learning. Although there are some conceptual issues that require further tightening – including around the issue of error sensitivity versus observed adaptation, and the issue of whether or not there is evidence that sensory prediction errors drive visuomotor rotation – I think that the paper will be highly influential in the field of motor control.

Line 19: I think it is possible that more than 2 adaptive processes contribute to sensorimotor adaptation – perhaps add "at least".

Lines 24-27: This section does not refer to specific context or evidence and is consequently obtuse to me.

Line 52: Again – perhaps add "at least".

Line 68: I am not really sure what you are getting at by "implicit learning properties" here. Can you clarify?

Line 203: I think the model assumption that there was zero explicit aiming during washout is highly questionable here. The Morehead study showed early washout errors of less than 10 degrees, whereas errors were ~ 20 degrees in the abrupt and ~35 degrees in the gradual group for Saijo and Gomi. I think it highly likely that participants re-aimed in the opposite direction during washout in this study – what effect would this have on the model conclusions?

Line 382: It would be helpful to explicitly define what you mean by learning, adaptation, error sensitivity, true increases and true decreases in this discussion – the terminology and usage are currently imprecise. For example, it seems to me that according to the zones defined "truth" refers to error sensitivity, and "perception" refers to observed behaviour (or modelled adaptation state), but this is not explicitly explained in the text. This raises an interesting philosophical question – is it the error sensitivity or the final adaptation state associated with any given process that "truly" reflects the learning, savings or interference experienced by that process? Some consideration of this issue would enhance the paper in my opinion.

Figure 5 label: Descriptions for C and D appear inadvertently reversed – C shows effect of enhancing explicit learning and D shows effect of suppressing explicit learning.

Line 438: The method of comparing learning rate needs justification (i.e. comparing from 0 error in both cases so that there is no confound due to the retention factor).

Line 539: I am not sure if I agree that the implicit aftereffect in the no target group need reflect error-based adaptation to sensory prediction error. It could result from a form of associative learning – where a particular reach direction is associated with successful target acquisition for each target. The presence of an adaptive response to SPE does not fit with any of the other simulations in the paper, so it seems odd to insist that it remains here. Can you fit a dual error model to the Taylor and Ivry (single-target) data? I suspect it would not work, as the SPE should cause non-zero drift (i.e. shift the reach direction away from the primary target).

Line 556: Be precise here – do you really mean SPE? It seems as though you only provided quantitative evidence of a competition between errors when there were 2 physical targets.

Line 606: Is it necessarily the explicit response that is cached? I think of this more as the development of an association between action and reward – irrespective of what adaptive process resulted in task success. It would be nice to know what happened to aftereffects after blocks 1 and 2 in study 3. An associative effect might be switched on and off more easily – so if a mechanism of that kind were at play, I would predict a reduced aftereffect as in Huberdeau et al.

Line 720: Ref 12 IS the Mazzoni and Krakauer study, ref 11 involves multiple physical targets, and ref 21 is the Taylor and Ivry paper considered above and in the next point. None of these papers therefore require an explanation based on SPE. However, ref #17 appears to provide a more compelling contradiction to the notion that target errors drive all forms of adaptation – as people adapt to correct a rotation even when there is never a target error (because the cursor jumps to match the cursor direction). A possible explanation that does not involve SPE might be that people retain a memory of the initial target location and detect a "target error" with respect to that location.

Line 737: I don't agree with this – the observation of an after-effect with only 1 physical target but instructed re-aiming so that there was 0 target error, certainly implies some other process besides a visual target error as a driver of implicit learning. However, as argued above, such a process need not be driven by SPE. Model-free processes could be at play…

Line 791: Some more detail on how timing accuracy was assured in the remote experiments is needed. The mouse sample rate can be approximately 250 Hz, but my experience with it is that there can be occasional long delays between samples on some systems. The delay between commands to print to the screen and the physical appearance on the screen can also be long and variable – depending on the software and hardware involved.

Line 861: How did you specify or vary the retention parameters to create these error sensitivity maps?

Line 865: Should not the subscripts here be "e" and "I" rather than "f" and "s"?

Line 1009: Why were data sometimes extracted using a drawing package (presumably by eye? – please provide further details), and sometimes using GRABIT in Matlab?

Line 1073: More detail about the timing accuracy and kinematics of movement are required.

Line 1148: Again – should not the subscripts here be e and I rather than f and s?

Reviewer #3:

In this paper, Albert and colleagues explore the role of target error and sensory prediction error on motor adaptation. They suggest that both implicit and explicit adaptation would be driven by target error. In addition, implicit adaptation is also influenced by sensory prediction error.

While I appreciate the effort that the authors have made to come up with a theory that could account for many studies, there is not a single figure/result that does not suffer from major limitations, as I highlight in the major comments below. Overall, the main limitations are:

I believe that the authors neglect some very relevant papers that contradict their theory and model.

– They did not take into account some results on the topic such as Day et al. 2016 and McDougle et al. 2017. It is true that they acknowledge it and discuss it but I am not convinced by their arguments (more on this topic in a major comment about the discussion below).

– They did not take into account the fact that there is no proof that limiting RT is a good way to suppress the explicit component of adaptation. When the number of targets is limited, explicit responses can be cached without any RT cost. McDougle and Taylor, Nat Comm (2019) demonstrated this for 2 targets (here the authors used only 4 targets). No other papers prove that limiting RT suppresses the explicit strategy as claimed by the authors. To do so, one needs to limit PT and measure the explicit or implicit component. The authors should prove that this assumption holds because it is instrumental to the whole paper. The authors acknowledge this limitation in their discussion but dismiss it quite rapidly. This manipulation is so instrumental to the paper that it needs to be proven, not argued (more on this topic in a major comment about the discussion below).

– They did not take into account the fact that the after-effect is a measure of implicit adaptation if and only if the participants are told to abandon any explicit strategy before entering the washout period (as done in Taylor, Krakauer and Ivry, 2011). This has important consequences for their interpretation of older studies, because in those studies participants were never asked to stop aiming before entering the washout period, as the authors did in Experiment 2.

There are also major problems with statistics/design:

– I disagree with the authors that 10-20 people per group (in their justification of the sample size in the second document) is standard for motor adaptation experiments. It is standard for their own laboratories but no longer for the field. They did not even reach N=10 for some of the reported experiments (one group in Experiment 1). Furthermore, this sample size is excessively low to obtain reliable estimates of correlations. Small-N correlations cannot be trusted: https://garstats.wordpress.com/2018/06/01/smallncorr/

– Justification of the sample size is missing. What was the criterion to stop data collection? Why is the number of participants per group so variable (N=9 and N=13 for Experiment 1, N=17 for Experiment 2, N=10 for Experiment 3, N=10? per group for Experiment 4)? Optional stopping is problematic when linked to data peeking (Armitage, McPherson and Rowe, Journal of the Royal Statistical Society, Series A (General), 1969). It is unclear how optional stopping influences the outcome of the statistical tests in this paper.

– The authors should follow the best practices and add the individual data to all their bar graphs (Rousselet et al. EJN 2016).

– Missing interaction tests (Nieuwenhuis et al., Nature Neuroscience 2011) and misinterpretation of non-significant p-values (Altman 1995, https://www.bmj.com/content/311/7003/485).

Given these main limitations, I don't think that the authors have a convincing case in favor of their model and I don't think that any of these results actually support the error competition model.

Detailed major comments per equation/figure/section:

Equation 2: the authors note that the sensory prediction error is "anchored to the aim location" (lines 104-105), which is exactly what Day et al. 2016 and McDougle et al. 2017 demonstrated. Yet they did not fully take into account the implication of this statement. If so, the optimal location to determine the extent of implicit motor adaptation is the aim location and not the target. Indeed, if the SPE is measured with respect to the aim location and is linked to the implicit system, implicit adaptation will be maximal at the aim location and, because of local generalization, the amount of implicit adaptation will decay gradually when one measures this system away from that optimal location (Figure 7 of Day et al.). This means that the further the aiming direction is from the target, the smaller the amount of implicit adaptation measured at the target location will be. This will result in an artificial negative correlation between the explicit and implicit systems without having to invoke a common error source.

Equation 3: the authors do not take into account results from their own lab (Marko et al.; Herzfeld et al.) and from others (Wei and Kording) showing that sensitivity to error depends on the size of the rotation.

Equation 5: In this equation, the authors suggest that the steady-state amount of implicit adaptation is directly proportional to the size of the rotation. Using a paradigm similar to Mazzoni and Krakauer 2006, the team of Rich Ivry has demonstrated that the implicit response saturates very quickly with perturbation size (Kim et al., Communications Biology, 2018; see their Figure 1).
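The proportionality at issue follows directly from a standard single-state model. The sketch below iterates such a model; the retention and error-sensitivity values are hypothetical illustrations, not the paper's fitted parameters:

```python
a, b = 0.95, 0.3   # hypothetical retention factor and error sensitivity

def steady_state(rotation, n_trials=500):
    """Iterate x(n+1) = a*x(n) + b*(rotation - x(n)) until convergence."""
    x = 0.0
    for _ in range(n_trials):
        x = a * x + b * (rotation - x)
    return x

# Closed form: x_ss = b*r / (1 - a + b), i.e., linear in rotation size r,
# with no saturation of the kind reported by Kim et al. (2018).
for r in (15, 30, 60):
    print(round(steady_state(r), 2), round(b * r / (1 - a + b), 2))
# -> 12.86 12.86 / 25.71 25.71 / 51.43 51.43
```

Doubling the rotation doubles the predicted steady state, which is exactly the linearity the reviewer points out is contradicted by the saturation data.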

Data from Cressman, Figure 1: Following Day et al., one should expect that, when the explicit strategy is larger (instruction group in Cressman et al. compared to the no-instruction group), the authors are measuring the amount of implicit adaptation further from the aiming direction, where it is maximal. As a result, the amount of implicit adaptation appears smaller in the instruction group simply because they were aiming more than the no-instruction group (Figure 1G).

– Figure 1H: the absence of an increase in implicit adaptation is not due to competition, as claimed by the authors, but to the saturation of the implicit response with increasing rotation size (Kim et al. 2018). It saturates at around 20° for all rotations larger than 6°.

Data from Saijo and Gomi: Here the authors interpret the after-effect as a measure of implicit adaptation, but it is not, as the participants were not told that the perturbation would be switched off.

– Even if these after-effects did represent implicit adaptation, these results could be explained by Kim et al. 2018 and Day et al. 2016. For a 60° rotation, the implicit component of adaptation will saturate at around 15-20° (as for any large perturbation). The explicit component has to compensate for the remainder, but it does a better job in the abrupt condition than in the gradual condition, where some target error remains. Given that the aiming direction is larger in the abrupt case than in the gradual case, the amount of implicit adaptation is again measured further away from its optimal location in the abrupt case than in the gradual case (Day et al. 2016 and McDougle et al. 2017).

– There is no proof that introducing a perturbation gradually suppresses explicit learning (line 191). The authors did not provide a citation for this, and I don't think one exists. People confound awareness with explicit learning (also valid for line 212). Rather, given that the implicit system saturates at around 20° for large rotations, I would expect that re-aiming accounts for 20° of the ~40° of adaptation in the gradual condition.

Lines 205-215: The chosen parameters appear very subjective. The authors should perform a sensitivity analysis to demonstrate that their conclusions do not depend on these specific parameters.

Figure 3: The after-effect in the study of Fernandez-Ruiz et al. does not solely represent the implicit component of adaptation, as these researchers did not tell their participants to stop aiming during the washout.

– Correlations based on N=9 should not be considered meaningful. Such a correlation is subject to the statistical significance fallacy: with N=9 it can only reach significance if it is very large, and treating significance alone as evidence of a true effect is a fallacy (Button et al. 2013).
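The unreliability of small-N correlations can be quantified with an approximate confidence interval. This sketch uses the standard Fisher z-transform; the observed r of -0.7 is a hypothetical value of the magnitude at issue, not the paper's reported statistic:

```python
import math

def fisher_ci(r, n, z=1.96):
    """Approximate 95% CI for a correlation via the Fisher z-transform."""
    zr = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# With n = 9, an observed r of -0.7 is compatible with anything from a
# near-perfect correlation to a very weak one.
lo, hi = fisher_ci(-0.7, 9)
print(round(lo, 2), round(hi, 2))  # -> -0.93 -0.07
```

The interval spans almost the entire negative range, which is the sense in which an n=9 correlation cannot be trusted as a quantitative estimate.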

Experiment 1 of the authors suffer from several limitations:

– small sample size (N=9 for one group). Why is the sample size different between the two groups?

– Limiting RT does not abolish the explicit component of adaptation (line 1027: where is the evidence that limiting PT is effective in abolishing explicit re-aiming?). Haith and colleagues limited reaction time on only a small subset of trials. If the authors want to use this manipulation to limit re-aiming, they should first demonstrate that it is effective in doing so (other authors have used this manipulation but failed to validate it first). I wonder why the authors did not measure the implicit component for their limit-PT group as they did in the no-PT-limit group. That is required to validate their manipulation.

– The negative correlation in Figure 3H can be explained by the fact that the SPE is anchored at the aiming direction: the larger the aiming direction, the further from its optimal location the authors are measuring implicit adaptation.

– The difference between the PT-limit and no-PT-limit groups is not very convincing. First, the difference is barely significant (line 268). Why did the authors use the last 10 epochs for Experiment 1 and the last 15 for Experiment 2? This looks like a post-hoc decision to me; the authors should motivate their choice and demonstrate that their results hold for different choices of epochs (last 5, 10, 15 and 20) to establish robustness. Second, the degrees of freedom of the t-test (line 268) do not match the number of participants (9 vs. 13, but t(30)?).

– Why did the authors measure the explicit strategy via report (Figure S5E) but not use those values for the correlations? This looks like a post-hoc decision to me.

Experiment 2:

– This experiment is based on a small sample size for testing correlations (N=17). What objective criterion was used to stop data collection at N=17 and not at another sample size? This should be reported.

– This experiment does not decouple implicit and explicit learning, contrary to what the authors claim. If the authors believe that the amount of implicit adaptation follows a state-space model, then the measure of early implicit learning is correlated with the amount of late implicit learning, because Implicit_late = (a_i)^N * Implicit_early, where N is the number of trials between the early and late measurements. Therefore, the two measures are not properly decoupled. To decouple them, the authors should use two separate ways of measuring implicit and explicit learning. To measure explicit learning, they could use aim reports (Taylor, Krakauer and Ivry 2011), and to measure implicit learning independently, they could use the aftereffect obtained by asking the participants to stop aiming, as done here. They actually did so in Experiment 1 but did not use these values.

Figure 4:

– Data from the last panel of Figure 4B (lines 321-333) should be analyzed with a 2x2 ANOVA and not with a t-test if the authors want to make the point that the learning differences were larger in high-PT than in low-PT trials (Nieuwenhuis et al., Nature Neuroscience, 2011).

– Lines 329-330: the authors should demonstrate that the explicit strategy is actually suppressed by measuring it via report. It is possible that limiting PT reduces the explicit strategy without suppressing it entirely. Therefore, any remaining explicit strategy could be subject to savings.

– Coltman and Gribble (2019) demonstrated that fitting state-space models to individual subjects' data is highly unreliable and that a bootstrap procedure should be preferred. Lines 344-353: the authors should replace their t-tests with permutation tests in order to avoid fitting the model to individual subjects' data.

– It is unclear how the error sensitivity parameters analyzed in Figure 4D were obtained. I can follow the Results section, but there is basically nothing on this in the Methods. This needs to be expanded.

– The authors assume that the amount of implicit adaptation is the same for the high-PT target and the low-PT target. What is the evidence that this assumption is reasonable? The two targets are far apart, while implicit adaptation only generalizes locally. Furthermore, the low-PT target is visited four times less frequently than the high-PT target. The authors should redo the experiment and measure the implicit component for the high-PT trials to make sure that it matches the implicit component for the low-PT trials.

– I don't understand why the authors illustrated the results from lines 344-349 in Figure 4D and not the results of the following lines, which are also plausible. By doing so, the authors biased the presentation in favor of their preferred hypothesis. This figure is by no means proof that the competition hypothesis is true; it shows only that, if the competition hypothesis is true, then surprising results follow. The authors should do a better job of explaining that both models provide different interpretations of the data.
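The permutation-test alternative suggested above for the group comparisons can be sketched simply. Everything here is hypothetical: the "error-sensitivity estimates" are simulated draws, and the group means and sizes are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical error-sensitivity estimates for two conditions (n=10 each).
group_a = rng.normal(0.10, 0.02, size=10)
group_b = rng.normal(0.18, 0.02, size=10)

observed = group_b.mean() - group_a.mean()
pooled = np.concatenate([group_a, group_b])

# Permutation test: shuffle the condition labels and build the null
# distribution of mean differences, instead of a t-test on per-subject fits.
n_perm = 10000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[10:].mean() - pooled[:10].mean()
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm

print(p_value)
```

Because the test resamples group labels rather than refitting the model per subject, it avoids the individual-fit reliability problem Coltman and Gribble (2019) describe.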

Figure 5: This figure also represents a biased view of the results (like Figure 4D). The competition hypothesis is presented in detail while the alternative hypothesis is missing. How would the competition map look with separate errors, especially when taking into account that the SPE (and implicit adaptation) is anchored at the aiming direction (generalization)?

Figure 6:

This figure is compatible with the possibility that limiting PT on all trials is less effective at suppressing explicit adaptation than doing so on 20% of the trials (see McDougle et al. 2019 for why this may be the case), and that the remaining explicit adaptation leads to savings.

– I don't understand why the authors did not measure implicit (and therefore explicit) adaptation directly in these experiments, as they did in Experiment 2. This would have given the authors a direct readout of the implicit component of adaptation and would have helped validate the claim that limiting PT suppresses explicit adaptation. Again, this proof is missing from the paper.

– Experiment 3 is based on a very limited number of participants (N=10!). No individual data are presented.

– Data from Figure 6C should be analyzed with an ANOVA, and an interaction between the factors experiment (Haith vs. Experiment 3) and block (1 vs. 2) should be demonstrated to support such a conclusion (Nieuwenhuis et al. 2011).
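The interaction the reviewer asks for amounts to a contrast on the 2x2 cell means. The sketch below uses hypothetical cell means (in degrees) purely to show the logic; they are not the values from Figure 6C:

```python
import numpy as np

# Hypothetical mean adaptation (deg) per cell of a 2x2 design:
# rows = experiment (Haith et al. vs. Experiment 3), cols = block (1 vs. 2).
cells = np.array([[12.0, 13.0],    # Haith: little block-to-block change
                  [12.0, 18.0]])   # Exp 3: substantial savings

# Savings within each experiment = block 2 minus block 1.
savings = cells[:, 1] - cells[:, 0]

# The interaction contrast asks whether savings differs between experiments;
# separate per-experiment tests cannot answer this (Nieuwenhuis et al. 2011).
interaction = savings[1] - savings[0]
print(interaction)  # -> 5.0
```

A significant interaction term, rather than one significant and one non-significant block effect, is what licenses the claim that savings differs between the two experiments.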

Figure 7:

The data in this figure suffer from the same limitations as the previous graphs: limiting PT does not exclude an explicit strategy, the sample size is small (N=10 per group), and no direct measure of implicit or explicit learning is provided.

– In addition, no statistical tests are provided beyond the two stars on the graph. The data should be analyzed with an ANOVA (Nieuwenhuis et al. 2011).

– The number of participants per group is never provided (N=20 for both groups together).

– It is unclear to me how this result contributes to the dissociation between the separate and shared error models.

Figure 8:

The whole explanation here seems post-hoc because neither of the two models actually accounts for these data; the authors had to adapt the model to do so. Note that, despite this, the model would still fail to explain the data from Kim et al. 2018, which involve a very similar task manipulation.

– Lines 463-467: the authors claim equivalence based on a non-significant p-value (Altman 1995). Given the small sample size, they do not have the power to detect effects of small or medium size. They cannot conclude that there is no difference; they can only conclude that they lack the power to detect one. As a result, it does NOT follow that implicit adaptation was unaltered by the changes in explicit strategy.

– In Mazzoni and Krakauer, the aiming direction was neither controlled nor measured. As a result, given the appearance of a target error with training, it is possible that the participants aimed in the direction opposite to the target error in order to reduce it. This would have reduced the apparent increase in implicit adaptation. The authors argue against this possibility based on the 47.8° change in hand angle following the instruction to stop aiming. I remain unconvinced by this argument, as I would like more information about this change (mean, SD, individual data). Furthermore, it is unclear what the actual instructions were. Asking participants to stop aiming and asking them to bring their invisible hand to the primary target will have different effects on the change in hand angle.

– The data during the washout period suffer from the fact that the participants in the no-strategy group were not told to stop aiming. The hypothesis that they did use an explicit strategy could explain why the difference between the two groups rapidly vanishes. In other words, if the authors want to use this experiment to support any of their hypotheses, they should redo it properly, telling the participants to stop using any explicit strategy at the start of the washout period, to make sure that the after-effect is devoid of any explicit strategy.

– It is unclear whether the data from Taylor and Ivry (2011) are in favor of one of the models as the separate and shared error models are not compared.

Discussion: it is important and positive that the authors discuss the limitations of their approach, but I feel that they dismiss potential limitations rather quickly even though these are critical for their conclusions. They need to provide new data to prove those points, rather than arguments.

On limiting PT (lines 605-615):

The authors used three different arguments to support the claim that limiting PT suppresses explicit strategy.

– Their first argument is that Haith did not observe savings in low PT trials. This is true, but Haith only used low PT trials (with a change in target) on 20% of the trials. Restricting RT together with a switch in target location is probably instrumental in making the caching of the response harder. This is very different from the experiments done by the authors. In addition, one could argue that Haith et al. did not find evidence for savings simply because they had limited power to detect a small or medium effect size (N=12 per group). I agree that savings in low PT trials is smaller than in high PT trials, but is savings completely absent in low PT trials? Figure 6H shows that learning is slightly better on block 2 compared to block 1. N=12 is clearly insufficient to detect such a small difference.

– Their second argument is that they used four different targets. McDougle et al. demonstrated caching of explicit strategy without RT costs for two targets and the impossibility of doing so for 12 targets. The authors could use the experimental design of McDougle if they wanted to prove that caching explicit strategies is impossible for four targets. I don't see why one could cache strategies without RT cost for 2 targets but not for 4 targets. This argument is not convincing.

– The last argument of the authors is that they imposed even shorter latencies (200ms) than Haith (300ms). Yet, if one can cache explicit strategies without reaction time cost, it does not matter whether a limit of 200 or 300ms is imposed as there is no RT cost.

On Generalization (lines 678-702).

How much does the amount of implicit adaptation decay with increasing aiming direction? Above, I argued that the data from Day et al. would predict a negative correlation between the explicit and implicit components of adaptation and a decrease in implicit adaptation with increasing rotation size. The authors clearly disagree on the basis of four arguments. None of them convinced me.

– First, they estimate this decrease to be only 5° based on these two papers (Figures S5A and B, but S5C shows ~10°). This seems to be a very conservative estimate, as Day et al. reported a 10° reduction in after-effects for 40° of aiming direction (see their Figure 7). An explicit component of 40° was measured by Neville and Cressman for a 60° rotation. The 10° reduction based on a 40° explicit strategy fits perfectly with the data from Figure 1G (black bars) and Figure 2F. Of course, the experimental conditions will influence this generalization effect, but this should push the authors to investigate this possibility rather than dismiss it because the values do not precisely match. How close should the match be to be accepted?

– Second, it is unclear how this generalization changes with the number of targets (argument on lines 687-689). This has never been studied and cannot be used as an argument based on further assumptions. Furthermore, I am not sure that the generalization would be so different for 2 or 4 targets.

– Third, the authors measured the explicit strategy in Experiment 1 via report in a very different way from what they usually use, as the participants do not have to make use of their reports. This seems suboptimal: the authors did not use these reports for their correlation in Figure 3H, and the difference reported in Figure S5E is tiny (no stats are provided) and based on a very limited number of participants, with no individual data to be seen. If this measure is suboptimal for Figure 3H, why is it sufficient as an argument?

– Fourth, when interpreting the data of Neville and Cressman (lines 690-692), the authors mention that there were no differences between the three targets even though two of them corresponded to aim directions for other targets. As far as I can tell, the absence of difference in implicit adaptation across the three targets is not mentioned in the paper by Neville and Cressman, as they collapsed the data across the three targets for their statistical analyses throughout the paper. In addition, I don't understand why we should expect a difference between the three targets. If the SPE and the implicit process are anchored to the aiming direction, and given that the aiming direction is different for the three targets, I would not expect that the aiming direction of a visible target would be influenced by the fact that, for some participants, this aiming direction corresponds to the location of an invisible target.

– Finally, the authors argue here about the size of the influence of the generalization effect on the amount of implicit adaptation. They never challenge the fact that the anchoring of implicit adaptation on the aiming direction and the presence of a generalization effect (independently of its size) lead to a negative correlation between the implicit and explicit components of adaptation (their Figure 3) without any need for the competition model.

Final recommendations

– The authors should perform an unbiased analysis of a model that includes separate error sources, the generalization effect, and a saturation of implicit adaptation with increasing rotation size. In my opinion, such a model would account for almost all of the presented results.

– They should redo all the experiments based on limited preparation time and should include direct measures of implicit and/or explicit strategies (for validation purposes). This would require larger group sizes.

– They should replicate the experiments where they need a measure of after-effect devoid of any explicit strategies, as this has only become standard recently (experiments for Figure 2 and Figure 8). Note that for Figure 8, they might want to measure the explicit aim during the adaptation period as well.

[Editors’ note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Competition between parallel sensorimotor learning systems" for further consideration by eLife. Your revised article has been evaluated by Michael Frank (Senior Editor) and a Reviewing Editor.

The manuscript has been improved but there are some remaining issues that need to be addressed, as outlined below:

One review raised the concern about Experiment 1, which presented perturbations with increasing rotation size and elicited larger implicit learning than the abrupt perturbation condition. However, this design confounded the condition/trial order and the perturbation size. The other concern is the newly added non-monotonicity data set from Tsay's study. On the one hand, the current paper states that the proposed model might not apply to error clamp learning; on the other hand, this part of the results was "predicted" by the model squarely. Thus, can it count as evidence that the proposed model is parsimonious for all visuomotor rotation paradigms? This message must be clearly stated with a special reference to error-clamp learning.

Please find the two reviewers' comments and recommendations below, and I hope this will be helpful for revising the paper.

Reviewer #1:

The present study investigates how implicit and explicit learning interacts during sensorimotor adaptation, especially the role of performance or target error during this interaction. It is timely for the area that needs a more mechanistic model to explain diverse findings accumulated in recent years. The revision has addressed previous major concerns and provided extra data that supports the idea that implicit and explicit processes compete for a common target error for a large proportion of studies in the area. The paper is thoughtfully organized and convincingly presented (though a bit too long), including a variety of data sets.

As a repeal submission, the current work has successfully addressed previous major concerns:

1) Direct evidence for supporting the competition model is lacking.

The revision added new evidence that total learning is negatively correlated with implicit learning but positively correlated with explicit learning (Figure 4G and H). This part of the data argues against the alternative SPE model and provides direct support to the competition model. The added Appendix 3 also includes other studies' data sets to strengthen the support.

Furthermore, Experiment 1 is new, with an incremental perturbation to decrease rotation awareness, and thus provides new support for the model. This also helps to address the previous concern that the data from Saijo and Gomi are not clean enough due to a lack of stop-aiming instruction during the measurement of implicit learning. The other not-so-clean data set, from Fernandez-Ruiz, is removed from the main text in the revision. Taken together, these new data sets and analysis results constitute a rich set of direct support that addresses the biggest concern in the previous submission.

2) Lack of testing alternative models.

The revision tested an alternative idea that explicit learning compensates for the variation in implicit learning instead of the other way around as in their competition model (Figure 4). It also tested an alternative model based on the generalization of implicit learning anchored on the aiming direction (Figure 5). The evidence is clear: both alternative models failed to capture the data.

3) The limited preparation time appears not to exclude explicit learning.

This concern was raised by all previous reviewers. The reviewers also pointed out a possible indicator of explicit learning, i.e., a small spurious drop in reaching angle at the beginning of the no-aiming washout phase. The revision provided a new experiment to show that the small drop is most likely due to a time-dependent decay of implicit learning during the 30s instruction period. This new data set (Exp 3) is indeed convincing and important. I am impressed with the authors' thoughtful effort to explain this subtle and tricky issue.

In sum, I believe the revision did an excellent job of addressing all previous major concerns with more data, more thorough analysis, and more model comparisons. Here I suggest a few minor changes for this revision:

Line 59: the cited papers (17-21) did not all suggest savings resulted from changes of the implicit learning rate, as implied by the context here.

Some references are not in the right format: ref1 states "Journal of Neuroscience," while refs2 and 4 state "The Journal of neuroscience : the official journal of the Society for Neuroscience." Possibly caused by downloads from google scholar. Please keep it consistent.

Line 66: replace the semicolon with a comma. Also, is citation 9 about showing a dominant role of SPE?

Figure 1H: it appears the no-aiming trials were at the end of each stepwise period, but in the Methods, it is stated that these trials appeared twice (at the middle and the end). Which version is true?

L1331: why use bootstrapping here? Why not simply fit the model to individual subjects (given n = 37 here)? Interestingly, when comparing 60-degree stepwise vs. 60-degree abrupt, a direct fit to individuals was used… In the Results, it is stated that a single parameter can parsimoniously "predict" the data. But in the Methods, it is clear that all data were used to "fit" the parameter p_i. Can we call this a prediction, strictly? Is it possible to fit a partial data set to show the same results? Not through bootstrapping but through using, say, two stepwise phases? This type of prediction has been done for Exp2, but not here.

Line 195: the authors ruled out the potential bias caused by the interdependence between implicit and explicit learning, as one was obtained by subtracting the other from the total learning. By rearranging the model (Equation 4) to express implicit learning as a function of total learning (as opposed to explicit learning), and by showing that the model prediction would not change after this re-arrangement, the authors tried to argue that the modeling results are not biased by the interdependence of the two types of learning. I think that this argument aims at the wrong target. The real concern is that implicit and explicit learning are not independently measured, which is true no matter how the model equation is arranged. The rearranged model, of course, provides the same prediction, since the re-arrangement does not change the model at all, given the interdependence of the variables. All the data in Figure 1 have this problem. It is a data problem, not a model problem, and it is the data that refute the alternative, independence model. There is no need to re-arrange the model without solving the problem in the data.

The authors added the data set from Tsay et al. to Part 1, which shows a non-monotonic change of implicit learning with increasing rotation. However, this is only one version of their data set, obtained on an online platform; the other version of the same experiment, conducted in person, shows a different picture, with constant implicit learning over different rotations. Is there any particular reason to report one version instead of the other? An explanation is owed to the readers here.

Line 203: dual model -> competition model

Line463: "that the x-axis in Figures 5A-C slightly overestimates explicit strategy", the axis overestimates…

Line 496: this paragraph starts with a why question (why the competition model works better), but it does not address the question but shows it works better.

Figure 6C: not clear what the right panel is without axis labeling and a detailed description in the caption.

Line 563: the family dinner story is interesting and naughty, but our readers might not need it, especially when the paper is already long. I suggest removing it and starting this section with the burning question of why the implicit error sensitivity changes without the changes of implicit learning size.

Line 589: what are Timepoints 1 and 2? They are never mentioned before or in the figure. The capital T is also confusing.

Figure 7C: it would make much better sense to plot total learning in C along with the implicit and explicit learning.

L595: the data presented before in Figure 2 are explained in this map illustration. Enhancing or suppressing explicit learning has been conceptualized as moving along the y-axis without changing the implicit error sensitivity. Retrospectively, this is also the case when the model is fitted to these data by assuming a constant implicit learning rate in previous figures. Is there any evidence to support this assumption for model fitting, or can we safely claim that varying implicit learning rates would not account for the data better?

L685: we observe that implicit learning is suppressed as a result of anterograde interference. Without preparation time constraints, the impairment in implicit learning is smaller. Is there any way to compare their respective implicit error sensitivities? This would give us some information about how explicit learning compensates.

Figure 10G: the conceptual framework is speculative. It is fine to discuss the possible neurophysiological underpinnings in the Discussion as it currently stands. But we are better off removing it from the Results.

Reviewer #4:

In this paper, Albert et al. test a novel model where explicit and implicit motor adaptation processes share an error signal. Under this error sharing scheme, the two systems compete – that is, the more that explicit learning contributes to behavior, the less that implicit adaptation does. The authors attempt to demonstrate this effect over a variety of new experiments and a comprehensive re-analysis of older experiments. They contend that the popular model of SPEs exclusively driving implicit adaptation (and implicit/explicit independence) does not account for these results. Once target error sensitivity is included into the model, the resulting competition process allows the model to fit a variety of seemingly disparate results. Overall, the competition model is argued to be the correct model of explicit/implicit interactions during visuomotor learning.

I'm of two minds on this paper. On the positive side, this paper has compelling ideas, a laudable breadth and amount of data/analyses, and several strong results (mainly the reduced-PT results). It is important for the motor learning field to start developing a synthesis of the 'Library of Babel' of the adaptation literature, as is attempted here and elsewhere (e.g., D. Wolpert lab's 'COIN' model). On the negative side, the empirical support feels a bit like a patch-work – some experiments have clear flaws (e.g., Exp 1, see below), others are considered in a vacuum that dismisses previous work (e.g., nonmonotonicity effect), and many leftover mysteries are treated in the Discussion section rather than dealt with in targeted experiments. While some of the responses to the previous reviewers are effective (e.g., showing that reduced PT can block strategies), others are not (e.g., all of Exp 1, squaring certain key findings with other published conflicting results, the treatment of generalization). The overall effect is somewhat muddy – a genuinely interesting idea worth pursuing, but unclear if the burden of proof is met.

(1) The stepwise condition in Exp 1 is critically flawed. Rotation size is confounded with time. It is unclear – and unlikely – that implicit learning has reached an asymptote so quickly. Thus, the scaling effect is at best significantly confounded, and at worst nearly completely confounded. I think that this flaw also injects uncertainty into further analyses that use this experiment (e.g., Figure 5).

(2) It could be argued that the direct between-condition comparison of the 60° blocks in Exp 1 rescues the flaw mentioned above, in that the number of completed trials is matched. However, plan- or movement-based generalization (Gonzalez Castro et al., 2011) artifacts, which would boost adaptation at the target for the stepwise condition relative to the abrupt one, are one way (perhaps among others) to close some of that gap. With reasonable assumptions about the range of implicit learning rates that the authors themselves make, and an implicit generalization function similar to previous papers that isolate implicit adaptation (e.g., σ around ~30°), a similar gap could probably be produced by a movement- or plan-based generalization model. [I note here that Day et al., 2016 is not, in my view, a usable data set for extracting a generalization function, see point 4 below.]

(3) The nonmonotonicity result requires disregarding the results of Morehead et al., 2017, and essentially, according to the rebuttal, an entire method (invariant clamp). The authors do mention and discuss that paper and that method, which is welcome. However, the authors claim that "We are not sure that the implicit properties observed in an invariant error context apply to the conditions we consider in our manuscript." This is curious and raises several crucial parsimony issues, for instance: the results from the Tsay study are fully consistent with Morehead 2017. First, attenuated implicit adaptation to small rotations (not clamps; Morehead Figure 5) could be attributed to error cancellation (assuming aiming has become negligible to nonexistent, or never happened in the first place). This (and/or something like the independence model) thus may explain the attenuated 15° result in Figure 1N. Second, and more importantly, the drop-off of several degrees of adaptation from 30°/60° to 90° in Figure 1N is eerily similar to that seen in Morehead '17. Here's what we're left with: an odd coincidence whereby another method predicts essentially these exact results but is simultaneously (vaguely) not applicable. If the authors agree that invariant clamps do limit explicit learning contributions (see Tsay et al. 2020), it would seem that similar nonmonotonicity being present in both rotations and invariant error-clamps works directly against the competition model. Moreover, it could be argued that the weakened 90° adaptation in clamps (Morehead) explains why people aim more during 90° rotations. A clear reason, preferably with empirical support, for why various inconvenient results from the invariant clamp literature (see point 6 for another example) are either erroneous, or different enough in kind to essentially be dismissed, is needed.

(4) The treatment given to non-target based generalization (i.e., plan/aim) is useful and a nice revision w/r/t previous reviewer comments. However, there are remaining issues that muddy the waters. First, it should be noted that Day et al. did not counterbalance the rotation sign. This might seem like a nitpick, but it is well known that intrinsic biases will significantly contaminate VMR data, especially in e.g. single target studies. I would thus not rely on the Day generalization function as a reasonable point of comparison, especially using linear regression on what is a Gaussian/cosine-like function. It appears that the generalization explanation is somewhat less handicapped if more reasonable generalization parameterizations are considered. Admittedly, they would still likely produce, quantitatively, too slow of a drop-off relative to the competition model for explaining Exps 2 and 3. This is a quantitative difference, not a qualitative one. W/r/t point 1 above, the use of Exp 1 is an additional problematic aspect of the generalization comparison (Figure 5, lower panels). All in all, I think the authors do make a solid case that the generalization explanation is not a clear winner; but, if it is acknowledged that it can contribute to the negative correlation, and is parameterized without using Day et al. and linear assumptions, I'd expect the amount of effect left over to be smaller than depicted in the current paper. If the answer comes down to a few degrees of difference when it is known that different methods of measuring implicit learning produce differences well beyond that range (Maresch), this key result becomes less convincing. Indeed, the authors acknowledge in the Discussion a range of 22°-45° of implicit learning seen across studies.

(5) I may be missing something here, but is the error sensitivity finding reported in Figure 6 not circular? No savings is seen in the raw data itself, but if the proposed model is used, a sensitivity effect is recovered. This wasn't clear to me as presented.

(6) The attenuation (anti-savings) findings of Avraham et al. 2021 are not sufficiently explained by this model.

(7) Have the authors considered context and inference effects (Heald et al. 2020) as possible drivers of several of the presented results?

eLife. 2022 Feb 28;11:e65361. doi: 10.7554/eLife.65361.sa2

Author response


[Editors’ note: The authors appealed the original decision. What follows is the authors’ response to the first round of review.]

Concerns:

1. There is no clear testing of alternative models. After Figure 1 and 2, the paper goes on with the competition model only. The independent model will never be able to explain differences in implicit learning for a given visuomotor rotation (VMR) size, so it is not a credible alternative to the competition model in its current form. It seems possible that extensions of that model based on the generalization of implicit learning (Day et al. 2016, McDougle et al. 2017) or task-dependent differences in error sensitivity could explain most of the prominent findings reported here. We hope the authors can show more model comparison results to support their model.

Two studies have measured generalization in rotation tasks with instructions to aim directly at the target (Day et al., 2016; McDougle et al., 2017) and have demonstrated that learning generalizes around one’s aiming direction, as opposed to the target direction. As the reviewers note, this generalization rule can contribute to the negative correlation we observed between explicit learning and exclusion-based implicit learning. To address this, in our revised manuscript we directly compare the competition model to an alternative SPE generalization model. Our new analysis is documented in Figure 4 in the revised manuscript. First, we empirically compare the relationship between implicit and explicit learning in our data to generalization curves measured in past studies. In Figures 4A-B we overlay data that we collected in Experiments 2 and 3 with generalization curves measured by Krakauer et al. (2000) and Day et al. (2016). In this way, we test whether generalization is consistent with the correlations between implicit and explicit learning that we measured in our experiments.

In Figures 4B,C, we show dimensioned (i.e., in degrees) relationships between implicit and explicit learning. In Figure 4A, we show a dimensionless implicit measure in which implicit learning is normalized to its “zero strategy” level. To estimate this value, we used the y-intercepts in the linear regressions labeled “data” in Figures 3G and Q. Most notably, the magenta line (Day 1T) shows the aim-based generalization curve measured by Day et al. (2016), where assays were used to tease apart implicit and explicit learning.

These new results demonstrate that implicit learning in Experiments 2 and 3 (black and brown points) declined about 300% more rapidly with increases in explicit strategy than predicted by the generalization measured by Day et al. (2016). Two empirical considerations also suggest that the discrepancy between our data and the Day et al. (2016) generalization curve is even larger than that observed in Figures 4A-B.

1. Day et al. (2016) used 1 adaptation target, whereas our tasks used 4 targets. Krakauer et al. (2000) demonstrated that the generalization curve widens with increases in target number (see Figure 4AC). Thus, the Day et al. (2016) data likely overestimate the extent of generalization-based decay in implicit learning expected in our experimental conditions.

2. Our “explicit angle” in Figures 4A-B is calculated by subtracting exclusion-based implicit learning from total adaptation. Thus, under the plan-based generalization hypothesis, this measure would overestimate explicit strategy. A generalization-based correction would “contract” our data along the x-axis in Figures 4C-E, increasing the disconnect between our results and the generalization curves. To demonstrate this, we conducted a control analysis in which we corrected our explicit measures according to the Day et al. generalization curve (see Appendix 4 in the revised manuscript). Our results are shown in Figure . As predicted, correcting the explicit measures yielded a poorer match between the data and the generalization hypothesis (see the discrepancy below each inset).

These analyses show that the alternative hypothesis of plan-based generalization is inconsistent with our measured data, though it could make minor contributions to the relationships we report between implicit and explicit learning (analysis shown in Section 2.2, Lines 442-468 in the revised manuscript).

These analyses have relied on an empirical comparison between our data and past generalization studies. But we can also demonstrate mathematically that the implicit and explicit learning patterns we measured are inconsistent with the generalization hypothesis. First, recall that the competition equation states that steady-state implicit learning varies inversely with explicit strategy according to (Equation 4):

x_i^ss = [b_i / (1 − a_i + b_i)] (r − x_e^ss)

We can condense this equation by defining an implicit proportionality constant, p: x_i^ss = p r − p x_e^ss, where p = b_i / (1 − a_i + b_i).
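This steady state can be verified numerically. A minimal sketch (with illustrative values for a_i, b_i, r, and x_e^ss, not fits to data) iterates the trial-by-trial implicit update until convergence and compares the result to the condensed formula:

```python
# Minimal sketch: iterate the implicit state-space update
#   x_i <- a_i * x_i + b_i * (r - x_e - x_i)
# and compare the result to the condensed steady state p * (r - x_e).
# All parameter values below are illustrative assumptions.
a_i, b_i = 0.95, 0.20   # retention factor and error sensitivity (assumed)
r, x_e = 30.0, 10.0     # rotation and a fixed explicit strategy, in degrees

x_i = 0.0
for _ in range(1000):   # trial-by-trial update of implicit adaptation
    x_i = a_i * x_i + b_i * (r - x_e - x_i)

p = b_i / (1.0 - a_i + b_i)   # implicit proportionality constant
print(x_i, p * (r - x_e))     # both converge to the same steady state
```

With these assumed values the iteration and the closed-form expression agree, illustrating that the condensed equation is just the fixed point of the update rule.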

These new data provide a way to directly compare the competition model to an SPE generalization model. The key prediction, again, is how the relationship between implicit and explicit learning varies with the rotation’s magnitude. To test this, we calculated the implicit proportionality constant (p above) in the competition model and in the SPE generalization model that best matched the measured implicit-explicit relationship. We calculated this value in the 60° learning block alone, holding out all other rotation sizes. We then used this gain to predict the implicit-explicit relationship across the held-out rotation sizes. The competition model is shown as the solid black line in Figure 5D. The Day et al. (2016) generalization model is shown as the gray line. The total prediction error across the held-out 15°, 30°, and 45° rotation periods was two times larger for the SPE generalization model (Figure 4E, repeated-measures ANOVA, F(2,35)=38.7, p<0.001, ηp2=0.689; post-hoc comparisons all p<0.001). Issues with the SPE generalization model were not caused by misestimating the generalization gain, m. We fit the SPE generalization model again, this time allowing both p and m to vary to best capture behavior in the 60° period (Figure 4D, SPE gen. best B4). This optimized model generalized very poorly to the held-out data, yielding a prediction error three times larger than the competition model (Figure 4D, SPE gen. best B4).

To understand why the competition model yielded superior predictions, we fit separate linear regressions to the implicit-explicit relationship measured during each rotation period. The regression slopes and 95% CIs are shown in Figure 4F (data). Remarkably, the measured implicit-explicit slope appeared to be constant across all rotation magnitudes in agreement with the competition theory (Figure 4H, competition). These measurements sharply contrasted with the SPE generalization model, which predicted that the regression slope would decline as the rotation magnitude decreased.

In summary, data in Experiment 1 were poorly described by an SPE learning model with aim-based generalization. While generalization may add to the correlations between implicit and explicit learning, its contribution is small relative to the competition theory. New analyses are described in Section 2.2 in the revised paper.

These analyses all considered how a negative relationship between implicit and explicit learning could emerge due to generalization (as opposed to competition). In our revised work, we consider an additional mechanism by which this inverse relationship could occur. Suppose that implicit learning is driven solely by SPEs as in earlier models. In this case, implicit learning should be immune to changes in explicit strategy. However, a participant that has a better implicit learning system will require less explicit re-aiming to reach a desired adaptation level. In other words, individuals with large SPE-driven learning may use less explicit strategy relative to those with less SPE-driven implicit learning. Like the competition model, this scenario would also yield a negative relationship between implicit and explicit learning, due to the way explicit strategies respond to variation in the implicit system. In other words, the competition equation supposes that the implicit-explicit relationship arises because the implicit system responds to variability in explicit strategy. An SPE learning model supposes that the implicit-explicit relationship arises because the explicit system responds to variability in implicit learning.

To model the latter possibility, suppose that the implicit system responds solely to an SPE. Total implicit learning is given by Equation 5 and can be re-written as xiss = pi·r, where pi is a gain that depends on implicit learning properties (ai and bi). Thus, implicit learning is centered on pi·r but varies across participants according to a normal distribution: xiss ~ N(pi·r, σi²), where σi is the standard deviation of implicit learning. The target error that remains to drive explicit strategy equals the rotation minus implicit adaptation: r – xiss. A negative relationship between implicit learning and explicit strategy will occur when the explicit system responds in proportion to this error: xess = pe(r – xiss), where pe is the explicit system’s response gain.

Overall, this model predicts important pairwise relationships between implicit learning, explicit strategy, and total adaptation (equations provided on Lines 1577-1620 in the paper, and also in Figure 5). First, as noted, implicit learning and explicit adaptation will show a negative relationship. Second, increases in implicit learning will tend to increase total adaptation. Third, these increases in implicit learning will leave smaller errors to drive explicit learning, resulting in a negative relationship between explicit strategy and total learning. Figures 5A-C illustrate these pairwise relationships in our revised manuscript.

How do these relationships compare to the competition equation, in which implicit learning is driven by target errors? Suppose that participants choose a strategy that scales with the rotation’s size according to pe·r, where pe is the explicit gain, but varies across participants with standard deviation σe. Thus, xess follows a normal distribution N(pe·r, σe²). The target error that drives implicit learning equals the rotation minus explicit strategy. The competition theory proposes that the implicit system responds in proportion to this error: xiss = pi(r – xess), where pi is an implicit learning gain that depends on the implicit learning parameters ai and bi (see Equation 4). This model produces a negative correlation between implicit learning and explicit strategy. People who use a larger strategy will exhibit greater total learning. Simultaneously, larger strategies leave smaller errors to drive implicit learning, resulting in a negative relationship between implicit learning and total learning. These three predictions are illustrated in Figures 5D-F in our revised manuscript; the corresponding linear equations are provided on Lines 1577-1620 in the paper, and also in Figure 5.

In sum, both an SPE learning model and a target error learning model could exhibit negative participant-level correlations between implicit learning and explicit learning (Figures 5A and 5D). However, they make opposing predictions concerning the relationships between total adaptation and each individual learning system. To test these predictions, we considered how total learning was related to implicit and explicit adaptation measured in the No PT Limit group in Experiment 3. These data are now shown in Figures 5G and H in our revised work.

Our observations closely agreed with the competition theory; increases in explicit strategy led to increases in total adaptation (Figure 5G, ρ=0.84, p<0.001), whereas increases in implicit learning were associated with decreases in total adaptation (Figure 5H, ρ=-0.70, p<0.001). We repeated similar analyses across additional data sets which also measured implicit learning via exclusion (i.e., no aiming) trials: (1) the 60° rotation condition (combined across gradual and abrupt groups) in Experiment 1, (2) the 60° rotation groups reported by Maresch et al. (2021), and (3) the 60° rotation group described by Tsay et al. (2021). We obtained the same result as in Experiment 3. Participants exhibited negative correlations between implicit learning and explicit strategy, positive correlations between explicit strategy and total adaptation, and negative correlations between implicit learning and total adaptation. These additional results are reported in Figure 5-Supplement 1 in the revised manuscript.

Thus, these additional studies also matched the competition model’s predictions, but rejected the SPE learning model on two accounts: (1) the relationship between implicit learning and total adaptation was negative, not positive as predicted by the SPE model, and (2) the relationship between explicit learning and total adaptation was positive, not negative as predicted by the SPE model.

Our revised manuscript notes an important caveat with this last analysis. The competition model predicts that, on average, implicit learning will exhibit a negative correlation with total adaptation across individual participants. However, this prediction assumes that implicit learning properties (retention and error sensitivity, i.e., the gain pi above) are identical across participants, an unlikely possibility. Variation in the implicit learning gain (e.g., Participant A has an implicit system that is more sensitive to error) will promote a positive trend between implicit learning and total adaptation, which will weaken the negative correlations generated by implicit-explicit competition. Thus, pairwise correlations between implicit learning, explicit strategy, and total adaptation will vary probabilistically across experiments. While it is critical that the reader keep this in mind, a complete investigation of these second-order phenomena is complex and beyond the scope of our work. For this reason, we treat these issues in Appendix 3 in the revised manuscript. Appendix 3 describes several experimental conditions (e.g., variation in explicit strategy across participants, average explicit learning) that will alter the participant-level relationship between implicit and explicit learning.

Summary

In our revised manuscript we compare the competition theory to two alternate models that also yield a negative participant-level relationship between implicit learning and explicit strategy. The first alternate model supposes that implicit learning is driven by SPEs, but is altered by plan-based generalization. The second alternate model supposes that implicit learning is driven by SPEs, but explicit strategies respond to variability in implicit learning across participants.

The generalization model, however, did not match our data:

1. Implicit learning declined 300% more rapidly than predicted by generalization (Figures 4A-C).

2. The implicit-explicit learning relationship did not accurately generalize across rotation sizes in the SPE generalization model (Figures 4D and E).

3. The gain relating implicit and explicit learning remained the same across rotation sizes, rejecting the SPE generalization model, but supporting the competition theory (Figure 4F).

The alternate model where explicit systems respond to subject-to-subject variability in SPE-driven implicit learning did not match our data:

1. We observed a negative relationship between implicit learning and total adaptation in Experiment 3 (Figure 5H) and 3 additional datasets (Figure 5-Supplement 1) as predicted by the competition theory, but in contrast to the SPE learning model.

2. We observed a positive relationship between explicit strategy and total adaptation in Experiment 3 (Figure 5G) and 3 additional datasets (Figure 5-Supplement 1) as predicted by the competition theory, but in contrast to the SPE learning model.

Given the importance of these analyses, we devote an entire section (Part 2) of our revised manuscript to comparisons with alternate SPE learning models (Lines 374-504 in revised manuscript). We also describe second-order phenomena in Appendix 3 of our revised manuscript (Lines 2167-2414).

2. The measurement of implicit learning is compromised by not asking participants to stop aiming upon entering the washout period. This happened at three places in the study, once in Experiment 2, once in the re-used data by Fernandez-Ruiz et al., 2011, and once in the re-used data by Mazzoni and Krakauer 2006 (no-instruction group). We would like to know how the conclusion is impacted by the ill-estimated implicit learning.

We agree with the reviewers’ concerns, with one clarification: in Experiment 2 we did instruct participants to stop aiming prior to the no feedback washout period, so this criticism is not relevant to Experiment 2 (note Experiment 2 is now Experiment 3 in the revised work). With that said, this criticism would apply to the Saijo and Gomi (2010) analysis in our original manuscript, which was not noted in the reviewers’ concern. Here, we will address each limitation noted by the reviewers in turn.

Saijo and Gomi (2010)

Let us begin with the Saijo and Gomi (2010) analysis. In our original manuscript, we used conditions tested by Saijo and Gomi to investigate one of the competition theory’s predictions: decreasing explicit strategy for a given rotation size will increase implicit learning. Specifically, we analyzed whether gradual learning suppressed explicit strategy in this study, thus facilitating greater implicit adaptation. While our analysis was consistent with this hypothesis, as noted by the reviewers, there is a limitation; washout trials were used to estimate implicit learning, though participants were not directly instructed to stop aiming. This represents a potential error source. For this reason, in our revised manuscript, we have collected new data to test the competition model’s prediction.

These new data also examine gradual and abrupt rotations. In Experiment 1 (new data in the paper), participants were exposed to a 60° rotation, either abruptly (n=36) or in a stepwise manner (n=37) in which the rotation magnitude increased by 15° across 4 distinct learning blocks (Figure 2D). Unlike in Saijo and Gomi, implicit learning was measured via exclusion during each learning period by instructing participants to aim directly towards the target. As we hypothesized, in Figure 2F, we now demonstrate that stepwise rotation onset (which yields smaller target errors) muted the explicit response to the rotation (compared to abrupt learning). The competition model predicts that decreases in explicit strategy should facilitate greater implicit adaptation. To test this prediction, we compared implicit learning across the gradual and abrupt groups during the fourth learning block, where both groups experienced the 60° rotation size (Figures 2E and G).

Consistent with our hypothesis, participants in the stepwise condition exhibited a 10° reduction in explicit re-aiming (Figure 2F, two-sample t-test, t(71)=4.97, p<0.001, d=1.16), but a concomitant 80% increase in their implicit recalibration (Figure 2G, data, two-sample t-test, t(71)=6.4, p<0.001, d=1.5). To test whether these changes in implicit learning matched the competition model, we fit the independence equation (Equation 5) and the competition equation (Equation 4) to the implicit and explicit reach angles measured in Blocks 1–4, across the stepwise and abrupt conditions, while holding implicit learning parameters constant. In other words, we asked whether the same parameters (ai and bi) could parsimoniously explain the implicit learning patterns measured across all 5 conditions (all 4 stepwise rotation sizes plus the abrupt condition). As expected, the competition model predicted that implicit learning would increase in the stepwise group (Figure 2G, comp., two-sample t-test, t(71)=4.97, p<0.001), unlike the SPE-only learning model (Figure 2G, indep.).
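The shared-parameter comparison described here can be sketched as follows. This is only a toy illustration, not the paper's fitting code: the per-condition group means below are invented for illustration (Figure 2 reports the real data), and a coarse grid search over a single implicit gain stands in for the actual fitting procedure.

```python
# Hypothetical steady-state group means: (rotation, explicit, implicit) in degrees.
# These numbers are invented for illustration only.
conditions = [(15.0, 3.0, 7.8), (30.0, 6.5, 15.2), (45.0, 10.0, 22.5),
              (60.0, 14.0, 29.8), (60.0, 24.0, 23.4)]  # last row: abrupt group

def sse(predict):
    """Sum of squared residuals of predicted vs. measured implicit learning."""
    return sum((xi - predict(r, xe)) ** 2 for r, xe, xi in conditions)

def best_gain(model):
    """Coarse grid search for one implicit gain shared across all 5 conditions."""
    gains = [g / 100 for g in range(1, 100)]
    return min(gains, key=lambda g: sse(lambda r, xe: model(g, r, xe)))

def competition(g, r, xe):
    return g * (r - xe)   # implicit learns from the residual target error r - xe

def independence(g, r, xe):
    return g * r          # implicit responds to the rotation, ignoring strategy

g_comp, g_ind = best_gain(competition), best_gain(independence)
err_comp = sse(lambda r, xe: competition(g_comp, r, xe))
err_ind = sse(lambda r, xe: independence(g_ind, r, xe))
print(err_comp < err_ind)  # True for these toy data: one shared gain fits all
                           # conditions only when implicit learning tracks r - xe
```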

Thus, in our revised manuscript we present new data that confirm the hypothesis we initially explored in the Saijo and Gomi (2010) dataset: decreasing explicit strategy enhances implicit learning. These new data do not present the same limitation noted by the reviewers. Lastly, it is critical to note that while stepwise participants showed greater implicit learning, their total adaptation was approximately 4° lower than in the abrupt group (Figure 2E, right-most gray area (last 20 trials); two-sample t-test, t(71)=3.33, p=0.001, d=0.78). This surprising phenomenon is predicted by the competition equation. When strategies increase in the abrupt rotation group, this will tend to increase total adaptation. However, larger strategies leave smaller errors to drive implicit learning. Hence, greater adaptation will be associated with larger strategies, but less implicit learning. Indeed, the competition model predicted 53.47° total learning in the abrupt group but only 50.42° in the stepwise group. Recall that we described this paradoxical phenomenon at the individual-participant level in Point 1 above. Note that this pattern was also observed by Saijo and Gomi. Surprisingly, when participants were exposed to a 60° rotation in a stepwise manner, total adaptation dropped by over 10°, whereas the aftereffect exhibited during the washout period nearly doubled.

Summary

In our revised paper we have removed this Saijo and Gomi analysis from the main figures, and instead use this as supportive data in Figure 2-Supplement 2, and also in Appendix 2. We now state, “It is interesting to note that these implicit learning patterns are broadly consistent with the observation that gradual rotations improve procedural learning34,43, although these earlier studies did not properly tease apart implicit and explicit adaptation (see the Saijo and Gomi analysis described in Appendix 2).”

We have replaced this analysis with our new data in Experiment 1. In these new data, abrupt and gradual learning was compared using the appropriate implicit learning measures. These new data are included in both Figures 1 and 2 in the revised manuscript. Importantly, these new data point to the same conclusions we reached in our initial Saijo and Gomi analysis.

Mazzoni and Krakauer (2006)

The reviewers note that the uninstructed group in Mazzoni and Krakauer was not told to stop aiming during the washout period. We agree that this is a potential error source. However, it is important to note that these data were not included to directly support the competition model, but as a way to demonstrate its limitations. For example, our simple target error model cannot reproduce the data measured in Mazzoni and Krakauer (2006). That is, in the instructed group, implicit learning continued despite elimination of the primary target error. Thus, in more complicated scenarios where multiple visual landmarks are used during adaptation, the learning rules must be more sophisticated than the simple competition theory. Here we showed that one possibility is that both the primary target and the aiming target drive implicit adaptation via two simultaneous target errors. Our goal in presenting these data (and similar data collected by Taylor and Ivry, 2011) was to emphasize that our work is not meant to support an either-or hypothesis between target error and SPE learning, but rather the more pluralistic conclusion that both error sources contribute to implicit learning in a condition-dependent manner.

In our revised manuscript, we now better appreciate this conclusion and its potential limitation in the following passage: “It is important, however, to note a limitation in these analyses. Our earlier study did not employ the standard conditions used to measure implicit aftereffects: i.e., instructing participants to aim directly at the target, and also removing any visual feedback. Thus, the proposed dual-error model relies on the assumption that differences in washout were primarily related to the implicit system. These assumptions need to be tested more completely in future experiments.”

In summary, the conditions tested by Mazzoni and Krakauer show that the simplistic idea that adaptation is driven by only one target error, or only one SPE, cannot be true in general54. We propose a new hypothesis that when people move a cursor to one visual target, while aiming at another visual target, each target may partly contribute to implicit learning. When these two error sources conflict with one another, the implicit learning system may exhibit an attenuation in total adaptation. Thus, implicit learning modules may compete with one another when presented with opposing errors.”

Fernandez-Ruiz et al. (2011)

As noted by the reviewers, our original manuscript also reported data from a study by Fernandez-Ruiz et al. (2011). Here, we showed that participants in this study who increased their preparation time more upon rotation onset tended to exhibit a smaller aftereffect. We used these data to test the idea that when a participant uses a larger strategy, they inadvertently suppress their own implicit learning. However, we agree with the reviewers that this analysis is problematic because participants were not instructed to stop aiming in this earlier study. Thus, we have removed these data in the revised manuscript. This change has little to no effect on the manuscript, given that individual-level correlations between implicit and explicit learning are tested in Experiments 1, 2, and 3 in the revised paper (Figures 3-5), and are also reported from earlier works (Maresch et al., 2021; Tsay et al., 2021) in Figure 4-Supplement 1G and I, where implicit learning was more appropriately measured via exclusion. Note that all 5 studies support the same ideas suggested by the Fernandez-Ruiz et al. study: increases in explicit strategy suppress implicit learning.

3. All the new experiments limit preparation time (PT) to eliminate explicit learning. In fact, the results are based on the assumption that shortened PT leads to implicit-only learning (e.g., Figure 4). Is the manipulation effective, as claimed? This paradigm needs to be validated using independent measures, e.g., by direct measurement of implicit and/or explicit strategy.

We agree that it is important to test how effectively this condition limits explicit strategy use. Therefore, we have performed two additional control studies to measure explicit strategy in the limited PT condition. First, we added a limited preparation time (Limit PT) group to our laptop-based study in Experiment 3 (Experiment 2 in the original manuscript). In Experiment 3, participants in the Limit PT group (n=21) adapted to a 30° rotation, but under a limited preparation time condition. As in the Limit PT group in Experiment 2 (Experiment 1 in the original paper), we imposed a bound on reaction time to suppress movement preparation time. However, unlike Experiment 2, once the rotation ended, participants were told to stop re-aiming. This permitted us to examine whether limiting preparation time suppressed explicit strategy as intended. Our analysis of these new data is shown in Figures 3K-Q.

In Experiment 3, we now compare two groups: one where participants had no preparation time limit (No PT Limit, Figures 3H-J) and one where an upper bound was placed on preparation time (Limit PT, Figures 3K-M). We obtained early and late implicit learning measures over a 20-cycle no feedback and no aiming period at the end of the experiment. The voluntary decrease in reach angle over this no aiming period revealed each participant’s explicit strategy (Figure 3N). When no reaction time limit was imposed (No PT Limit), re-aiming totaled approximately 11.86° (Figure 3N, E3, black), and did not differ statistically across Experiments 2 and 3 (t(42)=0.50, p=0.621). Recall that Experiment 2 tested participants using a robotic manipulandum and Experiment 3 tested participants in a similar laptop-based paradigm. As in earlier reports, limiting reaction time dramatically suppressed explicit strategy, yielding only 2.09° of re-aiming (Figure 3N, E3, red). Therefore, our new control dataset suggests that our limited reaction time technique was highly effective at suppressing explicit strategy, as initially claimed in our original manuscript.

Since we have now demonstrated that limiting PT is effective at suppressing explicit strategy, we use our new Limit PT group in Experiment 3 to corroborate several key competition model predictions. First, consistent with the competition model, suppressing explicit strategy increased implicit learning by approximately 40% (Figure 3O, No PT Limit vs. Limit PT, two-sample t-test, t(54)=3.56, p<0.001, d=0.98). Second, we used the Limit PT group’s behavior in Experiment 3 to estimate implicit learning parameters (ai and bi) as we did in Experiment 1 (now Experiment 2) in our original manuscript. Briefly, the implicit retention factor was estimated during the no feedback and no aiming period, and the implicit error sensitivity was estimated via trial-to-trial changes in reach angle during the rotation period. We used these parameters to specify the unknown implicit learning gain in the competition model (pi, described in Point 1 above). The Limit PT group’s behavior predicted that implicit and explicit learning should be related by the following competition model: xi = 0.67(30 – xe). As in Experiment 2, we observed a striking correspondence between this model prediction (Figure 3Q, bottom, model) and the actual implicit-explicit relationship measured across participants in the No PT Limit group (Figure 3Q, bottom, points). The slope and bias predicted by Equation 4 (-0.67 and 20.2°, respectively) differed from the measured linear regression by less than 8% (Figure 3Q, bottom brown line, R2=0.78; slope is -0.63 with 95% CI [-0.74, -0.51] and intercept is 19.7° with 95% CI [18.2°, 21.3°]).

With that said, our new data alone suggest that while explicit strategies are strongly suppressed by limiting preparation time, they are not entirely eliminated; when we limited preparation time in Experiment 3, we observed that participants still exhibited a small decrease (2.09°) in reach angle when we instructed them to aim their hand straight to the target (Figure 3L, no aiming; Figure 3N, E3, red). This ‘cached’ explicit strategy, while small, may have contributed to the 8° reach angle change measured early during the second rotation in our savings experiment (Experiment 4, Figure 8C in revised paper).

For this reason, we consider another important phenomenon in our revised manuscript: time-dependent decay in implicit learning. That is, the 2° decrease in reach angle we observed when participants were told to stop aiming in the PT Limit group in Experiment 3 may be due to time-based decay in implicit learning over the 30-second instruction period, as opposed to a voluntary reduction in strategy. To test this possibility, we ran another limited preparation time group (n=12, Figure 8-Supplement 1A, decay-only, black). This time, participants were instructed that the experiment’s disturbance was still on, and that they should continue to move the ‘imagined’ cursor through the target during the terminal no feedback period. Still, reach angles decreased by approximately 2.1° (Figure 8-Supplement 1B, black). Indeed, we detected no statistically significant difference between the change in reach angle in this decay-only condition and the Limit PT group in Experiment 3 (Figure 8-Supplement 1B; two-sample t-test, t(31)=0.016, p=0.987).

This control experiment suggested that the residual ‘explicit strategies’ we measured in the Limit PT condition were in actuality caused by time-dependent decay in implicit learning. Thus, our Limit PT protocol appears to almost entirely eliminate explicit strategy. This additional analysis lends further credence to the hypothesis that savings in Experiment 4 was primarily due to changes in the implicit system rather than cached explicit strategies (and also that the impairment in learning observed in Experiment 5 was driven by the implicit system).

Summary

We collected additional experimental conditions in Experiment 3 in the revised manuscript (a Limit PT group and a decay-only group). Data in the Limit PT condition suggested that explicit strategies are suppressed to approximately 2° by our preparation time limit (compared to about 12° under normal conditions). Data in the decay-only group suggest that this 2° change in reach angle was not due to a cached explicit strategy but rather time-dependent implicit decay. Together, these conditions demonstrate that limiting reaction time in our experiments prevents the caching of explicit strategies; our limited preparation time measures do indeed reflect implicit learning.

4. Related to point 3, it is noted that the aiming report produced a different implicit learning estimate than a direct probe in Exp1 (Figure S5E). However, Figure 3 only reported Exp1 data with the probe estimate. What would the data look like with report-estimated implicit learning? Will it show the same negative correlation results with good model matches (Figure 3H)?

We ended Experiment 2 (Experiment 1 in the original manuscript) by asking participants to verbally report where they remembered aiming in order to hit the target. While many studies use similar reports to tease apart implicit and explicit strategy (e.g., McDougle et al., 2015), our probe had important limitations. Normally, participants are probed at various points throughout adaptation by asking them to report where they plan to aim on a given trial, and then allowing them to execute their reaching movement (thus they are able to continuously evaluate and familiarize themselves with their aiming strategy over time). However, we did not use this reporting paradigm given recent evidence (Maresch et al., 2020) that reporting explicit strategies increases explicit strategy use, which we expected would have the undesirable consequence of suppressing implicit learning (given the competition theory). Thus, in Experiment 2, reporting was measured only once; participants tried to recall where they were aiming throughout the experiment.

This methodology may have reduced the reliability of this measure of explicit adaptation. As stated in our Methods, we observed that participants were prone to incorrectly reporting their aiming direction, with several angles reported in the perturbation’s direction rather than the compensatory direction. In fact, 25% of participant responses were reported in the incorrect direction. Thus, as outlined in our Methods, we chose to take the absolute value of these measures, given recent evidence that strategies are prone to sign errors (McDougle and Taylor, 2019). That is, we assumed that participants remembered their aiming magnitude, but misreported the orientation. In sum, we suspect that our report-based strategy measures are prone to inaccuracies, and we opted to interpret them cautiously and sparingly in our original manuscript.

Nevertheless, in the revised manuscript, we now also report the individual-level relationships between report-based implicit learning and report-based explicit strategy in Experiment 2 (previously Experiment 1). These are now illustrated in Figure 3-Supplement 2C. While reported explicit strategies were on average greater than our probe-based measure, and report-based implicit learning was smaller than our probe-based measure (Figure 3-Supplement 2A and B; paired t-test, t(8)=2.59, p=0.032, d=0.7), the report-based measures exhibited a strong correlation which aligned with the competition model’s prediction (Figure 3-Supplement 2C; R2=0.95; slope is -0.93 with 95% CI [-1.11, -0.75] and intercept is 25.51° with 95% CI [22.69°, 28.34°]).

In summary, we have now added report-based implicit learning and explicit learning measurements to our revised manuscript. We show that these measures also exhibit a strong negative correlation with good agreement to the competition model. However, we still feel that it is important to question the reliability of these report-based measures (given that they were collected only once at the end of the experiment, when no longer in a ‘reaching context’). Our new analysis is described on Lines 319-327 in the paper.

5. Figure 3H is one of the rare pieces of direct evidence to support the model, but we find the estimated slope parameter is based on a B value of 0.35, which is unusually high. How is it obtained? Why is it larger than most previous studies have suggested? In fact, this value is even larger than the learning rate estimated in Exp 3 (Figure 6C). Is it related to the small sample size (see below)?

We can see where there may be confusion concerning how we estimated implicit error sensitivity, bi in Experiment 1 (now Experiment 2 in the revised manuscript). Note that we use the same procedures to estimate error sensitivity in Experiment 3 (newly collected Limit PT group in laptop-based study). We can explain why the error sensitivity we estimate is appropriate. In Figure 3H (now Figure 3G), as well as throughout our paper, we attempt to understand asymptotic behavior. Therefore, we require error sensitivity during the asymptotic learning phase. In other words, steady-state adaptation will depend on asymptotic implicit error sensitivity, not initial error sensitivity. This subtlety is important because as we and others have shown, error sensitivity increases as errors get smaller (e.g., Marko et al., 2012; Kim et al., 2018, Albert et al., 2021). This was indeed the case in the Limit PT condition in Experiment 2 (previously Experiment 1), as shown in Figure 3-Supplement 1A in the revised manuscript.

To calculate error sensitivity in each cycle, we used its empirical definition (Equation 14 in the revised manuscript). To determine the terminal error sensitivity during the adaptation period, we averaged error sensitivity over the last 5 rotation cycles (last 5 cycles in S3-1A). This error sensitivity was approximately 0.346, as shown by the horizontal dashed line in S3-1A. How reasonable is this value? The corresponding terminal error in the Limit PT condition in Experiment 2 (previously Experiment 1) was approximately 7.5°, as shown in Figure 3-Supplement 1B (blue horizontal line). Recently, Kim et al. (2018) calculated implicit error sensitivity as a function of error size; we show these curves in Figure 3-Supplement 1C in the revised manuscript. The blue line shows the terminal error in the Limit PT condition. As we can see, this earlier work suggested that for this particular error size, implicit error sensitivity varies somewhere between 0.25 and 0.35. Thus, our reported value of 0.346 does indeed appear reasonable. We thank the reviewers for noting this potential point of confusion and now reference our analysis on Lines 304-306 in the revised manuscript. We repeat this analysis for Experiment 3 (Limit PT group in the newly added laptop-based study) in Figure 3-Supplements 1D-F.
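As a sketch of this empirical estimate (not the paper's exact Equation 14, and ignoring retention for simplicity), error sensitivity on cycle n can be taken as the change in reach angle from cycle n to n+1 divided by the error experienced on cycle n, averaged over the final cycles. The toy learner below is hypothetical:

```python
def error_sensitivity(reach, error):
    """Empirical per-cycle error sensitivity: b(n) = (x(n+1) - x(n)) / e(n)."""
    return [(reach[n + 1] - reach[n]) / error[n] for n in range(len(reach) - 1)]

# Toy learner that corrects for 35% of each error (hypothetical data;
# retention is omitted here for simplicity).
r = 30.0
reach, x = [], 0.0
for _ in range(10):
    reach.append(x)
    x += 0.35 * (r - x)          # update in proportion to the residual error
errors = [r - angle for angle in reach]

b = error_sensitivity(reach, errors)
terminal_b = sum(b[-5:]) / 5     # average over the last 5 cycles
print(round(terminal_b, 3))      # 0.35 by construction
```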

Now we should address the inquiry about why this value is larger than the error sensitivity estimated in our Haith et al. (2015) state-space model. This question is actually common to most studies (including our own) in the literature that use state-space models to interrogate sensorimotor adaptation (e.g., Smith et al., 2006; Mawase et al., 2014; Galea et al. 2015; McDougle et al., 2015; Coltman et al., 2019, etc.). This issue arises because in our Haith et al. (2015) model, error sensitivity remains constant within a learning block. Clearly, we know this is not the case: within a single perturbation exposure not only does error sensitivity vary with error size (Figure 3-Supplement 1A, Marko et al., 2012; Kim et al., 2018, Albert et al., 2021), but it also changes with one’s error history (Herzfeld et al., 2014; Gonzalez-Castro and Hadjiosif et al., 2014; Albert et al., 2021). Our simple model does not describe these within-block changes, but instead permits savings by supposing that error sensitivity can differ across Blocks 1 and 2.

Even with this simplification, the model possesses 6 free parameters: bi in Block 1, bi in Block 2, be in Block 1, be in Block 2, ai, and ae. If we were to allow error sensitivity to change within each block, the simplest choice would require at least 2-4 free parameters: rate parameters describing how bi and be change within each block. Thus, a simple model of within-block and between-block changes in error sensitivity, would likely require a state-space model with at least 8-10 free parameters. As these models become more complex, the likelihood that they overfit the data increases significantly.
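A minimal version of such a state-space model, with error sensitivity held constant within a block but allowed to differ across blocks, might look like the following. All parameter values are hypothetical and chosen only to illustrate the structure:

```python
def simulate(rot, a_i, a_e, b_i, b_e, n_trials):
    """Two-state model in which implicit and explicit states share one target error."""
    xi, xe, xi_hist = 0.0, 0.0, []
    for _ in range(n_trials):
        e = rot - (xi + xe)       # target error on this trial
        xi = a_i * xi + b_i * e   # implicit state: retention plus error-driven update
        xe = a_e * xe + b_e * e   # explicit state: same form, different parameters
        xi_hist.append(xi)
    return xi_hist

# Savings modeled as a between-block increase in implicit error sensitivity
# (values hypothetical): b_i is constant within each block but differs across blocks.
block1 = simulate(30.0, a_i=0.95, a_e=0.99, b_i=0.10, b_e=0.10, n_trials=40)
block2 = simulate(30.0, a_i=0.95, a_e=0.99, b_i=0.20, b_e=0.10, n_trials=40)
print(block2[4] > block1[4])   # True: higher b_i yields faster early implicit learning
```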

Thus, the common choice is to simplify this complexity by not permitting b to change within a block. This assumption causes the model to identify an error sensitivity that is intermediate between its initial and terminal values within the block. Of course, this is an abstraction of reality, because b is not constant. Nevertheless, this common assumption still has value, as it provides a means of testing whether some “average” error sensitivity differs across two perturbation exposures. However, because error sensitivity increases over the block, this “average” error sensitivity parameter will be smaller than the terminal error sensitivity reached later in the block. This is why, when we estimated b late in the Limit PT condition, we obtained a sensitivity near 35%, but when we fit a model to the entire exposure period in Haith et al. (2015), the average error sensitivity identified by the model fell far short of this value.
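To make this intuition concrete, the following sketch generates learning with an error sensitivity that ramps up within the block, then fits a single constant b by grid search; the fitted value lands between the initial and terminal sensitivities. All parameter values are illustrative, not taken from our experiments:

```python
# Why a constant-b fit returns an "average" sensitivity: data generated
# with a within-block ramp in b, then fit with a single constant b.
def simulate(r, a, b_seq):
    """Single-state model x(n+1) = a*x(n) + b(n)*(r - x(n))."""
    x, xs = 0.0, []
    for b in b_seq:
        x = a * x + b * (r - x)
        xs.append(x)
    return xs

def fit_constant_b(data, r, a, grid):
    """Return the constant b minimizing squared trajectory error."""
    best_b, best_sse = None, float("inf")
    for b in grid:
        x, sse = 0.0, 0.0
        for target in data:
            x = a * x + b * (r - x)
            sse += (x - target) ** 2
        if sse < best_sse:
            best_b, best_sse = b, sse
    return best_b

# Sensitivity ramps from 0.10 to 0.35 over a 40-trial block.
b_true = [0.10 + 0.25 * n / 39 for n in range(40)]
data = simulate(r=30.0, a=0.9, b_seq=b_true)
b_fit = fit_constant_b(data, r=30.0, a=0.9,
                       grid=[i / 1000 for i in range(50, 400)])
# b_fit lies strictly between the initial (0.10) and terminal (0.35)
# sensitivities, i.e., below the terminal value, as argued in the text.
```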

While using a constant bi value in our Haith et al. (2015) model represents a potential limitation, a more sophisticated state-space model is not needed for our purposes. Here, we compare an SPE model and a TE model in order to demonstrate the possibility that some “average” implicit error sensitivity is greater during the second rotation. Our main purpose is to raise an alternative to our original conclusion in Haith et al. (2015), and to recent evidence suggesting that implicit learning processes do not exhibit savings. Also note that our choice to fix bi to a constant value within a block is consistent with past studies that have used state-space models to document changes in error sensitivity across perturbation exposures (e.g., Lerner and Albert et al., 2020; Coltman et al., 2019; Mawase et al., 2014).

In summary, we hope that this discussion helps to clarify the error sensitivity estimates in our work. In our revised manuscript we have now added Figure 3-Supplement 1 to show that our asymptotic error sensitivity estimates are consistent with past literature.

6a. The direct evidence supporting a competition of implicit and explicit learning is still weak.

First, we humbly submit that our work provides considerable support for a competition between implicit and explicit learning. Given the large amount of data added to the revised manuscript, it seems appropriate to begin with a brief summary of the analyses that support the competition theory over the standard SPE-only learning hypothesis.

A. Saturated learning with increasing rotation size in some experiments, yet an increasing response in others (Figure 1 in revised manuscript), which is inconsistent with generalized SPE-learning.

B. Quadratic responses to rotation size, which are inconsistent with generalized SPE-learning (new analysis in Figure 1 in revised manuscript, described more below).

C. Decreases in implicit adaptation due to coaching, with a simultaneous insensitivity to rotation size (Figure 2 in revised manuscript), whose co-occurrence is unexplained by generalized SPE-learning.

D. Increases in implicit learning driven by gradual perturbation onsets (Figure 1 in revised paper) whose subject-to-subject properties are inconsistent with generalization of SPE learning (Figure 5 in revised paper along with several supplements).

E. Increases in implicit learning due to suppressed explicit strategy, consistent with the competition model (Figure 3O in revised manuscript).

F. Negative subject-level correlations between implicit and explicit learning, with properties that are quantitatively predicted by the competition model (Figures 3G and Q).

G. Negative correlations between implicit-total learning that support the competition theory and disprove an SPE-only model (Figure 4H).

H. Positive correlations between explicit-total learning that support the competition theory and disprove an SPE-only model (Figure 4G).

I. Negative implicit-explicit correlations (Figures 3G, P, and Q) that we have now detected across three additional studies (Figure 4-Supplement 1G-I), whose properties are inconsistent with a generalized SPE learning model (Figure 5).

The totality of this evidence cannot be reconciled by the standard model where implicit learning is solely driven by SPEs. With that said, we wish to restate that despite these points, our paper is intended to show that implicit target error learning exists in addition to, not instead of, the implicit sensory prediction error system. We hope that this pluralistic intention is better emphasized in our revised paper (see for example our text on Lines 787-790 and 1036-1103).

6b. Most data presented here are about the steady-state learning amplitude; the authors suggest that steady-state implicit learning is proportional to the size of the rotation (Equation 5). But this is directly contradictory to the experimental finding that implicit learning saturates quickly with rotation size (Morehead et al., 2017; Kim et al., 2018). This prominent finding has been dismissed by the proposed model.

These concerns are critical. We agree with the reviewers that Morehead et al., 2017 and Kim et al., 2018 suggest that implicit learning saturates quickly with rotation size. With that said, these studies both use invariant error-clamp perturbations, not standard visuomotor rotations. We are not sure that the implicit properties observed in an invariant error context apply to the conditions we consider in our manuscript. In our revised manuscript, we consider variations in steady-state implicit learning across two new data sets: (1) stepwise adaptation in Experiment 1 and (2) non-monotonic implicit responses in Tsay et al., 2021. As we will demonstrate below, our data and re-analysis strongly indicate that the implicit system does not always saturate with changes in rotation size. Rather, the implicit system exhibits a complicated response to both rotation size and explicit strategies, which together yield three implicit learning phenotypes:

1. Saturation in steady-state implicit learning despite increasing rotation size (analysis reported in original paper using Neville and Cressman, 2018).

2. Scaling in steady-state implicit learning with increasing rotation size (new analysis using stepwise learning in Experiment 1).

3. Non-monotonic (quadratic) steady-state implicit behavior due to increases in rotation magnitude (new analysis added which uses data from Tsay et al., 2021).

We explore these three phenotypes in Figure 1 in the revised paper. Below, we provide an overview of these phenotypes, and make references to our revised text and Figure 1 where appropriate.

To begin with, we discuss cases where implicit learning appears to remain constant over large changes in rotation magnitude. This analysis is the same as the Neville and Cressman (2018) analysis described in our original manuscript (though the way we present certain aspects has been updated). Recall that Neville and Cressman (2018) examined how both implicit and explicit systems responded to changes in rotation size.

Participants adapted to a 20° (n=11), 40° (n=10), or 60° (n=10) visuomotor rotation (Figure 1C). Figures 1D and 1E show exclusion-based measures for implicit and explicit learning across all 3 rotations. Unsurprisingly, strategies increased proportionally with the rotation’s size (Figure 1E). On the other hand, implicit learning was insensitive to the rotation magnitude; it reached only 10° and remained constant despite tripling the perturbation’s size (Figure 1D).

As the reviewers note, on its face, this implicit saturation appears in line with an upper bound on implicit learning, as in Morehead et al., 2017 and Kim et al., 2018 (with the exception that here implicit learning was limited to 10° which falls short of the 15-25° limits observed in the invariant error-clamp paradigm).

Here, however, we propose another way that implicit learning might exhibit this saturation phenotype: not due to an upper bound on its compensation, but instead, competition.

In the competition model, the implicit system is driven by the residual error between the rotation and explicit strategy (r − xess in Equation 4). As a result, this model predicts that when an increase in rotation magnitude is matched by an equal change in explicit strategy (Figure 1-Supplement 1A, same), the implicit learning system’s driving force will remain constant (Figure 1-Supplement 1B, same). This constant driving input leads to a phenotype where implicit learning appears to “saturate” with increases in rotation size (Figure 1-Supplement 1C, same).

To investigate this possibility, we examined the rate at which explicit strategy increased with rotation size in the measured behavior. Rapid increases in explicit strategy (Figure 1E) limited changes in the competitive driving force; whereas the rotation increased by 40°, the driving force changed by less than 2.5° (Figure 1F). Thus, the competition model (Figure 1G, competition) yielded a close correspondence to the measured data (Figure 1G, data); when the driving input remains the same across rotation sizes, implicit learning will remain the same.
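This steady-state logic can be sketched as follows (the retention and error sensitivity values, and the explicit-strategy gains, are illustrative rather than fits to the actual datasets):

```python
# Steady-state implicit learning under the competition model, where the
# implicit system is driven by the residual error between the rotation r
# and the explicit strategy x_e. Parameter values are illustrative.
def implicit_steady_state(r, xe_ss, a_i=0.94, b_i=0.35):
    # At steady state: x_i = a_i*x_i + b_i*(r - x_e - x_i)
    # =>  x_i_ss = b_i * (r - xe_ss) / (1 - a_i + b_i)
    return b_i * (r - xe_ss) / (1 - a_i + b_i)

# Saturation phenotype: explicit strategy tracks the rotation nearly
# one-for-one, so the driving input (r - x_e), and hence implicit
# learning, stay constant as the rotation grows.
saturated = [implicit_steady_state(r, r - 10) for r in (20, 40, 60)]

# Scaling phenotype: explicit strategy grows with a weaker gain (as in
# the stepwise condition), so implicit learning increases with r.
scaled = [implicit_steady_state(r, 0.555 * r) for r in (15, 30, 45, 60)]
```

In this sketch, the same model produces constant implicit learning when the explicit gain is high and scaling when the gain is muted, mirroring the two phenotypes discussed above.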

This model suggested that the implicit system’s response saturated not due to an intrinsic upper limit in implicit learning, but due to a constant driving input. The key prediction is that the implicit system should be ‘released’ from this saturation phenotype by weakening the explicit system’s response to the rotation. That is, when a change in rotation size is accompanied by a smaller change in explicit strategy (Figure 1-Supplement 1A, slower), the competitive driving input to the implicit system should increase (Figure 1-Supplement 1B, slower). In the revised manuscript, we have added a new dataset to test this idea. In Experiment 1, participants (n=37) adapted to a stepwise perturbation which started at 15° and increased to 60° in 15° increments (Figure 1H), allowing us to probe implicit and explicit learning across rotation sizes (15-60°) similar to those above. Towards the end of each learning block, we assessed implicit and explicit adaptation by instructing participants to aim directly to the target (Figure 1H, implicit learning shown in black shaded regions).

Stepwise rotation onset muted the explicit system’s gain, relative to the abrupt rotation conditions used by Neville and Cressman (2018); explicit strategies increased with a 94.9% gain (change in strategy divided by change in rotation) across the abrupt groups in Figure 1E, but only a 55.5% gain in the stepwise condition shown in Figure 1J. This suppression in the explicit response caused the competition model’s implicit driving input (Figure 1K) to increase with rotation magnitude. Remarkably, this increasing driving input predicted a “scaling” phenotype in steady-state learning (Figure 1L, competition) that precisely matched the measured implicit response (Figures 1I; 1L, data).

Thus, steady-state implicit learning can exhibit a saturation phenotype (Figure 1G) and a scaling phenotype (Figure 1L), as predicted by the competition model. But recent work in Tsay et al., 2021, suggests a third steady-state implicit phenotype: non-monotonicity. We have now added an analysis of these data to the revised manuscript. In this study, the authors probed a wider range in rotation size, 15° to 90° (Figure 1M). A terminal no aiming period was used to measure steady-state implicit learning levels (n=25/group). Curiously, whereas implicit learning increased across the 15° and 30° rotations, it remained the same in the 60° rotation group, and then decreased in the 90° rotation group (Figure 1N).

To determine whether this non-monotonicity could be captured by the competition model, we considered again how explicit re-aiming increased with rotation size (Figure 1O). We observed an intriguing pattern. When the rotation increased from 15° to 30°, explicit strategy responded with a very low gain (4.5%, change in strategy divided by change in rotation). An increase in rotation size to 60° was associated with a medium-sized gain (80.1%). The last increase to 90° caused a marked change in the explicit system: a 53.3° increase in explicit strategy (177.7% gain). Thus, explicit strategy increased more than the rotation had. Critically, this condition produces a decrease in the implicit driving input in the competition model (Figure 1-Supplement 1, faster). Overall, this large modulation in explicit learning gain (4.5% to 80.1% to 177.7%) yielded non-monotonic behavior in the implicit driving input (Figure 1P) which increased between 15° and 30°, changed little between 30° and 60°, and then dropped between 60° and 90°. As a result, the competition model (Figure 1Q, competition) exhibited a non-monotonic envelope, which closely tracked the measured implicit steady-states (Figure 1Q, data).

Together, these studies demonstrate that the implicit system exhibits at least 3 steady-state phenomena: saturation, scaling, and non-monotonicity. In abrupt conditions, the implicit response saturated. In a more gradual condition, the implicit response scaled. When the rotation increased to an orthogonal extreme (90° rotation), implicit learning showed a non-monotonic decrease. A model where implicit learning is driven solely by SPEs could only produce the scaling phenotype (Figures 1G, 1L, and 1Q, independence, not shown here but provided in the revised manuscript). The competition model, however, predicted that the implicit system should respond not only to the rotation, but also changes in explicit strategy. This additional dimension yielded a match to the data across all 3 implicit learning phenotypes (Figures 1G, 1L, and 1Q, competition). Thus, the implicit system appeared to compete with explicit strategy, in their shared response to target error.

In sum, we now add two additional datasets to the revised manuscript: data collected using a stepwise rotation in Experiment 1, and data recently collected by Tsay et al., 2021. Collectively, these new data show that steady-state implicit learning is complex, exhibiting at least three contrasting phenotypes: saturation, scaling, and non-monotonicity. Remarkably, the competition theory provides a way to account for each of these patterns. Thus, the competition model does accurately describe how implicit learning varies due to both changes in rotation size and explicit strategy, at least in standard rotation paradigms. However, these rules do not explain implicit behavior in invariant error-clamp paradigms. In the revised manuscript, we delve further into a comparison between standard rotation learning and invariant error learning in our Discussion (Lines 911-941). The relevant passage is provided below:

“Although the implicit system is altered in many experimental conditions, one commonly observed phenomenon is its invariant response to changes in rotation size3,31,35,36,40. For example, in the Neville and Cressman31 dataset examined in Figure 1, total implicit learning remained constant despite tripling the rotation’s magnitude. While this saturation in implicit learning is sometimes interpreted as a restriction in implicit adaptability, this rotation-insensitivity may have another cause entirely: competition. That is, when rotations increase in magnitude, rapid scaling in the explicit response may prevent increases in total implicit adaptation. Critically, in the competition theory, implicit learning is driven not by the rotation, but by the residual error that remains between the rotation and explicit strategy. Thus, when we used gradual rotations to reduce explicit adaptation (Experiment 1), prior invariance in the implicit response was lifted: as the rotation increased, so too did implicit learning37 (Figure 1I). The competition theory could readily describe these two implicit learning phenotypes: saturation and scaling (Figures 1G and L). Furthermore, it also provided insight as to why implicit learning can even exhibit a non-monotonic response, as in Tsay et al. (2021)36. All in all, our data suggest that implicit insensitivity to rotation size is not due to a limitation in implicit learning, but rather a suppression created by competition with explicit strategy.

With that said, this competitive saturation in implicit adaptation should not be conflated with the upper limits in implicit adaptation that have been measured in response to invariant errors3,11,40. In this latter condition, implicit adaptation reaches a ceiling whose value varies somewhere between 15 degrees3 and 25 degrees40 across studies. In these experiments, participants adapt to an error that remains constant and is not coupled to the reach angle (thus, the competition theory cannot apply). While the state-space model naturally predicts that total adaptation can exceed the error size which drives learning in this error-clamp condition (as is observed in response to small error-clamps), it cannot explain why asymptotic learning is insensitive to the error’s magnitude. One idea is that proprioceptive signals40,83 may eventually outweigh the irrelevant visual errors in the clamped-error condition, thus prematurely halting adaptation. Another possibility is that implicit learning obeys the state-space competitive model described here, but only up until a ceiling that limits total possible implicit corrections. Indeed, in Experiment 1, we produced scaling in the implicit system in response to rotation size, but never evoked more than about 22° of implicit learning. However, when we used similar gradual conditions in the past to probe implicit learning37, we observed about 32° implicit learning in response to a 70° rotation. Further, in a recent study by Maresch and colleagues47, where strategies were probed only intermittently, implicit learning reached nearly 45°. Thus, there remains much work to be done to better understand variations in implicit learning across error-clamp conditions, and standard rotation conditions.”

6c. On the other hand, one classical finding for conventional VMR is that explicit learning would gradually decrease after its initial spike, while implicit learning would gradually increase. The competition model appears unable to account for this stereotypical interaction. How can we account for this simple pattern with the new model or its possible extensions?

As the reviewers note, some studies have observed an initial spike in explicit strategy which then gradually declines over adaptation (e.g., McDougle et al., 2015; some groups in Yin and Wei, 2020) – though it should be noted that in other cases, this does not appear to occur (e.g., most cases in Morehead et al., 2015; multi-day training in Wilterson et al., 2021). While we opted not to investigate these dynamic phenomena in our paper, it should be noted that the competition model can produce this explicit phenotype.

In the simplest case, where adaptation depends solely on a target error shared between the implicit and explicit systems, we obtain the competitive state-space system:

xi(n+1) = ai xi(n) + bi etarget(n)

xe(n+1) = ae xe(n) + be etarget(n)

This is the competition model we use in our Haith et al. (2015) simulations. In this model, the rise and fall in explicit strategy can occur due to the way that both systems interact through etarget in these equations. An example can even be observed in our manuscript, in our Haith et al. (2015) analysis (Figure 6D).

Note how explicit adaptation predicted by the competition model (magenta curves) rises during the first 20 rotation trials on both Day 1 and Day 2, and then declines over the rest of adaptation. Thus, the competition model naturally describes such waxing and waning in the explicit response: high explicit error sensitivity initially causes increases in explicit learning, but as the implicit process develops, errors are siphoned away from the explicit system, causing it to gradually decline.
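A minimal simulation of this competitive system illustrates the effect. The parameter values below are illustrative, chosen only to give a fast but forgetful explicit process and a slow but retentive implicit process; they are not fits to the Haith et al. (2015) data:

```python
# Simulation of the competitive state-space model: both systems update
# from the shared target error e(n) = r - x_i(n) - x_e(n).
def simulate_competition(r=30.0, n_trials=200,
                         a_i=0.98, b_i=0.05, a_e=0.90, b_e=0.40):
    xi = xe = 0.0
    xi_hist, xe_hist = [], []
    for _ in range(n_trials):
        e = r - xi - xe                       # shared target error
        xi, xe = a_i * xi + b_i * e, a_e * xe + b_e * e
        xi_hist.append(xi)
        xe_hist.append(xe)
    return xi_hist, xe_hist

xi_hist, xe_hist = simulate_competition()
# Explicit learning rises, peaks early in the block, and then declines
# toward a lower steady state as the implicit process slowly grows and
# siphons away the shared error; implicit learning rises throughout.
```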

Altogether, these simulations illustrate that the competition model can indeed account for the explicit and implicit phenotypes described by the reviewers. With that said, there is another potential culprit in the rise-and-fall explicit learning phenotype. In some cases where aiming landmarks are provided during adaptation, target errors can be eliminated, but implicit learning persists with a concomitant decrease in explicit strategy (e.g., Taylor et al., 2014; McDougle et al., 2011). We suspect in these cases that persistent implicit learning is due to errors between the cursor and the aiming landmarks, as demonstrated by Taylor and Ivry (2011). Thus, in order to maintain the cursor on the target without overcompensating for the rotation, explicit strategies must decline.

In sum, in the revised manuscript we note that the competition model can produce the implicit-explicit learning phenotype noted by the reviewers: rising-and-falling explicit strategy accompanied by persistent implicit learning, as demonstrated in Figure 6D. However, we also recognize that in aiming landmark studies, this behavior can be driven by a second error (an SPE and/or target error between the cursor and aiming landmark). In our revised paper, we have added a passage concerning these matters:

“…For example, early during learning, it is common that explicit strategies increase, peak, and then decline. That is, when errors are initially large, strategies increase rapidly. But as implicit learning builds, the explicit system’s response can decline in a compensatory manner1,9,12. This dynamic phenomenon can also occur in the competition theory, where both implicit and explicit systems respond to target error (Figure 6D). But in many cases, a second error source may drive this behavioral phenotype. That is, in cases with aiming landmarks 1,9,12, errors between the cursor and primary target can be eliminated, but implicit learning persists. This implicit learning is likely driven by the SPEs and target errors that remain between the cursor and the aiming landmark, as in Taylor and Ivry (2011)9. This persistent implicit adaptation must be counteracted by decreasing explicit strategy to avoid overcompensation. In sum, competition between implicit learning and explicit strategy is complex. Both systems can respond to one another in ways that change with experimental conditions.”

7. General comments about the data:

Experiments 3 and 4 (Figures 6 and 7) do not contribute to the theorization of competition between implicit and explicit learning. The authors appear to use them to show that implicit learning properties (error sensitivity) can be modified, but then this conclusion critically depends on the assumption that the limit-PT paradigm produces implicit-only learning, which is yet to be validated.

Figure 4 is not a piece of supporting evidence for the competition model since those obtained parameters are based on assumptions that the competition model holds and that the limit-PT condition "only" has implicit learning.

We agree with these points. Yes, we do not intend for Experiments 3 and 4 (now Experiments 4 and 5) to directly contribute to the competition theory’s experimental evidence. We have made changes to our revised manuscript to make this clearer. First, our Results are now divided into 4 sections. Sections 1 and 2 present evidence that supports the competition theory. The savings and interference studies are included in a separate section (Part 3). We expect that these analyses will be important to the reader, as they provide new ideas about how the implicit system may exhibit changes that are masked by explicit strategy.

Secondly, we have now added new analysis to the Haith et al. (2015) dataset, so that it better recognizes and contrasts competition and independence model predictions. In the original Figure 4, we solely fit the competition model to the Haith et al. dataset and highlighted the implicit and explicit error sensitivities predicted by the model. In the revised manuscript, we now also fit the independence model to the Haith et al. dataset and include its error sensitivity estimates in Figure 6 (previously Figure 4) as well. We hope that this inclusion better illustrates that these model fits are not meant as evidence for one model or the other, but rather two contrasting interpretations of the same data.

In the revised paper, we explain that the competition model predicts a statistically significant error sensitivity increase in both the implicit and explicit systems (Figure 6D, right; two-way rm-ANOVA, within-subject effect of exposure number, F(1,13)=10.14, p=0.007, ηp2=0.438; within-subject effect of preparation time, F(1,13)=0.051, p=0.824, ηp2=0.004; exposure by preparation interaction, F(1,13)=1.24, p=0.285). The independence model suggested that only the explicit system exhibited a statistically significant increase in error sensitivity (Figure 6E; 2-way rm-ANOVA, learning process by exposure number interaction, F(1,13)=7.016, p=0.02; significant interaction followed by 1-way rm-ANOVA across exposures: explicit system with F(1,13)=9.518, p=0.009, ηp2=0.423; implicit system with F(1,13)=2.328, p=0.151, ηp2=0.152).

In the revised manuscript we conclude these results with: “In summary, when we reanalyzed our earlier data, the competition and independence theories suggested that our data could be explained by two contrasting hypothetical outcomes. If we assumed that implicit and explicit systems were independent, then only explicit learning contributed to savings, as we concluded in our original report. However, if we assumed that the implicit and explicit systems learned from the same error (competition model), then both implicit and explicit systems contributed to savings. Which interpretation is more parsimonious with measured behavior?”

We hope that these additions and revisions to the manuscript better illustrate that our analysis is not meant to support one model over the other, but rather to illustrate their contrasting implications about the implicit system.

Finally, the reviewers also note in their concern that our analyses in Figures 7, 8, and 9 (previously Figures 4, 6, and 7) depend on the assumption that limited preparation time trials isolate implicit learning. We responded to similar concerns in Point 3 above, and reiterate that discussion here. In the revised paper, we have added 2 additional experiments to confirm that the limited preparation time conditions we use in our studies accurately isolate implicit learning. First, we have added a limited preparation time (Limit PT) group to our laptop-based study in Experiment 3 (Experiment 2 in the original manuscript). In Experiment 3, participants in the Limit PT group (n=21) adapted to a 30° rotation under a limited preparation time condition. As with the Limit PT group in Experiment 2, we imposed a strict bound on reaction time to suppress movement preparation time. However, unlike in Experiment 2 (Experiment 1 in the original manuscript), once the rotation ended, participants were told to stop re-aiming. This no-aiming instruction allowed us to examine whether limiting preparation time suppressed explicit strategy as intended. Our analysis of these new data is shown in Figures 3K-N.

In Experiment 3, we compare two groups: one where participants had no preparation time limit (No PT Limit, Figures 3H-J) and one where an upper bound was placed on preparation time (Limit PT, Figures 3K-M). We obtained early and late implicit learning measures over a 20-cycle no feedback, no aiming period at the end of the experiment. The voluntary decrease in reach angle over this no aiming period revealed each participant’s explicit strategy (Figure 3N). When no reaction time limit was imposed (No PT Limit), re-aiming totaled approximately 11.86° (Figure 3N, E3, black) and did not differ statistically across Experiments 2 and 3 (t(42)=0.50, p=0.621). Recall that Experiment 2 tested participants with a robotic arm, whereas Experiment 3 tested participants in a similar laptop-based paradigm. As in earlier reports, limiting reaction time dramatically suppressed explicit strategy, yielding only 2.09° of re-aiming (Figure 3N, E3, red).

Therefore, our new control dataset suggests that our limited reaction time technique was highly effective at suppressing explicit strategy, as initially claimed in our original manuscript. However, this experiment alone also suggests that while explicit strategies are strongly suppressed by limiting preparation time, they are not entirely eliminated; when we limited preparation time in Experiment 3, participants still exhibited a small decrease (2.09°) in reach angle when instructed to aim their hand straight to the target (Figure 3L, no aiming; Figure 3N, E3, red). This ‘cached’ explicit strategy, while small, may have contributed to the 8° reach angle change measured early during the second rotation in our savings experiment (Experiment 4, Figure 7C in the revised paper).

For this reason, it was critical to consider whether this 2° change in reach angle was indeed due to explicit strategy, or instead caused by time-based decay in implicit learning over the 30 sec instruction period. To test this possibility, we collected another limited preparation time group (n=12, Figure 7-Supplement 1A, decay-only, black). This time, participants were instructed that the experiment’s disturbance was still on, and that they should continue to move the ‘imagined’ cursor to the target during the no feedback period. Even though participants were instructed to continue using their strategy, their reach angles decreased by approximately 2.1° (Figure 7-Supplement 1A, decay-only, black). Indeed, we detected no statistically significant difference between the change in reach angle in this decay-only condition and the Limit PT group in Experiment 3 (Figure 7-Supplement 1B; two-sample t-test, t(31)=0.016, p=0.987).

This control experiment suggested that the residual ‘explicit strategies’ we measured in the Limit PT condition were caused by time-dependent decay in implicit learning. Thus, our Limit PT condition exhibits a near-complete suppression of explicit strategy.

Summary

In our revised manuscript, we have added the independence model’s predictions to Figure 6 (previously Figure 4) and have altered our text to better explain that our Haith et al. analysis is not intended to support the competition model, but to compare each model’s implications. In addition, we add two new experiments to corroborate our assumption that limited preparation time trials reveal implicit learning in our tasks. Data in the Limit PT condition in Experiment 3 suggest that explicit strategies are suppressed to approximately 2° by our preparation time limit (compared to about 12° under normal conditions). Data in the decay-only group suggest that this 2° change in reach angle was not due to a cached explicit strategy, but rather to time-dependent decay in implicit learning. Together, these conditions demonstrate that limiting reaction time in our experiments almost completely prevents the caching of explicit strategies; our limited preparation time measures do indeed reflect implicit learning.

8a. Concerns about data handling:

All the experiments have a limited number of participants. This is problematic for Exp1 and 2, which have correlation analysis. Note Fig3H contains a critical, quantitative prediction of the model, but it is based on a correlation analysis of n=9. This is an unacceptably small sample. Experiment 3 only had 10 participants, but its savings result (argued as purely implicit learning-driven) is novel and the first of its kind.

We agree with these points. Yes, we do not intend for Experiments 3 and 4 (now Experiments 4 and 5) to directly contribute to the competition theory’s experimental evidence. We have made changes to our revised manuscript to make this clearer. First, our Results are now divided in 4 sections. Sections 1 and 2 present evidence that supports the competition theory. The savings and interference studies are included in a separate section (Part 3). We expect that these analyses will be important to the reader, as they provide new ideas about how the implicit system may exhibit changes that are masked by explicit strategy.

Secondly, we have now added new analysis to the Haith et al. (2015) dataset, so that it better contrasts the competition and independence models’ predictions. In the original Figure 4, we fit only the competition model to the Haith et al. dataset and highlighted the implicit and explicit error sensitivities predicted by the model. In the revised manuscript, we now also fit the independence model to the Haith et al. dataset and include its error sensitivity estimates in Figure 6 (previously Figure 4) as well. We hope that this inclusion better illustrates that these model fits are not meant as evidence for one model or the other, but rather as two contrasting interpretations of the same data. Below we reproduce the relevant changes to Figure 6 (previously Figure 4).

In the revised paper we explain that the competition model predicts that both the implicit and the explicit system exhibited a statistically significant increase in error sensitivity (Figure 6D, right; two-way rm-ANOVA, within-subject effect of exposure number, F(1,13)=10.14, p=0.007, ηp2=0.438; within-subject effect of learning process, F(1,13)=0.051, p=0.824, ηp2=0.004; exposure by learning process interaction, F(1,13)=1.24, p=0.285). The independence model suggested that only the explicit system exhibited a statistically significant increase in error sensitivity (Figure 6E; two-way rm-ANOVA, learning process by exposure number interaction, F(1,13)=7.016, p=0.02; significant interaction followed by one-way rm-ANOVA across exposures: explicit system with F(1,13)=9.518, p=0.009, ηp2=0.423; implicit system with F(1,13)=2.328, p=0.151, ηp2=0.152).

In the revised manuscript we conclude these results with: “In summary, when we reanalyzed our earlier data, the competition and independence theories suggested that our data could be explained by two contrasting hypothetical outcomes. If we assumed that implicit and explicit systems were independent, then only explicit learning contributed to savings, as we concluded in our original report. However, if we assumed that the implicit and explicit systems learned from the same error (competition model), then both implicit and explicit systems contributed to savings. Which interpretation is more parsimonious with measured behavior?”

We hope that these additions and revisions to the manuscript better illustrate that our analysis is not meant to support one model over the other, but rather to illustrate their contrasting implications about the implicit system.

Finally, the reviewers also note in their concern that our analyses in Figures 7, 8, and 9 (previously Figures 4, 6, and 7) depend on the assumption that limited preparation time trials isolate implicit learning. We responded to similar concerns in Point 3 above and will reiterate this discussion here. In the revised paper, we have added two additional experiments to confirm that the limited preparation time conditions we use in our studies accurately isolate implicit learning. First, we have now added a limited preparation time (Limit PT) group to our laptop-based study in Experiment 3 (Experiment 2 in original manuscript). In Experiment 3, participants in the Limit PT group (n=21) adapted to a 30° rotation, but under a limited preparation time condition. As with the Limit PT group in Experiment 2, we imposed a strict bound on reaction time to suppress movement preparation time. However, unlike Experiment 2 (Experiment 1 in original manuscript), once the rotation ended, participants were told to stop re-aiming. This no-aiming instruction allowed us to examine whether limiting preparation time suppressed explicit strategy as intended. Our analysis of these new data is shown in Figures 3K-N.

In Experiment 3, we now compare two groups: one where participants had no preparation time limit (No PT Limit, Figures 3H-J) and one where an upper bound was placed on preparation time (Limit PT, Figures 3K-M). We obtained early and late implicit learning measures over a 20-cycle no-feedback, no-aiming period at the end of the experiment. The voluntary decrease in reach angle over this no-aiming period revealed each participant’s explicit strategy (Figure 3N). When no reaction time limit was imposed (No PT Limit), re-aiming totaled approximately 11.86° (Figure 3N, E3, black) and did not differ statistically across Experiments 2 and 3 (t(42)=0.50, p=0.621). Recall that Experiment 2 tested participants with a robotic arm and Experiment 3 tested participants in a similar laptop-based paradigm. As in earlier reports, limiting reaction time dramatically suppressed explicit strategy, yielding only 2.09° of re-aiming (Figure 3N, E3, red).

Therefore, our new control dataset suggests that our limited reaction time technique was highly effective at suppressing explicit strategy, as claimed in our original manuscript. This experiment also suggests, however, that while explicit strategies are strongly suppressed by limiting preparation time, they are not entirely eliminated; when we limited preparation time in Experiment 3, we observed that participants still exhibited a small decrease (2.09°) in reach angle when we instructed them to aim their hand straight at the target (Figure 3L, no aiming; Figure 3N, E3, red). This ‘cached’ explicit strategy, while small, may have contributed to the 8° reach angle change measured early during the second rotation in our savings experiment (Experiment 4, Figure 7C in revised paper).

For this reason, it was critical to consider whether this 2° change in reach angle was indeed due to explicit strategy, or instead caused by time-based decay in implicit learning over the 30 sec instruction period. To test this possibility, we collected another limited preparation time group (n=12, Figure 7-Supplement 1A, decay-only, black). This time, however, participants were instructed that the experiment’s disturbance was still on, and that they should continue to move the ‘imagined’ cursor to the target during the no-feedback period. Even though participants were instructed to continue using their strategy, their reach angles decreased by approximately 2.1° (Figure 7-Supplement 1, black decay-only). Indeed, we detected no statistically significant difference between the change in reach angle in this decay-only condition and the Limit PT group in Experiment 3 (Figure 7-Supplement 1B; two-sample t-test, t(31)=0.016, p=0.987).

This control experiment suggested that the residual ‘explicit strategies’ we measured in the Limit PT condition were caused by time-dependent decay in implicit learning. Thus, our Limit PT condition exhibits a near-complete suppression of explicit strategy.

Summary

In our revised manuscript, we have added the independence model’s predictions to Figure 6 (previously Figure 4) and have altered our text to better explain that our Haith et al. analysis is not intended to support the competition model, but to compare each model’s implications. In addition, we have added two new experiments to corroborate our assumption that limited preparation time trials reveal implicit learning in our tasks. Data in the Limit PT condition in Experiment 3 suggest that explicit strategies are suppressed to approximately 2° by our preparation time limit (compared to about 12° under normal conditions). Data in the decay-only group suggest that this 2° change in reach angle was not due to a cached explicit strategy but rather time-dependent decay in implicit learning. Together, these conditions demonstrate that limiting reaction time in our experiments almost completely prevents the caching of explicit strategies; our limited preparation time measures do indeed reflect implicit learning.

8b. Experiment 2: This experiment tried to decouple implicit and explicit learning, but it is still problematic. If the authors believe that the amount of implicit adaptation follows a state-space model, then the measure of early implicit is correlated to the amount of late implicit because Implicit_late = (ai)^N * Implicit_early where N is the number of trials between early and late. Therefore the two measures are not properly decoupled. To decouple them, the authors should use two separate ways of measuring implicit and explicit. To measure explicit, they could use aim report (Taylor, Krakauer and Ivry 2011), and to measure implicit independently, they could use the after-effect by asking the participants to stop aiming as done here. They actually have done so in experiment 1 but did not use these values.

One reason we conducted Experiment 2 (now Experiment 3) was to examine whether the implicit-explicit relationships were due to spurious correlations (e.g., implicit is compared to explicit, where explicit is total learning minus implicit). While we appreciate the reviewers’ suggestion, we maintain that the late implicit learning measure in Experiment 3 eliminates spurious correlation as intended.

To illustrate this, imagine that implicit and explicit learning have no relationship. Let’s represent them as random variables I and E. If I and E are independent, then cov(I,E) = 0. Now let’s introduce a third random variable, T, which represents total learning. Suppose that we measure total adaptation on the last several cycles. This total adaptation will be equal to the sum of I and E, so T = I + E. Now, we the experimenters cannot measure I, E, and T directly. We can only measure these variables corrupted by motor execution noise, M. A reasonable assumption is that M is a zero-mean random variable that is independently distributed across each reach.

At the end of adaptation, the measured total adaptation is thus Tmeasure = T + M1 = I + E + M1. Now, on the next cycle, we do exclusion trials. These reveal I, but this measurement is also corrupted by motor execution noise. Thus, the I that we measure is given by: Imeasure = I + M2. In Experiment 1, we derived E by subtracting Imeasure from Tmeasure: Eestimate = Tmeasure – Imeasure = I + E + M1 – (I + M2) = E + M1 – M2. Now, we can see the issue when we correlate Imeasure with Eestimate (assuming, as we said, that I and E are independent):

cov(Imeasure, Eestimate) = cov(I + M2, E + M1 – M2) = –var(M2)

This clearly illustrates the issue of spurious correlation. Even though I and E are independent, the way we measured I and E results in a non-zero, misleading correlation.

Our late implicit measure in Experiment 3 prevents this spurious correlation. Suppose we measure I again on the second cycle in the no-aiming aftereffect period. As noted in the criticism, I here will relate to the previous I, but will be scaled by the retention factor a. Thus, the second I that we measure is given by: I2 = aI + M3 where M3 is now motor execution noise on the 2nd no-aiming cycle. Now let us determine the covariance between I2 and Eestimate: cov(I2, Eestimate) = cov(aI + M3, E + M1 – M2). Noting that I, E, M1, M2, and M3 are all independent, this covariance will be equal to 0.

What does this mean? Even though Eestimate is based on the initial ‘early I’ measurement, the second ‘late I’ that we measure (even though it is related to the first I via a) is no longer correlated with Eestimate. This is the type of spurious correlation we avoid with our late implicit measure in Experiment 3. Thus, the only way for Eestimate to be correlated with any later I measurement (apart from the initial one) is for the original E and I to not, in fact, be independent (as posited by the competition model).
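This logic can also be checked numerically. The sketch below simulates the measurement model described above with I and E independent by construction; the means, noise level, and retention factor a are illustrative assumptions, not values from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated participants

# Independent implicit (I) and explicit (E) learning, in degrees (illustrative)
I = rng.normal(10.0, 2.0, n)
E = rng.normal(15.0, 3.0, n)
a = 0.9  # cycle-to-cycle retention of implicit learning (illustrative)

# Zero-mean motor execution noise, independent on each measured cycle
M1 = rng.normal(0.0, 1.5, n)
M2 = rng.normal(0.0, 1.5, n)
M3 = rng.normal(0.0, 1.5, n)

T_measure = I + E + M1               # total adaptation, last rotation cycle
I_measure = I + M2                   # 'early' implicit, exclusion cycle 1
E_estimate = T_measure - I_measure   # explicit inferred by subtraction
I_late = a * I + M3                  # 'late' implicit, exclusion cycle 2

# Spurious: cov(Imeasure, Eestimate) = -var(M2), even though cov(I, E) = 0
print(np.cov(I_measure, E_estimate)[0, 1])  # near -1.5**2 = -2.25
# Decoupled: the late implicit measure shares no noise term with Eestimate
print(np.cov(I_late, E_estimate)[0, 1])     # near 0
```

The shared noise term M2 creates the negative covariance for the early measure, while the late measure, taken on a separate cycle, carries no term in common with Eestimate.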

In our revised manuscript we have increased the number of participants in Experiment 3 to n=35. Below, we show the early implicit-explicit relationship in Figure 3Q and the decoupled late implicit-explicit relationship in Figure 3P in the revised manuscript.

Second, though our late implicit measure does decouple implicit and explicit learning, we have added the report-based measures requested by the reviewers to the paper. We also checked the relationship between implicit and explicit learning using the aiming reports collected in the experiment. This correlation is now provided in Figure 3-Supplement 2C. In addition, we compare report-based and exclusion-based implicit and explicit measures in Figure 3-Supplement 2A and B. Note that we are not highly confident in these report measures, given that we collected them only once and 25% were reported in the incorrect direction (see Point 4).

Thirdly, the revised paper includes new analyses that corroborate the competition model while avoiding spurious implicit-explicit correlations. Recall that spurious correlations can arise between exclusion-based implicit learning and explicit strategy. To avoid this potential issue, the competition model can be restated in another way. As we discussed in Point 1 above, the competition model predicts that xiss = pi(r – xess), where π is a gain that depends on implicit error sensitivity and retention. In our manuscript we investigate whether xiss and xess vary according to this relationship. Note, however, that total adaptation is equal to xTss = xiss + xess. By combining these two equations, we can relate implicit learning to total adaptation via the equation: xTss = r + (pi – 1)pi^(-1)xiss (see Lines 1576-1620 in revised paper). This equation predicts that total learning will be negatively correlated with implicit learning. This counterintuitive relationship, that greater overall adaptation will be supported by reduced implicit learning, is opposite to that predicted by an SPE learning model like the independence equation. Supposing instead that implicit learning responds only to SPE, but strategies respond to variations in implicit learning, yields: xess = pe(r – xiss), where pe is the gain in the explicit response. Combining this with xTss = xiss + xess yields a positive linear relationship between total adaptation and implicit learning: xTss = pe·r + (1 – pe)xiss.

In sum, the critical idea is that we can compare total adaptation and implicit learning. There is no spurious correlation between these quantities, because they are calculated on distinct trials. A negative correlation supports the competition model; a positive correlation supports the independence model. In the revised manuscript we analyzed this relationship in Experiment 3, as well as in Experiment 1, new data from Maresch et al., 2021, and new data from Tsay et al., 2021. These are shown in Figure 4H and Figure 4-Supplements 1A-C.
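As a sanity check on these sign predictions, the sketch below generates steady-state behavior under each model. The gains pi and pe and the range of strategies are hypothetical values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
r = 30.0   # rotation size in degrees

# Competition model: implicit adapts to the residual target error (r - xe).
pi = 0.5                           # illustrative implicit gain
xe_comp = rng.uniform(5, 25, n)    # explicit strategy varies across people
xi_comp = pi * (r - xe_comp)       # xiss = pi(r - xess)
xT_comp = xi_comp + xe_comp        # total adaptation

# Independence model: implicit varies on its own (SPE-driven); the
# strategy responds to what implicit learning leaves uncorrected.
pe = 0.6                           # illustrative explicit gain
xi_ind = rng.uniform(5, 25, n)
xe_ind = pe * (r - xi_ind)         # xess = pe(r - xiss)
xT_ind = xi_ind + xe_ind

print(np.corrcoef(xT_comp, xi_comp)[0, 1])  # negative under competition
print(np.corrcoef(xT_ind, xi_ind)[0, 1])    # positive under independence
```

In this noiseless sketch the relationships are exactly linear, so the correlations are –1 and +1; with measurement noise the correlations would shrink but the signs would persist.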

Lastly, it should be noted that these analyses occur at the subject level but can also be used to analyze group-level effects. In particular, we use the equations above to determine whether the implicit learning phenotypes examined in Point 6B (saturation, scaling, and non-monotonicity) also match the competition model. To this end, the relationship between implicit learning and total adaptation can be restated with algebraic manipulation. For the competition model we obtain: xiss = pi(1 – pi)^(-1)(r – xTss). In words, this equation provides a way to predict implicit learning using total adaptation. In the revised manuscript we analyze three implicit phenotypes: (1) saturation in Neville and Cressman, 2018, (2) scaling in Experiment 1, and (3) non-monotonicity in Tsay et al., 2021 (see Point 6B above for more details). In the Neville and Cressman dataset, we fit the above model to identify one π value that best predicts xiss using xTss across all 6 experimental groups (all 3 rotation magnitudes crossed with the 2 instruction conditions). In Experiment 1, we fit the above model to identify one π that best predicts xiss using xTss across 5 periods (all 4 rotation sizes in the stepwise group plus the last 60° learning period in the abrupt group). Finally, in Tsay et al., we fit the above model to identify one π that best predicts xiss using xTss across all 4 rotation sizes. The data are compared with model predictions in Figure 1-Supplement 2.

In black we show measured implicit learning. In blue we show implicit learning predicted from explicit strategy via the standard competition model: xiss = pi(r – xess). In gray we show implicit learning predicted from total adaptation via the reworked competition model: xiss = pi(1 – pi)^(-1)(r – xTss). The similarity between each model’s predictions confirms that the group-level implicit phenotypes predicted by the competition model are not due to a spurious correlation between exclusion-based implicit and explicit learning.
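For concreteness, fitting a single π across groups with the reworked model can be done in closed form, because the model is linear in the gain g = pi/(1 – pi). The sketch below uses made-up group means, not data from any of the cited studies.

```python
import numpy as np

# Hypothetical group-level means (degrees): rotation size r, total
# adaptation xT, and measured implicit learning xi. Illustrative only.
r  = np.array([20.0, 40.0, 60.0, 60.0])
xT = np.array([18.0, 36.0, 52.0, 55.0])
xi = np.array([ 1.1,  2.2,  4.3,  2.8])

# Reworked competition model: xi = pi(1 - pi)^(-1)(r - xT).
# Writing g = pi / (1 - pi), the model xi = g * (r - xT) is linear in g,
# so the least-squares estimate of g has a closed form.
x = r - xT
g = np.sum(xi * x) / np.sum(x * x)
pi_hat = g / (1.0 + g)   # invert g = pi / (1 - pi)

xi_pred = g * x          # model-predicted implicit learning per group
print(pi_hat)
```

A single fitted gain then yields one predicted implicit learning value per group, which can be overlaid on the measured values as in Figure 1-Supplement 2.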

Summary

Spurious correlations are common in implicit-explicit analyses. They arise in many studies, either when both processes are inferred using exclusion trials (where explicit learning is estimated as total learning minus implicit learning) or when report-based measures are used (where implicit learning is estimated as total learning minus explicit strategy). We have now improved and added several analyses to the revised manuscript to corroborate our main results while avoiding sources of spurious correlation. These include:

1. In Experiment 3, we have increased the total participant count to n=35. In Figure 3P we show the relation between explicit strategy and ‘late’ implicit learning which are calculated on separate trials.

2. We now re-derive the competition model in a way that relates implicit learning to total adaptation as opposed to explicit strategy. This version avoids spurious correlations. We show the correlation between implicit learning and total adaptation at the individual participant-level in Experiment 3 (Figure 4H), Experiment 1 (Figure 4-Supplement 1B), Maresch et al., 2021 (Figure 4-Supplement 1A), and Tsay et al., 2021 (Figure 4-Supplement 1C).

3. Finally, we also use total adaptation to predict variations in implicit learning at the group-level in Experiment 1 (Figure 1-Supplement 2, Experiment 1 stepwise), Neville and Cressman, 2018 (Figure 1-Supplement 2, Neville and Cressman (2018)), and Tsay et al., 2021 (Figure 1-Supplement 2, Tsay et al., 2021).

Lastly, as requested, we also report the relationship between implicit learning and explicit strategy using the report measures (Figure 3-Supplement 2) collected in Experiment 2 (previously Experiment 1).

8c. Figure 6: To evaluate the saving effect across conditions and experiments, using the interaction effect of ANOVA shall be desired.

We appreciate this suggestion. We have now updated all relevant analyses in our revised manuscript. We used either a two-way ANOVA, two-way repeated measures ANOVA, or mixed-ANOVA depending on the experimental design. When interactions were statistically significant, we next measured simple main effects via one-way ANOVA. We outline this on Lines 1195-1207. All conclusions reached in the original manuscript remained the same using this updated statistical procedure. These include:

1. When measuring savings in Haith et al., we observed that the learning rate increased during the second exposure on high preparation time trials, but not on low preparation time trials (Figure 5B, right; two-way rm-ANOVA, prep. time by exposure number interaction, F(1,13)=5.29, p=0.039; significant interaction followed by one-way rm-ANOVA across Days 1 and 2: high preparation time with F(1,13)=6.53, p=0.024, ηp2=0.335; low preparation time with F(1,13)=1.11, p=0.312, ηp2=0.079). We corroborate this rate analysis by measuring early changes in reach angle (first 40 trials following rotation onset) across Days 1 and 2 (Figure 5C, left and middle). Only high preparation time trials exhibited a statistically significant increase in reach angles, consistent with savings (Figure 5C, right; two-way rm-ANOVA, preparation time by exposure interaction, F(1,13)=13.79, p=0.003; significant interaction followed by one-way rm-ANOVA across days: high preparation time with F(1,13)=11.84, p=0.004, ηp2=0.477; low preparation time with F(1,13)=0.029, p=0.867, ηp2=0.002).

2. When comparing implicit and explicit error sensitivities predicted by the competition model, the model predicted that both implicit system and explicit systems exhibited a statistically significant error sensitivity increase (Figure 5D, right; two-way rm-ANOVA, within-subject effect of exposure no., F(1,13)=10.14, p=0.007, ηp2=0.438; within-subject effect of learning process, F(1,13)=0.051, p=0.824, ηp2=0.004; exposure no. by learning process interaction, F(1,13)=1.24, p=0.285).

3. When comparing implicit and explicit error sensitivities predicted by the independence model, the model predicted that only the explicit system exhibited a statistically significant increase in error sensitivity (Figure 5E; two-way rm-ANOVA, learning process (implicit vs explicit) by exposure interaction, F(1,13)=7.016, p=0.02; significant interaction followed by one-way rm-ANOVA across exposures: explicit system F(1,13)=9.518, p=0.009, ηp2=0.423; implicit system with F(1,13)=2.328, p=0.151, ηp2=0.152).

4. When measuring savings under limited preparation time in Experiment 4, we still observed the opposite outcome to Haith et al., 2015. Notably, low preparation time learning rates increased by more than 80% in Experiment 4 (Figure 7C, top; mixed-ANOVA exposure number by experiment type interaction, F(1,22)=5.993, p=0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p=0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p=0.045, ηp2=0.377). Statistically significant increases in reach angle were detected immediately following rotation onset in Experiment 4 (Figure 7B, bottom), but not in our earlier data (Figure 7C, bottom; mixed-ANOVA exposure number by experiment interaction, F(1,22)=4.411, p=0.047; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p=0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p=0.008, ηp2=0.556).

5. Lastly, we updated our anterograde interference analysis in Figure 8. While both low preparation time and high preparation time trials exhibited decreases in learning rate which improved with the passage of time (Figure 8C; two-way ANOVA, main effect of time delay, F(1,50)=5.643, p=0.021, ηp2=0.101), these impairments were greatly exacerbated by limiting preparation time (Figure 8C; two-way ANOVA, main effect of preparation time, F(1,50)=11.747, p=0.001, ηp2=0.19).

8d. Figure 8: The hypothesis that they did use an explicit strategy could explain why the difference between the two groups rapidly vanishes. Also, it is unclear whether the data from Taylor and Ivry (2011) are in favor of one of the models as the separate and shared error models are not compared.

This is a good criticism, similar to that raised in Point 2. Please note that Figure 9 was not included to directly test the competition model, but rather to demonstrate its limitations. For example, in Mazzoni and Krakauer (2006) our simple target error model cannot reproduce the measured data; in the instructed group, implicit learning continued despite elimination of the primary target error. The competition model would predict no implicit learning in this case. Thus, in more complicated scenarios where multiple visual landmarks are used during adaptation, the learning rules must be more sophisticated than the simple competition theory. We feel that it is critical that this message is communicated to the reader.

While we agree that the difference between the two groups in Mazzoni and Krakauer (2006) diminishes rapidly, it did not vanish entirely. In Author response image 1 we show the mean aftereffect on the last 10 trials of washout (trials 71-80). Note that a 3° difference (t-test over last 10 trials, t(9)=3.24, p=0.01) still persisted on these trials, consistent with a lingering difference in implicit learning across the two groups. There are at least 3 reasons why this difference may have quickly diminished: (1) implicit learning decays in both groups on each trial, (2) there is implicit learning in the opposite direction in response to the washout errors, and (3) the explicit system likely comes online to mitigate the ‘negative’ washout errors.

Author response image 1. Measuring the aftereffect on the last 10 washout trials in Mazzoni and Krakauer, 2006.


Nevertheless, while it seems likely that the washout period reveals differences in implicit learning across the two groups, we agree that we cannot know this with certainty because participants were not told to stop aiming. The Mazzoni and Krakauer analysis, however, still shows limitations in the competition model and provides initial ideas about how learning models may be extended in the future. To reflect this point, this analysis now appears in Part 4 of the revised paper, which is titled: “Limitations of the competition theory”. In addition, we have revised the text to appropriately note the limitation in the Mazzoni and Krakauer analysis, reproduced below (Lines 752-762):

“It is important, however, to note a limitation in these analyses. Our earlier study did not employ the standard conditions used to measure implicit aftereffects: i.e., instructing participants to aim directly at the target, and also removing any visual feedback. Thus, the proposed dual-error model relies on the assumption that differences in washout were primarily related to the implicit system. These assumptions need to be tested more completely in future experiments.

In summary, the conditions tested by Mazzoni and Krakauer show that the simplistic idea that adaptation is driven by only one target error, or only one SPE, cannot be true in general54. We propose a new hypothesis that when people move a cursor to one visual target, while aiming at another visual target, each target may partly contribute to implicit learning. When these two error sources conflict with one another, the implicit learning system may exhibit an attenuation in total adaptation. Thus, implicit learning modules may compete with one another when presented with opposing errors.”

Finally, note that the Taylor and Ivry (2011) data are included in Part 4 of the revised paper. Similar to our Mazzoni and Krakauer analysis, our goal for this dataset is to show that the competition theory is not a universal model that can be applied across all scenarios. In these data, even when no secondary aiming target is provided, implicit learning is evident in the aiming group where target error is zero. Thus, it would appear that, in the no aiming target group, implicit learning was driven by an SPE. However, an SPE-only model cannot explain why adaptation is 3 times larger when the second target is provided. Thus, it seems that in this case neither a target error nor an SPE alone drives adaptation. These data raise several questions. For instance, if SPE learning occurs in Taylor and Ivry even without an aiming target, why did we not detect it in our primary experiments? We think it is critical that the reader note these questions to inform new experiments in the future.

In our revised manuscript, we have expanded on the implications of both the Mazzoni and Krakauer and the Taylor and Ivry studies. These data suggest that both target errors and SPEs play a role in implicit learning, but that experimental contexts/conditions alter their contributions. One possibility is that SPE learning is highly sensitive to context. In cases where no aiming target is ever provided (as in all studies in our primary analyses), it may be minimal. Providing aiming targets in the past may invoke an implicit memory (like memory-guided saccade adaptation) that can drive ‘weaker’ SPE learning. Removing the aiming targets partway through the movement may provide a moderate memory that drives more SPE learning. And seeing the aiming target at all times may create the strongest memory, driving the most SPE learning. Another possibility altogether is that the aftereffects noted in the no aiming target group in Taylor and Ivry reflect a reward-based and use-dependent memory: i.e., aiming at another location and receiving rewards may bias future reaching movements in that direction even without any visual target errors to drive learning. The relevant passage on these matters is provided below (Lines 1063-1090):

“…the nature of aim-cursor errors remains uncertain. For example, while this error source generates strong adaptation when the aim location coincides with a physical target (Figure 10H, instruction with target), implicit learning is observed even in the absence of a physical aiming landmark9 (Figure 10H, instruction without target), albeit to a smaller degree. This latter condition may implicate an SPE learning that does not require an aiming target. Thus, it may be that the aim-cursor error in Mazzoni and Krakauer is actually an SPE that is enhanced by the presence of a physical target. In this view, implicit learning is driven by a target error module and an SPE module that is enhanced by a visual target error4,11,86.

These various implicit learning modules are likely strongly dependent on experimental contexts, in ways we do not yet understand. For example, Taylor and Ivry (2011) would suggest that all experiments produce some implicit SPE learning, but less so in paradigms with no aiming targets. Yet, the competition equation accurately matched single-target behavior in Figures 1-9 without an SPE learning module. It is not clear why SPE learning would be absent in these experiments. One idea may be that the aftereffect observed by Taylor and Ivry (2011) in the absence of an aiming target, was actually a lingering associative motor memory that was reinforced by successfully hitting the target during the rotation period. Indeed, such a model-free learning mechanism87 should be included in a more complete implicit learning model. It is currently overlooked in error-based systems such as the competition and independence equations.

Another idea is that some SPE learning did occur in the no aiming target experiments we analyzed in Figures 1-9, but was overshadowed by the implicit system’s response to target error. A third possibility is that the SPE learning observed by Taylor and Ivry (2011) was contextually enhanced by participants implicitly recalling the aiming landmark locations (akin to memory-guided saccade adaptation) provided during the baseline period. This possibility would suggest SPEs vary along a complex spectrum: (1) never providing an aiming target causes little or no SPE learning (as in our experiments), (2) providing an aiming target during past training allows implicit recall that leads to small SPE learning, (3) providing an aiming target that disappears during the movement promotes better recall and leads to medium-sized SPE learning (i.e., the disappearing target condition in Taylor and Ivry), and (4) an aiming target that always remains visible leads to the largest SPE learning levels. This context-dependent SPE hypothesis may be related to recent work suggesting that both target errors and SPEs drive implicit learning, but implicit SPE learning is altered by distraction54.”

Reviewer #1:

The present study by Albert and colleagues investigates how implicit learning and explicit learning interact during motor adaptation. This topic is under heated debate in the sensorimotor adaptation area in recent years, spurring diverse behavioral findings that warrant a unified computational model. This study aims to fulfill that goal. It proposes that both implicit and explicit adaptation processes are based on a common error source (i.e., target error). This competition leads to different behavioral patterns in diverse task paradigms.

I find this study a timely and essential work for the field. It makes two novel contributions. First, when dissecting the contribution of explicit and implicit learning, the current study highlights the importance of distinguishing apparent learning size and covert changes in learning parameters. For example, the overt size of implicit learning could decrease while the related learning parameters (e.g., implicit error sensitivity) remain the same or even increase. Many researchers have long overlooked this dissociation. Second, the current study also emphasizes the role of target error in typical perturbation paradigms. This is an excellent wake-up call since the predominant view now is that sensory prediction error is for implicit learning and target error for explicit learning.

Given that the paper aims to use a unified model to explain different phenomena, it mixes results from previous work and four new experiments. The paper's presentation can be improved by reducing the use of jargon and making straightforward claims about what the new experiments produce and what the previous studies produce. I will give a list of things to work on later (minor concerns).

Furthermore, my major concern is whether the properties of the implicit learning process (e.g., error sensitivity) can be shown to change without making model assumptions.

My major concern is whether we can provide concrete evidence that the implicit learning properties (error sensitivity and retention) can be modified. Even though the authors claim that the error sensitivity of implicit learning can be changed, and the change subsequently leads to savings and interference (e.g., Figures 6 and 7), I find that the evidence is still indirect, contingent on model assumptions. Here is a list of results that concern error sensitivity changes.

1.1. Figures 1 and 2: Instruction is assumed to leave implicit learning parameters unchanged.

Yes, we agree that this is an assumption, but feel that it is reasonable in this instance. To clarify, Figure 1 in the original manuscript concerned our Neville and Cressman (2018) analysis and Figure 2 concerned our Saijo and Gomi (2010) analysis. Instructions were only provided in the Neville and Cressman study, and so this concern seems more relevant for Figure 1 (Figures 2A-C in revised manuscript). Furthermore, our Neville and Cressman analysis explores (1) changes in rotation size, and (2) changes in strategy levels. Only the latter condition is relevant to this criticism.

In the Neville and Cressman study, instructions were given to some participants in a ‘strategy’ group. In this group, participants were briefed about the rotation with an image that depicted how feedback would be rotated, and how they could compensate for it.

In sum, participants were given instructions prior to the rotation. Once the rotation turned on, all subjects experienced the same trial structure, the same feedback, and no additional instructions. In our view, this intervention simply influences participants to use larger explicit strategies than they might have otherwise, as is evident in their learning curves.

While we are not sure why changes in the participant’s aim direction would result in a change in implicit learning properties, the most likely mechanism in our view would be changes in implicit error sensitivity. Changes in implicit error sensitivity could potentially impact the competition model, where the implicit system’s learning gain is given by π = b_i/(1 − a_i + b_i). In other words, we might expect individuals with higher error sensitivity to have a larger implicit learning gain in the competition model. This is potentially relevant to the strategy conditions in Neville and Cressman; the no-instruction group learned much more slowly and thus experienced larger errors throughout adaptation. These larger errors would result in a smaller error sensitivity, and therefore, a smaller gain in the competition model. This, however, would be puzzling, because the no-strategy group exhibited more implicit learning, not less, than the strategy group.

Nevertheless, whether implicit error sensitivity varies due to error size, or some other mechanism, has little impact on the model. To see this, recall that the ‘tunable’ component of the competition model is the implicit system’s learning gain, which depends on the retention factor and error sensitivity according to π = b_i/(1 − a_i + b_i). Fortunately, this implicit learning gain responds weakly to changes in error sensitivity (which appears in both the numerator and denominator). For example, let us suppose that participants in the strategy group exhibited an implicit error sensitivity of 0.3, but in the no-strategy group, implicit error sensitivity was only 0.2. For an implicit retention factor of 0.9565 (see Methods in revised paper), the no-strategy learning gain would be 0.821 and the strategy learning gain would be 0.873. Thus, even though implicit error sensitivity was 50% larger in the strategy participants, the competitive implicit learning gain would change only 6.3%. For an even more extreme case where implicit error sensitivity was doubled (0.4) in the strategy group, this would still only lead to a 9.8% change in the competitive implicit learning gain.
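For readers who wish to reproduce this arithmetic, here is a minimal sketch using the retention factor and error sensitivities quoted above (the function name `implicit_gain` is ours, introduced for illustration):

```python
# Sketch of the competitive implicit learning gain, pi = b_i / (1 - a_i + b_i),
# reproducing the numerical example in the text. a_i is the implicit retention
# factor and b_i the implicit error sensitivity.

def implicit_gain(a_i: float, b_i: float) -> float:
    """Steady-state implicit learning gain under the competition model."""
    return b_i / (1.0 - a_i + b_i)

a_i = 0.9565                      # implicit retention factor (see Methods)
g_low = implicit_gain(a_i, 0.2)   # no-strategy group: b_i = 0.2 -> ~0.821
g_mid = implicit_gain(a_i, 0.3)   # strategy group: b_i = 0.3 -> ~0.873
g_high = implicit_gain(a_i, 0.4)  # doubled sensitivity: b_i = 0.4 -> ~0.902

# A 50% increase in b_i changes the gain by only ~6.3%, and a 100%
# increase by only ~9.8%, because b_i appears in numerator and denominator.
print(round(g_low, 3), round(g_mid, 3), round(g_high, 3))  # -> 0.821 0.873 0.902
```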

In sum, the competition model’s learning gain, which governs implicit behavior, is very robust to changes in implicit error sensitivity. Therefore, any changes in implicit error sensitivity that may have occurred across the instructed and uninstructed groups have little to no impact on model behavior. In other words, even moderately sized differences in error sensitivity will have little to no effect on the competition model predictions in Figures 1 and 2 (in revised paper). We explain this on Lines 1946-1973 in the revised paper.

Despite this, we still felt it was important to check whether implicit learning properties may have differed across the strategy and no-strategy conditions, or in other words, high strategy use and low strategy use conditions. To do this, we examined implicit and explicit learning in Experiments 1-3 and measured whether the implicit-explicit relationship differed over low and high strategy domains. The idea here is that if implicit learning properties change with strategy, we should detect structural changes in implicit behavior as we move from low strategy use to high strategy use. To do this, we divided our data in Experiments 1-3 into a low strategy group (subjects whose strategy was less than the median) and a high strategy group (subjects whose strategy was greater than the median). We then linearly regressed implicit learning onto strategy in the low strategy and high strategy groups separately. The domains for each analysis were separated by the dashed lines shown in Author response image 2.

Author response image 2. Here we split the data in Experiments 1-3 into low and high strategy groups (based on the median strategy).


We fit a regression to the ‘low-strategy’ group (blue line) and the ‘high-strategy’ group (magenta line). We compared the slope and intercept to see if there was a systematic difference in the regression’s gain and bias for low and high strategy cases.

Across the 6 conditions above, strategies were 135% larger in the high strategy group than in the low strategy group. Implicit behavior, however, appeared insensitive to this large difference in strategy; overall, the slope and bias in the 6 implicit-explicit regressions differed by only 9.3% and 5.1%, respectively. In sum, even though strategy more than doubled in the high strategy group, the gain and bias in the implicit-explicit relationship changed by less than 10%. This control analysis strongly suggests that the implicit system learns very similarly despite large changes in strategy.
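To make the median-split procedure concrete, here is a minimal sketch using synthetic, made-up data (the real analysis used the measured implicit and explicit learning from Experiments 1-3); the slope of roughly −0.8 and the noise level are illustrative assumptions only:

```python
# Hypothetical sketch of the median-split regression described above, on
# synthetic data generated from a competition-like linear relationship.
import numpy as np

rng = np.random.default_rng(0)
explicit = rng.uniform(0, 25, size=40)                        # strategy (deg), synthetic
implicit = 20 - 0.8 * explicit + rng.normal(0, 1.5, size=40)  # illustrative relationship

median = np.median(explicit)
low, high = explicit <= median, explicit > median

# Fit implicit = slope * explicit + intercept separately in each half
slope_lo, int_lo = np.polyfit(explicit[low], implicit[low], 1)
slope_hi, int_hi = np.polyfit(explicit[high], implicit[high], 1)

# If implicit learning properties are unchanged by strategy, the slope and
# intercept should be similar across the low- and high-strategy halves.
print(slope_lo, int_lo, slope_hi, int_hi)
```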

Summary

We appreciate the reviewer’s concern that changes in instruction could alter implicit learning. We assume in our modeling that the implicit learning gain remains the same across instruction conditions. However, it is possible that implicit error sensitivity could differ across groups. Fortunately, the learning gain in the competition model is very insensitive to changes in implicit error sensitivity: even doubling error sensitivity increases the competition model’s learning gain by less than 10%. Indeed, the relationship between implicit and explicit learning exhibited little to no difference across participants with high and low strategy use. Given these observations, we feel that our assumption in Neville and Cressman, that the implicit learning gain is the same across strategy conditions, is reasonable. Indeed, the data in Figures 2B and C (revised paper) well match the competition model’s prediction even with a fixed learning gain. We should also note that overall, our paper examines many ways in which the implicit system exhibits competitive behavior that do not rely on changes in instruction: e.g., (1) steady-state responses to changes in rotation size (revised Figure 1, see Point 6B) in Neville and Cressman, Experiment 1, and Tsay et al., (2) changes in implicit learning with gradual vs. abrupt rotation onset (Figures 2D-G), (3) increases in implicit learning due to limits on preparation time (Figure 3O), and (4) many pairwise correlations between implicit and explicit learning and total adaptation (Figure 4 as well as Figure 4-Supplement 1).

1.2. Figure 4: It appears that the implicit error sensitivity increases during relearning. However, Figure 4D cannot be taken as supporting evidence. How the model is constructed (both implicit and explicit learning are based on target error) and what assumptions are made (low RT = implicit learning, high RT = explicit + implicit) determine that implicit learning's error sensitivity must increase. In other words, the change in error sensitivity resulted from model assumptions; whether implicit error sensitivity by itself changes cannot be independently verified by the data.

The reviewer is correct. Please refer to Point 7 for a thorough response to these comments. To briefly summarize, it was not our intention to imply that the competition model results ‘proved’ that implicit error sensitivity increased in the Haith et al., 2015 experiment. Instead, we had intended to show that the competition model presents a hypothetical situation: that implicit error sensitivity can increase without any change in the implicit learning timecourse. This is significant because it means that additional experiments are needed to verify the conclusion that we reached in Haith et al., 2015: that savings is due to explicit strategy alone.

In the revised paper, we have made several changes to make these points clearer. Again, these are more thoroughly described in Point 7.

1. First, we now divide our results into 4 sections. In Part 1, we provide evidence for the competition theory. In Part 2, we compare the competition theory to alternative models. In Part 3, we discuss savings and interference. In Part 4, we discuss limitations in the competition model. It is our hope that by dividing our results into clearer sections, we better delineate their different purposes.

2. Second, we now show model parameters for the independence model in Figure 6 (previously Figure 4) in addition to the competition model. In this way, we hope to illustrate that neither model is the ‘true’ model: rather, they are models with contrasting predictions.

3. Third, we have revised our text to reflect the hypothetical nature of these results. For instance, we now summarize our Haith et al., 2015 analysis with the following passage: “In summary, when we reanalyzed our earlier data, the competition and independence theories suggested that our data could be explained by two contrasting hypothetical outcomes. If we assumed that implicit and explicit systems were independent, then only explicit learning contributed to savings, as we concluded in our original report. However, if we assumed that the implicit and explicit systems learned from the same error (competition model), then both implicit and explicit systems contributed to savings. Which interpretation is more parsimonious with measured behavior?”

We hope that these additions and revisions to the manuscript better illustrate that our analysis is not meant to support one model over the other, but rather to illustrate their contrasting implications about the implicit system.

1.3. Figure 6: With limited PT, savings still exists with an increased implicit error sensitivity. Again, this result also relies on the assumption that limited PT leads to implicit-only learning. Only with this assumption can the error sensitivity be calculated as such.

We completely agree. Please refer to Points 3 and 7 (in our response to the editorial summary) for a more thorough response to these comments. To briefly summarize, in the revised paper, we have added 2 additional experiments to confirm that the limited preparation time conditions we use in our studies accurately isolate implicit learning. First, we directly measured explicit strategy under limited preparation time conditions. In our new Limit PT group in Experiment 3, participants adapted under limited preparation time and were told to stop aiming at the end of the experiment. These new data are shown in Figures 3K-N.

When no reaction time limit was imposed (No PT Limit), re-aiming totaled approximately 11.86° (Figure 3N, E3, black), and did not differ statistically across Experiments 2 and 3 (t(42)=0.50, p=0.621). Recall that Experiment 2 tested participants with a robotic arm and Experiment 3 tested participants in a similar laptop-based paradigm.

As in earlier reports, limiting reaction time dramatically suppressed explicit strategy, yielding only 2.09° of re-aiming (Figure 3N, E3, red).

Therefore, our new data suggest that limiting preparation time is highly effective at suppressing strategy use. However, participants in the limited preparation time condition still exhibited a small decrease (2.09°) in reach angle when we instructed them to aim their hand straight at the target. This ‘cached’ strategy, while small, may have contributed to the 8° reach angle change measured early during the second rotation in our savings experiment (Experiment 4, Figure 8C in revised paper).

For this reason, we considered whether this 2° change in reach angle was indeed due to explicit strategy or instead caused by time-based decay in implicit learning over the 30 sec instruction period. To test this possibility, we tested another limited preparation time group (n=12, Figure 8-Supplement 1A, decay-only, black). This time, participants were instructed that the experiment’s disturbance was still on, and that they should continue to move the ‘imagined’ cursor to the target during the no-feedback period. Even though subjects were instructed to continue using their strategy, their reach angles decreased by approximately 2.1° (Figure 8-Supplement 1, black decay-only). Indeed, we detected no statistically significant difference between the change in reach angle in this decay-only condition and the Limit PT group in Experiment 3 (Figure 8-Supplement 1B; two-sample t-test, t(31)=0.016, p=0.987).

This control experiment suggested that the residual ‘explicit strategy’ we measured in the Limit PT condition was caused by time-dependent decay in implicit learning. Thus, our method for limiting preparation time does prevent the participant from expressing explicit strategies.

With regards to savings, the key prediction is that while no savings was observed on limited preparation time trials in Haith et al., consistently limiting preparation time in Experiment 4 (previously Experiment 3) may allow the implicit system to exhibit savings. In the revised manuscript we have improved our statistical methodology for comparing our new experiment with Haith et al. Using a mixed-ANOVA we determined that low preparation time learning rates increased by more than 80% in Experiment 4 (Figure 8C top; mixed-ANOVA exposure number by experiment type interaction, F(1,22)=5.993, p=0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p=0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p=0.045, ηp2=0.377). Statistically significant increases in reach angle were also detected immediately following rotation onset in Experiment 4 (Figure 8C, bottom), but not in Haith et al. (Figure 8C, bottom; mixed-ANOVA exposure number by experiment interaction, F(1,22)=4.411, p=0.047; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p=0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p=0.008, ηp2=0.556).

Summary

In our revised manuscript, we add 2 experiments to corroborate our assumption that limited preparation time trials reveal implicit learning in our tasks. Data in the Limit PT condition in Experiment 3 suggest that explicit strategies are suppressed to approximately 2° by our preparation time limit (compared to about 12° under normal conditions). Data in the decay-only group suggest that this 2° change in reach angle was not due to a cached explicit strategy but rather to time-dependent decay in implicit learning. Together, these conditions demonstrate that limiting reaction time in our experiments almost completely prevents the caching of explicit strategies; our limited preparation time measures do indeed reflect implicit learning. Therefore, we can study how the implicit system changes by limiting preparation time and measuring the learning curve across two consecutive exposures to the same rotation. Our new statistical analysis suggested that these limited reaction time conditions produced savings that was not observed in Haith et al. In total, the data show that the implicit system does exhibit savings in Experiment 4 (previously Experiment 3), but its expression can be impeded by changes in the explicit system.

1.4. Figure 7: with limited PT anterograde interference persists, appearing to show a reduced implicit error sensitivity. Again, this is based on the assumption that the limited PT condition leads to implicit-only learning.

We again appreciate this concern. As we outline in Points 3, 7, and 1-3, we have collected two new datasets which show that limiting preparation time prevents explicit strategy use. This provides critical support that the sustained impairments in learning measured in Experiment 5 (previously Experiment 4) are dependent on an implicit system.

While on this topic, we should also note that we have made improvements to our statistical analyses of these data (as suggested by the reviewers). We now directly test how preparation time and time passage alter anterograde interference using a two-way ANOVA. Again, consistent with an impairment in implicit learning, we observed that anterograde interference worsened by limiting preparation time (Figure 9C; two-way ANOVA, main effect of preparation time, F(1,50)=11.747, p=0.001, ηp2=0.19).

1.5. Across all the manipulations, I still cannot find an independent piece of evidence that task manipulations can modify the learning parameters of implicit learning without resorting to model assumptions. I would like to know the authors' take on this critical issue. Is it related to how error sensitivity is defined?

We appreciate these questions. Please refer to Points 3, 7, 1-2, 1-3, and 1-4 for a detailed response to these concerns. Let us summarize and put these points into perspective.

Our paper has several components, which were not carefully delineated in our original manuscript. In our revised manuscript we now divide our Results into 4 separate sections. The first 2 sections show direct evidence for the competition theory and compare it to alternate models. The fourth section shows limitations of the competition model. The concerns noted by the reviewer above are addressed in Part 3.

Part 3 begins with our analysis of the Haith et al. (2015) dataset. In our revised manuscript we fit both the competition and independence models to these data. We assume that limited reaction time trials reveal implicit learning. Here we are not trying to claim that either model is ‘correct’; rather, we demonstrate that each model offers a different interpretation of the same experimental data. Namely, the competition model suggests that savings is due to an increase in both implicit and explicit error sensitivity. However, the independence model suggests that only the explicit system contributes to savings. The reviewer is right that these results depend on the structure of each model. In the independence model, the implicit system is driven only by the rotation. Because implicit learning (i.e., the limited preparation time angle) is the same across Exposures 1 and 2, this model is ‘forced’ to predict that implicit error sensitivity is unchanged (because the implicit system learns the same amount in each exposure). However, in the competition model, the implicit system is driven not solely by the rotation, but also responds to explicit strategy. Because aiming increases during the second exposure, the competition model holds that the implicit system has a smaller driving force. Thus, this model is ‘forced’ to predict that implicit error sensitivity must increase during the second exposure; the smaller driving force must be counteracted by a larger error sensitivity in order to achieve the same implicit learning level.
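The 'forced' prediction of the competition model can be illustrated with a minimal numerical sketch (this is not the fitted model; the rotation size, strategy levels, and parameter values below are illustrative assumptions):

```python
# One-trial sketch of the competition model's implicit update. Symbols:
# r = rotation, x_i/x_e = implicit/explicit state, a_i = retention factor,
# b_i = implicit error sensitivity. All numbers are illustrative.

def implicit_step(r, x_e, b_i, x_i=0.0, a_i=0.9565):
    """One trial of the competition model's implicit update."""
    e = r - x_i - x_e              # shared target error, reduced by the strategy
    return a_i * x_i + b_i * e     # state-space update of the implicit state

r = 30.0
# Exposure 1: small strategy, baseline sensitivity
step1 = implicit_step(r, x_e=5.0, b_i=0.2)        # ~5.0 deg of implicit learning
# Exposure 2: strategy tripled, same sensitivity -> the implicit step shrinks
step2_same = implicit_step(r, x_e=15.0, b_i=0.2)  # ~3.0 deg
# Exposure 2: raising b_i to 0.2 * 25/15 restores the same implicit step, so an
# unchanged implicit time course can hide a ~67% increase in error sensitivity
step2_boost = implicit_step(r, x_e=15.0, b_i=0.2 * 25 / 15)  # ~5.0 deg again
print(step1, step2_same, step2_boost)
```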

In sum, this initial section does not show that the competition model, or the independence model is true. Rather, it shows that the model you choose has vastly different implications about the data. We hope that the revised text on Lines 510-560 better explains this analysis and the way it should be interpreted.

Now that we know that explicit strategies could hypothetically impede our ability to measure changes in implicit learning, how can we detect these changes? As in our original manuscript, Part 3 (Figure 7 in revised paper) uses simulations to demonstrate that implicit learning could be better examined after suppressing explicit strategies. In the rest of this section, we do this in two scenarios, savings and interference, and compare our results to past data (i.e., Haith et al., 2015 and Lerner et al., 2020).

In Experiment 4, we limit preparation time on every trial to suppress explicit strategy. As discussed in Points 3, 7, and 1-2, we have now confirmed that our limited preparation time methodology prevents explicit strategy use. This gives us the ability to detect changes in implicit learning using limited preparation time trials. In Experiment 4, we measure changes in limited preparation time learning in two ways: measuring learning rate (via an exponential curve) and measuring the average change in reach angle immediately following rotation onset. As suggested by the reviewers, we now show the individual participant results in revisions to Figure 8 (previously Figure 6).

As outlined in Point 1-3 above, we updated our statistical approach to detect whether limited preparation time learning exhibited a change in learning rate in Haith et al., as well as Experiment 4 (our new savings Experiment, previously Experiment 3). The mixed-ANOVA showed savings on limited preparation time trials both in our rate measure (Figure 8C top; mixed-ANOVA exposure no. by experiment type interaction, F(1,22)=5.993, p=0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p=0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p=0.045, ηp2=0.377) and in the average reach angle following rotation onset, but not in Haith et al., 2015 (Figure 8C, bottom; mixed-ANOVA exposure no. by experiment interaction, F(1,22)=4.411, p=0.047; interaction followed by 1-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p=0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p=0.008, ηp2=0.556).

Let us add up these pieces. Savings occurred on limited preparation time trials in Experiment 4, but not in Haith et al. Limiting preparation time prevents explicit strategy use. Thus, the implicit system must have caused the savings in Experiment 4. The competition model suggests this savings was not detected in Haith et al. due to the large concomitant changes in the explicit system’s response to the rotation. Please note that these results do not depend on how error sensitivity is defined. When analyzing savings in Figure 8, we are not using a model-based approach. Rather, we are simply measuring learning rate via averages (Figure 8C, bottom) and an exponential curve (Figure 8C, top).

Finally, similar arguments hold in our anterograde interference experiment (Experiment 5, previously Experiment 4). In this experiment we again limited preparation time. We now know this isolates the implicit system given our additional control experiments. We found that our paradigm produced a strong learning impairment that did not resolve even after 24 hours. In previous work, we found that interference had lifted after 24 hours. Thus, the stronger interference observed in Experiment 5 suggests that explicit strategies can mitigate the lingering deficits that occur in the implicit system. In our revised paper, we have now added participant results to Figure 9 (previously Figure 7). We have also updated our statistics. Consistent with our original conclusions, we found that both low preparation time (Experiment 5) and high preparation time (earlier Lerner et al., 2020 study) trials exhibited decreases in learning rate which improved with the passage of time (Figure 9C; two-way ANOVA, main effect of time delay, F(1,50)=5.643, p=0.021, ηp2=0.101). These impairments were greatly exacerbated by limiting preparation time (Figure 9C; two-way ANOVA, main effect of preparation time, F(1,50)=11.747, p=0.001, ηp2=0.19). Again, these data did not rely on using a model to estimate error sensitivity; our interference metrics were entirely empirical.

Summary

We think the reviewer’s line of inquiry is very important. We have made several additions to the revised manuscript to address these and related concerns.

1. We have added two new experiments (Limit PT and decay-only groups in Experiment 3). Here we show that limiting preparation time isolates implicit learning. We more clearly detail for the reader why it is important to test the assumption that limiting preparation time suppresses explicit strategy.

2. We better delineate our various result sections and discuss which support the competition model and which are hypothetical predictions.

3. We now illustrate independence model results in Figure 6, in our Haith et al. analysis.

4. We now provide individual participant data in Figures 6, 8, and 9 to increase the transparency of our various Results sections.

5. We have updated our statistical methods, and now use two-way ANOVAs and mixed ANOVAs to better detect savings and interference in our studies.

We hope that these new experiments and analyses lend further credence to our hypothesis that implicit error sensitivity can change over time. We do not believe our experiments offer a final conclusion on this controversial topic. Rather, we hope to provide enough preliminary evidence to motivate future experiments: especially to understand why our results contrast with implicit behavior in the invariant error clamp paradigm (Avraham et al., 2021), which we now discuss on Lines 879-890 in the revised paper.

Reviewer #2:

The paper provides a thorough computational test of the idea that implicit adaptation to rotation of visual feedback of movement is driven by the same errors that lead to explicit adaptation (or strategic re-aiming movement direction), and that these implicit and explicit adaptive processes are therefore in competition with one another. The results are incompatible with previous suggestions that explicit adaptation is driven by task errors (i.e. discrepancy between cursor direction and target direction), and implicit adaptation is driven by sensory prediction errors (i.e. discrepancy between cursor direction and intended movement direction). The paper begins by describing these alternative ideas via state-space models of trial by trial adaptation, and then tests the models by fitting them both to published data and to new data collected to test model predictions.

The competitive model accounts for the balance of implicit-explicit adaptations observed previously when participants were instructed how to counter the visual rotation to augment explicit learning across multiple visual rotation sizes (20, 40 and 60 degrees; Neville and Cressman, 2018). It also fits previous data in which the rotation was introduced gradually to augment implicit learning (Saijo and Gomi, 2010). Conversely, a model based on independent adaptation to target errors and sensory prediction errors could not reproduce the previous results. The competitive model also accounts for individual participant differences in implicit-explicit adaptation. Previous work showed that people who increased their movement preparation time when faced with rotated feedback had smaller implicit reach aftereffects, suggesting that greater explicit adaptation led to smaller implicit learning (Fernandez-Ruiz et al., 2011). Here the authors replicated this general effect, but also measured both implicit and explicit learning under experimental manipulation of preparation time. Their model predicted the observed inverse relationship between implicit and explicit adaptation across participants.

The authors then turned their attention to the issue of persistent sensorimotor memory – both in terms of savings (or benefit from previous exposure to a given visual rotation) and interference (or impaired performance due to exposure to an opposite visual rotation). This is a topic that has a long history of controversy, and there are conflicting reports about whether or not savings and interference rely predominantly on implicit or explicit learning. The competition model was able to account for some of these unresolved issues, by revealing the potential for a paradoxical situation in which the error sensitivity of an implicit learning process could increase without observable increase in implicit learning rate (or even a reduction in implicit learning rate). The authors showed that this paradox was likely at play in a previous paper that concluded that saving is entirely explicit (Haith et al., 2015), and then ran a new experiment with reduced preparation time to confirm some recent reports that implicit learning can result in savings. They used a similar approach to show that long-lasting interference can be induced for implicit learning.

Finally, the authors considered data that have long been cited to provide evidence that implicit adaptation is obligatory and driven by sensory prediction error (Mazzoni and Krakauer, 2006). In this previous paper, participants were instructed to aim to a secondary target when the visual feedback was rotated so that the cursor error towards the primary target would immediately be cancelled. Surprisingly, the participants' reach directions drifted even further away from the primary target over time – presumably in an attempt to correct the discrepancy between their intended movement direction and the observed movement direction. However, a competition model involving simultaneous adaptation to target errors from the primary target and opposing target errors from the secondary target offers an alternative explanation. The current authors show that such a model can capture the implicit "drift" in reach direction, suggesting that two competing target errors rather than a sensory prediction error can account for the data. This conclusion is consistent with more recent work by Taylor and Ivry (2011), which showed that reach directions did not drift in the absence of a second target, when participants were coached to immediately cancel rotation errors in advance. The reason for the small implicit aftereffect reported by Taylor and Ivry (2011) is open to interpretation. The authors of the current paper suggest that adaptation to sensory prediction errors could underlie the effect, but an alternative possibility is that there is an adaptive mechanism based on reinforcement of successful action.

Overall, this paper provides important new insights into sensorimotor learning. Although there are some conceptual issues that require further tightening – including around the issue of error sensitivity versus observed adaptation, and the issue of whether or not there is evidence that sensory prediction errors drive visuomotor rotation – I think that the paper will be highly influential in the field of motor control.

2.1 Line 19: I think it is possible that more than 2 adaptive processes contribute to sensorimotor adaptation – perhaps add "at least".

Agreed. The text now reads: “Sensorimotor learning relies on at least two parallel systems…”

2.2. Lines 24-27: This section does not refer to specific context or evidence and is consequently obtuse to me.

Agreed. We have amended our abstract. The relevant passage now explains that by ‘context’ we mean the visual targets presented to the participant, and reads: “This shared error leads to competition such that an increase in the explicit system’s adaptive response siphons away resources that are needed for adaptation of the implicit system, thus reducing its learning. As a result, asymptotic learning in the implicit system varies with experimental conditions, and strategies can mask changes in implicit error sensitivity. However, adaptation behavior becomes more complex when participants are presented with multiple visual landmarks, a situation which introduces learning from sensory prediction errors in addition to target errors.”

2.3. Line 52: Again – perhaps add "at least".

Agreed. The text now reads: “Multiple lines of evidence suggest that the brain engages at least two…”

2.4. Line 68: I am not really sure what you are getting at by "implicit learning properties" here. Can you clarify?

Agreed. We have added to this passage: “…directly changing implicit learning properties (e.g., its memory retention or sensitivity to error)”.

2.5. Line 203: I think the model assumption that there was zero explicit aiming during washout is highly questionable here. The Morehead study showed early washout errors of less than 10 degrees, whereas errors were ~ 20 degrees in the abrupt and ~35 degrees in the gradual group for Saijo and Gomi. I think it highly likely that participants re-aimed in the opposite direction during washout in this study – what effect would this have on the model conclusions?

We agree; it is not only possible but probable that participants re-aimed in the opposite direction during washout. The data and model hint at this, as shown in Author response image 3. For example, note the gradual group's washout period (black arrow in Author response image 3B): here, washout proceeds at a slightly faster rate than the model fit. This could be due to explicit re-aiming in the opposite direction, which is not accounted for in the model.

Author response image 3. Model fits to Saijo and Gomi (2010).


Nevertheless, we do not believe this re-aiming alters our primary conclusions:

1. Firstly, aftereffects remain large across all washout cycles measured (see Author response image 3C), which, to us, indicates that there is a strong lingering difference in the implicit aftereffects between the abrupt and gradual groups.

2. Secondly, suppose strategies rapidly switched direction during the washout period. Because the gradual participants experienced a larger error during washout, they would adopt a larger strategy to counter the error, relative to the abrupt group. This would lessen the differences between the abrupt and gradual groups, which exceeded 15° on the first washout cycle. In other words, this would mean that the ‘true’ difference in implicit learning is likely larger between the two groups than the washout aftereffects would suggest (which would only further support the hypothesis that the gradual condition led to increased implicit learning).

This being said, we do agree with the overall sentiment that our Saijo and Gomi (2010) analysis is limited. Because participants were not instructed to stop aiming, we cannot know with certainty that the reach angles during the washout period were solely due to implicit learning. We have moved this analysis to an appendix (Appendix 2) in our revised manuscript.

In addition, we have added new data to our revised paper to investigate abrupt vs. gradual learning (Experiment 1), in which implicit learning is measured more directly via no-aiming exclusion trials. We discuss these points in greater detail in our response to Point 2, and reproduce parts of that response below for convenience.

Adapted from Point 2 response to editorial comment

In our revised manuscript, we have collected new data to test the competition model’s prediction. These new data use conditions similar to those studied by Saijo and Gomi. In Experiment 1 (new data in the paper), participants were exposed to a 60° rotation, either abruptly (n=36) or in a stepwise manner (n=37) in which the perturbation magnitude increased by 15° across 4 distinct learning blocks (Figure 2D). Unlike in Saijo and Gomi, implicit learning was measured via exclusion during each learning period by instructing participants to aim directly towards the target. As we hypothesized, in Figure 1J we now demonstrate that stepwise rotation onset (which yields smaller target errors) muted the explicit response to the rotation, compared to abrupt learning. The competition model predicts that decreases in explicit strategy should facilitate greater implicit adaptation. To test this prediction, we compared implicit learning across the gradual and abrupt groups during the fourth learning block, where both groups experienced the full 60° rotation (Figure 2E).

2.6. Line 382: It would be helpful to explicitly define what you mean by learning, adaptation, error sensitivity, true increases and true decreases in this discussion – the terminology and usage are currently imprecise. For example, it seems to me that according to the zones defined "truth" refers to error sensitivity, and "perception" refers to observed behaviour (or modelled adaptation state), but this is not explicitly explained in the text. This raises an interesting philosophical question – is it the error sensitivity or the final adaptation state associated with any given process that "truly" reflects the learning, savings or interference experienced by that process? Some consideration of this issue would enhance the paper in my opinion.

This is a very insightful point. We agree that the words ‘true’ and ‘perceived’ imply that error sensitivity takes precedence over total adaptation. Rather, what we want to know is whether changes in total adaptation ‘match’ changes in error sensitivity. Accordingly, we have revised our text and labels here. What we previously called a ‘True increase’ we now refer to as a ‘Matching increase’. What we previously called a ‘Perceived increase’ we now refer to as a ‘Mismatching increase’. What we previously called a ‘True decrease’, we now refer to as a ‘Matching decrease’. And finally, what we previously called a ‘Perceived decrease’ we now refer to as a ‘Mismatching decrease’. The revised text is now described on Lines 587-594 (and also Figure 7).

2.7. Figure 5 label: Descriptions for C and D appear inadvertently reversed – C shows effect of enhancing explicit learning and D shows effect of suppressing explicit learning.

Thank you for catching this, it is now fixed.

2.8. Line 438: The method of comparing learning rate needs justification (i.e. comparing from 0 error in both cases so that there is no confound due to the retention factor).

This is an excellent point. We repeated this analysis, but fit the reach angle only after the zero crossing during the second rotation exposure. To do this, we first fit the exponential model to all the data, then used the fitted parameters to estimate the zero-crossing point of the data, taken as the zero crossing of the exponential fit (rounded to the nearest integer cycle). Our results were unchanged; a two-way ANOVA revealed a significant effect of both time between exposures and preparation time (main effect of time delay, F(1,50)=4.23, p=0.045, ηp2=0.067; main effect of preparation time, F(1,50)=8.303, p=0.006, ηp2=0.132).

This control analysis is now described on Lines 679-682 in the revised paper: “This result was unrelated to initial differences in error across rotation exposures; we obtained analogous results (see Methods) when learning rate was calculated after the ‘zero-crossing’ in reach angle (two-way ANOVA, main effect of time delay, F(1,50)=4.23, p=0.045, ηp2=0.067; main effect of prep. time, F(1,50)=8.303, p=0.006, ηp2=0.132).”
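For readers wishing to reproduce this kind of control analysis, a minimal sketch is given below. The data, model form, and parameter values here are synthetic illustrations, not the study's actual fits:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(t, y0, y_inf, tau):
    """Single exponential relaxing from y0 toward asymptote y_inf."""
    return y_inf + (y0 - y_inf) * np.exp(-t / tau)

# Synthetic reach angles: begin below zero (aftereffect from the first
# exposure) and rise toward the new asymptote during re-exposure.
cycles = np.arange(40.0)
rng = np.random.default_rng(0)
angles = exp_model(cycles, -10.0, 25.0, 8.0) + rng.normal(0, 0.5, cycles.size)

# Step 1: fit the exponential to all cycles.
(p_y0, p_inf, p_tau), _ = curve_fit(exp_model, cycles, angles, p0=(-10, 25, 5))

# Step 2: analytic zero crossing of the fit, rounded to the nearest cycle.
t_zero = int(round(p_tau * np.log((p_inf - p_y0) / p_inf)))

# Step 3: estimate the learning rate using only cycles after the zero
# crossing, so groups are compared from a common zero-error starting state.
mask = cycles >= t_zero
(_, _, tau_after), _ = curve_fit(
    exp_model, cycles[mask] - t_zero, angles[mask], p0=(0, 25, 5))
```

This removes the retention-factor confound the reviewer notes, because both groups' rate estimates begin at zero error.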

2.9. Line 539: I am not sure if I agree that the implicit aftereffect in the no target group need reflect error-based adaptation to sensory prediction error. It could result from a form of associative learning – where a particular reach direction is associated with successful target acquisition for each target. The presence of an adaptive response to SPE does not fit with any of the other simulations in the paper, so it seems odd to insist that it remains here. Can you fit a dual error model to the Taylor and Ivry (single-target) data? I suspect it would not work, as the SPE should cause non-zero drift (i.e. shift the reach direction away from the primary target).

We agree that an associative learning mechanism may very well contribute to the implicit aftereffect, not solely in the Taylor and Ivry (2011) study, but in many motor control studies. In truth, we do not know the precise nature of the other forces driving implicit learning apart from visual target error. Your comment highlights a critical need for more research and modeling in this area.

An associative learning mechanism could very well produce an aftereffect in the no aiming target group, without a drift in the hand direction during the rotation period. A dual error model could, however, also produce the same result. The implicit system would respond to the SPE and drift, but this drift could be countered by a compensatory change in strategy. This would also cause an aftereffect, but no clear drift in hand direction during adaptation. A similar phenomenon has occurred in experiments that use aiming landmarks; for example, in Taylor et al. (2014), McDougle et al. (2015), and Day et al. (2016), target errors are rapidly eliminated, but implicit adaptation continues with a concomitant decrease in strategy.

In addition, there is another group tested by Taylor and Ivry that we did not describe in our manuscript. In the ‘disappearing target’ group, a secondary aiming location vanished mid-movement. This condition led to a medium-sized drift and aftereffect. To us, it would appear that this learning is due to an SPE, and that this SPE learning can be modulated by task conditions. Recent work by Tsay et al. (2021) also suggests that SPEs contribute to implicit learning, but can be modulated by disturbances to the target.

With that said, we completely agree that it appears contradictory that SPE learning should arise in Taylor and Ivry, but not in the other data sets considered in our paper. We do not know why this is. In the revised manuscript, however, we highlight this paradox and suggest three potential hypotheses. First, there may have been SPE learning across the experiments we examined, but its magnitude was much smaller than target error learning (and thus may have gone undetected). Second, it is possible that what we call SPE learning is truly the associative process the reviewer describes: not learning due to error per se, but rather reinforcement of actions that resulted in the desired outcome. The third possibility is that SPE learning is highly modulated by context, and thus total SPE learning may vary substantially across experiments. For example, in Taylor and Ivry (2011), SPE learning may have been primed even in the no-aiming-target group due to an implicit recall of the aiming landmarks encountered during the baseline period. This idea would be akin to memory-guided saccade adaptation in the visual system. This may imply that reinforcement of the target location could modulate SPE learning along a gradient. When aiming targets are never provided during the task, there is little to no SPE learning (as in our primary data sets). When an aiming target is provided during initial training but not adaptation, there may be small SPE learning levels. Providing aiming targets that disappear during the movement would promote even better recall and lead to moderate SPE-driven learning. This would be equivalent to the disappearing target condition in Taylor and Ivry. Finally, always showing the aiming target would lead to the strongest SPE learning levels. This context-dependent SPE learning hypothesis could be related to recent work showing that both target errors and SPEs drive implicit learning, but that implicit SPE learning is modulated by distraction.

In the revised manuscript, we address these points in depth on Lines 1063-1090. We provide the relevant passage below, for ease of reference:

“However, the nature of aim-cursor errors remains uncertain. For example, while this error source generates strong adaptation when the aim location coincides with a physical target (Figure 10H, instruction with target), implicit learning is observed even in the absence of a physical aiming landmark9 (Figure 10H, instruction without target), albeit to a smaller degree. This latter condition may implicate an SPE learning that does not require an aiming target. Thus, it may be that the aim-cursor error in Mazzoni and Krakauer is actually an SPE that is enhanced by the presence of a physical target. In this view, implicit learning is driven by a target error module and an SPE module that is enhanced by a visual target error4,11,86.

These various implicit learning modules are likely strongly dependent on experimental contexts, in ways we do not yet understand. For example, Taylor and Ivry (2011) would suggest that all experiments produce some implicit SPE learning, but less so in paradigms with no aiming targets. Yet, the competition equation accurately matched single-target behavior in Figures 1-9 without an SPE learning module. It is not clear why SPE learning would be absent in these experiments. One idea may be that the aftereffect observed by Taylor and Ivry (2011) in the absence of an aiming target was actually a lingering associative motor memory that was reinforced by successfully hitting the target during the rotation period. Indeed, such a model-free learning mechanism87 should be included in a more complete implicit learning model. It is currently overlooked in error-based systems such as the competition and independence equations.

Another idea is that some SPE learning did occur in the no aiming target experiments we analyzed in Figures 1-9, but was overshadowed by the implicit system’s response to target error. A third possibility is that the SPE learning observed by Taylor and Ivry (2011) was contextually enhanced by participants implicitly recalling the aiming landmark locations (akin to memory-guided saccade adaptation) provided during the baseline period. This possibility would suggest SPEs vary along a complex spectrum: (1) never providing an aiming target causes little or no SPE learning (as in our experiments), (2) providing an aiming target during past training allows implicit recall that leads to small SPE learning, (3) providing an aiming target that disappears during the movement promotes better recall and leads to medium-sized SPE learning (i.e., the disappearing target condition in Taylor and Ivry), and (4) an aiming target that always remains visible leads to the largest SPE learning levels. This context-dependent SPE hypothesis may be related to recent work suggesting that both target errors and SPEs drive implicit learning, but implicit SPE learning is altered by distraction54.”

2.10. Line 556: Be precise here – do you really mean SPE? It seems as though you only provided quantitative evidence of a competition between errors when there were 2 physical targets.

We agree with this criticism. In this particular passage we have altered our language to indicate that ‘other target errors in the workspace’ can also drive implicit learning. This passage now reads: “The task-error driven implicit system likely exists in parallel with other implicit processes4,11,56. For example, in cases where primary target errors are eliminated, implicit adaptation persists (Figure 10). These residual changes are likely due to sensory prediction errors2,4,9,11 as well as other target errors that remain in the workspace (Figure 10H). When these error sources oppose one another, competition between parallel implicit learning modules may inhibit the overall implicit response (Figures 10A-C).”

We recognize this is still an oversimplification, per our response to Point 2-9 above. Thus, we also expand on the nature of SPEs and their possible confusion with associative learning in an expanded discussion on Lines 1063-1090 (see Point 2-9).

2.11. Line 606: Is it necessarily the explicit response that is cached? I think of this more as the development of an association between action and reward – irrespective of what adaptive process resulted in task success. It would be nice to know what happened to aftereffects after blocks 1 and 2 in study 3. An associative effect might be switched on and off more easily – so if a mechanism of that kind were at play, I would predict a reduced aftereffect as in Huberdeau et al.

We agree but want to make sure we understand your meaning. To us, it would seem in many cases that it is hard to dissociate the associative memory described by the reviewer, and a cached explicit strategy.

It seems possible that both memories could be contextually turned on and off. However, in Experiment 3 (now Experiment 4) we limit preparation time in order to suppress explicit strategy. Thus, rapidly ‘switching off’ part of the adapted response would likely implicate an associative memory that was formed independent of explicit caching.

While this is an intriguing possibility, unfortunately we cannot test it with our current data. There was no washout period at the end of the second rotation exposure. However, we think this is a fascinating point that should be noted to the reader. Thus, we have now added a discussion about associative memory to Lines 876-878 in our revised manuscript (reproduced below):

“…with multiple exposures to a rotation, explicit responses can be expressed at lower reaction times: a process termed caching22,45. Thus, changes in low preparation time adaptation commonly ascribed to the implicit system, may be contaminated by cached explicit strategies. This possibility seems unlikely to have altered our results. First, it is not clear why caching would occur in Experiment 4, but not our earlier study in Haith et al.17 (Figure 8); these earlier data implied that caching remains limited with only two exposures to a rotation (at least during the initial exposure to the second rotation over which savings was assessed). Nevertheless, to test the caching hypothesis, we measured explicit re-aiming under limited preparation time conditions in Experiment 3. We found that our method restricted explicit re-aiming to only 2°, compared to about 12° in the standard condition (Figure 3N). Moreover, this 2° decrement in reach angle was more likely due to temporal lability in implicit learning31,72–76; a similar 2° decrease in reach angle occurred over the 30 sec instruction period, even when participants were not told to stop aiming (Figure 8-Supplement 1). Thus, while it appears that caching played little role here, our results should be taken cautiously. It is critical that future studies investigate how caching varies across experimental methodologies, and how cached strategies interact with implicit learning. In addition, such experiments should dissociate these cached explicit responses from associative implicit memories that may be rapidly instantiated in the appropriate context.”

2.12. Line 720: Ref 12 IS the Mazzoni and Krakauer study, ref 11 involves multiple physical targets, and ref 21 is the Taylor and Ivry paper considered above and in the next point. None of these papers therefore require an explanation based on SPE. However, ref #17 appears to provide a more compelling contradiction to the notion that target errors drive all forms of adaptation – as people adapt to correct a rotation even when there is never a target error (because the cursor jumps to match the cursor direction). A possible explanation that does not involve SPE might be that people retain a memory of the initial target location and detect a "target error" with respect to that location.

We agree this was a poorly constructed citation. We have removed the citation to Mazzoni and Krakauer here and added the citation to Reference 17. While we agree with your point about Reference 17, we also think it may be too early to say whether learning in this ‘jumping’ target case is due to a target error or an SPE. Furthermore, there are other scenarios, like the disappearing target group in Taylor and Ivry and data recently explored by Tsay et al. (2021), that suggest there is an implicit SPE learning system which might be modulated by task conditions. Overall, we think many experiments will be needed to better understand SPE learning, and we hope that our current results motivate such work. Please see our response to Point 2-9 above, where we describe revisions to the manuscript that highlight nuance in SPE learning, as well as other associative learning mechanisms that may contribute to processes labeled as SPE learning.

2.13. Line 737: I don't agree with this – the observation of an after-effect with only 1 physical target but instructed re-aiming so that there was 0 target error, certainly implies some other process besides a visual target error as a driver of implicit learning. However, as argued above, such a process need not be driven by SPE. Model-free processes could be at play…

We agree that an associative learning mechanism could also play a role here, as discussed in Point 2-9. We have now amended this passage, replacing ‘strongly implicates an SPE learning mechanism’ with “may implicate an SPE learning that does not require an aiming target.” We also discuss associative learning alongside the SPE hypothesis in more complete detail on Lines 1075-1078 in the revised paper. Please see our response to Point 2-9 above for a more complete response to these concerns.

2.14. Line 791: Some more detail on how timing accuracy was assured in the remote experiments is needed. The mouse sample rate can be approximately 250 Hz, but my experience with it is that there can be occasional long delays between samples on some systems. The delay between commands to print to the screen and the physical appearance on the screen can also be long and variable – depending on the software and hardware involved.

We absolutely agree. In Experiments 2, 4, and 5 we measured the visual delay for our robotic manipulandum experiments. On average this delay is 55 ms. When we show reaction time (e.g., Figure 3) we correct these curves by subtracting the average delay. We alluded to screen delay in the original manuscript but did not report its magnitude. We have now added this to Lines 1440-1442.

We also have now measured visual delay across many laptops and operating systems (Windows and Mac) for our laptop-based experiment. This delay was on average 154 ms. We have updated our reaction time plots (e.g., Figure 3) to correct for this delay. In addition, we have added a passage to the text: “Finally, we used a separate procedure to estimate screen delay. To do this, participants were told to tap a circle that flashed on and off in a regular, repeating cycle. Participants were told to predict the appearance of the circle, and to tap exactly as the circle appeared. Because the stimulus was predictable, the difference between the appearance time, and the participant’s button press, revealed the system’s visual delay. The average visual delay we measured was 154 ms. This average value was subtracted out in the preparation times reported in Figures 3J and M, as well as Figure 7-Supplement 1.”
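As a rough illustration of this delay-estimation logic (all timestamps and values below are synthetic, not measurements from the study):

```python
import numpy as np

# Synthetic tap offsets: time between each scheduled stimulus onset and the
# participant's press. With a fully predictable stimulus, the mean offset
# estimates the system's visual (display) delay.
rng = np.random.default_rng(1)
true_delay = 0.154                              # seconds; value reported above
tap_offsets = true_delay + rng.normal(0.0, 0.02, size=50)

est_delay = tap_offsets.mean()

# Subtract the estimated delay from raw preparation times (made-up values)
# to obtain delay-corrected preparation times, as in the revised figures.
raw_prep_times = np.array([0.35, 0.42, 0.55])
corrected_prep_times = raw_prep_times - est_delay
```

Averaging over many predictable taps lets the participant's anticipation noise cancel out, leaving the systematic display latency.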

2.15. Line 861: How did you specify or vary the retention parameters to create these error sensitivity maps?

As noted in Point 2-16 below, we had a typographical error in the original manuscript (we had replaced ‘e’ with ‘f’ and ‘i’ with ‘s’ in our retention factor subscripts). We intended to report that implicit and explicit retention factors were specified as: ai = 0.9829 and ae = 0.9278. These parameters were not varied in Figure 7 (previously Figure 5); only implicit and explicit error sensitivity were varied.

As explained in response to Point 1-14 above, Figure 6 is ‘referenced’ to Haith et al. (2015). We used the retention factors identified by the competition model’s fit to the Day 1 learning curves. We now discuss this better on Lines 578-586 in the revised paper.
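For concreteness, a minimal sketch of the two-state competition update underlying such maps is given below. The retention factors are the values quoted above; the error-sensitivity values (b_i, b_e) are illustrative placeholders, not fitted parameters:

```python
def simulate_competition(r, a_i, a_e, b_i, b_e, n_trials=500):
    """Two-state competition model: implicit and explicit systems both
    learn from a single shared target error e = r - (x_i + x_e)."""
    x_i, x_e = 0.0, 0.0
    for _ in range(n_trials):
        e = r - (x_i + x_e)        # shared target error
        x_i = a_i * x_i + b_i * e  # implicit: retention + error-based learning
        x_e = a_e * x_e + b_e * e  # explicit: retention + error-based learning
    return x_i, x_e

# Retention factors quoted in the response; sweeping b_i and b_e over a
# grid would trace out the error-sensitivity maps.
a_i, a_e = 0.9829, 0.9278
x_i_low, _ = simulate_competition(30.0, a_i, a_e, b_i=0.1, b_e=0.1)
x_i_high, _ = simulate_competition(30.0, a_i, a_e, b_i=0.1, b_e=0.4)

# A larger explicit error sensitivity siphons away the shared error,
# reducing asymptotic implicit learning.
assert x_i_high < x_i_low
```

Because both systems consume the same error, raising either system's error sensitivity shrinks the steady-state error available to the other, which is the competition effect the maps visualize.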

2.16. Line 865: Should not the subscripts here be "e" and "I" rather than "f" and "s"?

Thank you for catching this. We have fixed this error.

2.17. Line 1009: Why were data sometimes extracted using a drawing package (presumably by eye? – please provide further details), and sometimes using GRABIT in Matlab?

We used a two-step process when acquiring data from published figures. First, we attempted to open the figure in Adobe Illustrator. Depending on how these figures were constructed and embedded, occasionally the figure could be decomposed into its layers. This permitted us to use the x and y pixel values for each data point (which appeared as an object) to interpolate the necessary data from the figure.

However, in some cases, objects and layers could not be obtained in Illustrator. In these cases, we resorted to a second step: the Matlab graphical utility GRABIT. GRABIT allows the user to interpolate data at desired locations in an image. In this process we did our best to select the center of each data point in the image to estimate the data.

We now have added a section to our revised Methods that details this process on Lines 1133-1140.

2.18. Line 1073: More detail about the timing accuracy and kinematics of movement are required.

As detailed in our response to Point 2-14, we have measured visual delay across laptops and operating systems. On average this was 154 ms. We have adjusted our reaction time data in Figure 3 by this value, and also note this on Lines 1569-1574 in the revised manuscript.

Secondly, because we specified bounds on movement duration, movements were rapid and straight (as in standard in-person studies). For reference, we now show 2 representative participants (1 from the No PT Limit condition and 1 from the Limit PT condition) in Figure 3-Supplement 3 in the revised manuscript.

2.19. Line 1148: Again – should not the subscripts here be e and I rather than f and s?

Thank you for catching this. We have fixed this error.

Reviewer #3:

In this paper, Albert and colleagues explore the role of target error and sensory prediction error on motor adaptation. They suggest that both implicit and explicit adaptation would be driven by target error. In addition, implicit adaptation is also influenced by sensory prediction error.

While I appreciate the effort that the authors have made to come up with a theory that could account for many studies, there is not a single figure/result that does not suffer from major limitations, as I highlight in the major comments below. Overall, the main limitations are:

I believe that the authors neglect some very relevant papers that contradict their theory and model.

3.1. They did not take into account some results on the topic such as Day et al. 2016 and McDougle et al. 2017. It is true that they acknowledge it and discuss it but I am not convinced by their arguments (more on this topic in a major comment about the discussion below).

We agree that a generalization analysis is important. Please see our response in Point 1 above and several points below. For convenience, we have adapted relevant passages from Point 1 below.

Adapted excerpts from Point 1

In our revised manuscript we directly compare the competition model to an alternative SPE generalization model. Our new analysis is documented in Figure 5 in the revised manuscript. First, we empirically compare the relationship between implicit and explicit learning in our data to the generalization curves measured in past studies. In Figures 5A-C we overlay data that we collected in Experiments 2 and 3 with generalization curves measured by Krakauer et al. (2000) and Day et al. (2016). In this way, we attempt to see whether past generalization curves empirically resemble our implicit-explicit learning measures.

[…]

Summary

In our revised manuscript we compare the competition theory to an SPE generalization model. This model did not match our data:

1. Implicit learning declined 300% more rapidly than predicted by generalization studies (Figures 4C-E).

2. The implicit-explicit learning relationship did not accurately generalize across rotation sizes in the SPE generalization model (Figure 4G).

3. The gain relating implicit and explicit learning remained the same across rotation sizes, rejecting the SPE generalization model, but supporting the competition theory (Figure 4H).

Given the importance of these analyses, we have devoted a section (Part 2) to them in our revised paper.

3.2. They did not take into account the fact that there is no proof that limiting RT is a good way to suppress the explicit component of adaptation. When the number of targets is limited, these explicit responses can then be cached without any RT cost. McDougle and Taylor, Nat Com (2019) demonstrated this for 2 targets (here the authors used only 4 targets). There exist no other papers that prove that limiting RT would suppress the explicit strategy as claimed by the authors. To do so, one needs to limit PT and to measure the explicit or the implicit component. The authors should prove that this assumption holds because it is instrumental to the whole paper. The authors acknowledge this limitation in their discussion but dismiss it quite rapidly. This manipulation is so instrumental to the paper that it needs to be proven, not argued (more on this topic in a major comment about the discussion below).

We appreciate this concern. We responded to this in Point 3 (in our response to the editorial summary). For convenience, we provide excerpts from this response below.

Adapted excerpts from Point 3 (editorial response)

We agree that it is important to test how effectively this condition limits explicit strategy use. Therefore, we have performed two additional control studies to measure explicit strategy in the limited PT condition. First, we have now added a limited preparation time (Limit PT) group to our laptop-based study in Experiment 3 (Experiment 2 in original manuscript). In Experiment 3, participants in the Limit PT group (n=21) adapted to a 30° rotation but under a limited preparation time condition. As for the Limit PT group in Experiment 2, we imposed a strict bound on reaction time to suppress movement preparation time. However, unlike Experiment 2 (Experiment 1 in original manuscript) once the rotation ended, participants were told to stop re-aiming. This permitted us to examine whether limiting preparation time suppressed explicit strategy as intended. Our analysis of these new data is shown in Figures 3K-Q.

In Experiment 3, we now compare two groups: one where participants had no preparation time limit (No PT Limit, Figures 3H-J) and one where an upper bound was placed on preparation time (Limit PT, Figures 3K-M). We measured early and late implicit learning over a 20-cycle no feedback and no aiming period at the end of the experiment. The voluntary decrease in reach angle over this no aiming period revealed each participant’s explicit strategy (Figure 3N). When no reaction time limit was imposed (No PT Limit), re-aiming totaled approximately 11.86° (Figure 3N, E3, black), and did not differ statistically across Experiments 2 and 3 (t(42)=0.50, p=0.621). Recall that Experiment 2 tested participants using a robotic manipulandum and Experiment 3 tested participants in a similar laptop-based paradigm. As in earlier reports, limiting reaction time dramatically suppressed explicit strategy, yielding only 2.09° of re-aiming (Figure 3N, E3, red). Therefore, our new control dataset suggests that our limited reaction time technique was highly effective at suppressing explicit strategy, as initially claimed in our original manuscript.

With that said, our new data alone suggest that while explicit strategies are strongly suppressed by limiting preparation time, they are not entirely eliminated; when we limited preparation time in Experiment 3, we observed that participants still exhibited a small decrease (2.09°) in reach angle when we instructed them to aim their hand straight to the target (Figure 3L, no aiming; Figure 3N, E3, red). This ‘cached’ explicit strategy, while small, may have contributed to the 8° reach angle change measured early during the second rotation in our savings experiment (Experiment 4, Figure 8C in revised paper).

For this reason, we consider another important phenomenon in our revised manuscript: time-dependent decay in implicit learning. That is, the 2° decrease in reach angle we observed when participants were told to stop aiming in the Limit PT group in Experiment 3 may be due to time-based decay in implicit learning over the 30-second instruction period, as opposed to a voluntary reduction in strategy. To test this possibility, we collected another limited preparation time group (n=12, Figure 8-Supplement 1A, decay-only, black). This time, however, participants were instructed that the experiment’s disturbance was still on, and that they should continue to move the ‘imagined’ cursor through the target during the terminal no-feedback period. Still, reach angles decreased by approximately 2.1° (Figure 8-Supplement 1B, black). Indeed, we detected no statistically significant difference between the change in reach angle in this decay-only condition and the Limit PT group in Experiment 3 (Figure 8-Supplement 1B; two-sample t-test, t(31)=0.016, p=0.987).

This control experiment suggested that the residual ‘explicit strategies’ we measured in the Limit PT condition were in actuality caused by time-dependent decay in implicit learning. Thus, our Limit PT protocol appears to eliminate explicit strategy. This additional analysis lends further credence to the hypothesis that savings in Experiment 4 was primarily due to changes in the implicit system rather than cached explicit strategies.

Summary

We agree that it is important to corroborate that our method for limiting reaction time suppresses explicit learning as intended. To this end we collected additional experimental conditions in Experiment 3 in the revised manuscript (a Limit PT group and a decay-only group). Data in the Limit PT condition suggest that explicit strategies are suppressed to approximately 2° by our preparation time limit (compared to about 12° under normal conditions). Data in the decay-only group suggest that this 2° change in reach angle was not due to a cached explicit strategy but rather time-dependent decay in implicit learning. Together, these conditions demonstrate that limiting reaction time in our experiments almost completely prevents the caching of explicit strategies; our limited preparation time measures do indeed reflect implicit learning. We use these data to corroborate key predictions of the competition theory in Figures 3Q (subject-to-subject correlations between implicit and explicit learning) and 8 (savings experiment) in the revised manuscript.

3.3. They did not take into account the fact that the after-effect is a measure of implicit adaptation if and only if the participants are told to abandon any explicit strategy before entering the washout period (as done in Taylor, Krakauer and Ivry, 2011). This has important consequences for their interpretation of older studies because it was then never asked to participants to stop aiming before entering the washout period as the authors did in experiment 2.

We appreciate this concern. We responded to this concern in Point 2 above. For convenience, we have adapted passages from Point 2 below.

Adapted excerpts from Point 2 (response to editorial summary)

This criticism applies to our Saijo and Gomi analysis, our Fernandez-Ruiz et al. analysis, and our Mazzoni and Krakauer analysis. We will address each in turn.

Saijo and Gomi (2010)

Let us begin with the Saijo and Gomi (2010) analysis. In our original manuscript, we used conditions tested by Saijo and Gomi to investigate one of the competition theory’s predictions: decreasing explicit strategy for a given rotation size will increase implicit learning. Specifically, we analyzed whether gradual learning suppressed explicit strategy in this study, thus facilitating greater implicit adaptation. While our analysis was consistent with this hypothesis, as noted by the reviewers, there is a limitation; washout trials were used to estimate implicit learning, though participants were not directly instructed to stop aiming. This represents a potential error source. For this reason, in our revised manuscript, we have collected new data to test the competition model’s prediction.

These new data also examine gradual and abrupt rotations. In Experiment 1 (new data in the paper), participants were exposed to a 60° rotation either abruptly (n=36) or in a stepwise manner (n=37), where the rotation magnitude increased by 15° across 4 distinct learning blocks (Figure 2D). Unlike in Saijo and Gomi, implicit learning was measured via exclusion during each learning period by instructing participants to aim directly towards the target. As hypothesized, in Figure 2F we now demonstrate that stepwise rotation onset (which yields smaller target errors) muted the explicit response to the rotation compared to abrupt learning. The competition model predicts that decreases in explicit strategy should facilitate greater implicit adaptation. To test this prediction, we compared implicit learning across the gradual and abrupt groups during the fourth learning block, where both groups experienced the 60° rotation size (Figures 2E and G).

Consistent with our hypothesis, participants in the stepwise condition exhibited a 10° reduction in explicit re-aiming (Figure 2F, two-sample t-test, t(71)=4.97, p<0.001, d=1.16), but a concomitant 80% increase in their implicit recalibration (Figure 2G, data, two-sample t-test, t(71)=6.4, p<0.001, d=1.5). To test whether these changes in implicit learning matched the competition model, we fit the independence equation (Equation 5) and the competition equation (Equation 4) to the implicit and explicit reach angles measured in Blocks 1-4, across the stepwise and abrupt conditions, while holding implicit learning parameters constant. In other words, we asked whether the same parameters (a_i and b_i) could parsimoniously explain the implicit learning patterns measured across all 5 conditions (all 4 stepwise rotation sizes plus the abrupt condition). As expected, the competition model predicted that implicit learning would increase in the stepwise group (Figure 2G, comp., two-sample t-test, t(71)=4.97, p<0.001), unlike the SPE-only learning model (Figure 2G, indep.).

Thus, in our revised manuscript we present new data that confirm the hypothesis we initially explored in the Saijo and Gomi (2010) dataset: decreasing explicit strategy enhances implicit learning. These new data do not present the limitation noted by the reviewers. Lastly, it is critical to note that while stepwise participants showed greater implicit learning, their total adaptation was approximately 4° lower than that of the abrupt group (Figure 2E, right-most gray area (last 20 trials); two-sample t-test, t(71)=3.33, p=0.001, d=0.78). This surprising phenomenon is predicted by the competition equation. When strategies increase in the abrupt rotation group, this tends to increase total adaptation. However, larger strategies leave smaller errors to drive implicit learning. Hence, greater adaptation will be associated with larger strategies, but less implicit learning. Indeed, the competition model predicted 53.47° total learning in the abrupt group but only 50.42° in the stepwise group. Recall that we described this paradoxical phenomenon at the individual-participant level in Point 1 above. Note that this pattern was also observed by Saijo and Gomi: when participants were exposed to a 60° rotation in a stepwise manner, total adaptation dropped by over 10°, whereas the aftereffect exhibited during the washout period nearly doubled.
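The steady-state logic behind this paradox can be illustrated with a short simulation. This is a minimal sketch, not our fitting code: the explicit strategy values (15° abrupt, 5° stepwise) are hypothetical, chosen only to mirror the measured ~10° difference in re-aiming, and the retention and error-sensitivity values (a_i = 0.9565, b_i = 0.3) reuse the example numbers from our response to point 3.10 below.

```python
# Minimal state-space sketch of the competition model (hypothetical parameters).
# The implicit state x learns from the residual target error e = r - x - s,
# where r is the rotation and s a fixed explicit strategy:
#   x[t+1] = a_i * x[t] + b_i * e[t]
# Steady state: x_ss = b_i / (1 - a_i + b_i) * (r - s)

def simulate(r, s, a_i=0.9565, b_i=0.3, n_trials=2000):
    x = 0.0
    for _ in range(n_trials):
        e = r - x - s          # residual target error left over by the strategy
        x = a_i * x + b_i * e  # implicit update
    return x

r = 60.0                       # final rotation size (deg)
s_abrupt, s_step = 15.0, 5.0   # hypothetical strategies (10 deg difference)

imp_abrupt = simulate(r, s_abrupt)
imp_step = simulate(r, s_step)

pi = 0.3 / (1 - 0.9565 + 0.3)  # competitive learning gain, ~0.873
print(imp_abrupt, pi * (r - s_abrupt))  # simulation matches closed form (~39.3)
print(imp_step, pi * (r - s_step))      # smaller strategy -> more implicit (~48.0)
# The paradox: the smaller-strategy group learns more implicitly,
# yet ends with slightly less total adaptation.
print(imp_abrupt + s_abrupt, imp_step + s_step)  # ~54.3 vs ~53.0 deg
```

Under this rule, total adaptation equals π·r + (1 − π)·s, so a larger strategy raises total adaptation even as it suppresses implicit learning.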

Summary

In our revised paper we have removed this Saijo and Gomi analysis from the main figures, and instead use this as supportive data in Figure 2-Supplement 2, and also in Appendix 2. We now state, “It is interesting to note that these implicit learning patterns are broadly consistent with the observation that gradual rotations improve procedural learning34,43, although these earlier studies did not properly tease apart implicit and explicit adaptation (see the Saijo and Gomi analysis described in Appendix 2).”

We have replaced this analysis with our new data in Experiment 1. In these new data, abrupt and gradual learning was compared using the appropriate implicit learning measures. These new data are included in both Figures 1 and 2 in the revised manuscript. Importantly, these new data point to the same conclusions we reached in our initial Saijo and Gomi analysis.

Mazzoni and Krakauer (2006)

The reviewers note that the uninstructed group in Mazzoni and Krakauer was not told to stop aiming during the washout period. We agree that this is a potential source of error. However, it is important to note that these data were not included to directly support the competition model, but rather to demonstrate its limitations. For example, our simple target error model cannot reproduce the data measured in Mazzoni and Krakauer (2006): in the instructed group, implicit learning continued despite elimination of the primary target error. Thus, in more complicated scenarios where multiple visual landmarks are used during adaptation, the learning rules must be more sophisticated than the simple competition theory. Here we showed that one possibility is that both the primary target and the aiming target drive implicit adaptation via two simultaneous target errors. Our goal in presenting these data (and similar data collected by Taylor and Ivry, 2011) was to emphasize that our work is not meant to support an either-or hypothesis between target error and SPE learning, but rather the more pluralistic conclusion that both error sources contribute to implicit learning in a condition-dependent manner.

In our revised manuscript, we now acknowledge this limitation in the following passage: “It is important, however, to note a limitation in these analyses. Our earlier study did not employ the standard conditions used to measure implicit aftereffects: i.e., instructing participants to aim directly at the target, and also removing any visual feedback. Thus, the proposed dual-error model relies on the assumption that differences in washout were primarily related to the implicit system. These assumptions need to be tested more completely in future experiments.”

In summary, the conditions tested by Mazzoni and Krakauer show that the simplistic idea that adaptation is driven by only one target error, or only one SPE, cannot be true in general54. We propose a new hypothesis that when people move a cursor to one visual target, while aiming at another visual target, each target may partly contribute to implicit learning. When these two error sources conflict with one another, the implicit learning system may exhibit an attenuation in total adaptation. Thus, implicit learning modules may compete with one another when presented with opposing errors.”

Fernandez-Ruiz et al. (2011)

As noted by the reviewers, our original manuscript also reported data from a study by Fernandez-Ruiz et al. (2011). Here, we showed that participants in this study who increased their preparation time more upon rotation onset tended to exhibit a smaller aftereffect. We used these data to test the idea that when a participant uses a larger strategy, they inadvertently suppress their own implicit learning. However, we agree with the reviewers that this analysis is problematic because participants were not instructed to stop aiming in this earlier study. Thus, we have removed these data from the revised manuscript. This change has little to no effect on the manuscript, given that individual-level correlations between implicit and explicit learning are tested in Experiments 1, 2, and 3 in the revised paper (Figures 3-5), and are also reported from earlier works (Maresch et al., 2021; Tsay et al., 2021) in Figure 4-Supplements 1G and I, where implicit learning was more appropriately measured via exclusion. Note that all 5 studies support the same idea suggested by the Fernandez-Ruiz et al. study: increases in explicit strategy suppress implicit learning.

2.4. There are also major problems with statistics/design: I disagree with the authors that 10-20 people per group (in their justification of the sample size in the second document) is standard for motor adaptation experiments. It is standard for their own laboratories but not for the field anymore. They even did not reach N=10 for some of the reported experiments (one group in experiment 1). Furthermore, this sample size is excessively low to obtain reliable estimation of correlations. Small N correlation cannot be trusted: https://garstats.wordpress.com/2018/06/01/smallncorr/

We agree that some of the groups we collected are limited by small sample sizes. With that said, it is important to note that many recently published results are based on sample sizes similar to ours. For example, even in the critical literature cited in your comments, sample sizes were near 10 participants per group: (1) in Kim et al. (2018) there are 10 participants in each group; (2) in Morehead et al. (2017) there were 12 participants in each group; (3) in Day et al. (2016) there were 10 participants in each group (7 different groups total).

That being said, we have made substantial additions to the manuscript to corroborate our main results in data sets that have larger samples. These new data sets include:

1. Experiment 1, added a stepwise learning group: n=37

2. Experiment 1, added an abrupt learning group: n=36

3. Experiment 3, no preparation time limit condition increased to n=35

4. Experiment 3, added a limited preparation time group: n=21

5. Figure 1, added new analysis of Tsay et al., 2021 (n=25/group)

6. Experiment 3, added a decay-only group: n=12

7. Figure 4-Supplement 2, added new analysis of Maresch et al., 2021 (n=40)

These new data corroborate and extend the conclusions we reached in our initial paper. Critically, we now reproduce our correlation analysis in Figure 3H (now Figure 3G) within our more highly powered experiments. For example, we show the same match between the competition model and data across participants in Experiment 3 (n=35, Figure 3Q). We also observe strong negative implicit-explicit correlations that agree with the competition model in the stepwise condition in Experiment 1 (n=37, Figure 4F). Similar correlations were also detected in the 60° rotation conditions in the Maresch et al., 2021 (n=40, Figure 4-Supplement 2G) and Tsay et al., 2021 (n=25, Figure 4-Supplement 2I) data sets.

Finally, our revised paper illustrates that implicit and explicit correlations with total adaptation also exhibit a match to the competition model (Experiment 3, n=35, Figures 4A and B; Experiment 1, n=70, Figure 4-Supplements 2B and E; Maresch et al., 2021, n=40, Figure 4-Supplements 2A and D; Tsay et al., 2021, n=25, Figure 4-Supplements 2C and F).

In sum, we have added 7 new experiments to our revised manuscript, totaling 264 new participants. We use highly powered studies (e.g., n=25, n=35, n=40, n=70) to corroborate subject-to-subject correlations between implicit and explicit learning (Figure 3H in original manuscript). We also now consider implicit-total and explicit-total correlations across all these studies in the revised manuscript.

3.5. Justification of sample size is missing. What was the criteria to stop data collection? Why is the number of participants per group so variable (N=9 and N=13 for experiment 1, N=17 for experiment 2, N=10 for experiment 3, N=10 ? per group for experiment 4). Optional stopping is problematic when linked to data peeking (Armitage, McPherson and Rowe, Journal of the Royal Statistical Society. Series A (General), 1969). It is unclear how optional stopping influences the outcome of the statistical tests of this paper.

We did not engage in optional stopping. Unfortunately, our data collection was interrupted by the COVID-19 pandemic; for this reason, we moved to a laptop-based study design in Experiment 3. Furthermore, the primary data collector (Jihoon Jang) was not involved in data analysis, which was performed by a separate author (Scott Albert).

Nevertheless, our sample sizes are similar to other recent studies that also investigate error sources that drive implicit learning. These are studies the reviewer cites in their criticisms: (1) Kim et al., (2018): 10 participants in each group, (2) Day et al. (2016): 10 participants per group, (3) Morehead et al. (2017): 12 participants per group.

The primary exception is Experiment 2, where only 9 participants were recruited. For this reason, in the revised paper we corroborate the subject-level correlations from Experiment 2 in Experiment 3, where n=35 participants were tested. In addition, we analyze several datasets in our paper that were collected independently across many labs (Shadmehr Lab, Krakauer Lab, Henriques Lab, Ivry Lab, Donchin Lab, and the Della-Maggiore Lab).

Thus, we have done our best to verify our data sets and the competition model predictions across several studies, despite the challenges presented by the COVID-19 pandemic.

3.6. The authors should follow the best practices and add the individual data to all their bar graphs (Rousselet et al. EJN 2016).

We agree. Individual participant data are now provided for each experiment in the revised manuscript where possible:

1. Participants in Experiment 1 (Figures 1H, 1J, 2E, 3-Supp. 2, 5D, 4-Supps. 1B,E,H, 4-Supps. 3A,C,E, 4-Supp. 3B)

2. Participants in Experiment 2 (Figures 3G, 3N, 5A, 5B, Figure 3-Supplement 2)

3. Participants in Experiment 3 (Figures 3N, 3O, 3P, 3Q, 4G, 4H, 5A, 5C, 4-Supplements 2A,C,E, Figure 8-Supp. 1B)

4. Participants in Experiment 4 (Figure 8C)

5. Participants in Experiment 5 (Figure 9C)

6. Participants in Maresch et al., 2021 (Figure 4-Supplements 1A,D,G)

7. Participants in Tsay et al., 2021 (Figures 1M, 1O, 4-Supplements 1C,F,I, 4-Supps. 2A,C,E, 4-Supp. 3A)

8. Participants in Lerner et al., 2020 (Figure 9C)

9. Participants in Haith et al., 2015 (Figures 6B,C,D,E, 8C)

3.7. Missing interactions (Nieuwenhuis et al., Nature Neuro 2011), misinterpretation of non-significant p-values (Altman 1995 https://www.bmj.com/content/311/7003/485)

We appreciate this suggestion. We have now updated all relevant analyses in our revised manuscript. We used either a two-way ANOVA, two-way repeated measures ANOVA, or mixed-ANOVA depending on the experimental design. When interactions were statistically significant, we next measured simple main effects via one-way ANOVA. We outline this on Lines 1195-1207. All conclusions reached in the original manuscript remained the same using this updated statistical procedure. These include:

1. When measuring savings in Haith et al., we observed that the learning rate increased during the second exposure on high preparation time trials, but not on low preparation time trials (Figure 5B, right; two-way rm-ANOVA, prep. time by exposure number interaction, F(1,13)=5.29, p=0.039; significant interaction followed by one-way rm-ANOVA across Days 1 and 2: high preparation time with F(1,13)=6.53, p=0.024, ηp2=0.335; low preparation time with F(1,13)=1.11, p=0.312, ηp2=0.079). We corroborate this rate analysis by measuring early changes in reach angle (first 40 trials following rotation onset) across Days 1 and 2 (Figure 5C, left and middle). Only high preparation time trials exhibited a statistically significant increase in reach angles, consistent with savings (Figure 5C, right; two-way rm-ANOVA, preparation time by exposure interaction, F(1,13)=13.79, p=0.003; significant interaction followed by one-way rm-ANOVA across days: high preparation time with F(1,13)=11.84, p=0.004, ηp2=0.477; low preparation time with F(1,13)=0.029, p=0.867, ηp2=0.002).

2. When comparing implicit and explicit error sensitivities predicted by the competition model, the model predicted that both the implicit and explicit systems exhibited a statistically significant error sensitivity increase (Figure 5D, right; two-way rm-ANOVA, within-subject effect of exposure no., F(1,13)=10.14, p=0.007, ηp2=0.438; within-subject effect of learning process, F(1,13)=0.051, p=0.824, ηp2=0.004; exposure no. by learning process interaction, F(1,13)=1.24, p=0.285).

3. When comparing implicit and explicit error sensitivities predicted by the independence model, the model predicted that only the explicit system exhibited a statistically significant increase in error sensitivity (Figure 5E; two-way rm-ANOVA, learning process (implicit vs explicit) by exposure interaction, F(1,13)=7.016, p=0.02; significant interaction followed by one-way rm-ANOVA across exposures: explicit system F(1,13)=9.518, p=0.009, ηp2=0.423; implicit system with F(1,13)=2.328, p=0.151, ηp2=0.152).

4. When measuring savings under limited preparation time in Experiment 4, we still observed the opposite outcome to Haith et al., 2015. Notably, low preparation time learning rates increased by more than 80% in Experiment 4 (Figure 7C top; mixed-ANOVA exposure number by experiment type interaction, F(1,22)=5.993, p=0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p=0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p=0.045, ηp2=0.377). Statistically significant increases in reach angle were detected immediately following rotation onset in Experiment 4 (Figure 7B, bottom), but not in our earlier data (Figure 7C, bottom; mixed-ANOVA exposure number by experiment interaction, F(1,22)=4.411, p=0.047; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p=0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p=0.008, ηp2=0.556).

5. Lastly, we updated our anterograde interference analysis in Figure 8. While both low preparation time and high preparation time trials exhibited decreases in learning rate which improved with the passage of time (Figure 8C; two-way ANOVA, main effect of time delay, F(1,50)=5.643, p=0.021, ηp2=0.101), these impairments were greatly exacerbated by limiting preparation time (Figure 8C; two-way ANOVA, main effect of preparation time, F(1,50)=11.747, p=0.001, ηp2=0.19).

Regarding non-significant results: we agree that it is problematic to claim that two samples are the same when no statistically significant difference is detected. We avoid such language in our revised manuscript.

3.8. Given these main limitations, I don't think that the authors have a convincing case in favor of their model and I don't think that any of these results actually support the error competition model.

We are very grateful to the reviewers for their incredibly detailed and insightful comments. We hope that the extensive changes we have made to the manuscript address your various concerns.

Detailed major comments per equation/figure/section:

3.9. Equation 2: the authors note that the sensory prediction error is "anchored to the aim location " (line 104-105) which is exactly what Day et al. 2016 and McDougle et al. 2017 demonstrated. Yet, they did not fully take into account the implication of this statement. If it is so, it means that the optimal location to determine the extent of implicit motor adaptation is the aim location and not the target. Indeed, if the SPE is measured with respect to the aim location and is linked to the implicit system, it means that implicit adaptation will be maximum at the aim location and, because of local generalization, the amount of implicit adaptation will decay gradually when one wants to measure this system away of that optimal location (Figure 7 of Day et al). This means that, the further the aiming direction is from the target, the smaller the amount of implicit adaptation measured at the target location will be. This will result in an artificial negative correlation between the explicit and implicit system without having to relate to a common error source.

We appreciate this concern. We address these points in great detail in Points 1 and 3-1 above.

3.10. Equation 3: the authors do not take into account results from their lab (Marko et al., Herzfeld) and from others (Wei and Kording) that show that the sensitivity to error depends on the size of the rotation.

Yes, in three cases in our main figures, we assume that error sensitivity is the same across experimental groups. This occurs in our Neville and Cressman analysis, our Tsay et al. analysis, and our Experiment 1 analysis. These are now included in Figure 1 in the revised manuscript. We make this assumption to reduce model complexity. That is, with the assumption that error sensitivity is the same across groups, the model only has a single free parameter.

While we agree with the heart of the reviewer’s criticism, we should note an important correction. In our previous work we have shown that error sensitivity varies with error size, not the perturbation’s size. This is an important distinction. For example, in our Neville and Cressman analysis, rotation sizes vary between 20° and 60°, yet the terminal asymptotic target error across these groups varies by less than 5° (while the rotation varies over 40°). Even for the largest variation in rotation conditions, in Tsay et al. rotation size varies by 75°, but the terminal error varies by only 13°. Thus, in the competition model, it is reasonable to assume that error sensitivities are similar across groups, because differences in their terminal errors are much smaller than differences in their rotation sizes.

With that being said, there is another, potentially more important reason why assuming that implicit error sensitivity is constant across conditions has little effect on competition model predictions. Error sensitivity enters the competition equation through the implicit system’s learning gain. This gain depends on implicit error sensitivity and retention according to: π = b_i / (1 − a_i + b_i). In other words, it is not error sensitivity per se that determines steady-state implicit learning, but the implicit learning gain.

Fortunately, this implicit learning gain responds weakly to changes in error sensitivity, because b_i appears in both the numerator and denominator. For example, let us suppose that participants in the strategy group exhibited an implicit error sensitivity of 0.3, but in the no-strategy group, implicit error sensitivity was only 0.2. For an implicit retention factor of 0.9565 (see Methods in revised paper), the no-strategy learning gain would be 0.821 and the strategy learning gain would be 0.873. Thus, even though implicit error sensitivity was 50% larger in the strategy participants, the competitive implicit learning gain would change by only 6.3%. For an even more extreme case where implicit error sensitivity was doubled (0.4) in the strategy group, this would still only lead to a 9.8% change in the competitive implicit learning gain.
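This arithmetic can be checked directly. The short sketch below (not from the manuscript) recomputes the learning gain π = b_i/(1 − a_i + b_i) for each of the hypothetical error sensitivities used in the example above:

```python
# Sketch checking how weakly the competitive learning gain
# pi = b_i / (1 - a_i + b_i) responds to changes in implicit
# error sensitivity b_i (retention a_i = 0.9565, from Methods).

def gain(b_i, a_i=0.9565):
    return b_i / (1 - a_i + b_i)

g_no_strategy = gain(0.2)  # ~0.821
g_strategy = gain(0.3)     # ~0.873
g_double = gain(0.4)       # ~0.902

# A 50% increase in error sensitivity changes the gain by only ~6.3%...
print(round(100 * (g_strategy / g_no_strategy - 1), 1))
# ...and doubling it changes the gain by only ~9.8%.
print(round(100 * (g_double / g_no_strategy - 1), 1))
```

Because b_i appears in both numerator and denominator, the gain compresses large changes in error sensitivity into small changes in predicted steady-state implicit learning.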

Thus, while we appreciate the reviewer’s concern, our assumption that implicit error sensitivity is equal across learning conditions in Figure 1 is expected to have little effect on the model predictions. At the same time, it helps us to reduce model complexity, decreasing the probability of over-fitting. As explained above, the learning gain in the competition model is quite insensitive to changes in implicit error sensitivity: even doubling error sensitivity increases the competition model’s learning gain by less than 10%. These new points have been added to Lines 1946-1973 in the revised paper.

3.11. Equation 5: In this equation, the authors suggest that the steady-state amount of implicit adaptation is directly proportional to the size of the rotation. Thanks to a paradigm similar to Mazzoni and Krakauer 2006, the team of Rich Ivry has demonstrated that the implicit response saturates very quickly with perturbation size (Kim et al., communications Biology, 2018, see their Figure 1).

We provide a thorough response to this concern in Point 6B (editor summary response). For convenience, we provide relevant excerpts from this response below.

Adapted excerpts from Point 6B in editor summary

These concerns are critical. We agree with the reviewers that Morehead et al., 2017 and Kim et al., 2018 suggest that implicit learning saturates quickly with rotation size. With that said, these studies both use invariant error-clamp perturbations, not standard visuomotor rotations. We are not sure that the implicit properties observed in an invariant error context apply to the conditions we consider in our manuscript. In our revised manuscript, we consider variations in steady-state implicit learning across two new data sets: (1) stepwise adaptation in Experiment 1 and (2) non-monotonic implicit responses in Tsay et al., 2021. As we will demonstrate below, our data and re-analysis strongly indicate that the implicit system does not always saturate with changes in rotation size. Rather, the implicit system exhibits a complicated response to both rotation size and explicit strategies, which together yield three implicit learning phenotypes:

1. Saturation in steady-state implicit learning despite increasing rotation size (analysis reported in original paper using Neville and Cressman, 2018).

2. Scaling in steady-state implicit learning with increasing rotation size (new analysis using stepwise learning in Experiment 1).

3. Non-monotonic (quadratic) steady-state implicit behavior due to increases in rotation magnitude (new analysis added which uses data from Tsay et al., 2021).

[…]

In sum, we now add two additional datasets to the revised manuscript: data collected using a stepwise rotation in Experiment 1, and data recently collected by Tsay et al., 2021. Collectively, these new data show that steady-state implicit learning is complex, exhibiting at least three contrasting phenotypes: saturation, scaling, and non-monotonicity. Remarkably, the competition theory provides a way to account for each of these patterns. Thus, the competition model does accurately describe how implicit learning varies due to both changes in rotation size and explicit strategy, at least in standard rotation paradigms. However, these rules do not explain implicit behavior in invariant error-clamp paradigms. In the revised manuscript, we delve further into a comparison between standard rotation learning and invariant error learning in our Discussion (Lines 911-941). The relevant passage is provided below:

“Although the implicit system is altered in many experimental conditions, one commonly observed phenomenon is its invariant response to changes in rotation size3,31,35,36,40. For example, in the Neville and Cressman31 dataset examined in Figure 1, total implicit learning remained constant despite tripling the rotation’s magnitude. While this saturation in implicit learning is sometimes interpreted as a restriction in implicit adaptability, this rotation-insensitivity may have another cause entirely: competition. That is, when rotations increase in magnitude, rapid scaling in the explicit response may prevent increases in total implicit adaptation. Critically, in the competition theory, implicit learning is driven not by the rotation, but by the residual error that remains between the rotation and explicit strategy. Thus, when we used gradual rotations to reduce explicit adaptation (Experiment 1), prior invariance in the implicit response was lifted: as the rotation increased, so too did implicit learning37 (Figure 1I). The competition theory could readily describe these two implicit learning phenotypes: saturation and scaling (Figures 1G and L). Furthermore, it also provided insight as to why implicit learning can even exhibit a non-monotonic response, as in Tsay et al. (2021)36. All in all, our data suggest that implicit insensitivity to rotation size is not due to a limitation in implicit learning, but rather a suppression created by competition with explicit strategy.

With that said, this competitive saturation in implicit adaptation should not be conflated with the upper limits in implicit adaptation that have been measured in response to invariant errors3,11,40. In this latter condition, implicit adaptation reaches a ceiling whose value varies somewhere between 15 degrees3 and 25 degrees40 across studies. In these experiments, participants adapt to an error that remains constant and is not coupled to the reach angle (thus, the competition theory cannot apply). While the state-space model naturally predicts that total adaptation can exceed the error size which drives learning in this error-clamp condition (as is observed in response to small error-clamps), it cannot explain why asymptotic learning is insensitive to the error’s magnitude. One idea is that proprioceptive signals40,83 may eventually outweigh the irrelevant visual errors in the clamped-error condition, thus prematurely halting adaptation. Another possibility is that implicit learning obeys the state-space competitive model described here, but only up until a ceiling that limits total possible implicit corrections. Indeed, in Experiment 1, we produced scaling in the implicit system in response to rotation size, but never evoked more than about 22° of implicit learning. However, when we used similar gradual conditions in the past to probe implicit learning37, we observed about 32° of implicit learning in response to a 70° rotation. Further, in a recent study by Maresch and colleagues47, where strategies were probed only intermittently, implicit learning reached nearly 45°. Thus, there remains much work to be done to better understand variations in implicit learning across error-clamp and standard rotation conditions.”
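The saturation-versus-scaling account quoted above can be illustrated with a minimal steady-state sketch. Everything here is hypothetical: the retention and error-sensitivity parameters (a_i, b_i) and the strategy values are invented for illustration, and the update rule is only our shorthand for the competition equation described in the manuscript.

```python
# Minimal sketch of the competition model's steady state. All parameters
# (a_i, b_i) and strategy values are hypothetical, chosen only to show how
# one set of implicit parameters can produce both "saturation" and "scaling".

def implicit_steady_state(rotation, explicit, a_i=0.95, b_i=0.3):
    """Fixed point of x_i(n+1) = a_i*x_i(n) + b_i*(rotation - x_i(n) - explicit)."""
    return b_i * (rotation - explicit) / (1 - a_i + b_i)

# Saturation: if the explicit strategy scales with the rotation, the residual
# error is constant, so implicit learning looks invariant to rotation size.
for rotation, explicit in [(20, 6.4), (40, 26.4), (60, 46.4)]:
    print(rotation, round(implicit_steady_state(rotation, explicit), 1))

# Scaling: with strategy suppressed (as in a gradual rotation),
# implicit learning now grows with rotation size.
for rotation in [20, 40, 60]:
    print(rotation, round(implicit_steady_state(rotation, 0.0), 1))
```

The strategy values in the first loop are loosely patterned after the group means quoted in the Figure 1 discussion; with the same a_i and b_i, suppressing the strategy converts saturation into scaling.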

3.12. Data from Cressman, Figure 1: Following Day et al., one should expect that, when the explicit strategy is larger (instruction group in Cressman et al. compared to the no-instruction group), the authors are measuring the amount of implicit adaptation further from aiming direction where it is maximum. As a result, the amount of implicit adaptation appears smaller in the instruction group simply because they were aiming more than the no-instruction group (Figure 1G).

Please see our response to Point 3-13 below.

3.13. Figure 1H: the absence of increase in implicit adaptation is not due to competition as claimed by the authors but is due to the saturation of the implicit response with increasing rotation size (Kim et al. 2018). It saturates at around 20° for all rotations larger than 6°.

In Points 3-12 and 3-13, the reviewer suggests that the Neville and Cressman dataset is explained by an SPE model that saturates with rotation size and exhibits plan-based generalization. That is, in Point 3-12 it is suggested that plan-based generalization explains why implicit learning decreases in the instruction group. In Point 3-13, it is suggested that implicit learning is similar across rotation sizes due to a saturation in implicit learning (Kim et al., 2018). However, these two ideas are not self-consistent.

To see this, consider, as suggested in Point 3-12, that implicit learning is decreased in the instruction group due to generalization. This would mean that the 10° difference in strategy produces a 3° change in implicit learning (roughly a 30% decline). This hypothesis, however, is inconsistent with the implicit response to changes in rotation size. That is, implicit responses were similar across rotation sizes, but explicit strategies were not. For example, the 20° rotation group used about 6.4° explicit strategy and the 40° group used about 26.4° explicit strategy. Thus, given the reviewer’s hypothesis, a ‘saturated’ implicit learning system would exhibit a 6° decrease due to generalization (i.e., a 10° strategy change causes a 3° decrease, so a 20° change would cause a 6° decrease). However, the true data only differ by approximately 0.3°: about 5% of the amount predicted by the reviewer’s hypothesis. In other words, while plan-based generalization and ‘upper limits’ on implicit learning may be true in isolation, their combination makes incorrect predictions about the phenotypes exhibited in the Neville and Cressman dataset.
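The arithmetic in this argument can be made explicit in a few lines (the numbers are the approximate group means quoted above, not new data):

```python
# Back-of-the-envelope check of the reviewer's combined hypothesis,
# using the approximate group means quoted above (not new data).
slope = 3.0 / 10.0             # 10 deg more strategy -> 3 deg less implicit
strategy_change = 26.4 - 6.4   # explicit strategy: 40 deg vs. 20 deg rotation groups
predicted_decline = slope * strategy_change
observed_decline = 0.3         # approximate difference measured in the data

print(predicted_decline)                     # 6 deg predicted decline
print(observed_decline / predicted_decline)  # observed is ~5% of prediction
```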

There are several other points in our manuscript that disprove the reviewers’ proposed model. These are all described in more detail in other places in this response. These points include:

1. In some experimental conditions, implicit learning scales with rotation size. This occurs in Experiment 1 (stepwise group) and also across the initial rotation sizes in Tsay et al., 2021. For more details, see Point 6B (Figure 1 in revised paper).

2. In some experimental conditions, implicit learning exhibits a non-monotonic response to changes in rotation size. These are neither consistent with Kim et al., 2018, nor do they match plan-based generalization. Please see Point 6B (Figure 1 in revised paper).

3. Also, note that Neville and Cressman checked whether plan-based generalization influenced their findings. They calculated implicit learning separately for their training targets, noting that some targets overlapped with aiming directions, and others did not. Plan-based generalization suggests that implicit learning would be greater at targets that overlapped with aiming directions. This was not the case.

4. Finally, in the revised manuscript, we thoroughly analyze a plan-based SPE generalization model. Its predictions do not match the data. See Point 1 above (Figure 5 in revised paper).

Thus, we hope that our data and new analysis in Figure 1 help to dispel the notion that the implicit system necessarily exhibits a saturated asymptotic response to rotation size. The implicit system exhibits many phenotypes, which are consistent with the competition theory.

3.14. Data from Saijo and Gomi: Here the authors interpret the after-effect as a measure of implicit adaptation, but it is not, as the participants were not told that the perturbation would be switched off.

Yes, we agree this is a limitation. It is possible, and indeed probable, that participants re-aimed in the opposite direction during washout. The data and model hint at this, as shown in Author response image 2. For example, note the gradual group washout period (black arrow in Author response image 2B): washout proceeds at a slightly faster rate than the model fit. This could well be due to explicit re-aiming in the opposite direction, which is not accounted for in the model.

Nevertheless, we do not believe this re-aiming alters our primary conclusions:

1. Firstly, aftereffects remain large across all washout cycles measured (see inset C above), which, to us, indicates that there is a strong lingering difference in the implicit aftereffects between the abrupt and gradual groups.

2. Secondly, suppose strategies rapidly switched direction during the washout period. Because the gradual participants experienced a larger error during washout, they would adopt a larger strategy to counter the error, relative to the abrupt group. This would lessen the differences between the abrupt and gradual groups, which exceeded 15° on the first washout cycle. In other words, this would mean that the ‘true’ difference in implicit learning is likely larger between the two groups than the washout aftereffects would suggest (which would only further support the hypothesis that the gradual condition led to increased implicit learning).

This being said, we do agree with the overall sentiment that our Saijo and Gomi (2010) analysis is limited. Because participants were not instructed to stop aiming, we cannot know with certainty that the reach angles during the washout period were solely due to implicit learning. We have moved this analysis to an appendix (Appendix 2) in our revised manuscript.

Second, we have added new data to our revised paper to investigate abrupt vs. gradual learning (Experiment 1), in which implicit learning is measured more directly via no-aiming exclusion trials. We discuss these points in greater detail in our response to Point 2, and reproduce parts of that response below for convenience.

Adapted from Point 2 response to editorial comment

In our revised manuscript, we have collected new data to test the competition model’s prediction. These new data use conditions similar to those studied by Saijo and Gomi. In Experiment 1 (new data in the paper), participants were exposed to a 60° rotation, either abruptly (n=36), or in a stepwise manner (n=37) where the perturbation magnitude increased by 15° across 4 distinct learning blocks (Figure 2D). Unlike in Saijo and Gomi, implicit learning was measured via exclusion during each learning period by instructing participants to aim directly towards the target. As we hypothesized, in Figure 1J, we now demonstrate that stepwise rotation onset (which yields smaller target errors) muted the explicit response to the rotation (compared to abrupt learning). The competition model predicts that decreases in explicit strategy should facilitate greater implicit adaptation. To test this prediction, we compared implicit learning across the gradual and abrupt groups during the fourth learning block, where both groups experienced the 60° rotation size (Figure 2E).

Consistent with our hypothesis, participants in the stepwise condition exhibited a 10° reduction in explicit re-aiming (Figure 2F, two-sample t-test, t(71)=4.97, p<0.001, d=1.16), but a concomitant 80% increase in their implicit recalibration (Figure 2G, data, two-sample t-test, t(71)=6.4, p<0.001, d=1.5). To test whether these changes in implicit learning matched the competition model, we fit the independence equation (Equation 5) and the competition equation (Equation 4) to the implicit and explicit reach angles measured in Blocks 1-4, across the stepwise and abrupt conditions, while holding implicit learning parameters constant. In other words, we asked whether the same parameters (ai and bi) could parsimoniously explain the implicit learning patterns measured across all 5 conditions (all 4 stepwise rotation sizes plus the abrupt condition). As expected, the competition model correctly predicted that implicit learning would increase in the stepwise group (Figure 2G, comp., two-sample t-test, t(71)=4.97, p<0.001, d=1.16), unlike the SPE-only learning model (Figure 2G, indep.).

Thus, in our revised manuscript we present new data that confirm the hypothesis we initially explored in the Saijo and Gomi (2010) dataset: decreasing explicit strategy enhances implicit learning. These new data do not present the same limitation noted by the reviewers. Lastly, it is critical to note that while stepwise participants showed greater implicit learning, their total adaptation was approximately 4° lower than the abrupt group (Figure 2E, right-most gray area (last 20 trials); two-sample t-test, t(71)=3.33, p=0.001, d=0.78). This surprising phenomenon is predicted by the competition equation. When strategies increase in the abrupt rotation group, this will tend to increase total adaptation. However, larger strategies leave smaller errors to drive implicit learning. Hence, greater adaptation will be associated with larger strategies, but less implicit learning. Indeed, the competition model predicted 53.47° total learning in the abrupt group but only 50.42° in the stepwise group. Recall that we described this paradoxical phenomenon at the individual-participant level in Point 1. Note that this pattern was also observed by Saijo and Gomi. Surprisingly, when participants were exposed to a 60° rotation in a stepwise manner, total adaptation dropped over 10°, whereas the aftereffect exhibited during the washout period nearly doubled.
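This paradox — more implicit learning but less total adaptation in the stepwise group — falls directly out of the competition update. The sketch below is purely illustrative: the parameters and fixed strategy values are invented, and the update rule is only our shorthand for the competition equation (Equation 4).

```python
# Illustrative simulation of the competition update (our shorthand for
# Equation 4), with invented parameters and fixed explicit strategies.
# A larger strategy yields more total adaptation but less implicit learning.

def simulate(rotation, explicit, a_i=0.95, b_i=0.3, n_trials=200):
    x_i = 0.0
    for _ in range(n_trials):
        target_error = rotation - (x_i + explicit)  # error shared by both systems
        x_i = a_i * x_i + b_i * target_error        # implicit competition update
    return x_i, x_i + explicit                      # (implicit, total adaptation)

implicit_abrupt, total_abrupt = simulate(60.0, explicit=35.0)
implicit_step, total_step = simulate(60.0, explicit=25.0)

assert implicit_step > implicit_abrupt  # smaller strategy -> more implicit learning
assert total_abrupt > total_step        # larger strategy -> more total adaptation
```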

Summary

In our revised paper we have removed this Saijo and Gomi analysis from the main figures, and instead use this as supportive data in Figure 2-Supplement 2, and also in Appendix 2. We now state, “It is interesting to note that these implicit learning patterns are broadly consistent with the observation that gradual rotations improve procedural learning34,43, although these earlier studies did not properly tease apart implicit and explicit adaptation (see the Saijo and Gomi analysis described in Appendix 2).” In sum, our revised analyses no longer depend on the Saijo and Gomi dataset to test critical competition model predictions. For that matter, no critical hypothesis in the revised manuscript rests on ill-estimated implicit learning. Importantly, our original conclusions are still supported by newly collected data in Experiment 1, now described in Figure 2.

3.15. Even if these after-effects did represent implicit adaptation, these results could be explained by Kim et al. 2018 and Day et al. 2016. For a 60° rotation, the implicit component of adaptation will saturate around 15-20° (like for any large perturbation). The explicit component has to compensate for that but it does a better job in the abrupt condition than in the gradual condition where some target error remains. Given that the aiming direction is larger in the abrupt case than in the gradual case, the amount of implicit adaptation is again measured further away from its optimal location in the abrupt case than in the gradual case (Day et al. 2016 and McDougle et al. 2017).

We agree that generalization is an important concern. Please see Point 1 (response to editor) where we describe new analyses on these matters. With that said, we disagree on this particular point that the Saijo and Gomi results can be due to plan-based generalization. Krakauer et al. (2000) measured generalization and varied the number of adaptation targets. With 8 training targets, they observed that adaptation completely generalized. Thus, 8 targets are sufficient to eliminate generalized decay in adaptation. In Saijo and Gomi, the authors used 12 training targets. Adaptation will not exhibit generalization-based decay under these conditions. Thus, this hypothesis is not sufficient to explain the 15° change in aftereffect, assuming, as the reviewer premised, that this is entirely due to implicit adaptation. We note the relationship between generalization and training targets on Lines 1002-1009 in the revised paper.

3.16. There is no proof that introducing a perturbation gradually suppresses explicit learning (line 191). The authors did not provide a citation for that and I don't think there is one. People confound awareness and explicit (also valid for line 212). Rather, given that the implicit system saturates at around 20° for large rotation, I would expect that the re-aiming accounts for 20° of the ~40° of adaptation in the gradual condition.

A recent study by Yin and Wei (2020) (Figure 4) demonstrated that gradual perturbations suppress explicit strategy use. For example, in Figure 4A, the authors measured implicit and explicit learning using a report paradigm. In C and D, they show implicit and explicit learning in the gradual group. 12 out of 28 participants never used a re-aiming strategy during gradual adaptation (Figure 4D). 16 out of 28 participants did report some re-aiming during gradual adaptation, but this was suppressed by approximately 80% relative to the abrupt group (compare Figure 4C, the gradual participants, to Figure 4A, the abrupt participants). We have added this citation at the appropriate location in the revised paper.

Finally, in the revised manuscript, we corroborate this recent study in Experiment 1. Participants in the stepwise adaptation group exhibited a 10° reduction in explicit strategy (see Figure 2F) relative to the abrupt group. Thus, our claim that gradual rotations reduce explicit strategy use is validated both by past literature and by new data in our revised manuscript. Lastly, please see Point 3-17 below, where we corroborate the Saijo and Gomi analysis using a larger explicit strategy in the gradual rotation group.

3.17. Line 205-215: The chosen parameters appear very subjective. The authors should perform a sensitivity analysis to demonstrate that their conclusions do not depend on these specific parameters.

In our Saijo and Gomi (2010) analysis, we examined the implicit and explicit responses to abrupt and stepwise rotations. These simulations required two assumptions: (1) that explicit strategy reached 35.5° in the abrupt group, (2) but remained 0° in the gradual rotation group. The abrupt strategy level was not chosen arbitrarily: Neville and Cressman (2018) observed that the steady-state explicit response to a 60° rotation was equal to approximately 35.5°. However, we agree with the reviewer that our other assumption, that no strategy developed in the gradual group, is based on a subjective expectation. It could be that subjects in this group did develop a small strategy.

Fortunately, our simulations are not overly sensitive to the exact explicit strategies chosen in the abrupt and gradual conditions. The only important constraint is that abrupt strategy must be greater than gradual strategy. We have now corroborated this assumption in Experiment 1 in the revised manuscript. To demonstrate that altered strategy levels yield similar results in our Saijo and Gomi analysis, we have now repeated the simulations, but this time, chose a 10° strategy in the gradual condition (as opposed to 0°). We show the original analysis in Figure 2-Supplement 3 in the right column. We show the new control analysis in the left column.

Note that increasing explicit strategy use in the gradual condition to 10° (left column above) still produces the same qualitative results: (1) the abrupt group achieves greater total adaptation (see A), and (2) the implicit system reaches a larger saturation level in the gradual group (see C), which yields washout aftereffects in the competition model that closely resemble actual behavior (see D). We have added Figure 2-Supplement 3 to the revised manuscript and describe this analysis on Lines 1137-1138.

3.18. Figure 3: The after-effect in the study of Fernandez-Ruiz et al. does not solely represent the implicit adaptation component as these researchers did not tell their participants that they should stop aiming during the washout.

We appreciate this concern. We no longer include the Fernandez-Ruiz study in our revised Results section.

3.19. Correlations based on N=9 should not be considered as meaningful. Such correlation is subject to the statistical significance fallacy: it can only be true if it is significant with N=9 while this represents a fallacy (Button et al. 2013).

We agree with this concern. Unfortunately, our studies were halted in March 2020 due to the Covid-19 pandemic. However, we have now collected a laptop-based control study to corroborate this experiment with a much larger sample size. In Experiment 3 (previously Experiment 2) we tested two groups: a Limit PT group and a No PT Limit group. The latter included n=35 participants with which to analyze correlations between implicit and explicit learning. We used the new Limit PT group to measure the implicit learning properties that were used to make model predictions, which were missing in the original paper (see Point 1-11 above). We obtained similarly striking matches between the data and model, as shown in Figure 3Q.

In addition, our revised paper provided several ways to corroborate the implicit-explicit correlations noted by the reviewer in their concern. These new analyses are:

1. Experiment 1, added a stepwise learning group: n=37 (see Figure 5D).

2. Experiment 1, investigated abrupt and stepwise learning: n=70 (see Figure S4-1H).

3. Figure 1, added new analysis of Tsay et al., 2021 (n=25/group, see Figure S4-1I).

4. Figure 4-Supplement 2, added new analysis of Maresch et al., 2021 (n=40, See Figure S4-1G).

These new data corroborate and extend the conclusions we reached in our initial paper. Critically, we now reproduce our correlation analysis in Figure 3H (now Figure 3G) within our more highly powered experiments. For example, we show the same match between the competition model and data across participants in Experiment 3 (n=35, Figure 3Q). We also observe strong negative implicit-explicit correlations that agree with the competition model in the stepwise group in Experiment 1 (n=37, Figure 5D). Similar correlations were also detected in the 60° rotation conditions in the Maresch et al., 2021 (n=40, Figure 4-Supplement 1G) and Tsay et al., 2021 (n=25, Figure 4-Supplement 1I) data sets.

Experiment 1 of the authors suffers from several limitations:

3.20A. small sample size (N=9 for one group). Why is the sample size different between the two groups?

Yes, we agree that the sample sizes were limited in this experiment. Please see Point 3-19 above. Sample sizes did not match because these studies both used rolling recruitment that was interrupted by the Covid-19 pandemic. Note, however, that we have corroborated these results several times in the revised paper across numerous experiments (again, see Point 3-19 above). Most notably, we have used a similar design in Experiment 3 (previously Experiment 2) and tested n=35 participants in the No PT Limit group, and n=21 subjects in the Limit PT group. These new data are shown in our revisions to Figure 3. Note that more subjects were recruited in the No PT Limit group to bolster our participant-level implicit-explicit correlation analysis.

3.20B. Limiting RT does not abolish the explicit component of adaptation (Line 1027: where is the evidence that limit PT is effective in abolishing explicit re-aiming?). Haith and colleagues limited reaction time on a small subset of trials. If the authors want to use this manipulation to limit re-aiming, they should first demonstrate that this manipulation is effective in doing so (other authors have used this manipulation but have failed to validate it first). I wonder why the authors did not measure the implicit component for their limit PT group like they did in the No PT limit group. That is required to validate their manipulation.

We appreciate this concern. We have validated that limiting preparation time abolishes explicit strategy as claimed. We address this multiple times above, and also in our response to Point 3 (response to editor).

Excerpts adapted from Point 3 response

We agree that it is important to test how effectively this condition limits explicit strategy use. Therefore, we have performed two additional control studies to measure explicit strategy in the limited PT condition. First, we added a limited preparation time (Limit PT) group to our laptop-based study in Experiment 3 (Experiment 2 in original manuscript). In Experiment 3, participants in the Limit PT group (n=21) adapted to a 30° rotation but under a limited preparation time condition. As with the Limit PT group in Experiment 2 (Experiment 1 in original paper), we imposed a bound on reaction time to suppress movement preparation time. However, unlike Experiment 2, once the rotation ended, participants were told to stop re-aiming. This permitted us to examine whether limiting preparation time suppressed explicit strategy as intended. Our analysis of these new data is shown in Figures 3K-Q.

[…]

Summary

We collected additional experimental conditions in Experiment 3 in the revised manuscript (a Limit PT group and a decay-only group). Data in the Limit PT condition suggested that explicit strategies are suppressed to approximately 2° by our preparation time limit (compared to about 12° under normal conditions). Data in the decay-only group suggest that this 2° change in reach angle was not due to a cached explicit strategy but rather time-dependent implicit decay. Together, these conditions demonstrate that limiting reaction time in our experiments prevents the caching of explicit strategies; our limited preparation time measures do indeed reflect implicit learning.

3.20C. The negative correlation from Figure 3H can be explained by the fact that the SPE is anchored at the aiming direction and that, the larger the aiming direction is, the further the authors are measuring implicit adaptation away from its optimal location.

Yes, we appreciate that generalization may contribute to the negative correlations we observed between implicit and explicit learning. We have addressed this at several points above, most notably in Point 1 and also Point 3-1. Let us reiterate the most important pieces in our response below.

Adapted excerpts from Points 1 and 3-1

In the revised manuscript, we compare the relationship between implicit and explicit learning in Experiments 2 and 3 with past generalization curves. These curves were measured by Krakauer et al., 2000 and also Day et al., 2016. The latter paper is most relevant as it clearly delineated implicit and explicit learning. The new analysis is documented in Figure 4 in the revised manuscript. In Figures 5B and C, we show dimensioned (i.e., in degrees) relationships between implicit and explicit learning. In Figure 5A, we show a dimensionless implicit measure in which implicit learning is normalized to its “zero-strategy” level. Empirical analysis demonstrated that implicit learning in Experiments 2 and 3 (black and brown points) declined over 300% more rapidly with increases in explicit strategy than predicted by the generalization curve measured by Day et al. (2016). Thus, plan-based generalization does not appear to be a viable alternative to the competition model, though it could make minor contributions to the implicit-explicit correlations we observed.

These analyses have relied on an empirical comparison between our data and past generalization studies. But we can also demonstrate mathematically that the implicit and explicit learning patterns we measured are inconsistent with the generalization hypothesis. We derived an SPE generalization model in response to Point 1 (response to editor). In this model, implicit learning is driven by SPEs, but exhibits plan-based generalization. The competition model and SPE generalization model both predict a negative correlation between implicit and explicit learning, but only the SPE model suggests that this relationship (i.e., gain) is altered by rotation magnitude. Thus, to compare these two models we would need to measure the gain relating implicit and explicit adaptation, across multiple rotation sizes. To do this, we have collected additional data. These data are reported in Experiment 1 in the revised manuscript. Experiment 1 includes a step-wise perturbation condition where participants (n=37) were initially exposed to a 15° rotation, then a 30° rotation, a 45° rotation, and lastly a 60° rotation. Implicit learning was measured via exclusion (i.e., no aiming) probes for each rotation size. To calculate explicit strategy, we subtracted these implicit measures from the total reach compensation measured on the last 10 trials in each rotation block. These data are reported in Figures 5D-F in the revised manuscript.

We calculated each model’s implicit learning gain (p in Point 1) that best matched the measured implicit-explicit relationship. We calculated this value in the 60° learning block alone, holding out all other rotation sizes. We then used this gain to predict the implicit-explicit relationship across the held-out rotation sizes. The competition model is shown in the solid black line in Figure 5D. The Day et al. (2016) generalization model is shown in the gray line. The total prediction error across the held-out 15°, 30°, and 45° rotation periods was two times larger in the SPE generalization model (Figure 5E, rm-ANOVA, F(2,35)=38.7, p<0.001, ηp2=0.689; post-hoc comparisons all p<0.001). Issues with the SPE generalization model were not caused by misestimating the generalization gain, m. We fit the SPE generalization model again, this time allowing both p and m to vary to best capture behavior in the 60° period (Figure 5D, SPE gen. best B4). This optimized model generalized very poorly to the held-out data, yielding a prediction error three times larger than the competition model (Figure 4G, SPE gen best B4).

To understand why the competition model yielded superior predictions, we fit separate linear regressions to the implicit-explicit relationship measured during each rotation period. The regression slopes and 95% CIs are shown in Figure 5F (data). Remarkably, the measured implicit-explicit slope appeared to be constant across all rotation magnitudes in agreement with the competition theory (Figure 5F, competition). These measurements sharply contrasted with the SPE generalization model, which predicted that the regression slope would decline as the rotation magnitude decreased.
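The constant implicit-explicit slope follows from the steady state of the competition model: implicit learning equals a fixed gain times (rotation minus strategy), so the regression of implicit on explicit has slope equal to the negative of that gain, regardless of rotation size. A minimal simulation makes the point (the parameters and "subjects" are invented for illustration):

```python
# Why the competition model predicts a rotation-independent slope:
# at steady state, implicit = g * (rotation - explicit), with
# g = b_i / (1 - a_i + b_i), so regressing implicit on explicit yields
# slope -g for every rotation size. Parameters and subjects are invented.
import numpy as np

a_i, b_i = 0.95, 0.3
g = b_i / (1 - a_i + b_i)

rng = np.random.default_rng(0)
for rotation in [15.0, 30.0, 45.0, 60.0]:
    explicit = rng.uniform(0, rotation, size=37)   # simulated strategies
    implicit = g * (rotation - explicit)           # competition steady state
    slope = np.polyfit(explicit, implicit, 1)[0]   # regression slope
    print(rotation, round(slope, 3))               # ~ -g for every rotation
```

An SPE generalization model, by contrast, ties the implicit decline to the generalization curve evaluated at the aiming direction, which changes the slope as rotation magnitude changes.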

In summary, our data in Experiment 1 were poorly described by an SPE learning model with aim-based generalization. While generalization may contribute to the measured relationship between implicit and explicit adaptation, its contribution is small relative to the competition theory. These new analyses are described in Section 2.2 in the revised paper.

3.20D. The difference between the PT limit and NO PT limit is not very convincing. First, the difference is barely significant (line 268). Why did the authors use the last 10 epochs for experiment 1 and the last 15 for experiment 2? This looks like a post-hoc decision to me and the authors should motivate their choice and should demonstrate that their results hold for different choice of epochs (last 5, 10, 15 and 20) to demonstrate the robustness of their results. Second, the degrees of freedom of the t-test (line 268) does not match the number of participants (9 vs. 13 but t(30)?)

We appreciate the reviewer’s concerns. We thank the reviewer for catching two typos. (1) First, we did use the last 10 epochs for both Experiment 1 (now Experiment 2) and Experiment 2 (now Experiment 3) but had incorrectly written 15 epochs in our original manuscript. This issue has been corrected. (2) Second, we misreported this statistic in our original paper. In the revised manuscript we have corrected this: “…learning proceeded more slowly and was less complete under the PT Limit (compare Figures 3B and E; two-sample t-test on last 10 adaptation epochs: t(20)=3.27, p=0.004, d=1.42).” In sum, our results were significant at the level p=0.004. In addition, the large effect size (d=1.42) does not support the reviewer’s criticism that the PT Limit and No PT Limit groups had little difference. We have also corroborated this result in Experiment 3, where both the PT Limit and No PT Limit groups possessed higher sample sizes (n=35 for No PT Limit, n=21 for Limit PT). The new data supported the same conclusion (t(54)=5.58, p<0.001, d=1.54).

Nevertheless, despite this conclusive evidence that limiting preparation time suppressed total adaptation, we have completed the analysis the reviewer requested. We repeated our analysis in Experiment 1 (now Experiment 2) using the last 5, 15, and 20 cycles to measure whether limiting preparation time suppressed total learning. We detected statistically significant differences across all cases, each with similar effect sizes.

– 5 cycles: t(20)=3.21, p=0.0044, d=1.39

– 10 cycles: t(20)=3.27, p=0.004, d=1.42

– 15 cycles: t(20)=3.27, p=0.0038, d=1.42

– 20 cycles: t(20)=2.96, p=0.0077, d=1.28

Thus, our revised manuscript further corroborates that limiting preparation time leads to a reduction in total adaptation. We have also noted on Lines 1434-1437, that these conclusions are not dependent on the number of cycles used to calculate total adaptation.

3.20E. Why did the authors measure the explicit strategy via report (Figure S5E) while they don't use those values for the correlations? This looks like a post-hoc decision to me.

We ended Experiment 2 (Experiment 1 in the original manuscript) by asking participants to verbally report where they remembered aiming in order to hit the target. While many studies use similar reports to tease apart implicit and explicit strategy (e.g., McDougle et al., 2015), our probe had important limitations. Normally, participants are probed at various points throughout adaptation by asking them to report where they plan to aim on a given trial, and then allowing them to execute their reaching movement (thus they are able to continuously evaluate and familiarize themselves with their aiming strategy over time). However, we did not use this reporting paradigm given recent evidence (Maresch et al., 2021) that reporting explicit strategies increases explicit strategy use, which we expected would have the undesirable consequence of suppressing implicit learning (given the competition theory). Thus, in Experiment 2, reporting was measured only once; participants tried to recall where they were aiming throughout the experiment.

This methodology may have reduced the reliability of this measure of explicit adaptation. As stated in our Methods, we observed that participants were prone to incorrectly reporting their aiming direction, with several angles reported in the perturbation’s direction rather than the compensatory direction. In fact, 25% of participant responses were reported in the incorrect direction. Thus, as outlined in our Methods, we chose to take the absolute value of these measures, given recent evidence that strategies are prone to sign errors (McDougle and Taylor, 2019). That is, we assumed that participants remembered their aiming magnitude but misreported its orientation. In sum, we suspect that our report-based strategy measures are prone to inaccuracies, and we opted to interpret them cautiously and sparingly in our original manuscript.

Nevertheless, in the revised manuscript, we now also report the individual-level relationships between report-based implicit learning and report-based explicit strategy in Experiment 2 (previously Experiment 1). These are now illustrated in Figure 3-Supplement 2C. While reported explicit strategies were on average greater than our probe-based measure, and report-based implicit learning was smaller than our probe-based measure (Figure 3-Supplement 2A and B; paired t-test, t(8)=2.59, p=0.032, d=0.7), the report-based measures exhibited a strong correlation that aligned with the competition model’s prediction (Figure 3-Supplement 2C; R2=0.95; slope of -0.93 with 95% CI [-1.11, -0.75] and intercept of 25.51° with 95% CI [22.69°, 28.34°]).

In summary, we have now added report-based implicit learning and explicit learning measurements to our revised manuscript. We show that these measures also exhibit a strong negative correlation with good agreement to the competition model. However, we still feel that it is important to question the reliability of these report-based measures (given that they were collected only once at the end of the experiment, when no longer in a ‘reaching context’). Our new analysis is described on Lines 319-327 in the paper.

Experiment 2:

3.21A. This experiment is based on a small sample size to test for correlations (N=17). What was the objective criterion used to stop data collection at N=17 and not at another sample size? This should be reported.

We did not use a stopping criterion when recruiting for our laptop-based studies (Experiment 3). To recruit participants, we used rolling admission. At Johns Hopkins, there is a service whereby the school advertises studies in an email newsletter (Today’s Announcements). We posted about this study in the newsletter and then recruited all eligible and interested participants who responded to the advertisement. However, in response to your concern that n=17 is too small for our correlation analysis, we reached out to a new set of participants and have roughly doubled our sample size in Experiment 3 (the No PT Limit group) to n=35. In our revised manuscript, we show strong correlations between explicit strategy and implicit learning in our revisions to Figure 3. In addition, we have added many new datasets to the revised manuscript to test the relationship between implicit and explicit learning. We observed strong negative correlations in each of these new datasets (Experiment 1 in Figures 5D and S4-1H, Maresch et al., 2021 in Figure S4-1G, and Tsay et al. in Figure S4-1I). Thus, our revised manuscript strongly indicates that a negative correlation exists between implicit and explicit learning. This conclusion is not based on a sole study with n=17, but now on 4 studies with n=35, n=40, n=70, and n=25.

3.21B. This experiment does not decouple implicit and explicit in contrast to what the authors pretend. If the authors believe that the amount of implicit adaptation follows a state-space model, then the measure of early implicit is correlated to the amount of late implicit because Implicit_late = (ai)^N * Implicit_early where N is the number of trials between early and late. Therefore the two measures are not properly decoupled. To decouple them, the authors should use two separate ways of measuring implicit and explicit. To measure explicit, they could use aim report (Taylor, Krakauer and Ivry 2011) and to measure implicit independently, they could use the after-effect by asking the participants to stop aiming as done here. They actually have done so in experiment 1 but did not use these values.

We appreciate your concern. This is the same comment noted in Point 8B in the editorial summary. Please see our response to Point 8B.

Figure 4:

3.22A. Data from the last panel of Figure 4B (line 321-333) should be analyzed with a 2x2 ANOVA and not by a t-test if the authors want to make the point that the learning differences were higher in high PT than low PT trials (Nieuwenhuis et al., Nature Neuroscience, 2011).

We appreciate this suggestion. We have now updated the relevant analysis in our revised manuscript. We used a two-way repeated measures ANOVA. We reached the same conclusion: when measuring savings in Haith et al., we observed that the learning rate increased during the second exposure on high preparation time trials, but not on low preparation time trials (Figure 6B, right; two-way rm-ANOVA, preparation time by exposure number interaction, F(1,13)=5.29, p=0.039; significant interaction followed by one-way rm-ANOVA across Days 1 and 2: high preparation time with F(1,13)=6.53, p=0.024, ηp2=0.335; low preparation time with F(1,13)=1.11, p=0.312, ηp2=0.079). We corroborate this rate analysis by measuring early changes in reach angle (first 40 trials following rotation onset) across Days 1 and 2 (Figure 6C, left and middle). Only high prep. time trials exhibited a statistically significant increase in reach angles, consistent with savings (Figure 6C, right; two-way rm-ANOVA, preparation time by exposure interaction, F(1,13)=13.79, p=0.003; significant interaction followed by 1-way rm-ANOVA over days: high prep. time with F(1,13)=11.84, p=0.004, ηp2=0.477; low preparation time with F(1,13)=0.029, p=0.867, ηp2=0.002).

3.22B. Lines 329-330: the authors should demonstrate that the explicit strategy is actually suppressed by measuring it via report. It is possible that limiting PT reduces the explicit strategy but it might not suppress it. Therefore, any remaining amount of explicit strategy could be subject to savings.

This is an important concern. In the revised manuscript, we demonstrate that our limited preparation time method eliminates explicit strategy use. We completely document our new data and analyses in Points 3 (response to editor summary) and 3-20B above.

3.22C. Coltman and Gribble (2019) demonstrated that fitting state-space models to individual subject's data was highly unreliable and that a bootstrap procedure should be preferred. Lines 344-353: the authors should replace their t-tests with permutation tests in order to avoid fitting the model to individual subject's data.

We appreciate the reviewer’s concern but would argue that this is not an issue in our analysis. We have rather extensive experience with state-space models, and have shown that individual-participant fitting procedures can be unreliable when using a least-squares fitting technique (Albert and Shadmehr, 2018). This is a particular issue for two-state models. In the two-state model, it is hard to estimate the retention factors and error sensitivities of the ‘slow state’ and ‘fast state’ because these states are never measured directly. Rather, the data are fit to the sum of the two states, i.e., the overall measured reach angle. Many slow- and fast-state combinations can yield similar overall behavior, which means there is little power to estimate the slow- and fast-state model parameters. This is the crux of the issue.

The competition and independence models would qualify as two-state models, because they possess an ‘implicit state’ and an ‘explicit state’. However, they do not suffer from the issue highlighted above. Our model assumes that low preparation time trials reflect implicit learning alone, and that high preparation time trials reflect implicit plus explicit learning. Thus, the ground-truth estimates for implicit learning (Low PT) and explicit learning (High PT minus Low PT) are inherently embedded in the fit. To ensure that our fitting procedure has access to both the implicit and explicit states, we fit both High PT trials and Low PT trials in our least-squares algorithm. We further describe our least-squares method in our response to Point 3-22D below.
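To make the structure of this fit concrete, here is a minimal sketch (our own illustrative code, not the analysis scripts from the paper; the variable names, the 120-trial constant rotation, the synthetic parameter values, and the initial guess are all assumptions) showing how Low PT and High PT trials jointly constrain the two states in a least-squares fit:

```python
# Hypothetical sketch of the fitting logic described above: Low PT trials
# constrain the implicit state directly, while High PT trials constrain
# the sum of the implicit and explicit states.
import numpy as np
from scipy.optimize import least_squares

def simulate(params, rotation):
    ai, ae, bi, be = params  # retention factors and error sensitivities
    n = len(rotation)
    xi = np.zeros(n)  # implicit state (expressed on Low PT trials)
    xe = np.zeros(n)  # explicit state (added on High PT trials)
    for t in range(n - 1):
        err = rotation[t] - (xi[t] + xe[t])  # shared target error
        xi[t + 1] = ai * xi[t] + bi * err
        xe[t + 1] = ae * xe[t] + be * err
    return xi, xi + xe  # Low PT prediction, High PT prediction

def residuals(params, rotation, low_pt, high_pt):
    pred_low, pred_high = simulate(params, rotation)
    # Both trial types enter the cost, so the implicit and explicit
    # states are each directly constrained by data.
    return np.concatenate([pred_low - low_pt, pred_high - high_pt])

# Synthetic demonstration: generate noiseless data with known parameters,
# then recover them from both trial series.
rot = np.full(120, 30.0)
low, high = simulate([0.98, 0.93, 0.06, 0.06], rot)
fit = least_squares(residuals, x0=[0.9, 0.9, 0.1, 0.1],
                    args=(rot, low, high), bounds=(0.0, 1.0))
```

Because the implicit state is pinned to the Low PT series (and the explicit state to the High-minus-Low difference), the degeneracy that plagues generic slow/fast two-state fits does not arise here.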

3.22D. It is unclear how the error sensitivity parameters analysed in Figure 4D were obtained. I can follow in the Results section but there is basically nothing on this in the methods. This needs to be expanded.

We appreciate that our fitting procedure was difficult to interpret. In the revised paper, we have expanded our description about this analysis on Lines 1665-1692 in our Methods. This is reproduced below.

“Finally, we also used a state-space model of learning to measure properties of implicit and explicit learning during each exposure. We modeled implicit learning according to (Equation 3) and explicit learning according to (Equation 7). In our competition theory, we used target error as the error in both the implicit and explicit state-space equations. In our SPE model, we used target error as the explicit system’s error, and SPE as the implicit system’s error.

[…]

Finally, we also fit the target-error (Equation 1) model to the mean behavior across all participants in Exposure 1 and Exposure 2. We obtained the parameter set: ai=0.9829, ae=0.9278, bi,1=0.0629, bi,2=0.089, be,1=0.0632, be,2=0.1078. Note that the subscripts 1 and 2 denote error sensitivity during Exposure 1 and 2, respectively. These parameters were used for our simulations in Figure 7 (see Competition Map).”
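As an illustration (our own sketch, not code from the paper; the 30° rotation and the 200-trial exposure length are arbitrary choices), simulating the quoted target-error model with the Exposure 1 versus Exposure 2 error sensitivities shows the faster relearning these parameters encode:

```python
# Hypothetical simulation of the quoted target-error (competition) model.
import numpy as np

def adapt(ai, ae, bi, be, rotation=30.0, n_trials=200):
    xi = xe = 0.0
    total = []
    for _ in range(n_trials):
        err = rotation - (xi + xe)  # shared target error
        xi = ai * xi + bi * err     # implicit update
        xe = ae * xe + be * err     # explicit update
        total.append(xi + xe)
    return np.asarray(total)

# Exposure 1 vs. Exposure 2 parameters quoted above: identical retention
# factors, higher error sensitivities on re-exposure.
exp1 = adapt(0.9829, 0.9278, bi=0.0629, be=0.0632)
exp2 = adapt(0.9829, 0.9278, bi=0.0890, be=0.1078)
```

Early adaptation is larger in the second simulated exposure, i.e., the increased error sensitivities alone produce savings.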

3.22E. The authors make the assumption that the amount of implicit adaptation is the same for the high PT target and for the low PT target. What is the evidence that this assumption is reasonable? Those two targets are far apart while implicit adaptation only generalizes locally. Furthermore, low PT target is visited 4 times less frequently than the high PT target. The authors should redo the experiment and should measure the implicit component for the high PT trials to make sure that it is related to the implicit component for the low PT trials.

The reviewer appears to have misunderstood the task we used in Haith et al., 2015. The High PT and Low PT targets are one and the same. We should have explained this better in our Methods section. We have revised our Methods on 1638-1644.

“Briefly, participants (n=14) performed reaching movements to two targets, T1 and T2, under a controlled preparation time scenario. To control movement preparation time, four audio tones were played (at 500 ms intervals) and participants were instructed to reach coincident with the fourth tone. On high preparation time trials (High PT), target T1 was shown during the entire tone sequence. On low preparation time trials (Low PT), T2 was initially shown, but was then switched to target T1 approximately 300 ms prior to the fourth tone. High PT trials were more probable (80%) than Low PT trials (20%).”

3.22F. I don't understand why the authors illustrated the results from line 344-349 on Figure 4D and not the results of the following lines, which are also plausible. By doing so, the authors biased the results in favor of their preferred hypothesis. This figure is by no means a proof that the competition hypothesis is true. It shows that "if" the competition hypothesis is true, then there are surprising results ahead. The authors should do a better job at explaining that both models provide different interpretation of the data.

We agree with the reviewer. We have made the suggested changes. In the revised manuscript, we show the error sensitivities predicted by the competition model in Figure 6D and have now added independence model predictions to Figure 6E. In addition, we better explain that neither model is ‘true’. Rather, the models provide two different ways to interpret the data. Our revised text on Lines 555-560 is shown below:

“In summary, when we reanalyzed our earlier data, the competition and independence theories suggested that our data could be explained by two contrasting hypothetical outcomes. If we assumed that implicit and explicit systems were independent, then only explicit learning contributed to savings, as we concluded in our original report. However, if we assumed that the implicit and explicit systems learned from the same error (competition model), then both implicit and explicit systems contributed to savings. Which interpretation is more parsimonious with measured behavior?”

3.23. Figure 5: This figure also represent a biased view of the results (like Figure 4D). The competition hypothesis is presented in details while the alternative hypothesis is missing. How would the competition map look like with separate errors, especially when taking into account that the SPE (and implicit adaptation) is anchored at the aiming direction (generalization)?

We provided the competition map in Figure 7 (previously Figure 5) not to present a biased view, but because the competition model’s behavior is genuinely unintuitive. In the SPE model, implicit learning is driven by SPEs, which are unaltered by explicit strategy. Thus, when implicit learning increases, it must be due to an increase in implicit error sensitivity. Similarly, when implicit learning decreases, it must be due to a drop in implicit error sensitivity. We do not feel it is necessary to explore this intuitive behavior, which is already assumed in most motor learning studies.

The competition model, however, is not intuitive. An increase or decrease in implicit learning, by itself, has no direct relation to an increase or decrease in implicit error sensitivity. Increases in implicit learning could be due to an increase in implicit error sensitivity, a drop in explicit strategy, or some combination thereof. Decreases in implicit learning could be due to a decrease in implicit error sensitivity, an increase in explicit strategy, or some combination thereof. Our goal with the competition map in Figure 7 is to illustrate each of these scenarios, in an attempt to clear up potential confusion the reader may have.
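This trade-off can be seen directly in the model’s steady state. With the standard state-space updates, asymptotic implicit learning is xi = gi·r/(1 + gi + ge), where gi = bi/(1-ai) and ge = be/(1-ae). A small sketch (our own illustrative parameter values, not fits from the paper) shows that raising explicit error sensitivity lowers implicit learning even when implicit error sensitivity is untouched:

```python
# Closed-form steady state of the competition model (illustrative).
# At steady state: xi = gi*e, xe = ge*e, and e = r - xi - xe,
# which yields xi = gi * r / (1 + gi + ge).
def steady_state_implicit(r, ai, bi, ae, be):
    gi = bi / (1.0 - ai)  # implicit gain: error sensitivity / forgetting
    ge = be / (1.0 - ae)  # explicit gain
    return gi * r / (1.0 + gi + ge)

r = 30.0  # rotation size in degrees (illustrative)
base = steady_state_implicit(r, ai=0.98, bi=0.06, ae=0.93, be=0.06)
more_explicit = steady_state_implicit(r, ai=0.98, bi=0.06, ae=0.93, be=0.12)
```

Doubling be reduces asymptotic implicit learning even though bi is identical in both cases; this siphoning effect is exactly what the competition map visualizes.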

With that said, in our revised manuscript, we now consider an SPE generalization model as the reviewer suggests. We agree that this model’s behavior is more complicated. We explore this model’s behavior in Section 2.2 of our revised manuscript, and show that it is not consistent with our data in Experiments 1-3 (Figure 5 in the revised paper). For much greater detail on this matter, see our response to Point 1 (editor summary response in separate document).

Figure 6:

3.24A. This figure is compatible with the fact that limiting PT on all trials is not efficient to suppress explicit adaptation, at least less so than doing it on 20% of the trials (see McDougle et al. 2019 for why this is the case) and that the remaining explicit adaptation leads to savings.

We appreciate this concern about how a cached explicit strategy might contribute to savings. It still seems unlikely to us that the conditions in Experiment 4 (our savings experiment) were more conducive to a cached explicit strategy than those used in Haith et al., 2015. For example, in the paper cited by the reviewer (McDougle et al., 2019), the authors show that caching is weakened by increasing the number of training targets. In Haith et al., 2015, while there were two targets, the rotation was only active at one target. Thus, participants would only need to cache a single reaching plan. Experiment 4, however, used four training targets. On this point alone we would expect less explicit caching.

Nevertheless, we now clearly state in the revised text that explicit caching could have contributed to the savings measured in Experiment 4. The relevant passage in our Results now reads: “In sum, when explicit learning was inhibited on every trial, low preparation time behavior showed savings (Figure 8B). But when explicit learning was inhibited less frequently, low preparation time behavior did not exhibit a statistically significant increase in learning rate (Figure 8A). The competition theory provided a possible explanation; that an implicit system expressible at low preparation time exhibits savings, but these changes in implicit error sensitivity can be masked by competition with explicit strategy.

However, the savings we measured at limited preparation time may not be due solely to changes in the implicit learning system, but also to cached explicit strategies22,45.”

These points, however, are somewhat moot. In the revised manuscript, we now directly test whether our limit preparation time condition permits explicit caching. Please see our responses to Points 3 (response to editor summary) and 3-20B above for a more detailed response on this matter. Briefly, our new control experiments suggest that the limited preparation time conditions we use allow little to no explicit strategy. Thus, our revised manuscript more strongly supports our conclusion that implicit adaptation contributed to savings in Experiment 4.

3.24B. I don't understand why the authors did not measure implicit (and therefore explicit adaptation) directly in these experiment like they did in experiment 2. This would have given the authors a direct readout of the implicit component of adaptation and would have validated the fact that limiting PT might be a good way to suppress explicit adaptation. Again, this proof is missing in the paper.

We appreciate this concern. We also agree that these data would be valuable. In our task design, we did not want to interrupt the learning process by instructing participants to stop aiming. We were concerned that this interruption could lead to decay in implicit learning (due to time passage) that would impact our ability to measure implicit savings. Indeed, as described in Points 3 and 3-20B above, our newly collected decay-only group in Experiment 3 exhibits time-based decay in implicit learning. Nevertheless, we now show in the revised paper that limiting preparation time isolates the implicit learning system, by stopping explicit strategy use. Please see Points 3 (response to editor) and 3-20B above for more detail on this matter.

3.24C. Experiment 3 is based on a very limited number of participants (N=10!). No individual data are presented.

We have now added the individual participants to Figure 8 (Experiment 4, which was previously Experiment 3). As shown in Figure 8C, 9 out of 10 participants showed an increase in both the initial response to the perturbation and the rate of learning.

3.24D. Data from Figure 6C should be analyzed with an ANOVA and an interaction between the factor experiment (Haith vs. experiment 3) and block (1 vs. 2) should be demonstrated to support such conclusion (Nieuwenhuis et al. 2011).

We appreciate this suggestion. We have now updated the relevant analysis in our revised manuscript. We used a mixed-ANOVA. We reached the same conclusion as before: when measuring savings under limited preparation time in Experiment 4, we still observed the opposite outcome to Haith et al., 2015. Notably, low preparation time learning rates increased by more than 80% in Experiment 4 (Figure 8C top; mixed-ANOVA, exposure no. by experiment type interaction, F(1,22)=5.993, p=0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p=0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p=0.045, ηp2=0.377). Statistically significant increases in reach angle were detected immediately following rotation onset in Experiment 4 (Figure 8B, bottom), but not in our earlier data (Figure 8C, bottom; mixed-ANOVA exposure number by experiment interaction, F(1,22)=4.411, p=0.047; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p=0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p=0.008, ηp2=0.556).

Figure 7:

3.25A. The data contained in this figure suffer from the same limitations as in the previous graphs: limiting PT does not exclude explicit strategy, sample size is small (N=10 per group), no direct measure of implicit or explicit is provided.

We appreciate this concern. As detailed above in several responses, we have now verified that our limited preparation time condition prevents explicit strategy use. Thus, by limiting preparation time in Experiment 5 (previously Experiment 4), we are measuring the implicit contributions to learning. Please see our responses to Points 3 (response to editor summary), 1-5, and 3-20B above for more details on this matter.

Secondly, we should note that we have added individual participant data to Figure 9C. We observed a very strong anterograde interference effect; all 20 participants (9 in the 5-min group, 11 in the 24-hr group) exhibited impaired learning rates upon exposure to the opposing rotation (as evidenced by the normalized learning rates in Figure 9C all being less than 1).

3.25B. In addition, no statistical tests are provided beyond the two stars on the graph. The data should be analyzed with an ANOVA (Nieuwenhuis et al. 2011).

Thank you for this suggestion. In the revised manuscript we now use a two-way ANOVA to test whether the anterograde learning deficit was altered by limiting preparation time and increasing the time delay between exposures 1 and 2. We reached the same conclusion as before: while both low preparation time and high preparation time trials exhibited decreases in learning rate which improved with the passage of time (Figure 9C; two-way ANOVA, main effect of time delay, F(1,50)=5.643, p=0.021, ηp2=0.101), these impairments were greatly exacerbated by limiting preparation time (Figure 9C; two-way ANOVA, main effect of preparation time, F(1,50)=11.747, p=0.001, ηp2=0.19).

3.25C. The number of participants per group is never provided (N=20 for both groups together).

We apologize for the omission. In the revised manuscript we now show individual participant data in Figure 9C. In addition, we now state that the 5-min group and 24-hr group in Lerner et al., 2021 had n=16 and n=18 participants, respectively. Experiment 5 included n=20 participants (10 male, 10 female), with n=9 in the 5-min group and n=11 in the 24-hr group.

3.25D. It is unclear to me how this result contributes to the dissociation between the separate and shared error models.

Yes, the reviewer is correct. The anterograde interference experiment (Experiment 5) is not intended to test the dissociation between the separated and shared error models. Rather, it is intended to complement our savings paradigm in Experiment 4. The natural question arises: can the implicit system bi-directionally modulate its error sensitivity? We test for increases in Experiment 4. We test for decreases in Experiment 5.

Figure 8:

3.26A.The whole explanation here seems post-hoc because none of the two models actually account for this data. The authors had to adapt the model to account for this data. Note that despite that, the model would fail to explain the data from Kim et al. 2018 that represents a very similar task manipulation.

The reason we include the data from Mazzoni and Krakauer, 2006, and Taylor and Ivry, 2011, is to show that both the competition model and the independence model have limitations. The competition model appears to work very well in situations where participants reach to a target without aiming landmarks. However, it cannot explain the data in Figure 10. We think it is critical to show the reader that much is left to understand about the errors that drive adaptation, and how they vary across experimental conditions (i.e., there is no universal model that applies equally well in all scenarios). To emphasize these points, we have now separated our Figure 10 (previously Figure 8) discussion into its own Results section entitled “Part 4: Limitations of the competition theory”.

And yes, we agree that we could do better to contrast our results with data measured in invariant error-clamp experiments like Kim et al., 2019. Neither the competition model nor the independence model explains implicit behavior in these paradigms. For this reason, we have now added a passage to our Discussion:

“…competitive saturation in implicit adaptation should not be conflated with the upper limits in implicit adaptation that have been measured in response to invariant errors3,11,40. In this latter condition, implicit adaptation reaches a ceiling whose value varies somewhere between 15 degrees3 and 25 degrees40 across studies. In these experiments, participants adapt to an error that remains constant and is not coupled to the reach angle (thus, the competition theory cannot apply). While the state-space model naturally predicts that total adaptation can exceed the error size which drives learning in this error-clamp condition (as is observed in response to small error-clamps), it cannot explain why asymptotic learning is insensitive to the error’s magnitude. One idea is that proprioceptive signals40,83 may eventually outweigh the irrelevant visual errors in the clamped-error condition, thus prematurely halting adaptation. Another possibility is that implicit learning obeys the state-space competitive model described here, but only up until a ceiling that limits total possible implicit corrections. Indeed, in Experiment 1, we produced scaling in the implicit system in response to rotation size, but never evoked more than about 22° of implicit learning. However, when we used similar gradual conditions in the past to probe implicit learning37, we observed about 32° implicit learning in response to a 70° rotation. Further, in a recent study by Maresch and colleagues47, where strategies were probed only intermittently, implicit learning reached nearly 45°. Thus, there remains much work to be done to better understand variations in implicit learning across error-clamp conditions, and standard rotation conditions.”

3.26B. Line 463-467: the authors claim equivalence based on a non-significant p-value (Altman 1995). Given the small effect size, they don’t have any power to detect effects of small or medium size. They cannot conclude that there is no difference. They can only conclude that they don’t have enough power to detect a difference. As a result, it does NOT suggest that implicit adaptation was unaltered by the changes in explicit strategy.

We apologize for the misunderstanding here. The statistical test we report here was not conducted in the current paper. This is a test we ran and reported in Mazzoni and Krakauer, 2006. We agree, this p-value should not be taken to mean the implicit system is unaltered. Indeed, this is a major point in our revised analysis of these data: that there are differences in adaptation in the strategy and no-strategy groups that appeared to increase over exposure to the rotation.

We report this p-value again here because of its dramatic impact on the field over the past 15 years. This statistical test was used as evidence that implicit learning is driven by SPEs. Thus, given its significance to the literature, we feel it is very useful to remind the reader of our analysis from 15 years ago, and why it suggested to us at the time that the implicit system was driven only by SPEs.

We have revised the text to better explain the origins of this statistical test on PxxLxx: “When we compared the rate of learning with and without strategy in Mazzoni and Krakauer, 2006, we found that it was not different during the initial exposure to the perturbation (Figure 10B, gray, mean adaptation over rotation trials 1-24, Wilcoxon rank sum, p=0.223). This statistical test led us to conclude in Mazzoni and Krakauer, 2006, that implicit adaptation was driven by a sensory prediction error that did not depend on the primary target and was not altered by explicit strategy.”

3.26C. In Mazzoni and Krakauer, the aiming direction was neither controlled nor measured. As a result, given the appearance of a target error with training, it is possible that the participants aimed in the direction opposite to the target error in order to reduce it. This would have reduced the apparent increase in implicit adaptation. The authors argue against this possibility based on the 47.8° change in hand angle due to the instruction to stop aiming. I remained unconvinced by this argument as I would like to get more info about this change (Mean, SD, individual data). Furthermore, it is unclear what the actual instructions were. Asking to stop aiming or asking to bring one’s invisible hand on the primary target will have different effects on the change in hand angle.

We can appreciate the reviewer’s concern. Unfortunately, we no longer have access to these data, as this study was published 15 years ago, before it was the norm to make data publicly available on a repository. As detailed in our Methods, we could only extract the mean using GRABIT in MATLAB. With that said, we are not sure these data are necessary to answer the reviewer’s question. The notion that participants maintain the same strategy despite increasing target error was tested by Taylor and Ivry, 2011. Indeed, as the reviewer suggests, participants reverse their explicit strategy to reduce target error, but this starts to occur only after roughly 80-100 trials; in Mazzoni and Krakauer, participants experienced only 70 rotation trials. We describe this in our revised paper (Lines 772-776): “Interestingly, while the reach angle exhibited the same implicit drift described by Mazzoni and Krakauer, with many more trials participants eventually counteracted this drift by modifying their explicit strategies, bringing their target error back to zero (Figure 10H, black). At the end of adaptation, participants exhibited large implicit aftereffects after being instructed to stop aiming (Figure 10H, right, aftereffect; t(9)=5.16, p<0.001, Cohen’s d=1.63).”

With regards to the reviewer’s second question, participants were told to stop using a strategy and to aim directly at the primary target. We have revised our text on Lines 713-716 to better indicate this: “When we asked participants to stop using their aiming strategy and to move instead toward the primary target (Figure 10A, do not aim rotation on) their movement angle changed by 47.8° (difference between 3 movements before and 3 movements after instruction), indicating that they had continued to maintain the instructed explicit re-aiming strategy near 45°.”

3.26D. The data during the washout period suffers from the fact that the participants from the no-strategy group were not told to stop aiming. The hypothesis that they did use an explicit strategy could explain why the difference between the two groups rapidly vanishes. In other words, if the authors want to use this experiment to support any of their hypotheses, they should redo it properly by telling the participants to stop using any explicit strategy at the start of the washout period to make sure that the after-effect is devoid of any explicit strategy.

This is a good criticism, similar to that raised in Point 2. However, the Mazzoni and Krakauer, 2006, data are included not to support our hypotheses, but rather to show limitations in the competition model. For this reason, we still think it is important to show these data, even though 15 years later, we have improved the methods we use to measure implicit and explicit learning. Nevertheless, we have added a limitation to our Results section on PxxLxx, to recognize this concern: “It is important, however, to note a limitation in these analyses. Our earlier study did not employ the standard conditions used to measure implicit aftereffects: i.e., instructing participants to aim directly at the target, and also removing any visual feedback. Thus, the proposed dual-error model relies on the assumption that differences in washout were primarily related to the implicit system. These assumptions need to be tested more completely in future experiments.”

With that said, while we agree that the difference in Mazzoni and Krakauer (2006) diminishes rapidly, it did not vanish entirely. Below we show the mean aftereffect on the last 10 trials of washout (trials 71-80). Note that a 3° difference still persisted on these trials, consistent with a lingering difference in implicit learning. There are at least 3 reasons why this difference may have quickly diminished: (1) implicit learning decays in both groups on each trial, (2) there is implicit learning in the opposite direction in response to the washout errors, and (3) the explicit system comes online to mitigate the ‘negative’ washout errors.
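To illustrate reasons (1) and (2), a minimal single-state simulation shows how a between-group difference can shrink rapidly during washout while a small residual persists. The retention and learning-rate values below are hypothetical and chosen only for illustration; they are not fit to the Mazzoni and Krakauer data:

```python
# Illustrative sketch (hypothetical parameter values, not fit to any data):
# why a between-group difference in implicit learning can shrink quickly
# during washout yet still leave a small residual tens of trials later.
# On each no-rotation trial the implicit state decays (reason 1) and also
# adapts to the 'negative' washout error (reason 2).

def washout(x0, a=0.99, b=0.02, trials=80):
    """Implicit state across washout trials: retention (a) plus
    learning (b) from the washout error, e = 0 - x."""
    x = x0
    trace = []
    for _ in range(trials):
        trace.append(x)
        x = a * x + b * (0.0 - x)  # net per-trial factor: a - b
    return trace

group_low = washout(x0=10.0)   # hypothetical smaller implicit state at washout onset
group_high = washout(x0=20.0)  # hypothetical larger implicit state at washout onset

# The difference between groups shrinks geometrically (factor a - b per trial)
diffs = [abs(hi - lo) for hi, lo in zip(group_high, group_low)]
print(round(diffs[0], 1), round(diffs[40], 1))
```

With these illustrative values, a 10° initial difference decays to roughly 3° after 40 washout trials, qualitatively similar to the lingering difference noted above.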

3.26E. It is unclear whether the data from Taylor and Ivry (2011) are in favor of one of the models as the separate and shared error models are not compared.

These data are in favor of neither model. Neither a target error nor an SPE alone could drive the complex responses observed in Taylor and Ivry (2011). This is exactly why we include these data: to demonstrate to the reader the limitations in our simple models, and to show that there is much more to understand about the error sources that drive implicit learning. We have revised our Discussion to better highlight this (see Lines 1063-1090).

“However, the nature of aim-cursor errors remains uncertain. For example, while this error source generates strong adaptation when the aim location coincides with a physical target (Figure 10H, instruction with target), implicit learning is observed even in the absence of a physical aiming landmark9 (Figure 10H, instruction without target), albeit to a smaller degree. This latter condition may implicate a form of SPE learning that does not require an aiming target. Thus, it may be that the aim-cursor error in Mazzoni and Krakauer is actually an SPE that is enhanced by the presence of a physical target. In this view, implicit learning is driven by a target error module and an SPE module that is enhanced by a visual target error4,11,86.

These various implicit learning modules are likely strongly dependent on experimental context, in ways we do not yet understand. For example, Taylor and Ivry (2011) would suggest that all experiments produce some implicit SPE learning, but less so in paradigms with no aiming targets. Yet, the competition equation accurately matched single-target behavior in Figures 1-9 without an SPE learning module. It is not clear why SPE learning would be absent in these experiments. One idea may be that the aftereffect observed by Taylor and Ivry (2011) in the absence of an aiming target was actually a lingering associative motor memory, reinforced by successfully hitting the target during the rotation period. Indeed, such a model-free learning mechanism87 should be included in a more complete implicit learning model; it is currently overlooked in error-based systems such as the competition and independence equations.

Another idea is that some SPE learning did occur in the no aiming target experiments we analyzed in Figures 1-9, but was overshadowed by the implicit system’s response to target error. A third possibility is that the SPE learning observed by Taylor and Ivry (2011) was contextually enhanced by participants implicitly recalling the aiming landmark locations (akin to memory-guided saccade adaptation) provided during the baseline period. This possibility would suggest SPEs vary along a complex spectrum: (1) never providing an aiming target causes little or no SPE learning (as in our experiments), (2) providing an aiming target during past training allows implicit recall that leads to small SPE learning, (3) providing an aiming target that disappears during the movement promotes better recall and leads to medium-sized SPE learning (i.e., the disappearing target condition in Taylor and Ivry), and (4) an aiming target that always remains visible leads to the largest SPE learning levels. This context-dependent SPE hypothesis may be related to recent work suggesting that both target errors and SPEs drive implicit learning, but implicit SPE learning is altered by distraction54.”
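Several responses above and below refer to the competition equation, in which the implicit and explicit systems learn from a shared target error. To make this intuition concrete, here is a minimal state-space sketch of the shared-error idea. All parameter values (retention factors, error sensitivities) are hypothetical illustrations, not the values fit in the paper:

```python
# Minimal sketch of the shared-error (competition) idea: both systems
# learn from the same target error e = r - (x_implicit + x_explicit).
# All parameter values below are illustrative only.

def adapt(rotation, trials=500, a_i=0.95, b_i=0.2, a_e=0.9, b_e=0.3):
    """Simulate two parallel learners driven by a shared target error."""
    xi = xe = 0.0
    for _ in range(trials):
        e = rotation - (xi + xe)   # shared target error
        xi = a_i * xi + b_i * e    # implicit: retention + error-based update
        xe = a_e * xe + b_e * e    # explicit: retention + error-driven re-aiming
    return xi, xe

# Increasing the explicit system's error sensitivity siphons error away
# from the implicit system, lowering its steady-state level:
xi_weak, _ = adapt(30.0, b_e=0.1)
xi_strong, _ = adapt(30.0, b_e=0.5)
print(round(xi_weak, 2), round(xi_strong, 2))
```

With these illustrative parameters, raising the explicit error sensitivity from 0.1 to 0.5 lowers steady-state implicit learning from about 20° to about 12°, even though total compensation grows: the two systems compete for the same error.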

3.27. Discussion: it is important and positive that the authors discuss the limitation of their approach but I feel that they dismiss potential limitations rather quickly even though these are critical for their conclusions. They need to provide new data to prove those points rather than arguments.

We analyze new data in the revised manuscript to address reviewer concerns. These include, but are not limited to, the following major R3 concerns:

1. For the reviewer’s concern about small sample size: we have increased the sample size in Experiment 3 (previously Experiment 2) to n=35 in our No PT Limit data, which are used to measure correlations between implicit and explicit learning. We now corroborate these relationships in several additional data sets: Experiment 1 (n=73), Tsay et al. 2021 (n=25/group), and Maresch et al., 2021 (n=40).

2. For the reviewer’s concern about the saturation of implicit learning in the invariant error-clamp paradigm: we now show that implicit learning is not limited to a saturation phenotype. In the stepwise group in Experiment 1 (new data, n=37), we show that the implicit system can increase with rotation size. In our new Tsay et al., (2021) analysis, we show that the implicit system can also have a non-monotonic response to rotation size.

3. For the reviewer’s concern about increasing implicit learning with a gradual rotation: we compare the new stepwise and abrupt participant groups in Experiment 1 and show that gradual learning reduces explicit strategy and increases implicit learning.

4. For the reviewer’s concern about generalization: we now include an SPE generalization model in the paper. We show that its predictions are inconsistent with participant behavior in Experiment 1. We also show that our data diverge from plan-based generalization curves measured in past studies.

5. For the reviewer’s concern about limited preparation time experiments: we have collected two new groups in Experiment 3 (a Limit PT and a decay-only group) where we confirm our assumption that limiting preparation time prevents explicit strategy use.

Despite these new data, we note potential limitations in our work much more clearly in the revised paper. For example, see Lines 991-1033 (generalization) and Lines 862-878 (preparation time).

On limiting PT (lines 605-615):

3.28A. The authors used three different arguments to support the fact that limiting PT suppresses explicit strategy.

We have included two new control experiments to address this point in the revised manuscript. Our new evidence that limiting preparation time suppresses explicit strategy is described at numerous points above (e.g., Points 3 (see response to editor summary) and 3-20B).

3.28B. Their first argument is that Haith did not observe savings in low PT trials. This is true but Haith only used low PT trials (with a change in target) on 20% of the trials. Restricting RT together with a switch in target location is probably instrumental in making the caching of the response harder. This is very different in the experiments done by the authors. In addition, one could argue that Haith et al. did not find evidence for savings but that these authors had limited power to detect a small or medium effect size (their N=12 per group). I agree that savings in low PT trials is smaller than in high PT trials but is savings completely absent in low PT trials? Figure 6H shows that learning is slightly better on block 2 compared to block 1. N=12 is clearly insufficient to detect such a small difference.

We agree that our limited preparation time method is not the same as that used in Haith et al., 2015. We do not necessarily agree with the reviewer that caching would be easier in our paradigm. Nevertheless, we have run two control experiments to test caching in our limited preparation time condition. We found minimal levels of explicit caching (maximally about 2°), which is more likely explained by involuntary decay in implicit learning (due to the 30 s probe instruction period) than by a voluntary change in aiming. We describe these new data many times above: most notably in Points 3 (response to editor) and 3-20B.

With regards to whether savings occurred in Haith et al., 2015, but could not be detected: First, we would like to note that this study had n=14 participants (not n=12 as suggested by the reviewer). Second, it does not actually matter to our hypothesis whether a savings effect is entirely absent in Haith et al. or is simply ‘too small to detect’ as suggested by the reviewer. The main idea is that by limiting preparation time on more trials, we give the implicit system a better chance to express savings. Consider Author response image 4. The gray point labelled Haith et al., 2015 matches our two-state model. This point lies within the ‘black zone’ which we arbitrarily set at a +/- 5% change in measured implicit learning. It could be, as the reviewer suggests, that this point should be shifted slightly to the right in the map (see the white ‘alternate possibility’). Nothing about this would violate the competition model. The main idea we explore in the paper is that by limiting preparation time (black point, limit prep. time) the savings effect should increase in magnitude (at this hypothetical point in the map, this would be a >20% increase in implicit learning).

Author response image 4. Shows the hypothetical scenario described by Reviewer 3.


Our data in Figure 8C above clearly demonstrate this phenomenon. While both learning rate (top row) and early implicit difference (bottom row) show no change between the two exposures (left and right bars) in the Haith et al., 2015, data set (purple, comp.), this difference is dramatically enhanced under our limited preparation time conditions in Experiment 4 (yellow lines, no comp.). Lastly, we used the statistical approach that R3 suggested in Point 3-24D to ensure that Experiment 4 yields a change in reaching behavior exceeding Haith et al.: we observed that low prep. time learning rates increased by more than 80% in Experiment 4 (Figure 8C top; mixed-ANOVA, exposure no. by experiment type interaction, F(1,22)=5.993, p=0.023; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=1.109, p=0.312, ηp2=0.079; Experiment 4 with F(1,9)=5.442, p=0.045, ηp2=0.377). Statistically significant increases in reach angle were detected immediately following rotation onset in Experiment 4 (Figure 8B, bottom), but not in our earlier data (Figure 8C, bottom; mixed-ANOVA exposure number by experiment interaction, F(1,22)=4.411, p=0.047; significant interaction followed by one-way rm-ANOVA across exposures: Haith et al. with F(1,13)=0.029, p=0.867, ηp2=0.002; Experiment 4 with F(1,9)=11.275, p=0.008, ηp2=0.556).

In sum, we have addressed the reviewer’s concern by demonstrating that our limited preparation time method suppresses explicit strategy. In addition, the possibility that some small and undetected implicit savings occurred in Haith et al. has no impact on our paper. The important point in the competition model is that limiting preparation time should enhance one’s ability to observe implicit savings. We confirmed this idea using a mixed-ANOVA as suggested by R3 in Point 3-24D.

3.28C. Their second argument is that they used four different targets. McDougle et al. demonstrated caching of explicit strategy without RT costs for two targets and the impossibility of doing so for 12 targets. The authors could use the experimental design of McDougle if they wanted to prove that caching explicit strategies is impossible for four targets. I don't see why you could cache strategies without RT cost for 2 targets but not for 4 targets. This argument is not convincing.

As described in Points 3-28A, 3-28B (evidence in Points 3 and 3-20B), this concern is now moot. We show in the revised paper that our limited preparation time condition suppresses explicit strategy.

While no longer pertinent, we do wish to note that the McDougle et al. experiments were not similar to Experiment 4 in our paper or to the Haith et al. 2015 experiment. Thus, it is challenging to use these data to make exact predictions. For example, if caching occurs with two targets as in McDougle et al., why was caching absent in Haith et al., 2015, where two targets were also used, with only one target experiencing a cursor rotation? In any case, we have removed the statement in question in the revised manuscript; we no longer compare Experiment 4 and Haith et al. based on target numbers.

3.28D. The last argument of the authors is that they imposed even shorter latencies (200ms) than Haith (300ms). Yet, if one can cache explicit strategies without reaction time cost, it does not matter whether a limit of 200 or 300ms is imposed as there is no RT cost.

As described in Points 3-28A, B, and C (evidence in Points 3 and 3-20B), this concern is now moot. We show in the revised paper that our limited preparation time condition suppresses explicit strategy.

While no longer pertinent, we do agree with the reviewer that the reaction time limit may not alter one’s ability to retrieve a cached memory. However, the overarching question is whether explicit re-aiming could have occurred on limited preparation time trials. While caching may be insensitive to reaction time, McDougle et al. show that general strategy use employs a mental rotation. Cutting this rotation short in time results in intermediate strategies (at least with 12 targets in their paper). Thus, while a stricter time limit (200 ms in our data, as opposed to 300 ms in Haith et al.) may still permit caching in conditions where caching is possible, it would likely interrupt mental rotations, thus producing smaller explicit strategies. Nevertheless, given our new experimental data (see Points 3 and 3-20B), we no longer compare Experiment 4 with Haith et al. based on reaction time limits in the revised manuscript.

On Generalization (lines 678-702).

3.29A. How much does the amount of implicit adaptation decays with increasing aiming direction? Above, I argued that the data from Day et al. would predict a negative correlation between the explicit and the implicit components of adaptation and a decrease in implicit adaptation with increasing rotation size. The authors clearly disagree on the basis of four arguments. None of them convinced me.

We do not disagree with the reviewer that plan-based generalization would cause a negative correlation between implicit and explicit learning. We do, however, disagree that this phenomenon matches our data, given the small effect sizes documented in past studies. Regardless, we now compare the competition model to an SPE generalization model in the revised manuscript. We show in Section 2.2 of the paper that plan-based generalization does not match our data in Experiments 1-3. This analysis is highlighted in Figure 5 in the revised manuscript. Lastly, we have also added an entire section to our Discussion where we describe similarities and differences between these models (see “The relationship between competition and implicit generalization”). Please see our response to Point 1 above for a detailed description of these new data and analyses.

3.29B. First, they estimate this decrease to be only 5° based on these two papers (FigS5A and B but 5C shows ~10°). This seems to be a very conservative estimate as Day et al. reported a 10° reduction in after-effects for 40° of aiming direction (see their Figure 7). An explicit component of 40° was measured by Neville and Cressman for a 60° rotation. The 10° reduction based on a 40° explicit strategy fits perfectly with data from Figure 1G (black bars) and Figure 2F. Of course, the experimental conditions will influence this generalization effect but this should push the authors to investigate this possibility rather than to dismiss it because the values do not precisely match. How close should it match to be accepted?

The reason we suggested 5° in our original paper was our data in Experiments 2 and 3 (previously 1 and 2), where we measured the correlation between implicit learning and explicit strategy across subjects. As shown in Figures 5A-C, explicit strategies varied across a 20-25° range. The Day et al. experiment would predict maximally a 5° change in implicit learning. We have made this point clearer in the revised paper by examining our new data and the Day et al. predictions in Figure 5 above. We elaborate on this in our response to Point 1 (response to editor). To summarize, Day et al. drastically underpredicts the implicit decline observed in Experiments 2 and 3 (Figures 5A-C). It also substantially underpredicts the gain we measured in Experiment 1 (see gray lines in Figure 5D above). Moreover, an SPE generalization model violates the invariance in the implicit-explicit correlation gain we observed in Experiment 1 (see Figures 5D-F). On a related note, we do agree with the reviewers that further studies are needed to investigate how generalization properties change with experimental conditions. We note this on Lines 1029-1033 in the revised paper:

“With that said, while the generalization hypothesis did not match important patterns in our data, it remains a very important phenomenon that may alter implicit learning measurements. It is imperative that implicit generalization be more thoroughly examined to determine how it varies across experimental methodologies. These data will be needed to accurately evaluate the competitive relationship between implicit and explicit learning.”
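To make the size of the plan-based generalization prediction concrete, one can evaluate a Gaussian generalization curve centered on the aim direction. The curve width below (sigma = 30°) and the total implicit level (15°) are hypothetical round numbers chosen only for illustration; they are not the fitted Day et al. parameters:

```python
import math

# Sketch of the plan-based (SPE) generalization prediction: implicit learning
# is centered on the aim direction, so the aftereffect measured at the target
# falls off as the explicit strategy grows. Width and amplitude are hypothetical.

def measured_aftereffect(total_implicit, explicit_aim, sigma=30.0):
    """Aftereffect at the target under Gaussian generalization
    centered on the aim direction (all angles in degrees)."""
    return total_implicit * math.exp(-explicit_aim**2 / (2 * sigma**2))

total = 15.0  # hypothetical total implicit adaptation (deg)
for aim in (0.0, 10.0, 20.0, 25.0):
    print(aim, round(measured_aftereffect(total, aim), 2))
```

With these illustrative values, a 25° explicit strategy reduces the aftereffect measured at the target by only ~4-5°, on the order of the maximal Day et al. prediction discussed above, and far smaller than the implicit variation measured across participants.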

3.29C. Second, it is unclear how this generalization changes with the number of targets (argument on lines 687-689). This has never been studied and cannot be used as an argument based on further assumptions. Furthermore, I am not sure that the generalization would be so different for 2 or 4 targets.

We are a bit confused by this comment. Krakauer et al., 2000, showed how the generalization curve varies with target number. This paper was cited as evidence in the original manuscript. We now include all the curves measured for each target number (1, 2, 4, and 8) in Figures 5A-C in our revised manuscript.

3.29D. Third, the authors measured the explicit strategy in experiment 1 via report in a very different way than what is usually used by the authors as the participants do not have to make use of them. It seems to be suboptimal as the authors did not use them for their correlation on Figure 3H and the difference reported in Figure S5E is tiny (no stats are provided) but is based on a very limited number of participants with no individual data to be seen. If it is suboptimal for Figure 3H why is it sufficient as an argument?

To put this in context, Figure 5-Supplement 2E (previously Figure S5E) shows how implicit learning differed across our probe measurement and our explicit report measurement. Plan-based generalization predicts that implicit learning measured via aftereffect (i.e., no aiming) should be smaller than one’s true implicit learning, which could be measured via report. Indeed, Day et al. showed this in their study. Our data did not show this. In Figure 3-Supplement 2B and Figure 5-Supplement 2E, we show that the implicit aftereffect in Experiment 1 was larger than the report-based estimate by about 4°. This is supported by a paired t-test which we have added to the revised paper (paired t-test, t(8)=2.59, p=0.032, d=0.7).

While the reviewer may argue this is small, this should be considered relative to the Day et al. prediction. The generalization curve in Day et al. would predict an opposite relationship by about 5°. Thus, in total our measures are about 9° (over 50% of the total implicit response measured) in the direction opposite to that predicted by Day et al. Furthermore, the reviewer’s comment overlooks our additional point in the original manuscript: this same discrepancy was seen in Maresch et al., 2021, thus bolstering our argument. It therefore seems imperative that future work examine how experimental conditions alter implicit generalization, which we state on Lines 1029-1033 in the revised paper. We suspect that the generalization curve may substantially vary across the conditions used in Day et al. (1 target, aim report on each trial) and those used in Experiment 1 (4 targets, no aim reports during adaptation).

We agree with the reviewer, however, that our implicit report-based measures are suboptimal. As in our response to Point 4 above, it should be noted that we ended Experiment 2 (Experiment 1 in the original manuscript) by asking participants to verbally report where they remembered aiming in order to hit the target. While many studies use similar reports to tease apart implicit and explicit strategy (e.g., McDougle et al., 2015), our probe had important limitations. Normally, participants are probed at various points throughout adaptation by asking them to report where they plan to aim on a given trial, and then allowing them to execute their reaching movement (thus they are able to continuously evaluate and familiarize themselves with their aiming strategy over time).

Our methodology may have reduced the reliability of this measure of explicit adaptation. As stated in our Methods, we observed that participants were prone to incorrectly reporting their aiming direction, with several angles reported in the perturbation’s direction rather than the compensatory direction. In fact, 25% of participant responses were reported in the incorrect direction. Thus, as outlined in our Methods, we chose to take the absolute value of these measures, given recent evidence that strategies are prone to sign errors (McDougle and Taylor, 2019). That is, we assumed that participants remembered their aiming magnitude, but misreported its orientation.

Nevertheless, as detailed in Point 4 above, we now also report the correlations in Figure 3G (previously 3H) using report-based implicit learning in Figure 3-Supplement 2C. Despite the issue with our measures, we still obtained a close match to the competition model prediction.

3.29E. Fourth, when interpreting the data of Neville and Cressman (line 690-692), the authors mention that there were no differences between the three targets even though two of them corresponded to aim directions for other targets. As far as I can tell, the absence of difference in implicit adaptation across the three targets is not mentioned in the paper by Neville and Cressman as they collapsed the data across the three targets for their statistical analyses throughout the paper. In addition, I don't understand why we should expect a difference between the three targets. If the SPE and the implicit process are anchored to the aiming direction and given that the aiming direction is different for the three targets, I would not expect that the aiming direction of a visible target would be influenced by the fact that, for some participants, this aiming direction corresponds to the location of an invisible target.

We are not sure that we understand the reviewer’s suggestion. To rephrase our point (which was made initially by Neville and Cressman), there are 3 adaptation targets. Suppose we label them Targets 1, 2, and 3. Given their arrangement, to hit Target 1 with a rotated cursor, participants needed to aim near Target 2. To hit Target 2 with a rotated cursor, participants needed to aim near Target 3. However, to hit Target 3, participants had to aim at a location that did not correspond to any target. Under the SPE generalization hypothesis, this means there is an implicit memory that should be centered in space on Targets 2 and 3, but not Target 1. Thus, when participants are told to reach straight towards Targets 2 and 3, this would correspond to local peaks in the generalization function; Target 1, however, is not associated with any such peak. Overall, given plan-based generalization, one would expect a larger aftereffect at Targets 2 and 3 than at Target 1. Neville and Cressman (2018), however, did not observe this. The reviewer may have missed this in the original paper because the analysis is described only in their Discussion and Supplementary Materials (Figure S.2 in their manuscript). Overall, this represents additional evidence, alongside Experiment 1 and Maresch et al., 2021 (as detailed in Point 3-29D above), that generalization may differ between studies that do not use aim reports and those that do.

3.29F. Finally, the authors argue here about the size of the influence of the generalization effect on the amount of implicit adaptation. They never challenge the fact that the anchoring of implicit adaptation on the aiming direction and the presence of a generalization effect (independently of its size) leads to a negative correlation between the implicit and explicit component of adaptations (their Figure 3) without any need for the competition model.

We agree that an SPE generalization model could also produce a negative relationship between implicit and explicit learning. However, there are numerous phenomena described in both our original and revised manuscripts, that are inconsistent with plan-based generalization. Most importantly, please see Point 1 (response to editor summary), where we document our new analyses in Figure 5, which demonstrate how our data in Experiments 1-3 support the competition model over this generalization alternative.

However, while we do not go into these details in the revised manuscript, an SPE generalization model is simply ill-equipped to describe several other results we explore. Most notably, this model does not have the flexibility needed to account for the 3 implicit learning phenotypes described in Figure 1 in the revised paper (see Point 6B in editor response): (1) invariance, (2) scaling, and (3) non-monotonicity. The SPE generalization model cannot account for either the invariance phenotype (see Point 3-13 above) or the non-monotonic phenotype. The competition model predicts all 3 phenotypes.

In sum, while we still maintain that generalization may have made a small contribution to our data, there is overwhelming evidence that it plays little role in the implicit-explicit relationships we measure.

1. An SPE generalization model cannot describe the 3 fundamental implicit learning phenotypes we examine in Figure 1 in the revised manuscript.

2. The model’s predictions about report vs. aftereffect-based implicit learning are opposite the data measured in Experiment 2 (previously Experiment 1), Maresch et al., 2021, and Neville and Cressman (2018).

3. The model incorrectly predicts how the implicit and explicit relationship will vary across changes in rotation size, which we now examine in Experiment 1 (see Point 1 above and Figure 5).

4. Previous generalization curves (e.g., Day et al., 2016) drastically underpredict the implicit learning changes we measured in Experiments 2 and 3 (see Figures 5A-C).

We have devoted two entire sections to the SPE generalization model in our revised paper (Section 2.2 in our Results, and “The relationship between competition and implicit generalization” in our Discussion).

Final recommendations

3.30. The authors should perform an unbiased analysis of a model that includes separate error sources, the generalization effect and a saturation of implicit adaptation with increasing rotation size. In my opinion, such a model would account for almost all of the presented results.

We have included an unbiased analysis of an SPE generalization model in our revised paper. We detail this completely in Point 1 above. To summarize our findings, this model does not match our data in Experiments 1, 2, and 3. This is detailed in Figure 5 in the revised manuscript, and Section 2.2 in our Results.

The notion that implicit learning has an invariant saturation across increasing rotation size (as in Kim et al., 2018) is entirely inconsistent with our new data. To test this, we collected a control experiment: the stepwise group in Experiment 1. Here we show that implicit learning can respond proportionally to rotation size under the right experimental conditions. Our complete analysis on this point is described in Point 6B (response to the editorial summary) and in Figures 1H-L in the revised paper. In addition, an invariant saturation also disagrees with the non-monotonic responses to rotation size noted in Tsay et al., 2021, which are now examined in Figures 1M-Q in our revised manuscript.

3.31. They should redo all the experiments based on limited preparation time and should include direct measures of implicit and/or explicit strategies (for validation purposes). This would require larger group sizes.

In the revised manuscript we have added two control experiments to validate our limited preparation time condition. These include the Limit PT group and decay-only group in Experiment 3. We describe these new data in Point 3 above. To summarize our findings, our limited preparation time condition does suppress explicit strategy use as claimed in our original manuscript. These new data are shown in Figures 3K-M and Figure 8-Supplement 1 in the revised paper.

3.32. They should replicate the experiments where they need a measure of after-effect devoid of any explicit strategies as this has only recently become standard (experiment for Figure 2 and Figure 8). Note that for Figure 8, they might want to measure the explicit aim during the adaptation period as well.

In our revised manuscript, we replicate our Saijo and Gomi (2011) analysis (Figure 2 in original paper) using the abrupt and stepwise groups in Experiment 1. These new data corroborate our initial conclusion that gradual rotations suppress explicit strategy, thus diminishing competition and enhancing explicit learning. These new data ‘correctly’ measure implicit and explicit learning. The relevant analyses are shown in Figures 1H-L and Figures 2D-G in the revised paper. Please see our response to Point 2 (editor response) for more details.

Lastly, we do agree that our analysis of the Mazzoni and Krakauer data would be improved by directly measuring the implicit system. We list this as a limitation on Lines 752-756. Repeating such experiments is beyond our paper’s scope but is desperately needed to understand how implicit error sources may vary across paradigms that use aiming landmarks. In any case, our primary reason for analyzing these data in our paper is to show limitations in the competition model. The existing data accomplish this goal, as we show in Section 4.1 in our revised paper. Finally, note that we include several passages in our revised Discussion (see “Error sources that drive implicit adaptation”) that expand on these issues.

[Editors’ note: what follows is the authors’ response to the second round of review.]

The manuscript has been improved but there are some remaining issues that need to be addressed, as outlined below:

One reviewer raised a concern about Experiment 1, which presented perturbations with increasing rotation size and elicited larger implicit learning than the abrupt perturbation condition. However, this design confounded condition/trial order with perturbation size. The other concern is the newly added non-monotonicity data set from Tsay's study. On the one hand, the current paper states that the proposed model might not apply to error-clamp learning; on the other hand, this part of the results was "predicted" squarely by the model. Thus, can it count as evidence that the proposed model is parsimonious for all visuomotor rotation paradigms? This message must be clearly stated with special reference to error-clamp learning.

Please find the reviewers' comments and recommendations below, and I hope this will be helpful for revising the paper.

We are grateful as always, for the reviewers’ constructive and insightful criticisms. Below, we address each point in great detail. To summarize briefly:

Steady-states in Experiment 1

We appreciate the reviewer’s concern about steady-state implicit learning in Experiment 1. Please see our response to Point 4-1 below. There we argue that the experimental conditions we used were likely to produce steady-state adaptation, based on an analysis of the multiple learning blocks in the abrupt group, together with past evidence in Neville and Cressman (2018) and Salomonczyk et al. (2011). For the abrupt condition in Experiment 1, we did not detect any statistically significant increase in implicit learning beyond the initial learning period. This suggests the initial block was sufficient to achieve steady-state learning in a 60° rotation. Similarly, we considered past data we collected in Salomonczyk et al. (2011), where participants had multiple exposures to a 30° rotation. Here as well, we did not detect a statistically significant change in implicit learning after the first learning block. Lastly, while we cannot statistically assess trends in Neville and Cressman (without subject data), the average change in implicit learning following the initial rotation block was very small in their 20° and 40° rotation groups: about 0.7° and 1.4°, respectively. These changes are grossly mismatched with the 14.5° variation in implicit learning we observed in Experiment 1. Thus, it is rotation size, not time, that causes the large changes in implicit learning in Experiment 1.

Furthermore, as we explain in Point 4-1, the competition theory is not specific to steady-state learning. That is, the scaling, saturation, and nonmonotonic phenotypes can be observed and validated at any point during the learning period; the effect size is simply largest during asymptotic performance. Thus, while we maintain that our steady-state measures in Experiment 1 are robust, reaching steady-state is not needed to validate the competition model as we do in Figures 1 and 2.

In the revised paper, these important matters are discussed on P4L103, P4L120, P5L169, Appendices 1 and 2, and Figure 1-Supplements 3 and 4.

Nonmonotonic learning in Tsay et al., 2021

We appreciate the reviewers’ concern about potential similarities between implicit attenuation in Tsay et al., 2021 and Morehead et al., 2017. With that said, we do not agree that these learning patterns match one another. Firstly, the reduction in implicit learning in the 15° group in Tsay et al. cannot be explained via the error-clamp properties suggested by Morehead et al. The ‘error cancellation’ they observed in their 7.5° rotation group occurred because total learning reached 7.5°, completely eliminating the error source. In Tsay et al., implicit learning reached only half the rotation’s magnitude in the 15° group. Thus, error was not cancelled here.

Second, while the reduction in implicit learning in the 95°+ rotations in Morehead et al. is intriguingly similar to the decrease in implicit learning in the 90° Tsay et al. group, these paradigms possess critical differences that complicate their direct comparison. That is, errors drive learning, not rotations per se. In Tsay et al. because participants were allowed to aim, their residual steady-state errors were only 5°. In Morehead et al., participants were not allowed to aim, and residual errors were >80°. Given that implicit error sensitivity is exceedingly small in response to such large residual errors (Wei and Kording, 2009; Marko et al., 2012), this seems to us the most likely reason why implicit learning was attenuated in Morehead et al.

Thus, the competition model remains the most likely candidate in Tsay et al., where extreme residual errors such as those in Morehead et al. were never encountered. Its applicability is bolstered by the fact that the competition model alone can both qualitatively and quantitatively explain the implicit phenotypes exhibited in Experiment 1, Salomonczyk et al., 2011, Neville and Cressman, 2018, the 15-60° rotations in Tsay et al., as well as the individual-level correlations between implicit, explicit, and total learning in Figures 3-5 (Experiments 1-3, Tsay et al., 2021, Maresch et al., 2021).

Finally, in our previous manuscript, we did not provide enough mathematical context for why the competition and independence models cannot be applied to error-clamp paradigms. The steady-state equations we provide in Equations 4 and 5 are only appropriate for standard rotation learning, where errors decrease over time. These equations differ in the error-clamp condition, where errors remain constant. As we describe in Points 1-7 and 4-3 below, the ‘corrected’ steady-state equations show that for error-clamp learning, implicit learning must reach unattainable levels in order to achieve the dynamic steady-states described by the competition or independence theories. For error-clamp adaptation, the only possible way to reach “steady-state” is when the implicit system saturates at a physiologic upper bound.
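To make this contrast concrete, the two steady-state regimes can be sketched with a single-state implicit learner. The retention and error-sensitivity values below are illustrative stand-ins, not fitted parameters:

```python
# Steady states of a single-state implicit learner under two error regimes.
# Retention (ai) and error sensitivity (bi) are illustrative, not fitted values.
ai, bi = 0.98, 0.1
r = 15.0  # rotation (or clamp) size in degrees

# Standard rotation: the error shrinks as the system adapts, e(n) = r - xi(n).
# Fixed point of xi = ai*xi + bi*(r - xi):
xi_rotation = bi * r / (1 - ai + bi)  # bounded below the rotation size

# Error clamp: the error is held constant at r regardless of adaptation.
# Fixed point of xi = ai*xi + bi*r:
xi_clamp = bi * r / (1 - ai)  # can vastly exceed the clamp size

print(xi_rotation, xi_clamp)  # 12.5 vs 75.0 degrees
```

With constant error, the dynamic fixed point lies far beyond what adaptation can express, which is why a physiologic saturation bound, rather than the dynamic steady-state equations, must govern error-clamp asymptotes.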

We have greatly expanded on comparisons between error-clamp learning and standard rotation learning on P23L918 in our revised Discussion. We also better compare Tsay et al. and Morehead et al. on P21L849 in our Discussion, as well as Appendix 6.6.

Additional major changes

There are many other critical changes we have made to our revised paper in response to reviewer concerns. The two most important additions are detailed in Points 1-15 and 4-4 below. In 1-15, we show that the variations in implicit learning we examine in Part 1 cannot be explained via changes in implicit error sensitivity. This is discussed on P8L276, in Appendix 4, and Figure 2-Supplement 2. In 4-4, we describe updates to our generalization analyses. Namely, we use the Gaussian generalization properties measured by McDougle et al., 2017, to validate the claims we made in our past manuscript. Using data where CW and CCW rotations are counterbalanced did not rescue the SPE generalization model. Using Gaussian generalization properties over linear properties did not improve the model either. We have greatly updated our Results on P10L374, Figure 4, Figure 4-Supplement 1, and Appendix 6.

Reviewer #4:

In this paper, Albert et al. test a novel model where explicit and implicit motor adaptation processes share an error signal. Under this error sharing scheme, the two systems compete – that is, the more that explicit learning contributes to behavior, the less that implicit adaptation does. The authors attempt to demonstrate this effect over a variety of new experiments and a comprehensive re-analysis of older experiments. They contend that the popular model of SPEs exclusively driving implicit adaptation (and implicit/explicit independence) does not account for these results. Once target error sensitivity is included into the model, the resulting competition process allows the model to fit a variety of seemingly disparate results. Overall, the competition model is argued to be the correct model of explicit/implicit interactions during visuomotor learning.

I'm of two minds on this paper. On the positive side, this paper has compelling ideas, a laudable breadth and amount of data/analyses, and several strong results (mainly the reduced-PT results). It is important for the motor learning field to start developing a synthesis of the 'Library of Babel' of the adaptation literature, as is attempted here and elsewhere (e.g., D. Wolpert lab's 'COIN' model). On the negative side, the empirical support feels a bit like a patch-work – some experiments have clear flaws (e.g., Exp 1, see below), others are considered in a vacuum that dismisses previous work (e.g., nonmonotonicity effect), and many leftover mysteries are treated in the Discussion section rather than dealt with in targeted experiments. While some of the responses to the previous reviewers are effective (e.g., showing that reduced PT can block strategies), others are not (e.g., all of Exp 1, squaring certain key findings with other published conflicting results, the treatment of generalization). The overall effect is somewhat muddy – a genuinely interesting idea worth pursuing, but unclear if the burden of proof is met.

(4.1) The stepwise condition in Exp 1 is critically flawed. Rotation size is confounded with time. It is unclear – and unlikely – that implicit learning has reached an asymptote so quickly. Thus, the scaling effect is at best significantly confounded, and at worst nearly completely confounded. I think that this flaw also injects uncertainty into further analyses that use this experiment (e.g., Figure 5).

We appreciate the reviewer’s concern. However, we are not sure that this issue is as large as suggested, nor is it likely to alter our primary conclusion. The stepwise condition in Experiment 1 is intended to show that total implicit learning is modulated by rotation size when explicit strategies are muted (i.e., when competition is partially alleviated). We observed that implicit adaptation increased across the 15°, 30°, 45°, and 60° rotations (see Figure 1—figure supplement 4A, rm-ANOVA, F(3,108)=99.9, p<0.001, ηp2=0.735). The reviewer suggests a potential issue in this analysis: implicit learning requires time to saturate, and thus may not reach its total asymptotic level in the 15°, 30°, and 45° rotations.

Fortunately, the abrupt rotation group provides a way to verify the timecourse of implicit learning in this task. Total implicit learning was assayed 4 times in the abrupt group. Each probe overlapped with a stepwise rotation period. For example, the initial probe in the abrupt condition occurred during Block 1, when implicit learning was measured in the 15° stepwise rotation. Similarly, the second, third, and fourth implicit measurements in the abrupt group overlapped with the 30°, 45°, and 60° rotation periods in the stepwise group. Implicit learning measures across all 4 abrupt periods are shown in Figure 1—figure supplement 4.

We tested whether total implicit learning varied across the 4 blocks in the abrupt condition. We did not detect any statistically significant effect of Block No. on total implicit learning (rm-ANOVA, F(3,105)=2.21, p=0.091, ηp2=0.059). The same was true when we compared solely the first and last blocks (paired t-test, t(35)=1.53, p=0.134). Thus, even in the 60° rotation condition, which arguably provides the largest dynamic range to measure implicit variation, there was little or no change in implicit learning following its initial measurement during Block 1. This indicates that it is the gradual change in rotation size that induces greater implicit learning, not the length of exposure to the 60° rotation. It also suggests that one exposure block was sufficient to achieve steady-state adaptation.

How can we be sure this generalizes to smaller rotation sizes? While we did not measure implicit learning across each block in the 15°, 30°, and 45° stepwise conditions, we used a very similar experimental protocol in Salomonczyk et al. (2011). These data provide another example where the implicit response scales with rotation size during a stepwise rotation sequence (see Figure 1—figure supplement 4C). Here each block used 39 rotation cycles (117 total trials in each block).

When we increased the rotation in a stepwise manner across 3 blocks (30°, then 50°, then 70°), we observed strong increases in asymptotic implicit learning (Figure 1—figure supplement 4C, p<0.001). In a second group, participants were exposed to a 30° rotation and implicit learning was measured across 3 consecutive blocks (Figure 1—figure supplement 4D). Critically, we did not detect any change in the implicit aftereffect across the 3 learning periods (all contrasts, p>0.05). Thus, the result was the same as in our 60° abrupt data in Figure 1—figure supplement 4B. The implicit aftereffect in Block 1 served as an appropriate measure for asymptotic implicit learning. These data strongly argue against the reviewer’s concern.

Lastly, while we did not measure extended exposures to a 15° or 45° rotation, Neville and Cressman (2018) tested prolonged exposure to a 20° and 40° rotations in a similar paradigm (3 targets). These data are reproduced in Figure 1—figure supplement 4E,F. While we do not have access to the raw data to run a repeated-measures ANOVA, we can say that the total change in implicit aftereffect was no larger than 0.7-1.4° across the first and third rotation blocks. Such changes are an order of magnitude smaller than the changes in implicit learning we observed across the 15-60° rotation sizes in the Experiment 1 stepwise group (see Figure 1—figure supplement 4A).

In sum, the 60° implicit learning timecourse we measured in the abrupt group, the 30° implicit learning timecourse measured in Salomonczyk et al. (2011), and the 20° and 40° groups in Neville and Cressman (2018) suggest that the implicit learning system exhibits little to no increase beyond the initial learning block in all experiments. Each study is comparable, testing participants with 3 targets in an upper wedge spanning 45-135° in the workspace. Even if the 15°, 30°, and 45° responses were to increase with additional rotation exposure, the data suggest these gains would be limited to about 1.4°. This change would be an order of magnitude smaller than the overall implicit difference (14.5°) exhibited across stepwise rotation periods (Figure 1—figure supplement 4A), and thus would not meaningfully impact our conclusions. Thus, we are confident in our analysis. We agree that the reviewer’s concern is important to consider, and now describe it on P5L170, Appendix 2, and in Figure 1-Supplement 4 in the revised paper.

Lastly, while we maintain that our implicit measures in Experiment 1 provide adequate asymptotic estimates, we wish to emphasize that the competition theory still applies prior to reaching the asymptotic state. In our paper, we focus on asymptotic learning because it provides a relationship between implicit and explicit learning (i.e., the competition equation) that is easy to validate mathematically. However, the scaling, saturation, and nonmonotonic phenotypes we discuss in Figure 1 can be observed well before the implicit-explicit system reaches its steady-state. Consider the state-space model where the implicit system adapts to the target error created by the rotation r, as in the competition model (the model used in Figures 6D and 7; the term multiplied by $b_i$ is the target error, see Equation 1):

$$x_i(n) = a_i\,x_i(n-1) + b_i\left[r - x_i(n-1) - x_e(n-1)\right]$$

This equation can be rewritten recursively to represent $x_i(n)$ with respect to all prior trials. In the case where implicit learning starts at zero (a naïve learner), this simplifies to:

$$x_i(n) = b_i\sum_{k=1}^{n-1}(a_i - b_i)^{\,n-1-k}\left[r - x_e(k)\right]$$

This equation shows how the implicit system on trial n is driven by the explicit system on all prior trials, the rotation, and the implicit error sensitivity and retention factor. An excellent approximation to this equation can be obtained by replacing $x_e(k)$ with the average $\bar{x}_e$ across all prior trials (to observe this approximation’s accuracy, compare the red and blue lines in Figure —figure supplement 3A). This approximation yields:

$$x_i(n) \approx b_i\,\frac{1-(a_i-b_i)^{\,n-1}}{1-(a_i-b_i)}\left(r - \bar{x}_e\right)$$

Remarkably, this equation is analogous to the competition model in Equation 4 in our manuscript. It states that implicit learning on a given trial n is proportional to $(r - \bar{x}_e)$, the difference between the rotation and the average explicit strategy used by the participant. As n goes to infinity, we obtain the competition model in (Equation 4). The implication of this equation is that a linear competition between implicit learning and explicit learning can be observed throughout the adaptation process, not only in the asymptotic learning phase. Thus, the scaling, saturation, and non-monotonic phenotypes we describe in Figure 1 can be observed throughout the adaptation process.
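As a numerical sanity check of this derivation, the exact implicit trace can be compared against the average-strategy approximation. The retention and error-sensitivity values below are illustrative stand-ins, not the Haith et al. (2015) fits used in the paper:

```python
import numpy as np

# Illustrative (not fitted) retention and error-sensitivity values.
ai, bi = 0.98, 0.05   # implicit system
ae, be = 0.90, 0.20   # explicit system
r, N = 30.0, 150      # rotation size (deg), number of trials

# Exact simulation: both systems learn from the shared target error.
xi = np.zeros(N)
xe = np.zeros(N)
for n in range(N - 1):
    e = r - xi[n] - xe[n]
    xi[n + 1] = ai * xi[n] + bi * e
    xe[n + 1] = ae * xe[n] + be * e

# Approximation: replace xe(k) with its running average over prior trials.
g = ai - bi
xi_approx = np.zeros(N)
for n in range(1, N):
    xi_approx[n] = bi * (1 - g ** n) / (1 - g) * (r - xe[:n].mean())
```

The exact and approximate traces stay close throughout learning, and the late-trial implicit level is proportional to the rotation minus the average explicit strategy, mirroring the competition equation.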

To illustrate this, consider the simulations below. In Figure —figure supplement 3A, we simulate the implicit and explicit response to a 30° rotation as in Figure 6D in the paper. In this simulation both the implicit and explicit systems are driven by target error. The red line shows the implicit approximation above, the blue line shows exact implicit learning, and the magenta line shows explicit strategy. In this simulation, we used the ai, bi, and ae parameters identified in our model fit to Haith et al. (2015), as in our paper’s competition map. Next, recall that the scaling, saturation, and nonmonotonic phenotypes are due to how the explicit system responds to changes in rotation size. When the explicit system response increases more slowly than the rotation, implicit learning will increase in the scaling phenotype. When it increases at the same rate as the rotation, implicit learning will stay the same, leading to the saturation phenotype. When it increases more rapidly than the rotation, implicit learning will decrease as the rotation increases: the nonmonotonic phenotype.

To show these phenotypes, we simulated the implicit and explicit response to a 30° and 45° rotation and tuned the explicit error sensitivity: be. For all 30° simulations, be remained 0.2. To obtain the scaling implicit phenotype, explicit error sensitivity remained 0.2 in the 45° simulation. To obtain the saturation implicit phenotype, we increased be to 0.435 in the 45° condition (i.e., the explicit system became more reactive to the higher rotation). Lastly, to obtain the nonmonotonic phenotype, we increased explicit error sensitivity dramatically, to 0.93. The plots above show the implicit and explicit states obtained at various points throughout learning: from left to right, 5, 10, 20, 40, and 150 rotation cycles. The explicit responses are shown in the bottom row. Implicit responses are shown in the top row.

There are many critical things to note. Firstly, at all time points, changes in explicit error sensitivity produce 3 distinct levels: less explicit learning when be=0.2, medium explicit learning when be=0.435, and high explicit learning when be=0.93. These changes have dramatic effects on the implicit system. For the low explicit strategy level, the implicit system scales when the rotation increases from 30° to 45°. For the medium explicit strategy level, the implicit system remains the same when the rotation increases from 30° to 45°. For the high explicit strategy level, implicit learning decreases when the rotation increases from 30° to 45°. Note, most critically, all 3 phenotypes occur at all phases in the implicit learning time course: as early as cycle 5, and as late as cycle 150 (top row, compare each bar set). The only thing that changes is the total difference between 30° and 45° implicit learning, which is smallest (and hardest to detect) early in learning (cycle 5), and largest (easiest to detect) after many cycles of exposure to the rotation.
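The three phenotypes can be reproduced with a minimal simulation. The retention and sensitivity values below are illustrative stand-ins (the paper’s simulations use the Haith et al., 2015 fits with be = 0.2, 0.435, and 0.93); the explicit sensitivities here are re-tuned so that the three regimes appear cleanly with these stand-in parameters:

```python
def simulate(r, be, ai=0.98, bi=0.05, ae=0.90, n_trials=600):
    """Implicit and explicit state-space learners sharing the target error."""
    xi = xe = 0.0
    for _ in range(n_trials):
        e = r - xi - xe            # shared target error
        xi = ai * xi + bi * e      # implicit update
        xe = ae * xe + be * e      # explicit update
    return xi                      # asymptotic implicit learning

xi30 = simulate(30.0, be=0.2)
xi45_scaling = simulate(45.0, be=0.2)       # explicit grows slower than r: implicit scales up
xi45_saturation = simulate(45.0, be=0.475)  # explicit keeps pace with r: implicit unchanged
xi45_nonmono = simulate(45.0, be=0.93)      # explicit outpaces r: implicit decreases
```

Only the explicit error sensitivity changes across the three 45° simulations; the implicit parameters are identical, yet implicit learning scales up, saturates, or decreases depending on how aggressively the explicit system responds.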

What this means is that the competition model does not solely pertain to asymptotic learning. The same phenomena that occur due to implicit-explicit competition, also appear throughout all phases in the learning process. To simplify matters we have chosen in our manuscript to focus on steady-state learning, where the mathematical relationship between steady-state implicit and explicit learning converges and is easy to test (i.e., the competition equation).

Summary

We do appreciate the reviewer’s criticism. However, we maintain that our implicit learning probes in Experiment 1 provide accurate estimates of asymptotic learning. In Experiment 1, Salomonczyk et al., and Neville and Cressman, 3 targets situated in an upper triangular wedge (45° spacing) were used.

1. We did not detect any statistically significant change in implicit learning after the first learning block in the 60° abrupt group in Experiment 1.

2. We did not detect any statistically significant change in implicit learning after the first learning block in the 30° group in Salomonczyk et al. (2011).

3. Neville and Cressman (2018) measured implicit learning at 3 different timepoints for 20° and 40° rotations. While we cannot assess statistical significance, changes in implicit learning after the first learning block were exceedingly small (less than 1.4°).

Thus, we are confident that the implicit system approaches its asymptotic level within the first block’s duration in Experiment 1. Any additional growth in implicit learning is predicted to be quite small (less than 1.5°), a level that has no effect on our primary conclusion (given that we measured a 14.5° change in implicit learning across the 4 stepwise rotation periods). Finally, our conclusions in Experiment 1 are not dependent on the implicit system reaching its asymptotic state. As shown above, the scaling, saturation, and nonmonotonic implicit learning phenotypes will occur throughout all adaptation phases. Thus, the critical point we are attempting to make is that only a competition model (not an SPE model, or the implicit properties demonstrated in error-clamp studies) has the versatility to capture these distinct implicit modes (let alone the many other phenomena we explore throughout the paper), whether at asymptote or earlier during adaptation.

We have decided that a critical point to convey to the reader is that the competition model does not solely apply to asymptotic learning. Thus, we have added derivations to Appendix 1 that begin by describing xi on any trial n, before taking the limit as n grows large, which yields the asymptotic states in the competition equation (Equation 4). Next, we also highlight the potential issue noted by the reviewer, namely that implicit learning may not have saturated in Block 1 in Experiment 1. We describe the 60° abrupt response, the Salomonczyk et al. (2011) data, and the Neville and Cressman data in Figure 1-Supplement 3, and also in Appendix 1.2 in the revised paper. These data show that our implicit learning measures provide close approximations for steady-state implicit learning, which changes very slowly after the initial rotation block, if at all. We also note that the arbitrary point at which we decide implicit learning has reached the asymptotic state does not alter the implicit learning phenotypes predicted by the competition model (it still exhibits the scaling, saturation, and nonmonotonic phenotypes prior to steady-state).

(4.2) It could be argued that the direct between-condition comparison of the 60° blocks in Exp 1 rescues the flaw mentioned above, in that the number of completed trials is matched. However, plan- or movement-based generalization (Gonzalez Castro et al., 2011) artifacts, which would boost adaptation at the target for the stepwise condition relative to the abrupt one, are one way (perhaps among others) to close some of that gap. With reasonable assumptions about the range of implicit learning rates that the authors themselves make, and an implicit generalization function similar to previous papers that isolate implicit adaptation (e.g., σ around ~30°), a similar gap could probably be produced by a movement- or plan-based generalization model. [I note here that Day et al., 2016 is not, in my view, a usable data set for extracting a generalization function, see point 4 below.]

We appreciate the reviewer’s concern. However, the variation between implicit learning and explicit strategy across abrupt and stepwise learning cannot be captured by any plausible generalization curve.

The reviewer suggests that one explanation is that both groups have equal implicit learning, but the abrupt condition leads to greater re-aiming. More aiming in the abrupt group causes a decrease in the implicit learning measured at the target due to generalization. This hypothesis could be summarized with an SPE generalization model in which measured implicit learning at the target is equal to total implicit learning via: $x_i^{measured} = x_i^{ss}\exp\left(-0.5\left(x_e^{ss}/\sigma\right)^2\right)$, where σ is the generalization curve’s width and $x_i^{ss}$ is total implicit learning (measured at the aim direction). Could this model lead to the observed data?

Initially, let us assume that σ = 37.76°. This is the generalization curve that was measured by McDougle et al. (2017), as proposed in Point 4-4 below. To estimate the change in implicit learning between the abrupt and stepwise groups given generalization, we can calculate the reduction in the implicit aftereffect expected under a normal distribution with σ = 37.76°, for the 39.5° and 29.9° explicit strategies measured in the abrupt and stepwise conditions. Measured at the target, 57.86% of the full aftereffect would remain in the abrupt group, versus 73.09% in the stepwise group. Altogether, this would mean that the stepwise rotation increased the implicit aftereffect by 100(73.09/57.86 – 1) = 26.3% over the abrupt group. In reality, the abrupt and stepwise implicit aftereffects were 11.72° and 21.36°, respectively: an 82.3% increase in implicit learning.
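These percentages follow directly from the Gaussian generalization curve; as a check (all numbers taken from the text above):

```python
import math

def remaining_fraction(xe_deg, sigma=37.76):
    """Fraction of the full implicit aftereffect expressed at the target
    when the aim point sits xe_deg away (Gaussian generalization)."""
    return math.exp(-0.5 * (xe_deg / sigma) ** 2)

abrupt = remaining_fraction(39.5)     # explicit strategy in the abrupt group
stepwise = remaining_fraction(29.9)   # explicit strategy in the stepwise group

predicted_gain = 100 * (stepwise / abrupt - 1)  # stepwise advantage predicted by generalization
measured_gain = 100 * (21.36 / 11.72 - 1)       # stepwise advantage actually measured
print(predicted_gain, measured_gain)  # ≈ 26.3 vs ≈ 82.3
```

The measured stepwise advantage is more than three times the advantage that generalization alone can produce.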

In sum, similar to our analysis of Experiments 2 and 3 in Figure 5, while generalization will produce a negative implicit-explicit relationship, the implicit learning variations we observed were much larger in Experiment 1 than predicted by generalization alone. In this case, the 82.3% increase in implicit learning is more than 3-fold larger than the 26.3% increase predicted by the implicit generalization properties measured by McDougle et al. (2017).

Suppose σ = 37.76° does not hold in our data. To match the measured data, σ would need to be smaller, narrowing the generalization curve. This is unlikely given that Experiment 1 used 3 targets, whereas McDougle et al. used only 1; additional targets would widen the generalization curve, not narrow it (Krakauer et al., 2000). Still, let us proceed. Rather than assume σ = 37.76°, let us fit a normal distribution to the measured data. In the abrupt group, implicit and explicit learning were 11.72° and 39.5°, respectively. In the stepwise group, implicit and explicit learning were 21.36° and 29.9°, respectively. Fitting a normal generalization curve to these data yields the curve shown in Figure 4—figure supplement 1A. The optimal σ is 23.6°. Surprisingly, this generalization curve would require implicit learning (measured at the aiming location) to be 47.8°. This value substantially exceeds that deemed possible by Morehead et al. (2017) and Kim et al. (2018). Much more importantly, these values are unphysical. For abrupt learning, $x_i^{ss}$ = 47.8° and $x_e^{ss}$ = 39.5° would imply total adaptation of 47.8° + 39.5° = 87.3° (see Figure 4—figure supplement 1B). Not only does this exceed measured abrupt adaptation by about 60%, it is also larger than the rotation magnitude, and thus entirely unphysical. In the stepwise group as well, total predicted learning would be about 77.7°, larger than the rotation size (hence unphysical).
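Because there are two conditions and two unknowns, the σ = 23.6° and 47.8° values can be recovered in closed form (all numbers taken from the text above):

```python
import math

# Measured (explicit strategy, implicit aftereffect) pairs, in degrees.
xe_abrupt, xi_abrupt = 39.5, 11.72
xe_step, xi_step = 29.9, 21.36

# Model: xi_measured = A * exp(-0.5 * (xe / sigma)**2), where A is total
# implicit learning at the aim location. Taking the log-ratio of the two
# conditions eliminates A and gives sigma directly.
sigma = math.sqrt(0.5 * (xe_abrupt**2 - xe_step**2) / math.log(xi_step / xi_abrupt))
A = xi_abrupt * math.exp(0.5 * (xe_abrupt / sigma) ** 2)
print(sigma, A)  # ≈ 23.6 and ≈ 47.8 degrees
```

The exact solution makes the problem explicit: forcing the Gaussian through both conditions demands a total implicit response of nearly 48°, beyond any plausible bound.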

Clearly, there is a deep issue here. The problem is that as the generalization curve narrows (e.g., σ = 23.6° vs. 37.76°), not only does implicit learning measured at the target drastically underapproximate total implicit learning at the aim location, but the explicit strategy we estimated via $x_e^{ss} = x_T^{ss} - x_i^{ss}$ will substantially overestimate true explicit learning (because explicit strategy is estimated using implicit learning). As σ gets smaller, the issue will grow and the analyses above become inappropriate, leading to unphysical systems. The key idea is that both implicit and explicit learning need to be corrected by the generalization curve in our data. Note that:

Equation 1: $x_i^{measured} = x_i^{ss}\exp\left(-0.5\left(x_e^{ss}/\sigma\right)^2\right)$

The discrepancy between total and measured implicit learning is $(x_i^{ss} - x_i^{measured})$. The amount that implicit learning is underapproximated is the same amount that explicit learning is overapproximated. Thus, we have:

Equation 2: $x_e^{measured} = x_e^{ss} + x_i^{ss} - x_i^{measured}$

We can re-arrange Equation 2:

Equation 3: $x_e^{ss} = x_e^{measured} - x_i^{ss} + x_i^{measured}$

Combining Equations 1 and 3 yields the expression:

Equation 4: $x_e^{ss} = x_e^{measured} - x_i^{ss} + x_i^{ss}\exp\left(-0.5\left(x_e^{ss}/\sigma\right)^2\right)$

Equations 1 and 4 correctly express and constrain the relationship between (1) total implicit learning, (2) total explicit learning, (3) measured implicit learning, and (4) measured explicit learning. We identified the σ and $x_i^{ss}$ that minimized the squared error between the $x_i^{measured}$ predicted by Equation 1 and the respective stepwise and abrupt values, subject to the constraint that Equation 4 be satisfied exactly. The model revealed that the optimal σ and $x_i^{ss}$ were 3.87° and 45.69°, which produced the curve shown in Figure 4—figure supplement 1C (corrected model). This shows how measured implicit learning and explicit learning will interact. Figure 4—figure supplement 1D shows the associated relationship between measured implicit learning and total strategy (see the corrected model). Figure 4—figure supplement 1C demonstrates one oddity: the model requires that total implicit learning equal 45.69°, roughly 90% of the total adapted response. More importantly, Figure 4—figure supplement 1D reveals that the model requires an extreme generalization curve, with σ = 3.87°, compared with the 37.76° measured by McDougle et al. This value is not physiological, being an order of magnitude smaller than any generalization curve measured to date. Thus, we can conclude that generalization is extremely unlikely to yield the measured data and is not a viable alternative to the competition model.

Help interpreting the relationship in Figure 4—figure supplement 1: how are panels C and D related? Let us begin with the stepwise point in panel C. This lies roughly at 20° implicit learning and 30° explicit strategy. This explicit strategy mirrors that measured in the paper: total adaptation minus measured implicit learning. Note that total implicit learning is about 45°. Thus, measured implicit learning is about 45 - 20 = 25° smaller than total implicit learning. This means that our estimated explicit strategy of 30° is about 25° too large, so the actual strategy is much smaller: 30 - 25 = 5°. In panel D, the x-axis explicit strategy will therefore be about 5°, and implicit learning will be about 20/45 × 100 = 44.4%.
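The bookkeeping in this example can be written out explicitly (approximate values taken from the text above):

```python
# Stepwise point in panel C (approximate values from the text), in degrees.
xi_total = 45.0       # total implicit learning at the aim location
xi_measured = 20.0    # implicit learning measured at the target
xe_measured = 30.0    # explicit estimate: total adaptation - measured implicit

shortfall = xi_total - xi_measured   # implicit learning hidden by generalization
xe_true = xe_measured - shortfall    # the shortfall inflates the explicit estimate
percent_expressed = 100 * xi_measured / xi_total  # implicit fraction seen at target

print(shortfall, xe_true, percent_expressed)  # 25.0, 5.0, ≈ 44.4
```

Every degree of implicit learning hidden by generalization reappears, one-for-one, as an overestimate of explicit strategy, which is why both measures must be corrected together.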

Summary

Our abrupt vs. stepwise analysis in Figures 2D-G does not match implicit generalization. We began with the (ultimately untenable) assumption that the implicit learning measures in Experiment 1 represented generalized learning, whereas explicit strategies represented total re-aiming. The change in implicit learning we measured, however, was three times larger than that predicted by the generalization curve measured in McDougle et al. Moreover, this analysis is internally flawed: correcting only the implicit measures with generalization, but not the explicit measures, produced a situation where total adaptation would have exceeded the rotation’s magnitude. This is because our explicit strategies are estimated using total adaptation and implicit learning. When we amended the SPE generalization model so that both the measured implicit and explicit learning were corrected by the generalization curve, the model required that plan-based generalization resemble a Gaussian with σ = 3.87°, an unphysiological scenario.

These analyses show that a generalization model cannot explain the measured data. We have updated our text on P10L374, as well as Figure 4-Supplement 1, and Appendix 6. These issues also occur in Figure 4, when we compare Experiments 2 and 3 to past generalization curves (see Point 4-4 below). Also note that we have conducted similar analyses for the Neville and Cressman (2018) dataset, analyzing the response to rotation size, and instructions. These are shown in Appendix 6.5 in the revised paper and demonstrate the same limitations in generalization as shown above.

(4.3) The nonmonotonicity result requires disregarding the results of Morehead et al., 2017, and essentially, according to the rebuttal, an entire method (invariant clamp). The authors do mention and discuss that paper and that method, which is welcome. However, the authors claim that "We are not sure that the implicit properties observed in an invariant error context apply to the conditions we consider in our manuscript." This is curious and raises several crucial parsimony issues, for instance: the results from the Tsay study are fully consistent with Morehead 2017. First, attenuated implicit adaptation to small rotations (not clamps; Morehead Figure 5) could be attributed to error cancellation (assuming aiming has become negligible to nonexistent, or never happened in the first place). This (and/or something like the independence model) thus may explain the attenuated 15° result in Figure 1N. Second, and more importantly, the drop-off of several degrees of adaptation from 30°/60° to 90° in Figure 1N is eerily similar to that seen in Morehead '17. Here's what we're left with: An odd coincidence whereby another method predicts essentially these exact results but is simultaneously (vaguely) not applicable. If the authors agree that invariant clamps do limit explicit learning contributions (see Tsay et al. 2020), it would seem that similar nonmonotonicity being present in both rotations and invariant error-clamps works directly against the competition model. Moreover, it could be argued that the weakened 90° adaptation in clamps (Morehead) explains why people aim more during 90° rotations. A clear reason, preferably with empirical support, for why various inconvenient results from the invariant clamp literature (see point 6 for another example) are either erroneous, or different enough in kind to essentially be dismissed, is needed.

We appreciate the reviewer’s criticisms here, but think it is important not to overlook the overall picture. We are not sure we agree that Tsay et al. is consistent with Morehead et al. (elaborated below), but even if it were, the implicit learning properties suggested by Morehead et al. are at odds with most of the data considered in our paper. Here are some examples:

– The scaling phenotype in Experiment 1, and also Salomonczyk et al. 2011, directly contradicts the idea that implicit learning reaches a rotation-invariant saturation point – the hallmark phenotype in Morehead et al., 2017. These studies also show that the implicit response can vary greatly across 15-60° rotations, vastly exceeding the 4.4° upper limit on the proportional zone estimated in Kim et al., 2018.

– The variations in implicit learning across the abrupt and stepwise conditions in Experiment 1 are at a minimum threefold larger than one can obtain with standard generalization curves (see Point 4-2 above), and at least twofold larger in response to instruction in Neville and Cressman (2018). Moreover, as described in Point 4-2, a model where implicit learning is equal between instruction/no-instruction conditions or abrupt/gradual conditions requires generalization properties that are unphysiologically narrow. In sum, it is not possible that implicit learning was equal across these various conditions, with the apparent differences caused by generalization.

– A similar argument holds for the variations in implicit learning in Neville and Cressman (2018). There is no way to reconcile the variation in implicit learning in response to instruction with the invariance in learning across rotation sizes. The latter requires a Morehead-like response and complete generalization in implicit learning. However, with complete generalization, there can be no decrement in implicit learning with instruction, unless competition with explicit strategy is permitted.

Thus, these studies alone show it is not possible that the implicit system shows the same response to rotation size as in Morehead et al. Now, let us consider the assertion that the rotation response in Tsay et al. is consistent with Morehead et al. To be clear, error-clamp learning in Morehead et al. does not show a nonmonotonic response, one that goes up and then down (or vice versa). Rather, it shows a saturated response to rotation size initially, and then a decrease in implicit learning when rotations get very large (i.e., >95°). As the reviewer notes, when Morehead et al. tested a 7.5° standard rotation (or at least somewhat standard, minus the “ignore the cursor” instruction), smaller implicit learning was observed (than in the saturated zone). This reduction in implicit learning was attributed to “error cancellation”: the participants adapted to 7.5°, which eliminated the error and halted implicit learning (see Morehead et al.). We do not agree with the reviewer that this phenomenon may have occurred in Tsay et al. The critical point is that Tsay et al. tested a 15° rotation. As shown in Figure 1N, implicit learning reached only 7.6°. Thus, it is not possible that the error was cancelled by implicit learning, which reached only half the rotation magnitude. Morehead et al. would have predicted that implicit learning should continue until reaching 15° to cancel the error. Thus, the implicit responses in Tsay et al. are not consistent with those in Morehead et al.

This now brings us to the decrement in implicit learning we observed in Figure 1N with the 90° rotation. Yes, we agree that Morehead et al. observed an attenuation in implicit learning with very large rotations. It is possible that such an attenuation may have contributed to the substantial implicit reduction in the 90° group in Tsay et al. We reference this possibility in our revised manuscript on P21L849. With that said, we wish to emphasize a very important detail: it is errors that drive learning, not rotations. While the large rotations may have been similar across these two studies, the errors subjects experienced were totally mismatched. In Morehead et al., participants in both the error-clamp and standard rotation groups were told not to aim and to ignore the cursor. The implicit learning curve reached approximately 10°, leaving an 85° target error. Given past studies (e.g., Wei and Kording, 2009; Marko et al., 2012), error sensitivity will be exceedingly small for such errors. In our view, this insensitivity to extremely large errors likely led to the attenuation in implicit learning observed in Morehead et al. Instructions to “ignore the cursor” may have further exacerbated the reduction in sensitivity to these large errors (e.g., if participants did not foveate the cursor during adaptation).

However, in Tsay et al., participants were allowed to aim. Total learning reached about 85°, leaving a 5° target error: an error much more inclined to drive implicit learning. Comparing steady-state adaptation to this 5° residual error with the 85° residual error in Morehead et al. is not reasonable in our view. The only way to compare learning across these experiments would be if SPE were the sole driver of implicit learning (one not altered by explicit strategy). This, however, is directly contradicted by the analyses in the current paper, and by other studies which show the critical role visual target error plays in implicit learning: Miyamoto et al. (Nat Neuro, 2020), Ranjan and Smith (MLMC 2020), Tsay et al. (bioRxiv, 2021), and Taylor and Ivry (2011).

Thus, we would argue that direct comparison between Tsay et al. and Morehead et al. is not advisable. Attenuation in implicit learning in these two studies may have entirely separate causes: a drastic reduction in target errors (the competition hypothesis) in Tsay et al., and an unresponsiveness to extreme target errors in Morehead et al. (possibly exacerbated by telling participants to ignore the cursor), which appears largely specific to that one study. For the learning patterns in Tsay et al., the competition model seems the most parsimonious choice, not only given its quantitative match to the data (Figure 1Q and Figure 1-Supplement 2), but also because it alone (not the error-clamp learning properties in Morehead et al.) can explain implicit responses across the many other cases we consider in Figures 1 and 2: abrupt and stepwise responses in Experiment 1 (as well as Salomonczyk et al.), rotation responses between 15-60° in Tsay et al., and implicit behavior in Neville and Cressman. Moreover, the implicit learning properties in Morehead et al. provide no clear way to interpret the pairwise and counterintuitive (e.g., a negative implicit-total learning) relationships between implicit learning, explicit strategy, and total learning detailed at length in Figures 3-5 in our paper at the individual-participant level. That is, we tested the reviewer’s suggestion that negative implicit-explicit correlations were due to strategies responding to implicit variation (see P11L417 in the revised paper). The data were inconsistent with this possibility (Figure 5). However, we have now revised our Discussion section to note other possible reasons why implicit learning declined with large rotation sizes in Tsay et al. (see P21L849).

With that said, there is a much larger and more central point to be made here, which we had not clearly described in our previous paper and reviewer responses. Our previous statement, that ‘the implicit properties observed in an invariant error context [may not] apply to [standard rotation] conditions’, missed critical mathematical context and created an unnecessary contention between these two experimental conditions.

At a core level, the equation that governs steady-state behavior in an error-clamp condition is not the same as in a standard rotation condition. Excluding any role strategic learning may play in competing with implicit learning, an SPE model predicts steady-state implicit learning via the independence equation: x_i^ss = b_i (1 − a_i + b_i)^(−1) r. In a constant error-clamp study, where errors remain invariant, the correct steady-state prediction is: x_i^ss = b_i (1 − a_i)^(−1) r. These two steady-states possess different implicit learning gains: b_i (1 − a_i)^(−1) for constant error-clamp learning and b_i (1 − a_i + b_i)^(−1) for standard rotation learning. This has critical implications. For example, in an error-clamp condition, for a_i = 0.98 and b_i = 0.3, the state-space model predicts an implicit steady-state of 0.3 (1 − 0.98)^(−1) r = 15r. That is, in an error-clamp condition, steady-state implicit adaptation would need to exceed the rotation size by at least an order of magnitude to reach a dynamic equilibrium between learning and forgetting. A gain of p_i = 15 would mean that a 5° error-clamp would require 75° of implicit learning to reach a dynamic steady-state; a 30° clamp would require 450°. Thus, very rapidly, even small rotations would require implicit learning that is either physiologically unlikely or physically impossible in order to reach a steady-state between learning and forgetting.
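The contrast between the two steady-state gains can be made concrete with a short sketch, using the worked values above (the function names are ours, purely illustrative):

```python
# Sketch of the two steady-state predictions discussed above.
# Parameter values (a_i = 0.98, b_i = 0.3) follow the worked example in the text.

def ss_rotation(a_i, b_i, r):
    """Independence-model steady-state under a standard rotation:
    error shrinks as the implicit state grows, so the gain is b_i / (1 - a_i + b_i)."""
    return b_i / (1 - a_i + b_i) * r

def ss_clamp(a_i, b_i, r):
    """Steady-state under an invariant error-clamp:
    the error never changes, so the gain is b_i / (1 - a_i)."""
    return b_i / (1 - a_i) * r

a_i, b_i = 0.98, 0.3
print(ss_rotation(a_i, b_i, 30.0))  # ~28.1 deg: an attainable dynamic equilibrium
print(ss_clamp(a_i, b_i, 5.0))      # 75 deg: far beyond any plausible implicit response
print(ss_clamp(a_i, b_i, 30.0))     # 450 deg: physically impossible
```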

Thus, not only do the competition and independence Equations (4) and (5) not apply to error-clamp learning, their ‘corrected’ variants show that the steady-states we describe in our paper cannot be reached in an error-clamp paradigm. For these reasons, the steady-states reached in error-clamp studies are likely caused by another mechanism: the ceiling effect described in Morehead et al. (2017) and Kim et al. (2018). That is, the large implicit learning gain (e.g., 15) in error-clamp studies will drive the implicit system to a ceiling in the total amount it is able to adapt (because it cannot reach its dynamic equilibrium). In the error-clamp scenario, implicit forgetting is simply not strong enough to compete with learning from the constant, unchanging error, yielding theoretical steady-state levels that, across many rotation sizes, greatly exceed the implicit system’s capacity to correct. On the other hand, in standard rotation conditions, the implicit learning gain is much smaller (e.g., 0.6-0.8 in the data sets we consider here), meaning that the implicit system will reach a dynamic steady-state prior to a rotation-invariant ceiling.
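A minimal trial-by-trial simulation illustrates this distinction. The 15° "ceiling" below is an assumed capacity limit in the spirit of Morehead et al. (2017); parameters and the ceiling value are illustrative only:

```python
# Sketch contrasting the two paradigms with the single-state implicit update
# x(n+1) = a_i*x(n) + b_i*e(n). Under a standard rotation the error shrinks as
# the state grows (e = r - x); under an invariant clamp it never changes (e = r).
# The 15 deg ceiling is an assumed capacity limit, not a fitted value.

def simulate_implicit(a_i, b_i, r, clamp, ceiling=15.0, trials=500):
    x = 0.0
    for _ in range(trials):
        e = r if clamp else r - x
        x = min(a_i * x + b_i * e, ceiling)
    return x

a_i, b_i = 0.98, 0.3
print(simulate_implicit(a_i, b_i, 15.0, clamp=False))  # ~14.1 deg: equilibrium below the ceiling
print(simulate_implicit(a_i, b_i, 5.0, clamp=True))    # 15.0 deg: pinned at the capacity limit
```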

Now, a separate but related question is what causes the implicit system’s “upper capacity limit”, and whether the limit varies across experimental settings. We suspect that it does. For example, in Morehead et al. (2017) the implicit system was limited to about 10-15° of learning, but in Kim et al. (2018) this limit increased to 20-25°. The limit increased greatly despite similar experimental conditions across these tasks. We speculate that an important component of this limit is proprioception, which may play a role in halting implicit learning when the hand strays too far from the target. We suspect that this halting is much more aggressive in error-clamp paradigms, where the participant’s goal is to move their hand straight to the target and ignore the cursor (here proprioception is the only input participants have about whether or not they are achieving their “reach straight” goal). In the standard rotation, however, changes in reach angle are implicitly encouraged, because the participant’s goal is to hit the target with the cursor, not their hand. Thus, we suspect that the brain is less inclined to halt implicit learning due to “proprioceptive discrepancies” in standard rotation conditions; visual errors dominate here. This may explain why some authors have observed implicit learning levels (e.g., about 35° in Salomonczyk et al., 2011, and even 45° in Maresch et al., 2021) that greatly exceed the error-clamp limits observed in Morehead et al. and Kim et al.

We have amended our Discussion to better compare and contrast standard rotation learning and invariant error-clamp learning on P23L918 in the revised paper. We also better compare Tsay et al. and Morehead et al. on P21L849 in our Discussion, as well as in Appendix 6.6. We outline the critical discrepancy in the state-space model, i.e., the theoretical implicit learning gain. We hope our discussion can help the reader understand why the implicit system may reach a ceiling in error-clamp conditions, but not standard rotation conditions. This ceiling effect makes it appear as if implicit steady-states are not sensitive to rotation size. Overall, we do think that a competition or independence model can apply in error-clamp learning conditions, once we allow the idea that implicit learning can reach a “capacity limit”, and also that it may respond differently to visual and proprioceptive errors across error-clamp studies and standard rotation conditions (that is, we should not expect the same ceiling across task variants).

Summary

We appreciate the reviewer’s concern about potential similarities between implicit attenuation in Tsay et al., 2021 and Morehead et al., 2017. With that said, there are critical pieces that do not match: e.g., the reduced implicit learning in the 15° rotation group, which cannot be chalked up to ‘error cancellation’ as in Morehead et al. Further, there are fundamental differences in the errors experienced across these two experimental paradigms: large residual errors (>80°) in Morehead et al., versus a 5° residual error in the 90° rotation group in Tsay et al. These substantial variations in error make it challenging to compare the two studies. In the Morehead et al. study, we suspect a drastic reduction in implicit error sensitivity is likely behind the attenuation in implicit learning: the large residual errors encountered in this study do not generalize well to the experiments we consider here. Given that the competition model alone can explain the implicit learning phenotypes in Experiment 1, Salomonczyk et al., Neville and Cressman, and the 15-60° rotation range in Tsay et al., this model provides the most parsimonious explanation for the implicit attenuation in Tsay et al.

Lastly, in our past paper and reviewer responses, we did not provide enough mathematical context for why implicit steady-states may differ across error-clamp and standard rotation conditions. The issue is that the steady-state equations describing a dynamic equilibrium between learning and forgetting differ across these experimental paradigms. In error-clamp learning, implicit adaptation must reach unattainable levels in order to achieve the dynamic steady-states described by the competition or independence theories. Thus, for error-clamp adaptation, the only possible way to reach “steady-state” is for the implicit system to saturate at a physiologic upper bound. We therefore think that the competition model describes implicit learning up until a physiologic upper bound: the one highlighted in error-clamp studies. This upper bound’s nature remains unclear, as it appears to vary across similar studies (e.g., Morehead et al. and Kim et al.) and is exceeded in some standard rotation conditions (e.g., Salomonczyk et al. and Maresch et al., 2021). Thus, we think it is likely that several experimental factors play a role in determining the “stopping point” for implicit learning when errors cannot be reduced through adaptation: e.g., differential responses to visual and proprioceptive signals.

We have now greatly expanded on these issues and speculations in our revised Discussion. It is our hope that these mathematical and empirical considerations inspire the community to better compare and contrast implicit responses across standard rotations and error-clamp conditions: especially in scenarios where explicit strategy is also present to modulate the implicit response.

(4.4) The treatment given to non-target based generalization (i.e., plan/aim) is useful and a nice revision w/r/t previous reviewer comments. However, there are remaining issues that muddy the waters. First, it should be noted that Day et al. did not counterbalance the rotation sign. This might seem like a nitpick, but it is well known that intrinsic biases will significantly contaminate VMR data, especially in e.g. single target studies. I would thus not rely on the Day generalization function as a reasonable point of comparison, especially using linear regression on what is a Gaussian/cosine-like function. It appears that the generalization explanation is somewhat less handicapped if more reasonable generalization parameterizations are considered. Admittedly, they would still likely produce, quantitatively, too slow of a drop-off relative to the competition model for explaining Experiments 2 and 3. This is a quantitative difference, not a qualitative one. W/r/t point 1 above, the use of Exp 1 is an additional problematic aspect of the generalization comparison (Figure 5, lower panels). All in all, I think the authors do make a solid case that the generalization explanation is not a clear winner; but, if it is acknowledged that it can contribute to the negative correlation, and is parameterized without using Day et al. and linear assumptions, I’d expect the amount of effect left over to be smaller than depicted in the current paper. If the answer comes down to a few degrees of difference when it is known that different methods of measuring implicit learning produce differences well beyond that range (Maresch), this key result becomes less convincing. Indeed, the authors acknowledge in the Discussion a range of 22°-45° of implicit learning seen across studies.

We thank the reviewer for these excellent suggestions. We have made several changes to the manuscript to address these concerns. First, we have now added the generalization curve measured by McDougle et al. (2017) to the paper. These authors measured implicit generalization via aftereffect, and also balanced CW and CCW rotations. Specifically, we used the generalization curve measured in Figure 3A of their paper, taking the aftereffects measured at the target locations labeled 22.5°, 0°, -22.5°, -45°, and -67.5°. We used the “left-hand side” of the curve, as these data represent how hand angle would change when participants abandon their strategy and are told to reach straight to the target. We added these reach angles to Figure 4A in the revised manuscript (see 4A below), normalizing the data to the maximal aftereffect at 22.5° (result is shown in McDougle 1T in 4A below).

There are some important things to note about these data. As the reviewer suggested, the McDougle et al. and Day et al. generalization curves do diverge. However, this departure seems to occur only once explicit strategies exceed about 25-30°. For angles less than 25°, the two curves are overall quite similar. This is important given our data in Experiments 2 and 3, where explicit strategies were maximally 20-25°; in this region, the Day et al. and McDougle et al. curves are very similar. As such, when the generalization curves are used to predict how implicit learning should decline in Experiments 2 and 3 (see Figures 5B and 5C), McDougle et al. is no better at matching the measured data than the Day et al. curve highlighted in the previous manuscript. Thus, while we agree that other studies such as McDougle et al. provide better generalization estimates, our conclusion that the implicit decline far exceeds that predicted by generalization in Experiments 2 and 3 remains unchanged.

Recall, however, that the analysis in Figures 4A and 4B above requires an important correction (see Point 4-2). Namely, the explicit strategies along the x-axis are not true explicit learning; they are explicit strategy estimated as total adaptation minus the measured implicit learning (on reach-to-target probes). Thus, it is actually not appropriate to compare the data in these insets directly to past generalization curves. Under the generalization hypothesis, because measured implicit learning will underestimate total implicit learning, the estimated explicit strategy will overestimate the true explicit strategy. While we considered this idea in a supplementary figure in our previous manuscript, we think this is a much more central point that should be made. Thus, in Figure 4C we now show the implicit-explicit generalization curve that would be required to produce the implicit-explicit measures we obtained in Experiments 2 and 3. To obtain these data, we fit a normal SPE generalization model to the data in Experiments 2 and 3, separately, and identified the total implicit learning and generalization width that best matched the data, using the method described in Point 4-2 above. The optimal generalization curves had σ = 5.16° and 5.76° in Experiments 2 and 3, respectively. Note that this generalization is exceedingly narrow, and inconsistent with McDougle et al. (σ = 37.76°). Thus, as in Point 4-2 above, plan-based generalization is not consistent with the measured data.
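To illustrate the width-fitting procedure described here, the following sketch (with synthetic data, not our actual measurements) grid-searches the Gaussian width and total implicit learning that minimize the squared error; because the synthetic data are generated from a known curve, the fit recovers it exactly:

```python
import math

def predict(total, sigma, e):
    # Gaussian plan-based generalization of implicit learning to the target
    return total * math.exp(-e**2 / (2 * sigma**2))

# Synthetic "measured" implicit learning generated from a known curve
# (total = 15 deg, sigma = 6 deg), so the fit should recover those values.
explicit = [5.0, 10.0, 15.0, 20.0, 25.0]       # assumed aiming angles (deg)
implicit = [predict(15.0, 6.0, e) for e in explicit]

# Grid-search the total implicit learning and width that minimize the
# sum of squared errors against the measurements.
best = min(
    (sum((predict(t, s, e) - m)**2 for e, m in zip(explicit, implicit)), t, s)
    for t in [0.5 * i for i in range(20, 61)]    # total: 10 to 30 deg
    for s in [0.25 * i for i in range(16, 161)]  # sigma: 4 to 40 deg
)
rss, total_fit, sigma_fit = best
print(total_fit, sigma_fit)  # recovers 15.0 and 6.0
```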

Next, to comply with the reviewer’s suggestion, we have also updated our Experiment 1 analysis in Figures 4D-F. We wish to reiterate that our data (and previous data sets by Salomonczyk et al. and Neville and Cressman) suggest that our data in Experiment 1 do provide good asymptotic estimates for the implicit system (see our Point 4-1 response above). Also, please recall (see Point 4-1) that the data in Experiment 1 can still be analyzed whether or not implicit and explicit learning have reached asymptote; the competition model (and similarly the independence model) applies to the entire learning timecourse (only the effect sizes scale as total adaptation is approached). Thus, with these clarifications, we hope the reviewer agrees there is no issue with our Experiment 1 analysis in Figure 4.

Instead, the reason why the Experiment 1 analysis is so critical is that it speaks to the reviewer’s point about ‘qualitative versus quantitative differences.’ Experiment 1 tests multiple rotation sizes, and as such allows us to test qualitative differences between generalization and the competition model. As described in the paper, the competition model predicts that the implicit gain relating implicit learning and explicit strategy should be similar across rotation sizes. A generalization model predicts that this gain will increase as the rotation increases. This change in gain is due to increases in implicit learning as rotation size increases. A Gaussian implicit-explicit relationship would exacerbate these variations in gain; because the normal distribution is nonlinear, linear approximations to the curve will vary in slope as explicit strategy gets larger (which occurs as rotation size gets larger).

In our revised Experiment 1 analysis, we made two major improvements. First, we split our analysis into two pieces, one in which the relationship between implicit learning and explicit learning is assumed to be well-approximated by a line (SPE gen. linear in Figures 4D-F). Second, we repeated these analyses using a normal distribution to represent the implicit-explicit relationship in the SPE generalization model (SPE gen. normal in Figures 4D-F). In our linear analysis, we no longer use the Day et al. generalization curve properties; we replaced these with the McDougle et al. generalization curve properties. In our Gaussian model, we use σ = 37.76°, the width measured by McDougle et al.

As in the previous manuscript, we fit the model to the B4 period only (the 60° period). Figure 4D shows the updated comparison to the data. The black line shows the competition model. The cyan line (SPE gen. linear) shows the linear generalization model. The gold dashed line (SPE gen. normal) shows the Gaussian model. Figure 4E reports the prediction error of each model: the RMSE between the data and the model predictions across the held-out B1 (15°), B2 (30°), and B3 (45°) periods. Prediction error was about 60% larger in the SPE model (Figure 4E, rm-ANOVA, F(2,72)=13.7, p<0.001, ηp² = 0.276). The linear model performed slightly better than the Gaussian model (post-hoc test, p=0.006). Thus, as previously reported, the competition model more accurately matched the data measured in Experiment 1.

Issues with both the linear and normal SPE generalization functions were due in part to an intrinsic property of each model: the relationship between implicit and explicit learning should vary as implicit learning increases. For example, suppose that generalization causes a 50% decrease in implicit learning. A condition with 15° of implicit learning produces a 7.5° change, but a condition with 30° of implicit learning produces a 15° change; thus, the total change in learning varies with total implicit adaptation, which increased as rotation size increased. Another issue with the Gaussian model is that the implicit-explicit curve varies with explicit strategy, which ‘moves the data’ to different regions along the nonlinear normal distribution. As the rotation gets larger, explicit strategies increase, which systematically changes where the normal distribution is sampled, yielding variable implicit-explicit relationships when assessed in the linear sense. These variations in implicit-explicit learning did not match the measured data (Figure 4F), which exhibited little to no change in the slope relating implicit and explicit learning across rotation sizes. On the other hand, as discussed in the paper, the invariance in slope matches the competition model, where the implicit-explicit correlation has a constant slope −p_i which does not depend on the rotation (see black lines in Figure 4F).
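This slope argument can be illustrated numerically. In the sketch below (our own illustration, using the McDougle et al. width), the local slope of a Gaussian implicit-explicit curve steepens as total implicit learning and explicit strategy grow with rotation size; the competition model instead predicts a constant slope of −p_i:

```python
import math

sigma = 37.76  # generalization width from McDougle et al. (2017), as used above

def implicit_at_target(total_implicit, explicit):
    # Plan-based generalization: implicit learning expressed at the target is
    # the total implicit state scaled by a Gaussian of the aiming angle.
    return total_implicit * math.exp(-explicit**2 / (2 * sigma**2))

def local_slope(total_implicit, explicit, de=0.1):
    # Finite-difference slope of the implicit-explicit relationship
    return (implicit_at_target(total_implicit, explicit + de)
            - implicit_at_target(total_implicit, explicit)) / de

# Under generalization the slope steepens for larger rotations, where both
# total implicit learning and explicit strategy are larger (values in deg are
# illustrative, not fitted).
print(local_slope(15.0, 5.0))   # shallow for a small rotation
print(local_slope(30.0, 25.0))  # several times steeper for a large rotation
```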

Last, we conducted a model comparison between all 3 models (competition, linear generalization with McDougle et al. properties, and normal generalization with McDougle et al. properties) using AIC. For these analyses, we used the stepwise participants in Experiment 1. These are the only data where such a comparison can be done, because implicit and explicit learning were assayed across 4 rotation sizes. Thus, we fit all 3 models in Figures 4D-F to individual participants. As expected, the competition model was more likely to explain the data (Figure 4G); 31 participants were best described by the competition model, compared to 2 participants in the linear SPE generalization case, and 4 in the normal SPE generalization case (Figure 4G).
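For readers unfamiliar with the criterion, the per-participant comparison can be sketched as follows; the residual values and parameter counts below are invented purely for illustration and are not our fitted results:

```python
import math

def aic_least_squares(rss, n, k):
    # AIC for a least-squares fit: n*ln(RSS/n) + 2k (lower is better)
    return n * math.log(rss / n) + 2 * k

# Hypothetical residuals for one participant across the 4 stepwise rotation
# sizes; both candidate models are given the same parameter count here, so
# the comparison reduces to goodness of fit.
n, k = 4, 2
print(aic_least_squares(2.0, n, k))  # better-fitting model wins (lower AIC)
print(aic_least_squares(6.0, n, k))  # worse-fitting model
```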

Perhaps the poor SPE model performance was linked to an inaccurate estimate for the width of the generalization curve (the slope in the linear case, the standard deviation in the normal case). In other words, maybe generalization in McDougle et al. differed from that in Experiment 1. To test this, we conducted a sensitivity analysis. We repeated our prediction error (Figure 4H, left) and AIC (Figure 4H, right) analyses, but varied the generalization width assumed by the model. As a lower bound, we used one-half the width measured by McDougle et al. As an upper bound, we used twice the width measured by McDougle et al. As shown in Figure 4H, the competition model remained superior across this range in generalization parameters.

Summary

In the revised paper, we now highlight the McDougle et al. generalization curve over that measured in the Day et al. study. In addition, we now consider whether a normal distribution can better match the measured implicit-explicit relationships in Experiment 1. Our results remain unchanged. The SPE generalization model exhibits two key limitations, a qualitative one, and a quantitative one:

1. Past generalization curves (even that measured by McDougle et al.) drastically underpredict the decline in implicit learning we measured in Experiments 2 and 3. This quantitative deviation would only be worsened had the McDougle et al. study used 4 targets (as in our data), which would have widened their generalization curve, as observed in Krakauer et al., 2000. The existing deviations are not small: implicit learning drops nearly 3 times more rapidly across the explicit strategy range probed in Experiments 2 and 3. The discrepancy is substantially worsened when we use generalization to correct the “explicit strategies” we estimated.

2. SPE generalization models predict that the decline in implicit learning will vary with changes in rotation size and explicit strategy. The competition model, however, predicts that the gain relating implicit and explicit learning is invariant across rotation sizes and changes in explicit strategy. These predictions were tested in Experiment 1. These data agreed with the competition model: there was a constant slope relating implicit-explicit learning across all rotation sizes. The competition model’s dominance was also shown in an AIC comparison that we have added to the paper. Finally, the SPE model’s poor performance was not caused by incorrectly estimating generalization parameters: in a sensitivity analysis we varied the generalization curve’s width and observed no appreciable effect on our results.

In the revised manuscript we have made these critical updates, and again thank the reviewer for strengthening our analysis. These changes can be observed in Section 1.6 in the paper, Figure 4, and Appendix 6.

(4.5) I may be missing something here, but is the error sensitivity finding reported in Figure 6 not circular? No savings is seen in the raw data itself, but if the proposed model is used, a sensitivity effect is recovered. This wasn't clear to me as presented.

This is indeed a puzzling phenomenon, hence why we unpack it in our competition map simulations in Figure 7. The apparent contradiction lies in how one defines savings. Generally, savings is intuitively defined as the upregulation of a learning process upon re-exposure to a perturbation. In this sense, in Haith et al. (2015), the implicit system does not exhibit savings. The issue arises, however, when one assumes that an invariance in the implicit learning timecourse implies an invariance in implicit learning properties (e.g., implicit error sensitivity).

As shown in Figure 6 (and the competition map in Figure 7E), the competition model predicts that the implicit system’s error sensitivity increased in Exposure 2 relative to Exposure 1. At the very same time, simulating the implicit system with these model parameters (see blue lines in Figure 5D and black lines in Figure 7E) yields similar implicit learning curves across Exposures 1 and 2. How is this possible? The answer is that the implicit learning timecourse depends not only on implicit error sensitivity (which has increased according to the model), but also on competition with explicit strategy. Thus, an increase in implicit error sensitivity, which would tend to increase implicit learning during Exposure 2, is counteracted by an increase in strategic learning, which would tend to decrease implicit learning via competition. These two competing forces yield an invariance in the implicit learning timecourse, but a variation in implicit learning properties (e.g., error sensitivity).
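This counteraction can be illustrated with a simple two-state simulation. The parameters are hypothetical and not fit to the Haith et al. data; they are chosen so that doubling implicit error sensitivity between exposures, paired with an even larger rise in explicit error sensitivity, leaves the implicit asymptote unchanged:

```python
# Illustrative two-state competition sketch: both systems learn from the
# shared target error e = r - x_i - x_e. All parameter values are hypothetical.

def simulate_exposure(a_i, b_i, a_e, b_e, r=30.0, trials=400):
    x_i = x_e = 0.0
    for _ in range(trials):
        e = r - x_i - x_e           # shared target error drives both systems
        x_i = a_i * x_i + b_i * e   # implicit state update
        x_e = a_e * x_e + b_e * e   # explicit state update
    return x_i, x_e

imp1, exp1 = simulate_exposure(a_i=0.95, b_i=0.1, a_e=0.90, b_e=0.1)  # Exposure 1
imp2, exp2 = simulate_exposure(a_i=0.95, b_i=0.2, a_e=0.90, b_e=0.3)  # Exposure 2
print(imp1, imp2)  # both ~15 deg: implicit asymptote unchanged despite doubled b_i
print(exp1, exp2)  # explicit learning grows, absorbing the shared error
```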

Another way to understand this (which we have now added to the paper on P13L510) is as follows. The competition model suggests that while the increased implicit error sensitivity does not lead to greater implicit learning during Exposure 2, it nonetheless contributes to overall savings. That is, had implicit error sensitivity not increased, implicit learning would decrease during Exposure 2 (because strategies are larger), and less overall savings in learning rate would occur.

These paradoxical situations have extremely important implications. From Haith et al. (2015), the invariance in the implicit learning curve might intuitively suggest that the implicit system does not experience any changes due to the initial rotation exposure. However, this expectation is not matched in Experiment 4 (Figure 8). Here, suppressing reaction times reveals a change in implicit learning that was not observed in Haith et al. These data sets are entirely consistent with each other when explicit strategy is considered (Figures 7E and 7F). Suppressing explicit strategy eliminates competition, thus allowing the implicit system to express its increased error sensitivity within its learning timecourse, which was ‘hiding’ in Haith et al.

What our model suggests is an unappreciated possibility in how we interpret savings experiments. The notion that increases in error sensitivity will necessarily enhance the implicit system’s learning timecourse is not true. There is a distinction between learning properties and learning timecourses. Properties are specific to the learning system. Timecourses depend in part on interactions with parallel learning systems. The state-space model attempts to reveal properties. Empirical analysis of datasets such as Haith et al. measures timecourses. Thus, the competition model suggests that we must distinguish between two questions in savings experiments: (1) “did implicit learning increase?” and (2) “did implicit error sensitivity change?” As a corollary, the competition model suggests that invariance in an implicit learning timecourse should not be taken to mean that the implicit system’s response to error is unchanged. We think these ideas are critical to the ongoing debate as to whether the implicit system changes its response to errors with re-exposure to a rotation.

We appreciate the confusion surrounding these points; these are genuinely unintuitive phenomena. We hope that the passage added on P13L510 provides some additional context.

(4.6) The attenuation (anti-savings) findings of Avraham et al. 2021 are not sufficiently explained by this model.

Absolutely. We had noted this puzzling deviation between standard rotation and error-clamp learning in our Discussion section: “… it is important to note that the increased implicit error sensitivity observed in Experiment 4 (Figure 8) contrasts with the implicit system’s response to invariant errors (Avraham et al., 2021); in cases where participants can neither control nor reduce their target error, the implicit system appears attenuated upon re-exposure to a perturbation…”

Indeed, this discrepancy provides reason to question whether standard rotation learning and invariant error-clamp paradigms engage the implicit learning system in the same way (see our response to Point 4.3). Avraham et al. (2021) argue that the error-clamp attenuation they observed can also be seen in standard rotation learning (their Figure 3). Our work, however, casts doubt on this: in our view, such decreases in implicit learning are more likely due to competition with explicit strategy than to a downregulation of implicit learning properties. Thus, we think it is too early to tell whether the attenuation induced by re-exposure in Avraham et al. occurs in standard rotation learning paradigms. Given the scope of our work, we feel the best course of action is to note these discrepancies in order to inspire future research into these central matters. We can, at present, only speculate as to the cause of the discrepancy in implicit behavior across the standard rotation and error-clamp paradigms.

One working hypothesis may relate to the reward system. In standard rotation learning, the goal is to move a cursor to the target. In invariant error-clamp paradigms, participants cannot control the cursor; their goal is to move their arm as straight to the target as possible. Thus, in the standard rotation paradigm, it is optimal to potentiate the implicit system in response to past errors. In the error-clamp paradigm, it is optimal to suppress the implicit system to prevent deviations between the hand and target. The proprioceptive system may play a large role in suppressing implicit learning in the error-clamp paradigm, and a lesser role in standard rotation learning. In sum, we speculate that the participant’s internal goal may play a strong role in savings, and that the stark contrast in goals across the error-clamp and standard rotation paradigms may at least partly explain their diverging implicit phenotypes.

Along these lines, Sedaghat-nejad et al. (2021) have shown that saccade adaptation (an implicit learning process) is accelerated when learning improves task success. In other words, when being in an adapted state increases the probability of acquiring reward, saccadic learning accelerates. On the other hand, when learning does not translate into improvements in task success, adaptation rates do not improve in response to the same perturbation. These ideas are strikingly similar to the current conundrum: implicit error sensitivity may be upregulated in standard rotations because, in these tasks, learning improves one’s ability to acquire a reward (i.e., hit the target with the cursor). In an invariant error-clamp context, learning does not alter one’s ability to acquire the target, but rather (as noted above) is detrimental to the participant’s goal: move the arm straight to the target.

These suppositions somewhat align with known error-clamp learning properties established by Kim et al. (2019). In this paradigm, learning rate and total implicit adaptation are accelerated when the cursor is programmed to miss the target, and attenuated when the cursor completely or partially hits the target. One interpretation here is that when the cursor does not hit the target, the brain predicts that adaptation can restore one’s ability to hit the target, and thus learning is enhanced. When the brain experiences the same error, but the cursor already hits the target, there is no need to adapt, and the brain may attenuate its implicit learning rate. Thus, while the mechanism remains unclear, one idea is that with re-exposure to the perturbation, as in Avraham et al. (2021), the brain has already determined that adaptation does not improve one’s ability to acquire the target, and then invokes the same attenuation process as in Kim et al. (2019) to slow learning rates and extents. These processes would be reversed in standard rotation learning, where past adaptation has improved one’s ability to hit the target and thus promotes enhanced learning rates.

In sum, we can do little more than speculate as to why the implicit response to rotation re-exposure varies so greatly between standard rotation learning and error-clamp learning. In our revised paper (P24L954), we have expanded our previous comparison of savings-related phenomena across both tasks. These matters will need to be studied in much greater depth in future work devoted solely to this topic. At a minimum, Experiment 4 shows that the implicit savings phenotype may vary across tasks and contexts. This is a fascinating possibility which we hope will spark future research directions.

(4.7) Have the authors considered context and inference effects (Heald et al. 2020) as possible drivers of several of the presented results?

This is an interesting point. While Heald et al. (2021) provide limited treatment of implicit-explicit learning components, they do offer extensions to their model that exhibit properties in common with the competition model. Heald et al. consider implicit and explicit learning in a spontaneous recovery paradigm (McDougle et al., 2015). They speculate that explicit memory components correspond to the ‘most responsible context on the previous trial’. They introduce a ‘bias’ term to the model, representing a mismatch between vision and proprioception. Implicit learning is the weighted average of this bias over each context. These two ‘states’ map well onto implicit and explicit learning in McDougle et al.

While we do not examine McDougle et al. (2015) in our manuscript, the implicit and explicit COIN-model extensions will compete as in the competition model. Incidentally, the authors state in their Supplementary Material that the ‘state [explicit] and bias [implicit] interact competitively within a context to account for the total state feedback.’ In their work, this competitive interaction gives rise to the non-monotonic explicit response, which at first increases and then decreases, as in McDougle et al. We note that the competition model also produces this same phenotype, albeit in a different experiment (e.g., see P23L909 and Figure 6D).

Thus, it is interesting that both the COIN model realization and our target-error learning model share a common property: competition. In the COIN model, the states compete because they attempt to sum to the total state feedback. In the competition model, they attempt to sum to the total error. The models differ, of course, in how they define implicit and explicit learning, and in how they contextualize the learning process. For us, these are two parallel ‘states’ that respond to a common error. For the COIN model, they represent two components of a credit assignment: the explicit component is one’s estimate of an ‘external’ perturbation to the environment, while the implicit component is one’s ‘internal’ estimate of a miscalibration between vision and proprioception. It seems possible that the COIN model could produce the competitive behaviors we consider when ‘credit’ is more readily assigned to the external perturbation in some rotation conditions (e.g., large rotations) than in others (e.g., small rotations).
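The common structure of the two accounts can be reduced to a toy calculation (our own abstraction, not the COIN model’s actual inference machinery): two leaky learners driven by a single shared signal divide its asymptote according to their relative sensitivities.

```python
def asymptotic_shares(b1, a1, b2, a2):
    """Steady-state fraction of a shared signal claimed by each of two
    leaky integrators with error sensitivity b and retention a."""
    B1, B2 = b1 / (1.0 - a1), b2 / (1.0 - a2)
    total = 1.0 + B1 + B2
    return B1 / total, B2 / total

# Learner 1's parameters are fixed (hypothetical values), yet its share
# falls when learner 2 becomes more sensitive to the shared signal.
share1_weak_rival, _ = asymptotic_shares(0.10, 0.95, 0.10, 0.95)
share1_strong_rival, _ = asymptotic_shares(0.10, 0.95, 0.40, 0.95)
```

Whether the shared signal is the total state feedback (as in the COIN model) or the total target error (as in the competition model), increasing one state’s claim on that signal necessarily shrinks the other’s.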

We thank the reviewer for raising this interesting possibility and have added a discussion on P22L862.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

1. Albert ST, Jang J, Modchalingam S, ’t Hart BM, Henriques D, Lerner G, Della-Maggiore V, Haith AM, Krakauer JW, Shadmehr R. 2022. Competition between parallel sensorimotor learning systems. Open Science Framework. 10.17605/OSF.IO/MZS6A

    Supplementary Materials

    Figure 1—source code 1. Figure 1 data and analysis code.
    Figure 1—figure supplement 2—source code 1. Figure 1—figure supplement 2 data and analysis code.
    Figure 1—figure supplement 3—source code 1. Figure 1—figure supplement 3 analysis code.
    Figure 1—figure supplement 4—source code 1. Figure 1—figure supplement 4 data and analysis code.
    Figure 2—source code 1. Figure 2 data and analysis code.
    Figure 2—figure supplement 1—source code 1. Figure 2—figure supplement 1 data and analysis code.
    Figure 2—figure supplement 2—source code 1. Figure 2—figure supplement 2 analysis code.
    Figure 2—figure supplement 3—source code 1. Figure 2—figure supplement 3 data and analysis code.
    Figure 2—figure supplement 4—source code 1. Figure 2—figure supplement 4 data and analysis code.
    Figure 3—source code 1. Figure 3 data and analysis code.
    Figure 3—figure supplement 1—source code 1. Figure 3—figure supplement 1 data and analysis code.
    Figure 3—figure supplement 2—source code 1. Figure 3—figure supplement 2 data and analysis code.
    Figure 3—figure supplement 3—source code 1. Figure 3—figure supplement 3 data and analysis code.
    Figure 4—source code 1. Figure 4 data and analysis code.
    Figure 4—figure supplement 1—source code 1. Figure 4—figure supplement 1 data and analysis code.
    Figure 4—figure supplement 2—source code 1. Figure 4—figure supplement 2 data and analysis code.
    Figure 5—source code 1. Figure 5 data and analysis code.
    Figure 5—figure supplement 1—source code 1. Figure 5—figure supplement 1 data and analysis code.
    Figure 5—figure supplement 2—source code 1. Figure 5—figure supplement 2 data and analysis code.
    Figure 5—figure supplement 3—source code 1. Figure 5—figure supplement 3 data and analysis code.
    Figure 5—figure supplement 4—source code 1. Figure 5—figure supplement 4 analysis code.
    Figure 6—source code 1. Figure 6 data and analysis code.
    Figure 7—source code 1. Figure 7 analysis code.
    Figure 8—source code 1. Figure 8 data and analysis code.
    Figure 8—figure supplement 1—source code 1. Figure 8—figure supplement 1 data and analysis code.
    Figure 9—source code 1. Figure 9 data and analysis code.
    Figure 10—source code 1. Figure 10 data and analysis code.
    Transparent reporting form

    Data Availability Statement

    Source data files generated or analyzed during this study, as well as the associated analysis code, are included as supplements to Figures 1-10 and their associated Figure Supplements, and have also been deposited in OSF under accession code MZS6A.

    The following dataset was generated:

    Albert ST, Jang J, Modchalingam S, ’t Hart BM, Henriques D, Lerner G, Della-Maggiore V, Haith AM, Krakauer JW, Shadmehr R. 2022. Competition between parallel sensorimotor learning systems. Open Science Framework. 10.17605/OSF.IO/MZS6A

