Abstract
The visual system continuously generates predictions to guide behavior, yet how visuomotor adaptation relates to sensory detection and motor variability remains unclear. We addressed this question using three joystick-based tasks: a visuomotor interception task with angular or speed perturbations, a sensory detection task, and a no-feedback motor variability task. Participants showed robust within-task responses, with angular discrepancies engaging both external (target-based) and self-referential control, while speed discrepancies primarily involved self-referential strategies. Gaze behavior reflected distinct tracking modes depending on perturbation type. However, cross-task regression analyses revealed weak associations between detection, variability, and adaptation. These dissociations were not due to noise or low power but reflected consistent performance patterns. Notably, within-subject variability exceeded between-subject variability across all tasks, highlighting trial-to-trial fluctuations as key drivers of behavior. Together, these findings support the view that predictive control relies on specialized, context-dependent mechanisms, in which task-specific computations adaptively integrate context and internal state dynamics.
Subject terms: Neuroscience, Cognitive neuroscience, Sensorimotor processing, Visual system
Introduction
Effective behavior in dynamic environments requires the brain to anticipate sensory consequences of action and adjust motor output when expectations are violated. This capacity for prediction and adaptation is central to predictive processing theories, which posit that behavior is guided by internal models that continuously generate expectations about incoming sensory input and update those models through prediction error minimization1–4. In movement control, these sensory prediction errors arise from mismatches between expected and actual sensory feedback and are thought to support motor learning by shaping subsequent motor commands5–7.
In sensorimotor tasks, prediction errors can arise from spatial discrepancies, such as unexpected changes in direction, or dynamic discrepancies, such as sudden variations in speed. Both types of errors challenge the motor system’s ability to maintain accurate control, but may rely on partially distinct mechanisms8,9. For example, visuomotor rotations typically introduce spatial errors requiring realignment of internal reference frames, whereas speed changes primarily engage kinematic prediction and temporal coordination10. In real-world scenarios, successful performance often depends on resolving both spatial and dynamic discrepancies, sometimes simultaneously.
Previous work has shown that visual information plays a crucial role in resolving these prediction errors, especially during interception tasks11–13. Eye movements initially follow targets reactively, but gradually shift to anticipate future positions, reflecting the engagement of internal predictive models. Proprioceptive feedback also contributes to error correction, especially when visual information is unreliable or absent14. These multimodal feedback mechanisms enable the brain to recalibrate in real time.
While predictive processing models often posit the integration of sensory, perceptual, and motor functions within a unified hierarchical architecture, emerging evidence challenges this assumption. For example, studies of oculomotor control reveal that different predictive behaviors, such as saccades vs. smooth pursuit, depend on distinct sensory computations shaped by task demands15. Similarly, research in motor control suggests that gross and fine motor adaptation engage anatomically and functionally distinct neural circuits, pointing toward a ‘compartmentalized’ organization of predictive control16,17. These findings raise a critical question: are prediction errors detected and corrected by a shared predictive system, or by domain-specific, partially independent subsystems? Further expanding these considerations, differences in visuomotor behavior, including gaze strategies, perceptual detection thresholds, and motor variability, indicate that predictive computations vary not only across functional domains but also across individuals. From this perspective, inter-individual variability may reflect differences in visuomotor processing capabilities or the flexible engagement of distinct sensory and motor subsystems18,19. Clarifying whether inter-individual variability reflects true functional independence or context-dependent use of shared sensorimotor resources remains a critical challenge for predictive models.
This study examines whether three components of sensorimotor prediction (sensory detection, motor execution, and visuomotor adaptation) are functionally integrated or operate as loosely coupled processes. These functions are commonly associated with predictive processing but may rely on partially independent mechanisms. For example, low motor variability reflects the stability of repeated actions and is believed to index the reliability of internal motor representations20,21. Detection of sensory discrepancies reflects the precision of internal sensory predictions and is believed to involve cortical mechanisms responsible for identifying violations of expected sensory input22–24 (but see ref. 25). Visuomotor adaptation reflects online corrections to action in response to external perturbations, engaging both sensory and motor prediction systems8,26–28.
We hypothesized that individuals’ performance across detection, motor variability, and visuomotor adaptation tasks would be strongly coupled, such that high sensitivity or stability in one domain would predict systematic differences, either enhancements or deficits, in the others. If a shared predictive mechanism governs these functions, then their performance metrics should correlate across tasks. Conversely, weak inter-task associations would support a framework in which predictive computations are context-dependent and domain-specific. To test these alternatives, we developed a battery of three tasks targeting distinct components of predictive processing, all of which were performed by the same group of participants: (1) a visuomotor adaptation task involving angular and speed-based perturbations during target interception; (2) a sensory detection task requiring judgments about brief and subtle changes in target direction or speed; and (3) a motor variability task assessing directional consistency during blindfolded joystick movements, targeting feedforward motor control. Through these tasks, we quantified individual performance using eye movements, interception accuracy, detection accuracy, response times, and movement variability. This within-subject, multi-task design allowed us to test whether individual differences in sensory detection and motor consistency predict visuomotor adaptation, as would be expected under a unified predictive architecture, or whether predictive functions are separable across domains, consistent with a functionally specialized organization.
Results
Quantifying behavioral responses to visuomotor discrepancies
To examine predictive motor responses to visuomotor discrepancies, we adapted a previously validated interception task29. Participants used a joystick to control a white dot on a screen (Fig. 1a) and were instructed to intercept a black target dot moving along a pseudo-random trajectory (Fig. 1b). After an initial block of 100 baseline trials without perturbations, visuomotor discrepancies were introduced in 50% of the remaining 700 trials. These discrepancy trials were randomly interleaved with control trials and were activated during a fixed 2-second window between 1.5 s and 3.5 s after trial onset (gray area in Fig. 1c).
Fig. 1. Behavioral Responses to Visuomotor Discrepancies.
a Participants used a joystick to control a white dot and intercept a moving black target within a circular arena. b Example trial (without discrepancy) showing trajectories of the target (black), gaze (navy blue), and user-controlled dot (white). c After an initial baseline block of 100 trials, visuomotor discrepancies were introduced in 50% of subsequent trials. Each discrepancy was applied for 2 seconds (1.5–3.5 s post-movement onset; shaded gray region), randomly interleaved with control trials. d Angular discrepancies altered the direction of joystick input ( ± 30°, ±60°, ±90°, ±120°, ±150°, ±180°). e Speed discrepancies modulated the joystick-controlled dot’s velocity (50%, 70%, 90%, 100%, 110%, 130%, 150%). f Group-averaged responses to angular discrepancies (100 trials per condition per participant): top panel shows corrected inter-dot distance (IDD*), bottom panel shows user velocity (vU*). Insets display amplitude (Amp) and area under the curve (AUC) during the discrepancy window. Asterisks denote slopes significantly different from zero. Angular conditions are shown in woodland green. g Same format as (f), showing responses to speed discrepancies. Color-coded using a blue–bronze diverging scale.
Two types of perturbations were introduced. Angular discrepancies were imposed by rotating the joystick output by ±30°, ±60°, ±90°, ±120°, ±150°, or ±180°, creating a directional mismatch between the intended and actual motion of the user-controlled dot (Fig. 1d). Speed discrepancies were introduced by scaling the user dot’s velocity to 50%, 70%, 90%, 110%, 130%, or 150% of the joystick input (Fig. 1e). Both manipulations induced a discrepancy between expected and observed visual feedback, producing transient sensory prediction errors.
To quantify participants’ behavioral responses to these perturbations, we analyzed two spatial-temporal performance metrics: inter-dot distance (IDD) and user velocity (vU). IDD measures the instantaneous Euclidean distance between the user- and target-controlled dots, reflecting spatial tracking accuracy14. vU captures the instantaneous speed of the user-controlled dot, indexing how quickly participants adjusted their own speed relative to the moving target14,29. To isolate adaptive responses from the externally imposed perturbations, we derived reference-system-corrected versions of these metrics, denoted as IDD* and vU*. For angular discrepancy trials, user trajectories were realigned by applying inverse rotations, thereby removing the effect of the imposed angular shift. For speed discrepancies, user velocities were rescaled by the inverse of the applied gain factor. These corrections effectively removed the externally imposed component of the manipulation, allowing interpretation of the residual signal as the participant’s compensatory response. To validate the correction procedure, we simulated 100 virtual agents that moved according to the same statistical rules as the target (random heading changes and externally imposed discrepancies) but did not pursue the target. As expected, uncorrected metrics for these agents exhibited systematic deviations driven entirely by perturbations. After applying inverse transformations, their corrected metrics (IDD*, vU*) showed flat response profiles against discrepancy magnitude, confirming that the correction procedure successfully removed the externally imposed discrepancy, isolating participant-driven compensatory behavior (Supplementary Fig. 1).
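The inverse transformations described above can be sketched in a few lines. This is a minimal illustration of the correction logic, not the authors' code; function and variable names are hypothetical, and it assumes the imposed rotation and gain are stored per trial:

```python
import numpy as np

def correct_angular(dot_xy, rotation_deg):
    """Remove an imposed joystick rotation by applying the inverse
    rotation to the user-dot displacement vectors (rows of dot_xy)."""
    theta = np.deg2rad(-rotation_deg)  # inverse of the imposed rotation
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return dot_xy @ rot.T

def correct_speed(user_velocity, gain):
    """Remove an imposed velocity gain by rescaling with its inverse."""
    return np.asarray(user_velocity) / gain

# Round-trip check: imposing a +90 deg rotation and then correcting it
# should restore the original rightward displacement.
original = np.array([[1.0, 0.0]])
perturbed = correct_angular(original, -90.0)  # simulates the +90 deg perturbation
restored = correct_angular(perturbed, 90.0)
```

The round-trip check mirrors the virtual-agent validation: after the inverse transformation, externally imposed components vanish and only participant-driven behavior remains.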
To ensure consistency and interpretability, all main figures present data from full-length discrepancy (FLD) trials, those in which the imposed visuomotor perturbation remained active throughout the entire 1.5–3.5 s window, without being truncated by early interception. This selection allowed us to analyze time-aligned behavioral and gaze responses under matched temporal conditions across trials and participants, minimizing variability introduced by uneven exposure to the perturbation. However, to provide a comprehensive view of task performance, we also calculated the averaged responses from all trials, including those in which the discrepancy was prematurely interrupted by a successful interception. These analyses are reported in Supplementary Figs. 3–5, enabling comparison across both trial types. Additionally, Supplementary Table 1 details the proportion of FLD and truncated trials as a function of discrepancy magnitude, allowing readers to assess how task difficulty and exposure time varied with experimental conditions.
Distinct behavioral responses to angular and speed discrepancies
We analyzed time-locked responses to quantify how behavior changed following imposed discrepancies. As mentioned, two corrected metrics were used: IDD* to capture spatial tracking accuracy, and vU* to assess speed control. IDD* changes suggest spatial corrections driven by self-referential control, based on proprioception and internal models. In contrast, vU* reflects dynamic adjustments to align with target speed, which rely heavily on external visual cues. Under high uncertainty (or in lower-performing individuals), vU* often lags behind target velocity, reflecting slower adaptation14,29.
In Fig. 1, we illustrate the group-averaged response traces for IDD* and vU* to angular (Fig. 1f) and speed (Fig. 1g) discrepancies. We quantified these responses by fitting linear regression models that related the peak amplitude and area under the curve (AUC) for IDD* and vU* to the magnitude of the discrepancy (average measurements across trials). For each regression, we extracted the slope (m), Pearson correlation coefficient (R), coefficient of determination (R²), the F-statistic from the ANOVA model, and the associated P-values, and we applied a Benjamini–Hochberg false discovery rate (FDR) correction for multiple comparisons (see Methods). Participants showed robust compensatory changes in IDD* as a function of angular discrepancy magnitude, with steep and significant slopes (IDD*: Amp, m = 39.01 ∙ 10−3, P < 0.01, q < 0.013, R = 0.96, R2 = 0.93, F6,318 = 188; AUC, m = 1.367, P < 0.01, q < 0.013, R = 0.97, R2 = 0.94, F6,318 = 317). These adjustments reflect strong spatial recalibration to directional perturbations (upper panel in Fig. 1f). In contrast, user velocity (vU*) showed minimal changes under angular discrepancies (Amp, m = 1.64 ∙ 10−3, P < 0.01, q < 0.013, R = 0.93, R2 = 0.86, F6,318 = 6.93; AUC, m = 2.50 ∙ 10−3, P = 0.90, q = 0.900, R = 0.10, R2 = 0.01, F6,318 = 6.27). Indeed, the slopes for vU* were just 4.23% (Amp) and 0.18% (AUC) of those observed for IDD*, revealing the limited contribution of velocity adjustments to angular discrepancy correction (lower panel in Fig. 1f).
A complementary pattern emerged in the speed discrepancy condition (Fig. 1g). User velocity (vU*) showed strong, graded compensation (Amp, m = −7.41 ∙ 10−3, P < 0.01, q < 0.013, R = 0.98, R2 = 0.95, F6,318 = 39.6; AUC, m = −187.35 ∙ 10−3, P < 0.001, q < 0.013, R = 0.98, R2 = 0.97, F6,318 = 62.7; lower panel in Fig. 1g). In contrast, IDD* responses to speed discrepancies were non-significant (Amp, m = −22.25 ∙ 10−3, P = 0.17, q = 0.211, R = 0.59, R2 = 0.34, F6,318 = 33.9; AUC, m = −595.83 ∙ 10−3, P = 0.18, q = 0.211, R = 0.57, R2 = 0.32, F6,318 = 109; upper panel in Fig. 1g). These results indicate that speed discrepancies engaged velocity-based adjustments, rather than spatial trajectory corrections. We obtained similar results when repeating the analyses across all valid interception trials, including those with truncated discrepancies (i.e., trials in which the target was intercepted before the discrepancy period had concluded; Supplementary Fig. 3).
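The per-metric regressions above follow a simple recipe: regress each amplitude or AUC summary on discrepancy magnitude, extract slope and fit quality, and adjust P-values across comparisons. A minimal numpy-only sketch of the two core steps (participant averaging and P-value computation omitted; names hypothetical):

```python
import numpy as np

def slope_stats(magnitudes, responses):
    """Linear fit response = m * magnitude + b; returns (m, R, R^2)."""
    m, b = np.polyfit(magnitudes, responses, 1)
    r = np.corrcoef(magnitudes, responses)[0, 1]
    return m, r, r ** 2

def bh_fdr(pvals):
    """Benjamini-Hochberg q-values for a vector of P-values."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    qvals = np.empty_like(q)
    qvals[order] = np.clip(q, 0.0, 1.0)
    return qvals
```

For perfectly linear data the fit returns the generating slope with R² = 1; with noisy per-participant averages, R² quantifies how graded the compensation is across discrepancy magnitudes.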
Distinct gaze responses to angular and target speed discrepancies
To complement our analysis of joystick-based motor behavior, we examined participants’ eye movements during the interception task to assess the visual strategies supporting predictive control. Vision is essential for detecting discrepancies and guiding corrective actions, especially when intercepting moving targets requires continuous integration of internal models with real-time sensory input. We focused on two complementary gaze metrics14. Gaze-to-target distance (GTD) is the Euclidean distance between gaze position and the moving target, measured in degrees of visual angle; it reflects externally referential strategies, with lower values indicating effective tracking and anticipation of the target’s trajectory. Gaze-to-user distance (GUD) is the distance between gaze position and the joystick-controlled dot, reflecting self-referential behavior; lower GUD values suggest visual monitoring of one’s own movement, typically under conditions of uncertainty or increased motor demand. Both measures were computed continuously throughout each trial and captured distinct sources of sensory information for guiding corrective behavior: external target tracking (GTD) and internal movement monitoring (GUD)10,14.
To isolate compensatory responses, we aligned gaze data to the onset of each discrepancy and applied reference-frame corrections analogous to those used for the joystick metrics (IDD* and vU*). This ensured that measured changes reflected participant-driven adaptations rather than an artifact from imposed transformations. For angular discrepancies, both gaze metrics scaled proportionally with discrepancy magnitude (GTD: Amp, m = 34.34 ∙ 10−3, P < 0.01, q < 0.013, R = 0.92, R2 = 0.83, F6,318 = 72.2; AUC, m = 1.05, P < 0.01, q < 0.013, R = 0.92, R2 = 0.84, F6,318 = 154; GUD: Amp, m = 35.98 ∙ 10−3, P < 0.01, q < 0.013, R = 0.93, R2 = 0.88, F6,318 = 60.9; AUC, m = 0.76, P < 0.001, q < 0.013, R = 0.88, R2 = 0.78, F6,318 = 79.7). These results indicate that both externally and internally oriented gaze behavior contributed to compensatory responses during angular perturbations, reflecting a coordinated use of visual prediction and motor monitoring (Fig. 2a).
Fig. 2. Gaze responses to visuomotor discrepancies.
a Group-averaged gaze responses to angular discrepancies (100 trials per condition per participant). Top panel shows gaze-to-target distance (GTD); bottom panel shows gaze-to-user distance (GUD). Shaded area indicates the 2-second discrepancy window (1.5–3.5 s). Insets display response amplitude (Amp., top) and area under the curve (AUC, bottom) during this window. Significant slopes (q < 0.05) are marked with asterisks. Angular conditions are color-coded in woodland greens. b Gaze responses to speed discrepancies: Same layout as in (a), color-coded using a blue–bronze diverging scale.
In contrast, under speed discrepancies, participants predominantly increased gaze monitoring of their own movement, as captured by GUD, while GTD remained largely unaffected (GTD: Amp, m = −3.84 ∙ 10−3, P = 0.56, q = 0.622, R = 0.27, R2 = 0.07, F6,318 = 12.6; AUC, m = −110.33 ∙ 10−3, P = 0.72, q = 0.757, R = 0.17, R2 = 0.03, F6,318 = 150; GUD: Amp, m = 55.21 ∙ 10−3, P < 0.01, q < 0.013, R = 0.90, R2 = 0.82, F6,318 = 53; AUC, m = 2.58, P < 0.001, q < 0.013, R = 0.99, R2 = 0.99, F6,318 = 457). These findings indicate that speed perturbations engage self-referential monitoring, possibly reflecting increased reliance on proprioceptive feedback or internal velocity estimation (Fig. 2b). We replicated all analyses across the full set of interception trials, yielding similar results (Supplementary Fig. 4). Additionally, we examined gaze velocity (vG) and pupil diameter as a function of discrepancy magnitude. Notably, neither angular nor speed discrepancies elicited reliable changes in these measures (Supplementary Fig. 5).
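Computing the two gaze metrics frame by frame reduces to Euclidean distances between time-aligned position traces. A sketch under the assumption that gaze, target, and user positions are available as (T, 2) arrays in degrees of visual angle (names hypothetical):

```python
import numpy as np

def gaze_metrics(gaze_xy, target_xy, user_xy):
    """Frame-wise gaze-to-target (GTD) and gaze-to-user (GUD)
    Euclidean distances for one trial; each input is a (T, 2)
    array of positions in degrees of visual angle."""
    gtd = np.linalg.norm(gaze_xy - target_xy, axis=1)
    gud = np.linalg.norm(gaze_xy - user_xy, axis=1)
    return gtd, gud

# Two example frames: gaze on the user dot vs. gaze on the target.
gaze = np.array([[0.0, 0.0], [1.0, 1.0]])
target = np.array([[3.0, 4.0], [1.0, 1.0]])
user = np.array([[0.0, 2.0], [4.0, 5.0]])
gtd, gud = gaze_metrics(gaze, target, user)
```

Low GTD with high GUD then indexes target tracking, and the reverse pattern indexes self-referential monitoring, matching the dissociation reported above.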
Perceptual sensitivity to directional and speed discrepancies
Adaptive motor behavior in dynamic environments relies on the brain’s ability to detect unexpected changes in sensory input before initiating corrective action. This perceptual stage of predictive processing integrates incoming visual signals with internal expectations, allowing the system to detect when predictions have been violated30–33. To assess participants’ sensitivity to motion-related prediction errors, we implemented a perceptual detection task designed to probe sensory-level processing. In this task, participants viewed a single moving dot and indicated via keyboard whether they detected a sudden change in its direction or speed. Visual parameters, including dot size, arena dimensions, and background color, matched those of the visuomotor task. Discrepancies were introduced 1.5 seconds after trial onset and varied systematically in both magnitude and duration (see Methods). Participants responded with a binary answer, using the index fingers of both hands to press designated keyboard keys, indicating whether a change was present or absent (Fig. 3a). Two dependent variables were analyzed: detection accuracy, defined as the proportion of correct responses, and response time (RT), measured from the onset of discrepancy to the keypress. These measures allowed us to quantify perceptual sensitivity and processing speed.
Fig. 3. Visual detection of angular and speed discrepancies.
a Schematic of the perceptual detection task. Participants viewed a single moving dot and reported whether they detected a change in its direction or speed by pressing designated keys with their left or right index fingers. Fingers remained in contact with the keyboard throughout the session to eliminate the need for visual guidance during responses. Gaze remained directed at the screen. b Detection of angular changes. Top: Group-averaged detection accuracy as a function of directional change magnitude (left) and presentation duration (right). Bottom: Corresponding response times (RTs). Angular discrepancies are color-coded in woodland green. Participants showed high sensitivity even at small angular changes, with detection performance reaching up to ~50% correct with directional changes of 5° (upper left panel). c Detection of speed changes. Same layout as (b), with speed discrepancies expressed as percentage changes and color-coded using a blue–bronze diverging scale. In both (b, c), presentation durations are represented with a shared rust-crimson colorbar. All values represent group-averaged responses across 100 trials per discrepancy condition per participant.
To characterize detection performance, we evaluated how accuracy and RT varied with both the magnitude and duration of the discrepancy. Discrepancies were sustained over predefined time windows ranging from 50 to 1000 ms. Detection probabilities increased with both larger magnitudes and longer durations, for both angular (Fig. 3b) and speed (Fig. 3c) discrepancies. For directional changes, detection was modulated by both magnitude and duration (probability of detection: F14,1022 = 499, P < 0.001, q < 0.003, η2G = 0.806), with longer durations yielding steeper detection curves (Fig. 3b). RTs decreased slightly with magnitude (response time (RT): F14,1022 = 37.8, P < 0.001, q < 0.003, η2G = 0.106), indicating faster perceptual decisions with larger discrepancies.
For speed discrepancies, a similar pattern emerged (Fig. 3c). Detection accuracy increased with larger velocity gradients and longer durations (probability of detection: F24,1752 = 159, P < 0.001, q < 0.003, η2G = 0.577), while RTs also showed a reduction with discrepancy magnitude, although with a smaller effect size (RT: F14,1022 = 5.24, P < 0.001, q < 0.003, η2G = 0.047). Finally, we assessed whether detection performance predicted visuomotor responses by computing pairwise correlations for matching discrepancy types. These analyses revealed weak associations, with average Pearson correlation coefficients below ±0.15 (Supplementary Table 2). This suggests that differences in perceptual sensitivity did not account for variability in visuomotor responses.
Motor variability without visual feedback
To assess the consistency of feedforward motor control, we designed a no-visual-feedback motor task to characterize motor execution without visual guidance (see Methods). Before the task, participants received a standardized explanation of eight directional cues (North, South, East, West, Northeast, Northwest, Southeast, Southwest) via a printed diagram, verbal examples, and a comprehension test, ensuring that subsequent response variability reflected performance rather than misunderstandings of instructions or unfamiliarity with directions. During the task, blindfolded participants received auditory cues indicating one of these eight directions and were instructed to perform a single joystick movement toward the cued direction, without corrections, relying solely on proprioceptive and efference-based predictions (Fig. 4a).
Fig. 4. Assessing motor variability without visual feedback.
a The same participants who completed the visuomotor discrepancy task also performed a motor variability task under open-loop conditions. They were blindfolded using a sleep mask and wore headphones to receive verbal cues instructing them to move the joystick toward specific cardinal or intermediate directions (illustrated by colored segments in the left inset). This setup eliminated visual feedback and isolated feedforward motor output. b Joystick responses from four representative participants. Upper panels show data from individuals selected from the 25th and 75th percentiles of ɛtarget (angular error relative to instructed direction, sorted from highest to lowest), illustrating low vs. high response accuracy. Each color-coded trace shows a single joystick movement toward a given instructed direction. Central panels show the corresponding distributions of ɛtarget, and right panels show ɛuser (angular error relative to each participant’s own mean movement direction per target), indexing precision. These histograms were used to extract participant-level accuracy and precision metrics. c Group-averaged histograms of response accuracy (ɛtarget, upper panel) and precision (ɛuser, lower panel). The broader tails in the accuracy distribution reflect greater variability in directional targeting, whereas the narrower precision distribution indicates consistent execution of idiosyncratic response patterns across trials.
This task enabled us to assess two distinct dimensions of motor variability under visual occlusion. First, motor accuracy was quantified as the angular error between the instructed direction and the participant’s actual joystick movement on each trial. This metric, denoted as εtarget, captures systematic deviations from the intended movement direction. Second, motor precision was defined as the trial-to-trial variability in movement direction for a given target. This was estimated as the angular error between each trial and the participant’s mean response direction for that target, denoted as εuser. Thus, while εtarget reflects how closely participants adhered to the instructed direction (i.e., accuracy), εuser captures the internal consistency of repeated movements (i.e., precision).
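Because movement direction is a circular variable, both error metrics require angle wrapping, and the participant's own mean direction is best taken as a circular mean. A sketch of how εtarget and εuser could be computed for one instructed direction (illustrative only; names hypothetical):

```python
import numpy as np

def wrap_deg(angles):
    """Wrap angles into the +/-180 degree range."""
    return (np.asarray(angles) + 180.0) % 360.0 - 180.0

def angular_errors(movements_deg, instructed_deg):
    """eps_target: error relative to the instructed direction (accuracy).
    eps_user: error relative to the participant's own circular-mean
    direction for this target (precision)."""
    movements = np.asarray(movements_deg, dtype=float)
    eps_target = wrap_deg(movements - instructed_deg)
    rad = np.deg2rad(movements)
    mean_dir = np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean()))
    eps_user = wrap_deg(movements - mean_dir)
    return eps_target, eps_user

# Two movements flanking a 90 deg (North) cue by +/-5 deg.
eps_t, eps_u = angular_errors([85.0, 95.0], 90.0)
```

An unbiased but variable participant shows matched εtarget and εuser spreads; a systematically biased participant shows a shifted εtarget distribution with a narrow εuser distribution, the pattern reported in Fig. 4c.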
Figure 4b presents single-trial movement trajectories and angular error distributions from four participants, selected based on their values of εtarget and εuser. The upper panels illustrate examples corresponding to the 25th and 75th percentiles of εtarget (motor accuracy), while the lower panels show participants from the corresponding percentiles of εuser (motor precision). These examples illustrate how individuals vary in their alignment to the target or to their own motor output, ranging from dispersed and inaccurate responses to more stable and precisely directed movements.
Group-level histograms were then computed for both metrics (Fig. 4c). The distribution of motor accuracy (εtarget) exhibited broader tails than that of motor precision (εuser), indicating greater variability in how closely participants aligned with the instructed directions. In contrast, the narrower precision distribution suggests that participants reproduced their own idiosyncratic movement patterns, despite systematic directional biases relative to the instructed direction34. Notably, individual differences in motor variability showed negligible correlation with performance in the visuomotor discrepancy tasks (average Pearson’s r < ±0.01; Supplementary Table 3). This suggests that variability measured under open-loop conditions does not directly support adaptive compensation in tasks requiring visual feedback and prediction error minimization.
Weak cross-task predictive power between detection, variability, and visuomotor adaptation
Sensorimotor adaptation depends on detecting discrepancies between expected and actual sensory inputs and updating motor commands accordingly. This raises the hypothesis that individuals with higher sensory sensitivity or more accurate motor output may show better adaptation to the imposed visuomotor perturbations. To test this, we asked whether performance in the detection and motor variability tasks could predict participants’ compensation responses in the visuomotor discrepancy task.
Because all participants completed the three experimental tasks, enabling a within-subject, cross-domain analysis, we implemented a multivariate linear regression (MVLR) model using performance metrics from the detection and motor variability tasks to predict visuomotor adaptation outcomes. Eight predictors were included: P1: Average inter-dot distance (IDD*) during the visuomotor task; P2: Average user velocity (vU*); P3: Average velocity error (user speed minus target speed); P4: Motor variability in the no-feedback condition (endpoint dispersion); P5: Detection performance in speed-change trials; P6: Detection performance in direction-change trials; P7: Average gaze-to-target distance (GTD); P8: Average trial duration in the visuomotor task. The outcome variables comprised eight adaptation indices from the visuomotor task: V1–V4: Speed discrepancy responses (Amp and AUC for IDD* and vU*); V5–V8: Angular discrepancy responses (Amp and AUC for IDD* and vU*). The employed metrics reflected the slope (m) of participants’ compensation responses across discrepancy magnitudes and were standardized (z-scored) prior to analysis (Fig. 5a). Regression coefficients were estimated using ordinary least squares, and significance was assessed via t-statistics, with a Benjamini–Hochberg FDR correction applied across all 72 predictor-outcome combinations. Strikingly, in only one case did the model yield a significant intercept (q < 0.05; black bar in Fig. 5b), reflecting a non-zero baseline level of the dependent variable. However, all predictor coefficients were non-significant (white bars), indicating that individual differences in detection performance and motor variability did not reliably explain variation in visuomotor adaptation. This result suggests weak cross-task predictive power and supports the interpretation that these functions may operate through largely independent, task-specific mechanisms (Fig. 5b).
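Mechanically, the cross-task model reduces to z-scoring all measures and then, for each outcome, fitting ordinary least squares on an intercept plus the eight predictors. A minimal sketch of those two steps (the actual predictor and outcome contents are described above; this is not the authors' implementation):

```python
import numpy as np

def zscore(x):
    """Column-wise z-scoring, as applied before regression."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0)

def ols_coefficients(X, y):
    """OLS fit of one outcome on an intercept plus all predictors;
    returns [b0, b1, ..., bk]."""
    design = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

# Toy example with one predictor: y = 2 + 3x recovers [2, 3].
X = np.array([[0.0], [1.0], [2.0], [3.0]])
beta = ols_coefficients(X, 2.0 + 3.0 * X[:, 0])
```

With z-scored predictors and outcomes, each coefficient is directly interpretable as a standardized effect, which is why a significant intercept alongside null predictor weights indicates a non-zero baseline with no cross-task explanatory power.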
Fig. 5. Cross-task predictive relationships.
a Heatmaps of participant z-scores for predictors (left) and outcome variables (right). The left heatmap shows eight predictor variables (P1–P8, as defined in the main text), sorted by each participant’s slope (m) of area under the curve (AUC) for inter-dot distance (IDD*). Each row represents a single participant. Slopes were derived from linear regressions fitted to average responses (100 repetitions per discrepancy condition; see Fig. 1). The right heatmap displays outcome variables V1-V8, quantifying visuomotor responses across angular and speed discrepancies (see Methods). b Multivariate linear regression (MVLR) results. Bars represent regression coefficients for each predictor-outcome combination. Black bars indicate statistically significant coefficients (FDR-corrected q < 0.05), and white bars denote non-significant results. Only one significant intercept was observed; all predictor coefficients failed to reach significance, indicating weak cross-task predictive power. c Left: Predicted responses (yellow) vs. observed responses (blue) across outcome variables V1–V8. Red lines show individual prediction errors. Right: Scatterplot of predicted vs. observed responses (blue dots), with the diagonal black line indicating perfect prediction. The plot illustrates the MVLR model’s limited explanatory power.
To evaluate whether a more flexible regression approach improved prediction of visuomotor outcomes, we trained a support vector machine (SVM) regression model, which outperformed the multivariate linear regression (MVLR) model across all outcome variables (Supplementary Table 4). Note, however, that the MVLR has a fixed nine degrees of freedom per outcome (DOF = 9: eight predictors plus one intercept), whereas the SVM’s DOF, determined by its number of support vectors, is data-dependent and reached 49 in our dataset (n = 54). This high number of support vectors (~91% of samples) allows the SVM to capture complex, nonlinear patterns in visuomotor responses, which explains its superior fit relative to the MVLR’s linear approach and is consistent with the observed weak cross-task correlations.
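The degrees-of-freedom contrast can be made concrete with a Gaussian-kernel SVR, which stands in here for MATLAB's "fine Gaussian" preset; this is an illustrative scikit-learn sketch on toy data, not the study's model or hyperparameters.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 54                                   # sample size, as in the study
X = rng.standard_normal((n, 8))          # 8 z-scored predictors
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(n)  # nonlinear toy outcome

# The SVR's effective flexibility is set by how many training points
# become support vectors, unlike the MVLR's fixed 9 parameters per outcome.
svr = SVR(kernel="rbf", gamma="scale", C=1.0, epsilon=0.1).fit(X, y)
n_sv = len(svr.support_)
frac_sv = n_sv / n                       # ~0.9 in the study's data
```

On nonlinear data of this kind, most samples typically end up as support vectors, which is exactly the regime (49/54 ≈ 91%) reported above.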
Moreover, to assess whether internal monitoring provided additional predictive value, we conducted a supplementary MVLR analysis replacing GTD (P7) with GUD, a gaze metric indexing self-referential control. The results remained qualitatively unchanged, with no systematic improvement in predictive power (Supplementary Fig. 6), reinforcing the conclusion that behavioral measures from the detection and motor variability tasks did not reliably predict visuomotor adaptation.
Within-subject variability dominates across visuomotor, perceptual, and motor tasks
In the preceding section, our multivariate linear regression analysis revealed weak and largely non-significant cross-task predictive relationships. One plausible explanation for this limited predictive power is that high within-subject variability may obscure consistent between-subject patterns, thereby masking cross-domain associations. To assess this, we directly compared within- and between-subject variability across the three tasks, examining whether behavior fluctuated more within an individual across trials than across individuals. We first examined the visuomotor task. For each participant, we computed trial-wise variance across four behavioral measures (user velocity, vU*; inter-dot distance, IDD*; gaze-to-target distance, GTD; and gaze-to-user distance, GUD) under both angular and speed discrepancy conditions. These within-subject variances were compared to between-subject variances calculated over participant-averaged traces. For angular discrepancies, within-subject variability exceeded between-subject variability across all metrics (paired t-tests, var(vU*): t14 = 47.2, P < 0.001, q < 0.002; var(IDD*): t14 = 17.0, P < 0.001, q < 0.002; var(GTD): t14 = 19.2, P < 0.001, q < 0.002; var(GUD): t14 = 41.9, P < 0.001, q < 0.002; left panels in Fig. 6a). A similar pattern was observed for speed discrepancies (var(vU*): t8 = 35.4, P < 0.001, q < 0.002; var(IDD*): t8 = 28.2, P < 0.001, q < 0.002; var(GTD): t8 = 73.9, P < 0.001, q < 0.002; var(GUD): t8 = 37.9, P < 0.001, q < 0.002; right panels in Fig. 6a). On average, within-subject variability was ~12 times larger than between-subject variability, revealing the dynamic and highly variable nature of individual response profiles.
Fig. 6. Within-subject vs. between-subject variability across tasks.
a Variance time series in the visuomotor task, aligned to discrepancy onset (gray shading, 1.5–3.5 s). First and second columns: angular discrepancies. Left panels show group-averaged within-subject variance; right panels show between-subject variance. Metrics shown, from top to bottom: variances for vU*, IDD*, GTD, and GUD. Angular discrepancies are color-coded in woodland greens. Third and fourth columns: speed discrepancies, with the same layout. Speed discrepancy conditions are color-coded using a blue-bronze diverging scale. Variance traces reflect group-averaged trial-wise variability for each discrepancy magnitude. b Variance in detection tasks. Upper panels: within- and between-subject variability in accuracy for angular discrepancy trials (three levels). Lower panels: same metrics for speed discrepancy trials (five levels). Color coding matches (a). c Motor variability task. Bar plots compare within- vs. between-subject variance in response accuracy (angular error to instructed direction, ɛtarget; left) and response precision (angular error to user’s mean direction, ɛuser; right). d Relationship between interception performance and within-subject variability measures from the visuomotor task. Scatter plots with linear regression fits compare performance to variability in vU*, IDD*, GTD, and GUD. No significant associations were found, except for var(GTD), which showed a positive relationship with interception performance after FDR correction (q < 0.0001; third plot).
We next analyzed the detection task. For angular discrepancies, within-subject variability again exceeded between-subject variability for both detection accuracy (choice probability: var(choice): t14 = 8.53, P < 0.001, q < 0.002, upper left panel) and response time (var(RT): t14 = 8.96, P < 0.001, q < 0.002; upper right panel in Fig. 6b). For speed-change trials, the effect held for accuracy (var(choice): t24 = 16.5, P < 0.001, q < 0.002, lower left panel), but not for RT (var(RT): t24 = 1.37, P = 0.185, q = 0.185; lower right panel in Fig. 6b). Across both trial types, within-subject variability in detection accuracy was approximately six times greater than between-subject variability, indicating that perceptual sensitivity fluctuates substantially within individuals.
The no-feedback motor task yielded the most pronounced effect. Within-subject variability in movement endpoint dispersion was roughly 24 times greater than between-subject variability (var(ɛtarget): t73 = 11.5, P < 0.001; var(ɛuser): t73 = 11.2, P < 0.001, q < 0.002; Fig. 6c). We also investigated whether individual variability within the visuomotor task was itself predictive of task performance. Within-subject variability in the visuomotor task did not reliably predict interception performance, except for GTD variability, which was positively associated with performance (q < 0.0001; Fig. 6d). Collectively, these results show that across all tasks, intra-individual fluctuations dominate behavioral variability, which may partially account for the weak cross-task correlations observed. Trial-by-trial changes in internal state or strategy may play a central role in shaping sensorimotor performance, emphasizing the importance of analyzing variability as a functional signal rather than treating it as ‘noise’.
Discussion
We asked whether the ability to detect sensory discrepancies or to produce consistent motor output predicts visuomotor adaptation. To test this, we combined three paradigms: (i) a visuomotor interception task with angular or speed perturbations, (ii) a sensory detection task, and (iii) a no-feedback motor variability task. Eye tracking during interception allowed us to assess gaze responses. Contrary to our hypothesis, performance across tasks showed little convergence, indicating that predictive control mechanisms are domain-specific and context-dependent rather than unified.
In the visuomotor task, angular and speed perturbations engaged distinct adaptive policies. Angular discrepancies elicited robust corrections in inter-dot distance (IDD*), gaze-to-target distance (GTD), and gaze-to-user distance (GUD), but produced minimal adjustments in velocity (vU*). IDD* reflects spatial alignment between the user and the target, indexing externally referential corrections, whereas vU* captures velocity-based adjustments that depend more on internal models. By themselves, however, these motor metrics cannot identify the sensory source of error correction. Gaze measures provide a critical complement: increases in GTD during perturbations indicate impaired performance due to reliance on external visual information, whereas increases in GUD suggest a drop in self-referential monitoring, consistent with proprioceptive and efference-based guidance. Thus, GTD and GUD are not redundant with IDD* and vU* but instead disambiguate the sensory channel driving each motor correction. These dissociations, consistent with previous reports10,35–37, suggest that angular errors recruit both sensory monitoring and motor recalibration, whereas speed errors rely more strongly on predictive control of effector dynamics. Overall, gaze metrics extend the explanatory power of kinematics by revealing the sensory basis of adaptation.
The sensory detection task further revealed functional separation. Detection probabilities scaled with both magnitude and duration of perturbations, consistent with temporal integration in perceptual decision-making38,39. Thus, larger, longer perturbations were detected more reliably and with shorter latencies, aligning with evidence accumulation models. In contrast, the motor variability task revealed wide inter-individual differences in accuracy and precision under visual occlusion, but these did not predict performance in detection or visuomotor adaptation. This suggests that feedforward motor stability and feedback-driven adaptation depend on partly independent mechanisms20,21.
A central finding was the absence of robust cross-task correlations. Although responses within each paradigm were consistent and graded, performance across detection, variability, and adaptation did not covary. This challenges the notion of a domain-general predictive resource16,40,41 and instead supports the idea of specialized neural circuits tuned to task demands18,33,42. While null results must be interpreted cautiously, the consistency of the dissociations across tasks, the robustness of within-task effects, and FDR-corrected analyses strengthen this conclusion. Moreover, exploratory machine learning analyses reinforced this view. A support vector machine (SVM) model captured visuomotor outcomes more effectively than linear regression, reflecting complex nonlinear response patterns. Still, weak cross-task predictability remained, pointing to genuine task specificity. Across all paradigms, within-subject variability exceeded between-subject variability by an order of magnitude, indicating that trial-to-trial fluctuations are not noise but state-dependent dynamics probably related to attention, uncertainty, or cortical excitability38,39,43,44. This variability could contribute to the apparent independence of predictive functions.
Our design emphasized rapid, trial-level corrections to transient perturbations rather than long-term adaptation. Such feedback-driven adjustments are thought to rely on model-based correction distinct from slower, explicit learning processes27,45–47. The limited between-subject variance in our dataset may also have obscured subtle cross-domain associations48. With only 54 participants, statistical power may have been insufficient to detect small effects, as reliable individual-differences research often requires larger samples49,50. Additionally, task sessions occurred on different days, leaving performance susceptible to shifts in internal states such as arousal or fatigue. Finally, although our design focused on sensory prediction errors, reinforcement signals from successful interceptions may also have shaped behavior14,29. Methodologically, trial-averaged analyses may obscure rapid state-dependent fluctuations. Evidence from neural recordings shows that variability sources can shift within seconds51,52. Similarly, averaging behavioral responses risks masking dynamic processes central to predictive control. Future studies should adopt single-trial modeling (e.g., Hidden Markov or state-space models) to capture latent state transitions and nonlinear interactions.
In summary, our experimental results indicate that predictive processing in human sensorimotor control is compartmentalized. Angular and speed perturbations engage distinct visuomotor strategies, perceptual detection scales with discrepancy magnitude and duration, and motor precision under occlusion does not translate into visuomotor adaptation. Weak cross-task associations and dominant within-subject variability reveal that predictive mechanisms are task-specific, flexible, and dynamically reconfigured rather than unified and trait-like. Recognizing variability as a functional signal provides a useful framework for understanding adaptive control in real-world behaviors such as sports, driving, or human–machine interaction.
Methods
Participants
The study involved 54 participants (35 women; mean age = 22.28 ± 0.35 years; age range = 19–30 years). All participants provided written informed consent to participate, and permission to publish their anonymized data. Inclusion criteria required full-term birth without pre-, peri-, or postnatal complications affecting neurodevelopment, no history of psychiatric, neurological, or neurodevelopmental disorders, no substance abuse, right-handedness, and normal or corrected-to-normal vision confirmed via a Snellen chart. Participants completed a brief self-report questionnaire to provide demographic details, including age, sex, and educational background. Participation was voluntary, non-remunerated, and participants were informed of their right to withdraw at any time without penalty. All procedures were non-invasive and approved by the Ethics Committee of the Instituto de Neurociencias, Universidad de Guadalajara, México (protocols ET102021-330 and ET122023-382). No formal a priori power analysis was conducted; however, the sample size (n = 54) aligned with standards from prior studies in our laboratory. Each experimental condition included 100 trial repetitions per participant, sufficient to detect robust group-level effects with moderate to large effect sizes in within-subject designs.
Visuomotor task
This task evaluated real-time motor prediction and online correction under transient perturbations, processes that rely on internal forward models and sensory prediction error minimization6,27. Participants used a joystick to control a white dot on a 27-inch monitor (1920 × 1080 pixels, 36.40 pixels/°, 60 Hz) to intercept an unpredictably moving black target within a circular arena on a 50% gray background. The white dot (0.3° visual angle) represented the participant’s cursor, while the black dot served as the target. Seated 70 cm from the screen, participants began each trial with the white dot at the arena’s center and the black dot at a random perimeter position. The goal was to intercept the target within 5 seconds, with trials ending upon interception or timeout. Auditory feedback (distinct tones) signaled success or failure.
Target motion followed rectilinear dynamics with controlled direction and speed to modulate task difficulty29. The target moved at a constant velocity (vT) in straight segments, updating its direction every 500 ms (Fixed-Direction Interval, FDI) from a uniform distribution (±90° angular range, AR) centered on the previous direction. Upon reaching the arena boundary, the target rebounded per the law of reflection with an added random angular offset from the same AR, ensuring unpredictable yet stable trajectories29. Participants completed up to 800 trials across three blocks, with rest breaks to minimize fatigue. This design, with parametric control over direction and variability, allowed for fine-grained measurement of sensorimotor integration and predictive motor control under variable conditions14,29.
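The target dynamics described above (constant speed, heading redrawn every FDI from a uniform ±AR window, reflective rebounds with a random offset) can be sketched as an illustrative Python simulation; the arena radius and starting point here are assumptions for the sketch, not values taken from the study.

```python
import math
import random

def simulate_target(duration=5.0, dt=1/60, v=40.0, fdi=0.5,
                    ar=90.0, radius=15.0, seed=1):
    """Simulate the rectilinear target: constant speed v (deg/s),
    heading redrawn every FDI from U(-ar, +ar) around the previous
    heading, and reflective rebounds with a random angular offset.
    radius (deg) and the central start point are assumptions."""
    rng = random.Random(seed)
    heading = rng.uniform(0.0, 360.0)
    x, y = 0.0, 0.0                       # start at the arena centre
    path = [(x, y)]
    steps_per_fdi = round(fdi / dt)
    for step in range(round(duration / dt)):
        if step > 0 and step % steps_per_fdi == 0:
            heading += rng.uniform(-ar, ar)      # new heading each FDI
        nx = x + v * dt * math.cos(math.radians(heading))
        ny = y + v * dt * math.sin(math.radians(heading))
        if math.hypot(nx, ny) >= radius:         # rebound at the boundary
            # law of reflection about the boundary normal, plus a random offset
            normal = math.degrees(math.atan2(ny, nx))
            heading = 2.0 * normal + 180.0 - heading + rng.uniform(-ar, ar)
            nx, ny = x, y                        # hold position this frame
        x, y = nx, ny
        path.append((x, y))
    return path

path = simulate_target()  # 5 s trajectory at 60 Hz
```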
Two perturbation types were applied on separate days: (i) angular rotation discrepancies, where the joystick-controlled dot’s direction was rotated (±30°, ±60°, ±90°, ±120°, ±150°, or ±180°) using a 2D rotation matrix, and (ii) speed discrepancies, where the user’s movement speed (vU, in °/s) was scaled (50%, 70%, 90%, 110%, 130%, or 150%) while maintaining direction. The baseline velocity vector derived from the joystick deflection at time t was:
$$\mathbf{v}_U(t) = k\,\mathbf{j}(t) \quad (1)$$
where k = 2/3 is the baseline joystick sensitivity. For angular discrepancies, the velocity was rotated by angle θ:
$$\mathbf{v}_U'(t) = R(\theta)\,\mathbf{v}_U(t), \qquad R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \quad (2)$$
For speed discrepancies, the velocity was scaled by gain factor g:
$$\mathbf{v}_U'(t) = g\,\mathbf{v}_U(t) \quad (3)$$
The first 100 trials were perturbation-free to establish baseline performance; thereafter, perturbations were applied on a randomly selected 50% of trials. Each discrepancy condition included 100 repetitions, applied within a 2-second window (1.5–3.5 s post-movement onset). This timing allowed for precise temporal alignment of behavioral measures relative to the perturbation and for distinguishing pre- and post-discrepancy behavior. Comparison of control trials before and after the baseline phase confirmed that participants did not anticipate the timing of discrepancies, supporting the validity of the randomized design. The task was implemented in MATLAB R2023a using the Psychophysics Toolbox (PTB-3).
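The joystick-to-cursor mapping and the two perturbation types described above can be sketched as follows; this is an illustrative Python version (the task itself ran in MATLAB/PTB-3), and the function name and argument layout are assumptions of the sketch.

```python
import math

K = 2 / 3  # baseline joystick sensitivity from the task description

def perturbed_velocity(jx, jy, t, mode=None, theta=0.0, gain=1.0,
                       window=(1.5, 3.5)):
    """Map raw joystick deflection (jx, jy) to cursor velocity.
    Inside the discrepancy window, either rotate the velocity by
    theta degrees (angular discrepancy) or scale it by `gain`
    (speed discrepancy); otherwise pass it through unperturbed."""
    vx, vy = K * jx, K * jy
    if window[0] <= t <= window[1]:
        if mode == "angular":
            c = math.cos(math.radians(theta))
            s = math.sin(math.radians(theta))
            vx, vy = c * vx - s * vy, s * vx + c * vy   # 2D rotation matrix
        elif mode == "speed":
            vx, vy = gain * vx, gain * vy               # gain factor g
    return vx, vy
```

For example, a rightward deflection perturbed by a +90° rotation comes out as upward motion, while a 150% gain at the same deflection yields a 1.5× faster cursor in the original direction.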
Task for detecting changes in target direction or speed
This task assessed sensory detection of unexpected kinematic changes, reliant on prediction error signaling in sensory cortex22,23. A black dot moved linearly across a circular arena’s diameter, matching the visuomotor task’s visual properties. At 1.5 s, the dot underwent a transient change in direction (±5°, ±30°, or 0°) or speed (50%, 70%, 100%, 130%, or 150% of baseline vT = 40°/s) lasting 50, 100, 250, 500, or 1000 ms. Participants reported detected changes via keyboard (right arrow for change, left arrow for no change), responding as quickly and accurately as possible53,54. Detection probabilities and response times (RTs, from discrepancy onset to keypress) were measured to evaluate perceptual sensitivity.
Motor variability task
The motor variability task assessed the accuracy and stability of feedforward motor commands in the absence of visual feedback20,21. Participants were blindfolded with a sleep mask and wore earphones to receive auditory instructions. Prior to the task, they received a standardized explanation of eight directional cues (cardinal: North, South, East, West; intercardinal: Northeast, Northwest, Southeast, Southwest) via a printed diagram, verbal examples, and a comprehension test to ensure understanding. Participants performed single joystick movements toward the cued direction without corrections, relying solely on auditory cues. Directions were presented in blocks of 30 trials each, with the order of directions randomly permuted for each participant; this eliminated history effects from a fixed sequence, which could have biased group data had all participants shared the same order55,56. The direction of each joystick movement was recorded, and angular error (deviation from the instructed direction) and precision (variability across repetitions) were computed.
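The two error measures from this task (deviation from the instructed direction, and dispersion around each participant's own mean direction) require circular statistics, since directions wrap at 360°. A minimal Python sketch, assuming absolute angular deviations as the error summary:

```python
import math

def ang_diff(a, b):
    """Smallest signed angular difference a - b, in degrees (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

def circular_mean(angles_deg):
    """Circular mean direction of a list of angles in degrees."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

def error_metrics(responses_deg, instructed_deg):
    """eps_target: mean |deviation| from the instructed direction (accuracy).
    eps_user: mean |deviation| from the participant's own circular mean
    direction (precision across repetitions)."""
    mu = circular_mean(responses_deg)
    n = len(responses_deg)
    eps_target = sum(abs(ang_diff(r, instructed_deg)) for r in responses_deg) / n
    eps_user = sum(abs(ang_diff(r, mu)) for r in responses_deg) / n
    return eps_target, eps_user
```

A participant who consistently aims 10° clockwise of the cue would show a large eps_target but a small eps_user, which is exactly the accuracy/precision dissociation the task is designed to expose.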
Experimental task implementation
The study comprised five modules: (1) visuomotor task with angular discrepancies, (2) visuomotor task with speed discrepancies, (3) detection task for angular changes, (4) detection task for speed changes, and (5) motor variability task. Task order was counterbalanced across participants, with a minimum 1-day interval (mean 4.04 ± 0.52 days) between sessions to minimize carryover effects57. All tasks used the same calibrated computer and joystick.
Eye tracking
Binocular gaze was recorded using a Tobii Pro Fusion eye tracker (60 Hz) mounted below a 27-inch monitor, with participants seated 70 cm away and stabilized via a chin and forehead rest. The room maintained ~100 lux lighting. Calibration included four-point fixation, peripheral dot fixation, smooth-pursuit tracking, and pupillary light reflex elicitation14.
Analysis
We analyzed participants’ behavioral responses to visuomotor discrepancies presented across the three tasks described above. Continuous joystick trajectories were recorded, allowing us to extract spatial and temporal parameters, including the Cartesian and polar coordinates of both the user- and target-controlled dots on the screen. The origin of the polar coordinate system was set at the center of the circular arena.
From these data, we computed key motor performance metrics: inter-dot distance (IDD), user velocity (vU), and angular directional error, defined as the absolute angular difference between the movement vectors of the user and the target29. IDD reflects spatial error and was computed as the Euclidean distance between the user’s and the target’s dots:
$$\mathrm{IDD} = \sqrt{(x_U - x_T)^2 + (y_U - y_T)^2} \quad (4)$$
where (xU,yU) and (xT,yT) are the Cartesian coordinates of the user- and target-controlled dots, respectively. User velocity (vU) was computed as:
$$v_U = \frac{\sqrt{\Delta x_U^2 + \Delta y_U^2}}{\Delta t} \quad (5)$$
where ΔxU, ΔyU are positional displacements between frames, and Δt ≈ 16.66 ms (based on a 60 Hz refresh rate). Velocity was expressed in both pixels/second and degrees of visual angle per second (°/s). We selected the IDD and vU as performance metrics because they reflect spatial and temporal components of predictive motor control. Thus, these measures served as proxies for the accuracy and timing of participants’ adaptive responses to sensory prediction errors.
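The two performance metrics defined in Eqs. (4) and (5) reduce to short functions; this is an illustrative Python sketch of those definitions (the study's analyses ran in MATLAB).

```python
import math

DT = 1 / 60  # frame interval at 60 Hz (~16.66 ms)

def inter_dot_distance(user_xy, target_xy):
    """Euclidean distance between user and target dots (Eq. 4)."""
    (xu, yu), (xt, yt) = user_xy, target_xy
    return math.hypot(xu - xt, yu - yt)

def user_velocity(prev_xy, curr_xy, dt=DT):
    """Frame-to-frame speed of the user dot (Eq. 5); units follow the
    input coordinates (pixels/s or deg/s)."""
    (x0, y0), (x1, y1) = prev_xy, curr_xy
    return math.hypot(x1 - x0, y1 - y0) / dt
```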
To isolate participants’ true behavioral responses from imposed experimental perturbations, we applied inverse reference-frame corrections to the user dot’s coordinate positions. For angular rotation discrepancies, trajectories were realigned using inverse rotations. For speed discrepancies, we rescaled the observed velocities using the inverse gain factor. This procedure eliminated the imposed perturbation, allowing residual responses to reflect only compensatory motor behavior. To validate our correction procedure, we ran simulations with 100 virtual agents that moved stochastically according to the same rules as the target (e.g., rectilinear motion, randomized heading updates based on the angular range at each FDI), but without target-tracking behavior (Supplementary Fig. 1). As expected, the uncorrected trajectories of these agents showed response components primarily driven by the imposed discrepancies. However, after applying the inverse corrections, the resulting trajectories exhibited flat response profiles against discrepancy magnitude, confirming that the systematic variations in the uncorrected signals arose solely from the imposed transformations. Corrected variables are denoted with an asterisk in figures (e.g., IDD* and vU*; see Supplementary Fig. 1).
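The inverse reference-frame correction amounts to undoing each imposed transform: rotating observed velocities by -θ for angular discrepancies, and rescaling by 1/g for speed discrepancies. A minimal Python sketch of these inversions (function names are illustrative):

```python
import math

def invert_angular(vx, vy, theta_deg):
    """Undo an imposed rotation by rotating the observed velocity by
    -theta, so residuals reflect only the participant's compensation."""
    c = math.cos(math.radians(-theta_deg))
    s = math.sin(math.radians(-theta_deg))
    return c * vx - s * vy, s * vx + c * vy

def invert_speed(vx, vy, gain):
    """Undo an imposed speed gain by rescaling with 1/gain."""
    return vx / gain, vy / gain
```

Applied to a non-tracking agent, these inversions flatten the apparent response profile, which is the logic behind the virtual-agent validation in Supplementary Fig. 1.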
Visual tracking precision and referential strategies were assessed via three gaze metrics. Gaze-to-target distance (GTD) was defined as the Euclidean distance (in degrees of visual angle) between gaze position and the target:
$$\mathrm{GTD} = \sqrt{(x_G - x_T)^2 + (y_G - y_T)^2} \quad (6)$$
Gaze-to-user distance (GUD) was:
$$\mathrm{GUD} = \sqrt{(x_G - x_U)^2 + (y_G - y_U)^2} \quad (7)$$
and gaze velocity (vG):
$$v_G = \frac{\sqrt{\Delta x_G^2 + \Delta y_G^2}}{\Delta t} \quad (8)$$
Lower GTD values indicate closer target tracking, capturing prediction errors in externally guided tracking; lower GUD values reflect tighter gaze–joystick coordination, indicating self-monitoring strategies14,29,58–61. Missing data due to blinks or tracking loss were linearly interpolated.
To quantify behavioral responses to imposed discrepancies, we analyzed time series of IDD*, vU*, GTD, GUD, and vG under both perturbed and control conditions. Two main metrics were computed over a fixed 2-second window (1500–3500 ms after trial onset), which covered the full perturbation period. Peak amplitude was calculated using MATLAB’s findpeaks function applied to the averaged traces from each participant. Peaks captured transient deviations (e.g., peak IDD*: maximum tracking error; peak vU*: largest velocity mismatch; peak GUD: strongest gaze-hand decoupling). The area under the curve (AUC) was calculated using trapezoidal numerical integration. Lower AUCs indicate more effective tracking or adaptation; e.g., lower IDD* or GTD implies better spatial alignment, lower vU* indicates improved velocity matching, and lower GUD reflects improved gaze–hand coupling14,29. Thus, peak amplitude and AUC captured both the maximal and the integrated magnitude of behavioral deviations following discrepancy onset14,62.
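The two summary metrics can be sketched in a few lines of Python; here the window maximum stands in for MATLAB's findpeaks on averaged traces, and the explicit trapezoid sum mirrors trapezoidal numerical integration.

```python
import numpy as np

def peak_and_auc(trace, t, window=(1.5, 3.5)):
    """Peak amplitude and trapezoidal area under the curve of a
    response trace within the discrepancy window. The window maximum
    is a simplification of findpeaks on a smooth averaged trace."""
    trace = np.asarray(trace, dtype=float)
    t = np.asarray(t, dtype=float)
    mask = (t >= window[0]) & (t <= window[1])
    seg, ts = trace[mask], t[mask]
    peak = float(seg.max())
    auc = float(np.sum(0.5 * (seg[1:] + seg[:-1]) * np.diff(ts)))
    return peak, auc
```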
Main analyses focused on “full-length discrepancy” (FLD) trials, defined as those in which the imposed perturbation lasted the entire 1.5–3.5 s window without being truncated by early interception. This allowed for clean averaging of behavioral traces across the full discrepancy period. No participants were excluded based on the number of valid trials, and no minimum duration cutoff was applied. To ensure that the findings were not biased by selection of longer trials, we also conducted complementary analyses on all trials, including those interrupted by early interception (“partial discrepancies”). Results were qualitatively similar to those based on FLD trials (see Figs. S3–S5), confirming that trial outcome (intercepted vs. not intercepted) did not bias the population-level responses. Furthermore, because discrepancy strength affects trial duration (e.g., strong discrepancies may delay or prevent interception), we quantified the proportion of trials that met the full-length criterion as a function of discrepancy magnitude. These proportions were computed per participant and averaged across the group for each condition. The results are presented in Supplementary Table 1 and Supplementary Fig. 2, which show that omission rates due to partial discrepancies were relatively low and well-characterized across the full range of conditions.
We analyzed detection probabilities and response times (RTs) as a function of discrepancy magnitude and duration (50, 100, 250, 500, and 1000 ms) for the detection task. RTs were measured in seconds from discrepancy onset to the moment the participant reported whether a change was present. Detection probabilities are illustrated in two formats: plotted against angular or speed gradients with durations represented by color, and plotted against discrepancy duration with colors indicating angular or speed magnitudes. In the motor response task, performance under visual occlusion was evaluated using two complementary error metrics: (1) angular error relative to the instructed direction (εtarget), reflecting response accuracy, and (2) angular error relative to the participant’s own mean response direction for each target (εuser), capturing response precision across repetitions.
Statistical analysis
To assess the relationship between behavioral responses and discrepancy magnitude, we applied linear regression models and reported the slope (m), correlation coefficient (R), coefficient of determination (R²), and F-values. Repeated-measures ANOVAs were used to evaluate condition effects, with generalized eta squared (η²) reported as a measure of effect size. We used a multivariate linear regression (MVLR) model to test whether performance in the detection and motor variability tasks predicted visuomotor adaptation outcomes. The model included eight predictors (P1–P8): from the detection task with angular discrepancies, slope of choices (P1), slope of response times (RTs; P2), and slope of inverse efficiency scores (IES, the ratio of trial duration to the proportion of correct choices, a metric that summarizes the accuracy/speed trade-off63; P3); from the detection task with speed discrepancies, slope of choices (P4), slope of RTs (P5), and slope of IES (P6); and from the motor variability task, average angular error to the instructed direction (P7: ɛtarget, representing accuracy) and average angular error to the user’s mean direction (P8: ɛuser, representing precision). These predictors were used to separately model eight dependent variables (V1–V8) from the visuomotor discrepancy task: for angular discrepancies, slope of amplitude (V1) and slope of AUC (V2) for inter-dot distance (IDD*), and slope of amplitude (V3) and slope of AUC (V4) for gaze-to-target distance (GTD); for speed discrepancies, slope of amplitude (V5) and slope of AUC (V6) for user velocity (vU*), and slope of amplitude (V7) and slope of AUC (V8) for GTD. Predictor and outcome variables were z-scored against group-level means. Regression coefficients were estimated using ordinary least squares, and statistical significance was assessed via t-statistics.
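The inverse efficiency score used for predictors P3 and P6 is a simple ratio; a minimal sketch of the definition given above:

```python
def inverse_efficiency(mean_rt, p_correct):
    """Inverse efficiency score (IES): response time (trial duration)
    divided by the proportion of correct choices. Larger values
    indicate a worse accuracy/speed trade-off."""
    if p_correct <= 0:
        raise ValueError("IES is undefined when no choices are correct")
    return mean_rt / p_correct
```

For example, two conditions with the same mean RT but 80% vs. 100% accuracy yield IES values differing by a factor of 1.25, penalizing fast-but-inaccurate responding.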
To explore nonlinear models, we also trained a support vector machine (SVM) with a fine Gaussian kernel using MATLAB’s Regression Learner app. This model employed automated hyperparameter tuning and k-fold cross-validation. Predictive performance was quantified using root mean squared error (RMSE) and Pearson’s correlation between predicted and observed values. While the MVLR provided interpretable coefficients, the SVM showed higher predictive accuracy (see Supplementary Fig. 6 and Supplementary Table 4).
For variability analyses, we computed time-resolved within- and between-subject variability across the discrepancy window (1.5–3.5 s) for each behavioral measure and condition. Within-subject variability was calculated by arranging trial-wise response traces from each participant into matrices (columns = trials; rows = time points); data were mean-centered across trials at each time point, and variance was computed using MATLAB’s ‘nanvar’ function, producing a trace of intra-individual variability over time. Between-subject variability was computed by first averaging each participant’s trace across trials, then organizing participant means into matrices (columns = participants; rows = time points); at each time point, we subtracted the group mean and computed variance across participants, yielding a time series of inter-individual variability. To statistically compare within- and between-subject variability, we averaged values over the analysis window and submitted them to paired t-tests. Results are reported as mean ± standard error of the mean (S.E.M.). All statistical tests involving multiple comparisons (e.g., MVLR coefficients, behavioral regressions) were corrected using the Benjamini–Hochberg False Discovery Rate (FDR) procedure64, with a significance threshold of q < 0.05.
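The within- vs. between-subject variance computation described above can be sketched as follows; this is an illustrative NumPy version of the MATLAB nanvar-based procedure, assuming data arranged as a subjects × trials × time array.

```python
import numpy as np

def variability_traces(data):
    """data: (n_subjects, n_trials, n_timepoints) array of response
    traces (NaNs tolerated). Returns the group-averaged within-subject
    variance and the between-subject variance of trial-averaged
    traces, each as a time series."""
    within = np.nanvar(data, axis=1, ddof=1)         # variance across trials
    within_mean = np.nanmean(within, axis=0)         # group average over subjects
    subj_means = np.nanmean(data, axis=1)            # per-subject mean traces
    between = np.nanvar(subj_means, axis=0, ddof=1)  # variance across subjects
    return within_mean, between
```

On synthetic data where all subjects share the same response distribution, between-subject variance shrinks roughly with the number of trials averaged per subject, so within-subject variance dominates by construction; the empirical result above is that the same ordering, with ratios of ~6 to ~24, holds in the real data.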
Supplementary information
SUPPLEMENTARY_MATERIAL_FOR_PAPER_DISCREPANCIES_ET_v13
Acknowledgements
This study was supported by the Consejo Nacional de Humanidades, Ciencias y Tecnologías (CONAHCYT, grant #CF-2023-G-107 to MT), Programa de Apoyo a la Mejora en las Condiciones de Producción de los Miembros del SNII 2024 (CUCIENEGA, Universidad de Guadalajara, to IM), and Programa de Fortalecimiento de la Investigación y el Posgrado 2022 (CUCBA, Universidad de Guadalajara, to MT). We extend our gratitude to Martín, N., Hernández, F., Barajas, A., Salguero, A., Valdivia, D., Ramos, E., and Ramírez, L. for their assistance with some experiments. Special thanks to H.C. Farina and G. García for their help in recruiting participants. We express our gratitude to the University Centers, CUCBA and CUCIÉNEGA, for the generous support in providing a dedicated space for our experiments.
Author contributions
Conceptualization: M.T.; Data curation: I.M., M.T.; Formal analysis: M.T.; Funding acquisition: M.T.; Investigation: I.M.; Methodology: I.M., M.T.; Project administration: I.M.; Resources: I.M., M.T.; Software: M.T.; Supervision: I.M., M.T.; Validation: I.M., M.T.; Visualization: I.M., M.T.; Writing—original draft: M.T.; Writing—review and editing: I.M., M.T.
Data availability
The datasets generated during the current study are available in the OSF repository (https://osf.io/ha5ms/?view_only=8940c72c83034da892a7f637b0b6fc70).
Competing interests
M.T. serves as a member of the Editorial Board for Science of Learning. The authors declare non-financial competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
The online version contains supplementary material available at 10.1038/s41539-025-00377-4.
References
- 1.den Ouden, H. E. M., Kok, P. & de Lange, F. P. How prediction errors shape perception, attention, and motivation. Front. Psychol. 3, 548 (2012).
- 2.Friston, K. A theory of cortical responses. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360, 815–836 (2005).
- 3.Hohwy, J. Priors in perception: top-down modulation, Bayesian perceptual learning rate, and prediction error minimization. Conscious. Cogn. 47, 75–85 (2017).
- 4.Keller, G. B. & Mrsic-Flogel, T. D. Predictive Processing: A Canonical Cortical Computation. Neuron100, 424–435 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Seidler, R. D., Kwak, Y., Fling, B. W. & Bernard, J. A. Neurocognitive Mechanisms of Error-Based Motor Learning. in Progress in Motor Control (eds Richardson, M. J., Riley, M. A. & Shockley, K.) 39–60 (Springer, New York, NY, 2013). 10.1007/978-1-4614-5465-6_3. [DOI] [PMC free article] [PubMed]
- 6.Shadmehr, R., Smith, M. A. & Krakauer, J. W. Error Correction, Sensory Prediction, and Adaptation in Motor Control. Annu. Rev. Neurosci.33, 89–108 (2010). [DOI] [PubMed] [Google Scholar]
- 7.Tsay, J. S., Haith, A. M., Ivry, R. B. & Kim, H. E. Interactions between sensory prediction error and task error during implicit motor learning. PLoS Comput Biol.18, e1010005 (2022). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Butcher, P. A. & Taylor, J. A. Decomposition of a sensory-prediction error signal for visuomotor adaptation. J. Exp. Psychol. Hum. Percept. Perform.44, 176–194 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Zhao, H. & Warren, W. Interception of a speed-varying target: On-line or model-based control?. J. Vis.13, 951 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Treviño, M. & Márquez, I. Entrainment of visuomotor responses to target speed during interception. Neuroscience10.1016/j.neuroscience.2025.01.047 S0306-4522(25)00055–7 (2025). [DOI] [PubMed] [Google Scholar]
- 11.Morehead, J. R. & Xivry, J.-J. O. De. A Synthesis of the Many Errors and Learning Processes of Visuomotor Adaptation. 2021.03.14.435278 Preprint at 10.1101/2021.03.14.435278 (2021).
- 12.Márquez, I., Lemus, L. & Treviño, M. A continuum from predictive to online feedback in visuomotor interception. Eur. J. Neurosci.10.1111/ejn.16628 (2024). [DOI] [PubMed]
- 13.Sailer, U., Flanagan, J. R. & Johansson, R. S. Eye–Hand Coordination during Learning of a Novel Visuomotor Task. J. Neurosci.25, 8833–8842 (2005). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Márquez, I. & Treviño, M. Visuomotor predictors of interception. Plos One10.1371/journal.pone.0308642 (2024). [DOI] [PMC free article] [PubMed]
- 15.Goettker, A. & Gegenfurtner, K. R. Individual differences link sensory processing and motor control. Psychol. Rev.10.1037/rev0000477 (2024). [DOI] [PubMed]
- 16.Haith, A. M., Huberdeau, D. M. & Krakauer, J. W. The influence of movement preparation time on the expression of visuomotor learning and savings. J. Neurosci.35, 5109–5117 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lemke, S. M., Ramanathan, D. S., Guo, L., Won, S. J. & Ganguly, K. Emergent modular neural control drives coordinated motor actions. Nat. Neurosci.22, 1122–1131 (2019). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Mylopoulos, M. The modularity of the motor system. Philos. Explorations24, 376–393 (2021). [Google Scholar]
- 19.Schilling, M., Hammer, B., Ohl, F. W., Ritter, H. J. & Wiskott, L. Modularity in Nervous Systems—a Key to Efficient Adaptivity for Deep Reinforcement Learning. Cogn. Comput16, 2358–2373 (2024). [Google Scholar]
- 20.Churchland, M. M., Afshar, A. & Shenoy, K. V. A central source of movement variability. Neuron52, 1085–1096 (2006). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Wu, H. G., Miyamoto, Y. R., Gonzalez Castro, L. N., Ölveczky, B. P. & Smith, M. A. Temporal structure of motor variability is dynamically regulated and predicts motor learning ability. Nat. Neurosci.17, 312–321 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Alink, A., Schwiedrzik, C., Kohler, A., Singer, W. & Muckli, L. Stimulus predictability reduces responses in primary visual cortex. J. Soc. Neurosci.30, 2960–2966 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Summerfield, C., Trittschuh, E. H., Monti, J. M., Mesulam, M. M. & Egner, T. Neural repetition suppression reflects fulfilled perceptual expectations. Nat. Neurosci.11, 1004–1006 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Yaron, A. et al. Auditory cortex neurons that encode negative prediction errors respond to omissions of sounds in a predictable sequence. PLoS Biol.23, e3003242 (2025). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Westerberg, J. A. et al. Sensory responses of visual cortical neurons are not prediction errors. 2024.10.02.616378 Preprint at 10.1101/2024.10.02.616378 (2025).
- 26.Canaveral, C. A., Danion, F., Berrigan, F. & Bernier, P.-M. Variance in exposed perturbations impairs retention of visuomotor adaptation. J. Neurophysiol.118, 2745–2754 (2017). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Franklin, D. W. & Wolpert, D. M. Computational mechanisms of sensorimotor control. Neuron72, 425–442 (2011). [DOI] [PubMed] [Google Scholar]
- 28.Haith, A., Jackson, C., Miall, C. & Vijayakumar, S. Unifying the Sensory and Motor Components of Sensorimotor Adaptation. in Proc. Advances in Neural Information Processing Systems (NIPS ’08) (2008).
- 29.Treviño, M., Medina-Coss Y León, R., Támez, S., Beltrán-Navarro, B. & Verdugo, J. Directional uncertainty in chase and escape dynamics. J. Exp. Psychol.: Gen.153, 418–434 (2024). [DOI] [PubMed] [Google Scholar]
- 30.Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci.36, 181–204 (2013). [DOI] [PubMed] [Google Scholar]
- 31.Ernst, M. O. & Bülthoff, H. H. Merging the senses into a robust percept. Trends Cogn. Sci.8, 162–169 (2004). [DOI] [PubMed] [Google Scholar]
- 32.Summerfield, C. & de Lange, F. P. Expectation in perceptual decision making: neural and computational mechanisms. Nat. Rev. Neurosci.15, 745–756 (2014). [DOI] [PubMed] [Google Scholar]
- 33.Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S. & Reynolds, J. R. Event perception: a mind-brain perspective. Psychol. Bull.133, 273–293 (2007). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Wang, T., Morehead, R. J., Tsay, J. S. & Ivry, R. B. The Origin of Movement Biases During Reaching. eLife13, (2024).
- 35.Kokkinara, E., Slater, M. & López-Moliner, J. The Effects of Visuomotor Calibration to the Perceived Space and Body, through Embodiment in Immersive Virtual Reality. ACM Trans. Appl. Percept.13, 22 (2015). [Google Scholar]
- 36.Saijo, N. & Gomi, H. Multiple Motor Learning Strategies in Visuomotor Rotation. PLOS ONE5, e9399 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Wijeyaratnam, D. O., Cheng-Boivin, Z., Bishouty, R. D. & Cressman, E. K. The influence of awareness on implicit visuomotor adaptation. Conscious Cogn.99, 103297 (2022). [DOI] [PubMed] [Google Scholar]
- 38.Busch, N. A., Dubois, J. & VanRullen, R. The Phase of Ongoing EEG Oscillations Predicts Visual Perception. J. Neurosci.29, 7869–7876 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Hanslmayr, S. et al. Prestimulus oscillations predict visual perception performance between and within subjects. Neuroimage37, 1465–1473 (2007). [DOI] [PubMed] [Google Scholar]
- 40.Shadmehr, R. & Krakauer, J. W. A computational neuroanatomy for motor control. Exp. Brain Res185, 359–381 (2008). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Wolpert, D. M. & Kawato, M. Multiple paired forward and inverse models for motor control. Neural Netw.11, 1317–1329 (1998). [DOI] [PubMed] [Google Scholar]
- 42.Krakauer, J. W. & Mazzoni, P. Human sensorimotor learning: adaptation, skill, and beyond. Curr. Opin. Neurobiol.21, 636–644 (2011). [DOI] [PubMed] [Google Scholar]
- 43.Linkenkaer-Hansen, K., Nikulin, V. V., Palva, S., Ilmoniemi, R. J. & Palva, J. M. Prestimulus oscillations enhance psychophysical performance in humans. J. Neurosci.24, 10186–10190 (2004). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Trevi¤o, M., De la Torre-Valdovinos, B. & Manjarrez, E. Noise Improves Visual Motion Discrimination via a Stochastic Resonance-Like Phenomenon. Front. Hum. Neurosci.10, 572 (2016). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Avraham, G., Morehead, J. R., Kim, H. E. & Ivry, R. B. Reexposure to a sensorimotor perturbation produces opposite effects on explicit and implicit learning processes. PLoS Biol.19, e3001147 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Crevecoeur, F. & Scott, S. H. Beyond muscles stiffness: importance of state-estimation to account for very fast motor corrections. PLoS Comput Biol.10, e1003869 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Taylor, J. A., Krakauer, J. W. & Ivry, R. B. Explicit and Implicit Contributions to Learning in a Sensorimotor Adaptation Task. J. Neurosci.34, 3023–3032 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Yarkoni, T. Big Correlations in Little Studies: Inflated fMRI Correlations Reflect Low Statistical Power-Commentary on Vul et al. (2009). Perspect. Psychol. Sci.4, 294–298 (2009). [DOI] [PubMed] [Google Scholar]
- 49.Grady, C. L., Rieck, J. R., Nichol, D., Rodrigue, K. M. & Kennedy, K. M. Influence of sample size and analytic approach on stability and interpretation of brain-behavior correlations in task-related fMRI data. Hum. Brain Mapp.42, 204–219 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Turner, B. O., Paul, E. J., Miller, M. B. & Barbey, A. K. Small sample sizes reduce the replicability of task-based fMRI studies. Commun. Biol.1, 62 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Akella, S. et al. Deciphering neuronal variability across states reveals dynamic sensory encoding. Nat. Commun.16, 1768 (2025). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Tlaie, A. et al. What does the mean mean? A simple test for neuroscience. PLOS Computational Biol.20, e1012000 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53.Treviño, M. Non-stationary Salience Processing During Perceptual Training in Humans. Neuroscience443, 59–70 (2020). [DOI] [PubMed] [Google Scholar]
- 54.Treviño, M. et al. Isomorphic decisional biases across perceptual tasks. PLOS ONE16, e0245890 (2021). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Herrera, D. & Treviño, M. Undesirable Choice Biases with Small Differences in the Spatial Structure of Chance Stimulus Sequences. PLoS One10, e0136084 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56.Treviño, M. Stimulus similarity determines the prevalence of behavioral laterality in a visual discrimination task for mice. Sci. Rep.4, 7569 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Treviño, M. et al. Two-stage reinforcement learning task predicts psychological traits. Psych. J.12, 355–367 (2023). [DOI] [PubMed] [Google Scholar]
- 58.Hayhoe, M. & Ballard, D. Modeling task control of eye movements. Curr. Biol.24, R622–R628 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Land, M. & Tatler, B. Looking and Acting: Vision and Eye Movements in Natural Behaviour. (Oxford University Press, 2009). 10.1093/acprof:oso/9780198570943.001.0001.
- 60.Bajcsy, R., Aloimonos, Y. & Tsotsos, J. K. Revisiting active perception. Auton. Robots42, 177–196 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.Tani, J. Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena. (Oxford University Press, 2016). 10.1093/acprof:oso/9780190281069.001.0001.
- 62.Márquez, I. & Treviño, M. Pupillary responses to directional uncertainty while intercepting a moving target. R. Soc. Open Sci.11, 240606 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63.Treviño, M., Fregoso, E., Sahagún, C. & Lezama, E. An Automated Water Task to Test Visual Discrimination Performance, Adaptive Strategies and Stereotyped Choices in Freely Moving Mice. Front Behav. Neurosci.12, 251 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Benjamini, Y. & Hochberg, Y. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J. R. Stat. Soc. Ser. B (Methodol.)57, 289–300 (1995). [Google Scholar]