Abstract
If humans exploit task redundancies as a general strategy, they should do so even if the redundancy is decoupled from the physical implementation of the task itself. Here, we derived a family of goal functions that explicitly defined infinite possible redundancies between distance (D) and time (T) for unidirectional reaching. All [T, D] combinations satisfying any specific goal function defined a goal-equivalent manifold (GEM). We tested how humans learned two such functions, D/T = c (constant speed) and D·T = c, that were very different but could both be achieved by neurophysiologically and biomechanically similar reaching movements. Subjects were never explicitly shown either relationship, but only instructed to minimize their errors. Subjects exhibited significant learning and consolidation of learning for both tasks. Initial error magnitudes were higher, but learning rates were faster, for the D·T task than for the D/T task. Learning the D/T task first facilitated subsequent learning of the D·T task. Conversely, learning the D·T task first interfered with subsequent learning of the D/T task. Analyses of trial-to-trial dynamics demonstrated that subjects actively corrected deviations perpendicular to each GEM faster than deviations along it, and did so to the same degree for both tasks, despite exhibiting significantly greater variance ratios for the D/T task. Variance measures alone failed to capture critical features of trial-to-trial control. Humans actively exploited these abstract task redundancies, even though they did not have to. They did not use readily available alternative strategies that could have achieved the same performance.
Keywords: motor control, redundancy, equifinality, motor noise
A fundamental question in motor neuroscience research is to determine how the human nervous system generates accurate and repeatable goal-directed movements in the face of multiple levels of redundancy (Bernstein 1967; Scott 2004; Todorov 2004) and inherent biological noise (Faisal et al. 2008; McDonnell and Ward 2011; Osborne et al. 2005; Stein et al. 2005). There are more mechanical degrees of freedom than required to complete most movement tasks, more muscles than necessary to move a given joint, more motor units than necessary to contract a given muscle, and so forth. This redundancy gives rise to equifinality, i.e., there are an infinite number of ways to perform the same action (Bernstein 1967; Cusumano and Cesari 2006; Todorov and Jordan 2002). Redundancy thus permits individuals to perform complex tasks reliably and repeatedly while allowing variability in a movement's details. Likewise, there is ample evidence that the neuromuscular pathways and biological processes involved in movement are inherently noisy (Faisal et al. 2008; McDonnell and Ward 2011; Osborne et al. 2005; Stein et al. 2005; van Beers 2009). This also contributes to movement variability. Most optimization approaches predict average behavior (Collins 1995; Engelbrecht 2001; Harris and Wolpert 1998; Scott 2004), assuming that the nervous system minimizes variability as a limiting constraint (Harris and Wolpert 1998; Körding and Wolpert 2004; O'Sullivan et al. 2009). However, recent work demonstrates that humans instead often exploit redundancy to regulate variability in ways that help maximize task performance (Cusumano and Cesari 2006; Latash et al. 2002; Todorov and Jordan 2002) while minimizing control effort.
Two geometry-based approaches that address these issues experimentally are the uncontrolled manifold (UCM) analysis (Latash et al. 2002; Schöner and Scholz 2007) and the tolerance-noise covariation (TNC) analysis (Müller and Sternad 2004; Sternad et al. 2011). Both methods postulate that the nervous system only corrects deviations orthogonal to some proposed subsurface (i.e., a “manifold”) within a larger space of relevant variables. In UCM, this manifold is defined at each instant in time along an experimentally recorded average movement trajectory, based on the hypothesis that this trajectory then determines “what task-level variables are ‘most important’ for the nervous system” (Latash et al. 2010). In contrast, the TNC approach (Müller and Sternad 2004; Sternad et al. 2011) analyzes data relative to a task manifold defined by some minimal subset of body-level variables required to achieve the external goal of the task itself (e.g., when throwing a ball, the position and velocity of the ball at release define exactly where the ball will land relative to some external target). In both methods, ratios of the variances orthogonal to and along the defined manifold are then analyzed and interpreted. In UCM analyses, one seeks to infer whether or not movements are “controlled” or “stabilized” (i.e., by limiting orthogonal variance) around the trajectory of interest (Latash et al. 2002; Schöner and Scholz 2007). TNC analyses, where the manifold is defined with respect to the external task goal (which is fixed) instead of the recorded movement (which might vary), are used to explore how variance structure evolves over time during learning (Cohen and Sternad 2009; Pendt et al. 2011).
While both UCM and TNC analyses have revealed many important features about how the nervous system regulates variability during movement, neither method is without some limitations. For example, Yang et al. (2007) applied UCM analyses to study subjects learning to make reaching movements in a viscous curl force field. This included analyzing variance relative to average recorded hand paths made during initial exposure to this force field and throughout the learning process. However, by hypothesizing that the central nervous system structures variance specifically around the average recorded hand path (Latash et al. 2002, 2010; Schöner and Scholz 2007), UCM must also assume that this average hand path is somehow known to the nervous system in order for it to be used as the basis for controlling and/or structuring variance around it. In many contexts, this assumption is likely quite valid. However, for at least the initial exposure trials in Yang et al. (2007), this seems highly unlikely because those recorded hand paths were significantly curved not because of any actions of the nervous system or the person performing the task, but instead because of the unanticipated external force field applied to the subject. Thus, for these movements in particular, the authors' conclusion that “the central nervous system makes use of kinematic redundancy … to adapt reaching performance … ” (Yang et al. 2007) seems, at the very least, problematic. By defining manifolds based on recorded movement data, UCM conflates theory with data analysis, which can sometimes lead to ambiguous interpretations, as in the case described here. This suggests that the traditional interpretations of UCM analyses (Latash et al. 2002, 2010; Schöner and Scholz 2007) need to be carefully revisited if UCM is applied to tasks of this or similar nature.
The minimum intervention principle (MIP) offers a general theoretical basis for constructing computational models of movement (Liu and Todorov 2007; Scott 2004; Todorov 2004; Todorov and Jordan 2002) that provides a framework for exploring these issues more formally. The MIP ties the idea of task geometry to stochastic optimal control theory to construct computational models that predict how movements are regulated in redundant motor systems (Todorov and Jordan 2002; Valero-Cuevas et al. 2009). The related goal-equivalent manifold (GEM) approach provides an analysis that maps the observed dynamics of task performance, at the level of the body, onto an independently defined goal space (Cusumano and Cesari 2006; Dingwell et al. 2010; John and Cusumano 2007). Thus, in the GEM approach, task manifolds are defined in a way more similar to the TNC approach than to the UCM approach. In the GEM approach, a clear distinction is made between the task being performed and the measured performance (i.e., movement) of the person executing the task. In the game of darts, for example, the goal of the task is to hit the bull's-eye. This goal exists independently of who throws the dart, how they throw it, or even if any dart is thrown at all. Thus, unlike UCM, which uses each subject's own average behavior to define manifolds for analysis, the GEM approach uses formal mathematical relationships between the body and the goal (i.e., goal functions) that are defined equivalently for any subject (Cusumano and Cesari 2006). The GEM approach also makes no a priori assumptions about which variables are or are not “controlled” (Dingwell et al. 2010).
An important advance introduced first by UCM and adopted by MIP, TNC, and GEM was the recognition that understanding how variance is structured yields important insights into how movements are controlled by the nervous system. Indeed, UCM, MIP, and TNC analyses have thus far relied solely on computing different measures (mostly ratios) of variance about a task manifold to draw such inferences. However, variance can be “structured” for a variety of (biomechanical and/or neurophysiological) reasons not related to task-relevant control (Dingwell et al. 2010; Valero-Cuevas et al. 2009). Additionally, statistical measures of variance do not capture the temporal structure of observed intertrial fluctuations and so cannot quantify how errors evolve from trial to trial. One solution is to develop computational models that directly predict how variance becomes structured as a result of specific control policies (Dingwell and Cusumano 2010; John and Cusumano 2007; Todorov and Jordan 2002). Experimentally, additional insights can be gained by supplementing variance analyses with more detailed temporal analyses that directly quantify how fluctuations on any one trial are subsequently corrected on the next (Dingwell et al. 2010). Indeed, for many tasks, movements on consecutive trials are correlated (Ganesh et al. 2010; Ranganathan and Newell 2010b). Autocorrelation-based models have shown strong dependence of each consecutive movement on the immediately preceding movement for both reaching (Gates and Dingwell 2008; Scheidt et al. 2001) and walking (Dingwell et al. 2010). Indeed, there is a substantial literature indicating that such trial-to-trial analyses are essential to gaining a comprehensive understanding of motor learning and motor control processes (Cheng and Sabes 2007; Fine and Thoroughman 2007; Smith et al. 2006; Thoroughman et al. 2007; van Beers 2009; Verstynen and Sabes 2011).
One important issue that has not yet been addressed is the degree to which prior experimental observations reveal truly general control strategies. Certainly, experimental evidence suggests that people exploit redundancy in many specific individual tasks (Cohen and Sternad 2009; Cusumano and Cesari 2006; Dingwell et al. 2010; Gates and Dingwell 2008; Hsu et al. 2007; Reisman et al. 2002; Yang and Scholz 2005; Yen and Chang 2010). However, it is not known whether subjects exhibit learning and/or consolidation of learning (Brashers-Krug et al. 1996; Shadmehr and Brashers-Krug 1997; Shadmehr and Holcomb 1997) for more generalized or abstract tasks. Here we defined a family of similar tasks that share the same neural and biomechanical resources but use these resources to achieve very different (but still explicitly defined) task goals. The GEM analysis framework allowed us to directly compare behavior between these tasks to test the hypothesis that humans can learn to exploit task redundancies, when available, in a broad and general way, independent of the specific physical constraints of any one individual task. We thus determined the extent to which the strategy of exploiting redundancy can itself generalize (Krakauer et al. 2006; Poggio and Bizzi 2004) across multiple tasks. If exploiting redundancy is a generic motor control strategy, then learning one specified redundancy relationship might be expected to help facilitate learning of a different (but neurally and/or biomechanically similar) task (Braun et al. 2009). Alternatively, however, if subjects learned to use a redundancy that was specific to achieving a single task goal, then one might expect to see interference of learning (Krakauer et al. 2006; Shadmehr and Moussavi 2000; Wulf and Shea 2002) when subjects are asked to perform a similar task where they are given the option to exploit a very different redundancy involving the same task variables.
METHODS
Defining a family of generalized reaching tasks.
Reaching to a fixed target in a fixed time (e.g., as in Ranganathan and Newell 2010a, 2010b; Yang et al. 2007) allows for redundancy in the hand path used to perform the task (i.e., at the level of motor elements). However, the final goal of the task itself is not redundant. Here we instead defined a novel class of reaching tasks for which the goals were themselves inherently redundant, such that many combinations of reaching distance (Di) and time (Ti) on any given movement i could equally achieve a variety of predefined task “goals.” This class of reaching tasks was defined by the infinite family of goal functions (Cusumano and Cesari 2006), f(Ti, Di):
f(Ti, Di) = Di^m · Ti^n − C    (1)
where m and n are nonzero constants and C is a positive constant. The task was then to drive f(Ti, Di) to zero, or equivalently to achieve combinations of Di^m·Ti^n that remained constant (C) on average. Given Eq. 1, we then formulated concrete mathematical predictions regarding strategies people might use to achieve any given specific task (Cusumano and Cesari 2006; Dingwell et al. 2010). Different tasks could be defined by different choices of the constants m, n, and C (Fig. 1). The set of all combinations of Ti and Di that satisfied Eq. 1 then defined the GEM for that specific task, i.e., all pairs [Ti, Di] that satisfy Eq. 1 lie on the GEM and correspond to perfect task execution. Varying m, n, and C changed the location and shape of the GEM within the [Ti, Di] plane (e.g., Fig. 1).
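To make Eq. 1 concrete, the short sketch below (in Python; none of this code comes from the original study, whose software was written in LabVIEW and MATLAB) evaluates the goal function for arbitrary choices of m, n, and C and checks whether a given [Ti, Di] pair lies on the corresponding GEM. The numerical values are illustrative only.

```python
import numpy as np

def goal_function(T, D, m, n, C):
    """Generalized goal function of Eq. 1: f(T, D) = D**m * T**n - C.
    f = 0 corresponds to perfect execution, i.e., [T, D] lies on the GEM."""
    return D**m * T**n - C

# Illustrative values: the D/T = c GEM corresponds to n = -m and C = c**m.
m, c = 1.0, 0.45
print(goal_function(T=1.0, D=0.45, m=m, n=-m, C=c**m))  # ~0.0 -> on the GEM
print(goal_function(T=1.0, D=0.50, m=m, n=-m, C=c**m))  # nonzero -> off the GEM
```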
Fig. 1.

Schematic representations of the 2 goal-equivalent manifolds (GEMs) (D/T and D·T; solid lines) used in the present experiment. Dashed lines represent ±5% errors with respect to each GEM. Individual points represent sample reaching data from one 400-reach trial from a typical subject. Both plots show perpendicular and tangent coordinate axes used for analysis relative to the GEM. The D/T plot also shows an example of how perpendicular (δP) and tangent (δT) deviations away from these coordinate axes were defined.
Here, we tested two specific goal functions from this class. The first was defined by n = −m and C = c^m, which defined the task of trying to maintain constant average speed (D/T = c) from trial to trial. The second GEM was defined by n = +m and C = c^m, which defined the task of trying to maintain constant D·T = c from trial to trial. In both cases, each GEM was defined entirely by the corresponding task goal, completely independent of how subjects chose to move with respect to the GEM (Cusumano and Cesari 2006). These two tasks were physically orthogonal in the [T, D] task space (Fig. 1). They were also "conceptually" orthogonal since D/T has a direct physical interpretation (i.e., speed), while D·T has no simple physical interpretation. If the physical meaning of the D/T GEM is important for neural control (Hwang and Shadmehr 2005), and not only as an intellectual construct, we might expect the D·T task to be much harder to learn. However, since both tasks could easily be achieved by making similar movements within the same range of the available workspace, it is also reasonable to expect that subjects could learn either task just as easily. We predicted that people would learn each of these two reaching tasks (i.e., performance would improve over time and consolidate across consecutive days), even though they involved far more abstract and ambiguous task goals than those used in previous studies, in which subjects were always given explicit goals (e.g., "hit the target") to achieve (Berret et al. 2011; Cohen and Sternad 2009; Cusumano and Cesari 2006; Schaefer et al. 2012; Schlerf and Ivry 2011; Todorov and Jordan 2002). We also predicted that learning either task first would facilitate subsequent learning of the second, on the assumption that prior experience exploiting the redundancy between D and T in one task would generalize to, and therefore facilitate, learning of the other. Finally, we predicted that, for both tasks, people would exhibit specific trial-to-trial dynamics indicating that they adopted control strategies that actively exploited each task's unique redundancy relationship.
Subjects and protocol.
Ten young healthy right-handed adults (Table 1) participated. Subjects were screened to exclude anyone who reported any history of orthopedic problems or recent upper extremity injuries or was taking medications that may have influenced his or her reaching and/or motor control. All participants provided written informed consent, as approved by the University of Texas Institutional Review Board. Handedness was determined with a modified version of the Edinburgh Inventory (Oldfield 1971). A score of 0/10 indicated a complete left-handed preference, while a score of 10/10 indicated a complete right-handed preference. All subjects scored at least 8/10, indicating strong right-handed dominance.
Table 1.
Participant characteristics
| Characteristic | Value |
|---|---|
| Age, yr | 22.20 ± 1.476 |
| Sex | 6 M / 4 F |
| Body height, m | 1.71 ± 0.070 |
| Body mass, kg | 64.76 ± 11.49 |
| Body mass index, kg/m2 | 21.95 ± 3.184 |
| Upper arm length, m | 0.34 ± 0.025 |
| Forearm length, m | 0.36 ± 0.018 |
All values except sex are means ± SD.
Subjects were tested on two pairs of consecutive days: days 1 and 2 formed the first pair and days 3 and 4 the second, with days 2 and 3 separated by at least 5 days. On each pair of days, one of two different goal functions was used:
f(Ti, Di) = (Di/Ti)^m − c^m    (2a)
or
f(Ti, Di) = (Di·Ti)^m − c^m    (2b)
where m was a positive constant. These then defined the D/T GEM and the D·T GEM, respectively. These two goal functions were selected because their GEMs shared a common point of intersection (Fig. 1) that corresponded to an easily reachable target, [T, D] = [1.0 s, 0.45 m]. Thus, while different covariations between T and D led to very different errors for each goal function, the physical implementation of each task (i.e., muscles involved, average reaching parameters, etc.) was the same for both tasks. This allowed us to test qualitatively very different GEMs that could equally be solved with the same basic reaching movements. Changing the value of the exponent, m, did not alter the location or shape of either GEM but did allow us to vary the sensitivity (Cusumano and Cesari 2006) to errors along each GEM (see below). Odd-numbered subjects were given the D/T task first (days 1 and 2) and the D·T task second (days 3 and 4). Even-numbered subjects were presented with the same two tasks in reverse order.
Subjects sat in a chair attached to the testing device (Fig. 2A). The height of the chair was adjusted so subjects' knees were at a 90° angle. Subjects rested their arm in the arm support and grasped a handle attached to a slider mounted on a low-friction rail (Fig. 2A). The height of the rail was adjusted so the top of the subject's right hand was aligned with the bottom of his/her shoulder when the handle was grasped close to the body with the elbow pointing out.
Fig. 2.

Experimental setup. A: schematic of the physical apparatus, with a test subject making a reaching movement during an experiment while receiving visual feedback on a video monitor mounted in front of him/her. B: schematic of the visual feedback provided to each subject during the “Early Learning,” “Late Learning,” and “Testing” phases of each experiment. The “Scoreboard” is shown at top, above the numerical display of the subject's current “error” and the indicator light. Bottom right: plot of the reaching distance (D) vs. reaching time (T) for each reaching movement. The color of each symbol corresponds to the magnitude of the error for that reaching movement. Here, the 5 most recent movements are shown. Note that no other information relative to the GEM was provided to the subjects. C: schematic of the timeline of trials completed during each day of the experiment. This same schedule was followed on each of the 2 consecutive days for each of the 2 GEM tasks (see text).
Subjects were instructed to start close to the body and to make smooth, continuous out-and-back reaching movements (Fig. 2A). They were instructed to reach as near or as far as they desired, at whatever speed they desired. Handle kinematics were recorded with a rotary encoder and analyzed with custom LabVIEW (National Instruments, Austin, TX) software. For the ith reaching movement, the software computed the net reaching distance (Di) and the total time (Ti) required to reach out and back. These were then used to calculate the relative percent goal-level error (Ei) with respect to each specified GEM:
Ei = 100% × [(Di/Ti)^m − c^m] / c^m    (3a)
or
Ei = 100% × [(Di·Ti)^m − c^m] / c^m    (3b)
For both GEMs, the value of m determined the sensitivity to errors in Ti and Di. For the same absolute body-level errors in Ti and Di (i.e., deviations Ti − T* and Di − D* from the closest point [T*,D*] on the GEM), smaller positive values of m led to smaller percent errors (Ei), while larger values of m led to larger percent errors. Thus varying m from smaller to larger values allowed us to directly manipulate the perceived level of difficulty of each task from easier to harder.
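As an illustration of this error computation and its dependence on m (a sketch only, assuming the relative percent-error forms reconstructed in Eqs. 3a and 3b above; this is not the original LabVIEW code):

```python
import numpy as np

def percent_error_ratio_gem(T, D, c, m):
    """Relative % goal-level error for the D/T GEM (form assumed in Eq. 3a)."""
    return 100.0 * ((D / T)**m - c**m) / c**m

def percent_error_product_gem(T, D, c, m):
    """Relative % goal-level error for the D*T GEM (form assumed in Eq. 3b)."""
    return 100.0 * ((D * T)**m - c**m) / c**m

# The same body-level deviation produces larger % errors as m increases:
for m in (0.5, 0.7, 1.0, 1.35):
    print(m, round(percent_error_ratio_gem(T=1.0, D=0.50, c=0.45, m=m), 1))
```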
After each reach, subjects received feedback (Fig. 2B). On the display screen in front of them, the error was displayed and a marker was plotted in the [T, D] plane located at their reaching time (Ti) and distance (Di). The color of the marker corresponded to the magnitude of their error. Errors with magnitudes |Ei| < 5% were green, errors with magnitudes 5% < |Ei| < 25% were yellow, errors with magnitudes 25% < |Ei| < 50% were orange, errors with magnitudes 50% < |Ei| < 75% were red, and errors with magnitudes |Ei| > 75% were shown as red ×s. For each error level, a different sound was also played, ranging from a pleasant sound for smaller errors to a very unpleasant sound for larger errors. The meaning of these colors and sounds was explained to each subject prior to each day's experiment. Most importantly, the GEM itself was never directly displayed to the subjects (Fig. 2B). This is quite different from other recent experiments involving redundant reaching tasks, where subjects were explicitly shown line and/or arc-shaped targets to reach to (Berret et al. 2011; Schaefer et al. 2012; Schlerf and Ivry 2011). Here, subjects were only told that there was a goal for the reaching movements, which was some combination of distance and time, and that more than one combination could achieve the goal. They were only instructed to minimize the errors, as presented to them on the screen.
Additionally, the experiments were broken up into blocks of 50 consecutive movements each. Beyond the error feedback, subjects also accumulated "points" for each reaching movement i, up to a maximum score, Sb, of 100 points for trial block b, calculated as
| (4) |
Subjects were told that smaller errors would result in higher scores and to try to score as high as they could within each block of trials. A bar at the top of the video screen (Fig. 2B) tracked each subject's scores across each block of trials. This allowed subjects to observe any improvements in their own performance over longer periods of time, and thus provided greater motivation/incentive to improve.
Each of the 4 days of testing followed the same sequence of 4 phases: "Early Learning," "Exploration," "Late Learning," and "Testing" (Fig. 2C). For Early Learning, each subject completed 4 blocks of 50 movements each, where the level of task difficulty was increased on each consecutive block by increasing the value of m in Eq. 2 across the values m ∈ {0.5, 0.7, 1.0, 1.35}. For both GEMs, since smaller (or larger) values of m generated smaller (or larger) Ei (Eqs. 3) for the same movements, increasing m therefore increased the perceived difficulty of the task. Subjects were instructed to try to minimize their errors (and maximize their scores) within each trial/block. They were given visual feedback showing a running display of their five most recent reaching errors on the [T, D] graph, along with their current overall scores (Fig. 2B). For the Exploration phase, subjects completed 200 consecutive movements, all at m = 1.35, where they were given cumulative visual [T, D] feedback about all 200 movements. Subjects were instructed to try to fill up the space on the [T, D] graph by making movements with a wide range of combinations of T and D. Subjects were not provided overall scores for this phase. For Late Learning, each subject completed 2 blocks of 50 movements each, all at m = 1.35. Subjects were instructed to try to minimize their errors and maximize their scores within each block. They were again given visual [T, D] feedback about only their five most recent movements and about their overall scores (Fig. 2B). Finally, for Testing, each subject completed 8 additional blocks of 50 movements each, all under the same conditions as Late Learning.
Data collection and analyses.
Net reaching distances (Di) and times (Ti) were recorded for all movements made during the Early Learning and Testing phases of each day. These data were all exported from LabVIEW to MATLAB (MathWorks, Natick, MA) for final processing and analysis. All statistical analyses were performed in Minitab 15 (Minitab, State College, PA).
To quantify learning, the percent errors (Eqs. 3) from the first block of 50 reaches from the Early Learning phase of each experiment were plotted against trial number and fitted with an exponential (e.g., Fig. 3B):
Êi = a·e^(−i/τ)    (5)
where Êi was the predicted value of each percent error (Eqs. 3) and i was the trial number. Thus the fitted parameter a defined the expected initial error at trial i = 0, and τ defined the "mean lifetime" (in number of trials) over which subjects minimized their errors. Smaller values of τ thus indicated faster rates of learning. For each GEM, 95% confidence intervals for τ were computed across subjects for both day 1 and day 2. Significant decreases in Ei over time, as reflected in values of τ > 0, were taken as evidence that subjects did indeed learn how to perform each of these abstract generalized reaching tasks.
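A minimal sketch of this fitting step (using SciPy rather than the original analysis software; the data below are placeholder values, not experimental results):

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(i, a, tau):
    """Eq. 5: predicted % error = a * exp(-i / tau), with i the trial number."""
    return a * np.exp(-i / tau)

trials = np.arange(50)                                   # first Early Learning block
rng = np.random.default_rng(0)
errors = 40.0 * np.exp(-trials / 20.0) + rng.normal(0.0, 3.0, size=50)  # placeholder

(a_hat, tau_hat), _ = curve_fit(learning_curve, trials, errors, p0=(errors[0], 10.0))
print(f"initial error a = {a_hat:.1f}%, mean lifetime tau = {tau_hat:.1f} trials")
```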
Fig. 3.

Early Learning data for each reaching task. A: average (±SE) % error vs. trial number for all subjects on both test days for both reaching tasks. Note that the vertical scales for the data for the 2 tasks are quite different. B: % error vs. trial number for both test days from 1 typical subject for the D/T task (top) and from another typical subject for the D·T task (bottom). Parameter values for the exponential fits (Eq. 5) to each curve are shown in each subplot. C: average τ values (in no. of trials) for day 1 and day 2 for the D/T task (top) and the D·T task (bottom). Error bars denote ±95% confidence intervals.
Consolidation of learning (Shadmehr and Holcomb 1997) was tested by comparing values of a from these exponential fits to determine whether subjects performed better 24 h after initial exposure to the task. Two-factor (day × subject) mixed-effects, balanced analysis of variance (ANOVA) tests were performed on the a data for each GEM separately. Day (1 vs. 2) was the fixed factor, and subjects (n = 10) was the random factor. Significant differences between the initial performance (a) on the first and second days of exposure to either GEM were taken as evidence that learning did consolidate over 24 h. Initial analysis of the residuals indicated that these a data should be log-transformed to meet the normality and linearity requirements of the ANOVA test. Therefore, these ANOVAs were repeated on the log(a) data.
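For this balanced design (day as the single fixed within-subject factor, subjects as the random factor, one log(a) value per subject per day), a repeated-measures ANOVA yields the same F test; the sketch below uses statsmodels and a hypothetical data frame as a stand-in for the actual log(a) values.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 11), 2),        # 10 subjects
    "day": np.tile([1, 2], 10),                       # day 1 vs. day 2
    "log_a": np.log(rng.uniform(5.0, 40.0, size=20)), # placeholder log(a) values
})

# Day is the within-subject (fixed) factor; subject is the random blocking factor.
print(AnovaRM(df, depvar="log_a", subject="subject", within=["day"]).fit())
```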
Generalization of learning (Krakauer et al. 2006; Poggio and Bizzi 2004) was tested by comparing values of a from the initial exposure to each task on day 1. A two-factor (task × order) fixed-effects, repeated-measures, balanced ANOVA was performed to determine whether performing either task (D/T vs. D·T) first either enhanced or interfered with subsequent learning of the other task. Here also, initial analysis of the residuals indicated that these a data should be log-transformed to meet the normality and linearity requirements of the ANOVA test. Therefore, this ANOVA was repeated on the log(a) data.
Finally, several tests were performed to quantify final task performance and to determine whether and to what extent subjects exploited the redundancy inherent in each task. These analyses were all conducted on the data from the final 400 reaching movements executed during the Testing phase (Fig. 2C) of each day of exposure to each task (e.g., Fig. 6). To perform these analyses, we first rescaled the experimental data for each task to unit variance (Dingwell et al. 2010), i.e., T̃ = Ti/σ(T) and D̃ = Di/σ(D), where σ(T) and σ(D) were the standard deviations of T and D, respectively, computed over the whole time series. For each relevant task, we then also rescaled each GEM accordingly. For the D/T GEM:
D̃/T̃ = c̃,  where c̃ = c·σ(T)/σ(D)    (6a)
and for the D·T GEM:
D̃·T̃ = c̃,  where c̃ = c/[σ(T)·σ(D)]    (6b)
Therefore, by appropriately redefining the values of the relevant constants, we rescaled both each task definition and the experimental data into a space where σ(D̃) = σ(T̃) = 1. This then provided our subsequent analyses with an intuitive reference for comparing other measures of variance (Dingwell et al. 2010).
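A sketch of this normalization step (variable and function names are ours; the rescaled GEM constants follow Eqs. 6a and 6b as reconstructed above):

```python
import numpy as np

def rescale_to_unit_variance(T, D, c, task):
    """Rescale [T_i, D_i] time series to unit variance and redefine the GEM
    constant accordingly (Eqs. 6a/6b as reconstructed above)."""
    T, D = np.asarray(T, dtype=float), np.asarray(D, dtype=float)
    sT, sD = T.std(), D.std()
    T_tilde, D_tilde = T / sT, D / sD
    if task == "D/T":                 # D/T = c  ->  D~/T~ = c * sT / sD
        c_tilde = c * sT / sD
    else:                             # D*T = c  ->  D~ * T~ = c / (sT * sD)
        c_tilde = c / (sT * sD)
    return T_tilde, D_tilde, c_tilde
```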
Fig. 6.

Example time series data from the final “Testing” phase (Fig. 2C) of 1 typical subject for each of the 2 GEM tasks. Plots show total reaching distance (Di), reaching time (Ti), and deviations tangent to (δT) and perpendicular to (δP) the GEM across all 400 movements made. Note that the scales on the plots for δT and δP are different. a.u., Arbitrary units.
From these rescaled data, we then determined each subject's average performance, i.e., [T̄,D̄]. Since [T̄,D̄] may not lie exactly on the GEM, we then defined each subject's “preferred operating point” along the GEM, [T*,D*], as the point on the GEM that was closest to [T̄,D̄]. The trial-to-trial fluctuations in each subject's performance then constituted small deviations away from their preferred operating point, i.e., T̃i′ = T̃i − T* and D̃i′ = D̃i − D*. By linearizing around the preferred operating point, we then derived expressions to compute the deviations tangent to (δT) and perpendicular to (δP) the GEM (see appendix). For the D/T GEM, this yielded
δT = (T̃i′ + c̃·D̃i′) / √(1 + c̃²),  δP = (D̃i′ − c̃·T̃i′) / √(1 + c̃²)    (7a)
where c̃ was the goal value for the D/T reaching task (Eq. 6a). Following this same process for the D·T GEM (see appendix) yielded
δT = (T*·T̃i′ − D*·D̃i′) / √(T*² + D*²),  δP = (D*·T̃i′ + T*·D̃i′) / √(T*² + D*²)    (7b)
For both GEMs, the δT deviations are “goal equivalent” because any deviations along either GEM do not affect the task goal. Conversely, the δP deviations are directly “goal relevant” because any deviations perpendicular to the GEM will constitute an error with respect to the goal (Cusumano and Cesari 2006).
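These projections can be computed as sketched below (Python, not the original analysis code; the formulas follow Eqs. 7a and 7b as reconstructed above, and the sign conventions for δT and δP are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gem_deviations_ratio(T_tilde, D_tilde, c_tilde):
    """delta_T / delta_P fluctuations about the preferred operating point on the
    rescaled D/T GEM (the line D~ = c~ * T~), per Eq. 7a as reconstructed above."""
    Tbar, Dbar = T_tilde.mean(), D_tilde.mean()
    s = (Tbar + c_tilde * Dbar) / (1.0 + c_tilde**2)        # closest point on the line
    T_star, D_star = s, c_tilde * s
    uT, uD = T_tilde - T_star, D_tilde - D_star
    norm = np.sqrt(1.0 + c_tilde**2)
    return (uT + c_tilde * uD) / norm, (uD - c_tilde * uT) / norm   # delta_T, delta_P

def gem_deviations_product(T_tilde, D_tilde, c_tilde):
    """Same for the rescaled D*T GEM (the hyperbola D~ * T~ = c~), per Eq. 7b."""
    Tbar, Dbar = T_tilde.mean(), D_tilde.mean()
    cost = lambda T: (T - Tbar)**2 + (c_tilde / T - Dbar)**2
    T_star = minimize_scalar(cost, bounds=(1e-3, 10.0 * Tbar), method="bounded").x
    D_star = c_tilde / T_star
    uT, uD = T_tilde - T_star, D_tilde - D_star
    norm = np.sqrt(T_star**2 + D_star**2)
    return (T_star * uT - D_star * uD) / norm, (D_star * uT + T_star * uD) / norm
```

Because the data were already normalized to unit variance, the standard deviations of these two outputs can be compared directly against 1, as described next.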
We then analyzed these δT and δP time series in two ways. First, we calculated the standard deviations (σ) of each δP and δT time series to quantify the structure of these variance components (Cusumano and Cesari 2006; Schöner and Scholz 2007; Todorov 2004). Since the original Di and Ti time series were already normalized to unit variance, we predicted that subjects would exhibit σ(δT) > 1 and σ(δP) < 1, consistent with trying to minimize those deviations that directly impacted task performance (i.e., δP) more than those that are irrelevant to task performance (i.e., δT).
However, standard deviations only quantify the average magnitude of differences across all trials, regardless of temporal order. They yield no information about how each trial affects subsequent trials (Dingwell and Cusumano 2000; Dingwell et al. 2010). In repetitive tasks including reaching, each repetition of a movement is strongly influenced by the immediately preceding movement (Dingwell et al. 2010; Scheidt et al. 2001) but not by movements farther in the past (Scheidt et al. 2001). Such findings are consistent with many accepted models of single-step trial-to-trial learning (Cheng and Sabes 2007; Fine and Thoroughman 2007; Smith et al. 2006; van Beers 2009; Verstynen and Sabes 2011). Therefore, to directly quantify the influence of each reaching movement on the subsequent movement, we modeled each time series as
Xi+1 = λ·Xi + ξi    (8)
where X indicated any of the relevant time series (X ∈ {D, T, δP, δT}) and ξ was a noise term. The parameter λ can be interpreted as a stability multiplier (Dingwell and Kang 2007; Strogatz 1994) that quantifies the strength of the response to external disturbances. It also quantifies the correlation between consecutive movements. λ > 0 indicates statistical persistence: increases (or decreases) in X are more likely to be followed by further increases (or decreases) in X. λ = 0 indicates uncorrelated “white” noise, and λ < 0 implies antipersistence: increases (or decreases) in X are more likely to be followed by subsequent decreases (or increases) in X. In the context of control, statistical persistence (λ > 0) indicates variables that are less tightly regulated (i.e., they “drift” for several consecutive trials before being corrected) (Dingwell and Cusumano 2010). Variables that are more tightly regulated will exhibit less persistence (smaller λ) or possibly antipersistence (λ < 0) (Dingwell et al. 2010). We therefore predicted that subjects would exhibit strong statistical persistence (large λ) for both Di and Ti time series for both tasks, because of the inherent redundancy between D and T in each task (Fig. 1). We also predicted that subjects would exhibit much stronger statistical persistence for δT time series than for δP time series, i.e., λ(δT) >> λ(δP), consistent with actively correcting the relevant δP fluctuations more rapidly than the irrelevant δT fluctuations. In all cases, we anticipated that |λ| < 1, as required for stable deviations about the operating point [T*,D*].
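A sketch of how λ in Eq. 8 can be estimated from a 400-trial series by least squares on the mean-removed data (illustrative code, not the original analysis software; the two example series are synthetic):

```python
import numpy as np

def lag1_persistence(x):
    """Least-squares estimate of lambda in Eq. 8 (x_{i+1} = lambda * x_i + noise),
    computed on deviations of x from its mean."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.sum(x[:-1] * x[1:]) / np.sum(x[:-1] ** 2)

rng = np.random.default_rng(2)
persistent = np.zeros(400)
for i in range(399):
    persistent[i + 1] = 0.8 * persistent[i] + rng.normal()  # built with lambda = 0.8
white = rng.normal(size=400)                                # uncorrelated noise

print(lag1_persistence(persistent))  # ~0.8: fluctuations drift, corrected slowly
print(lag1_persistence(white))       # ~0.0: fluctuations corrected every trial
```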
Thus the final dependent measures for each subject, for each task, for each day, consisted of the variances of the GEM variables, σ(δT) and σ(δP), and the one-step autocorrelations of both the original time series, λ(Di) and λ(Ti), and the GEM variables, λ(δT) and λ(δP). First, the data for λ(Di) and λ(Ti) were subjected to a four-factor (task × variable × day × subject) mixed-effects, balanced ANOVA, where task (D/T vs. D·T), variable (Di vs. Ti), and day (1 vs. 2) were fixed factors and subjects (n = 10) was a random factor. Similarly, the data for both σ(δT) and σ(δP) and λ(δT) and λ(δP) were first subjected to four-factor (task × direction × day × subject) mixed-effects, balanced ANOVAs, where task, direction (δT vs. δP), and day (1 vs. 2) were fixed factors and subjects was the random factor. In all cases, there were no significant "day" effects, and so each analysis was rerun as a reduced three-factor model, treating the data from each of the two consecutive days as two repeated independent observations. These models tested for main effects for each factor and also for any interaction effects.
RESULTS
Learning of each task.
When first exposed to both reaching tasks, subjects reduced their initial errors quickly (Fig. 3A). Subjects also made much smaller initial errors at the beginning of day 2 (Fig. 3A). Most of these decreasing error trends were well-fit with a decaying exponential function (Eq. 5; Fig. 3B). Average time constants (τ) ranged from ∼30 trials to ∼60 trials (Fig. 3C), indicating that most subjects learned both tasks within the first one or two trial blocks. Of the 40 total learning curves analyzed (10 subjects × 2 tasks × 2 days), 4 yielded extreme outliers for τ and are not included in Fig. 3C. Each of these cases corresponded to subjects who exhibited very low initial errors (a) and so had little room to improve, producing very "flat" curve fits. Although initial errors (a) were higher for the D·T task than for the D/T task (Fig. 3A), subjects also appeared to learn the D·T task slightly more quickly (i.e., slightly smaller τ) than they did the D/T task (Fig. 3C). However, these smaller τ values for the D·T task also at least partly reflect the much higher initial errors (a) that subjects also exhibited for this task.
Consolidation of learning.
Subjects decreased their initial errors (a) from day 1 to day 2 for both the D/T task (Fig. 4A) and the D·T task (Fig. 4B). After log-transformation, the decreases in log(a) from day 1 to day 2 were statistically significant for both the D/T task (P = 0.047; Fig. 4C) and the D·T task (P = 0.004; Fig. 4D). These decreases in initial error from day 1 to day 2 for both tasks indicate that subjects did consolidate learning. Therefore, subjects both reduced their initial errors (Fig. 3) and exhibited consolidation across consecutive days (Fig. 4), and thus learned each of these two abstract reaching tasks.
Fig. 4.

Consolidation of learning in the Early Learning phase from day 1 to day 2. Error bars denote ±95% confidence intervals. A and B: initial % error values (i.e., a in Eq. 5) for day 1 vs. day 2 for both reaching tasks. C and D: the same initial % error values as shown in A and B, after log transformation to meet requirements for statistical tests. Initial errors were significantly reduced at the beginning of day 2 relative to day 1 for both the D/T task and the D·T task.
Generalization of learning.
Initial examination of the task × order comparisons (Fig. 5A) suggested differences in how subjects initially responded to the D/T and D·T reaching tasks depending on whether each task was performed first or second (i.e., after prior exposure to the other task). After log-transformation (Fig. 5B), the ANOVA revealed both a highly significant main effect for task (P = 9.3 × 10−6) and a highly significant task × order interaction effect (P = 0.007). Subjects who learned the D·T task second, after having learned the D/T task first, exhibited smaller initial errors than subjects who learned the D·T task first. Thus learning the D/T task first helped facilitate subsequent learning of the D·T task. Conversely, subjects who learned the D/T task second, after having learned the D·T task first, exhibited larger initial errors than subjects who learned the D/T task first. Thus learning the D·T task first interfered with subsequent learning of the D/T task.
Fig. 5.

Generalization from each reaching task to the other. Plots show data for initial % error values (i.e., a in Eq. 5) for the beginning of the Early Learning phase of each task. Solid lines represent the difference in initial % error for the D/T task between those who completed it first (left) and those who completed it second (right). Dashed lines represent the difference in initial % error for the D·T task between those who completed it “first” (left) and those who completed it “second” (right). Error bars denote ±95% confidence intervals for each mean. A: initial % error for subjects tested on each task first vs. second. B: the same initial % error values as shown in A, after log transformation to meet requirements for statistical tests. Initial % errors were significantly different between the 2 tasks (P = 9.3 × 10−6). There was also a significant task × order interaction effect (P = 0.007), indicating that learning of each task affected the initial responses to the other task differently.
Exploiting redundancy.
Over the final 400 "Testing" trials (Fig. 2C) for each task, subjects exhibited relatively consistent behavior across most trials (Fig. 6). However, the raw data did typically exhibit statistical persistence in the form of low-frequency oscillations like those seen in Fig. 6. These trends were particularly noticeable in the Di, Ti, and δT time series, but less so in the δP time series. Such statistical persistence cannot be quantified with summary statistical measures like standard deviations. Nevertheless, these slow oscillations are important, as they reflect a weaker level of control (Dingwell and Cusumano 2010; Dingwell et al. 2010), the degree of which can be quantified by λ (Eq. 8).
Relative to the GEM for both tasks, standard deviations in the δP direction were significantly smaller than standard deviations in the δT direction (P = 1.4 × 10−13; Fig. 7). Likewise, there was a highly significant GEM × direction interaction effect (P = 8.8 × 10−5), indicating that this difference was much greater for the D/T task than for the D·T task. Follow-up ANOVA tests conducted on the data from each task separately confirmed that these direction effects were significant for both the D/T task (P = 5.4 × 10−6; Fig. 7A) and for the D·T task (P = 0.007; Fig. 7B) individually. Variances in both directions were statistically indistinguishable at the end of learning between day 1 and day 2 for both tasks (P = 0.957). Thus, for both tasks, the variance in the trial-to-trial data exhibited the largest deviations, on average, along the respective GEM and the smallest deviations, on average, perpendicular to it. However, these differences were far more pronounced for fluctuations relative to the D/T GEM.
Fig. 7.

Standard deviations of the trial-to-trial fluctuations perpendicular (δP) vs. tangent (δT) to the D/T GEM (A) and the D·T GEM (B) from the final “Testing” phase (Fig. 2C) of each experiment. Shown are box plots of standard deviations computed across all 400 movements of the final “Testing” phase of each day of each experiment for all subjects. Data points for individual subjects are shown as single dots. Fluctuations in both reaching distance (D) and time (T) were first normalized to unit variance prior to analysis, so the data here are shown relative to a standard deviation of 1. Subjects exhibited significantly greater variability along the GEM than perpendicular to it (P = 1.4 × 10−13) for both the D/T task (A) and the D·T task (B). Likewise, there was a significant task × direction interaction effect (P = 8.8 × 10−5), indicating that the difference between the δP and δT directions was significantly greater for the D/T task than for the D·T task.
For reaching distance D and time T, values of λ were all well above zero (Fig. 8), indicating strong statistical persistence of trial-to-trial fluctuations in these time series. Across both tasks, subjects exhibited values of λ that were significantly greater for D than for T (P = 0.006). However, there were no significant differences between the two GEM tasks (P = 0.613) and no significant GEM × variable interaction effects (P = 0.186). Likewise, there were no significant differences between day 1 and day 2 (P = 0.550). Thus subjects regulated trial-to-trial fluctuations in both D and T in approximately the same way for both reaching tasks. For both tasks, λ values were relatively high, indicating that subjects did not correct deviations in either D or T very quickly.
Fig. 8.

Statistical persistence (λ) as defined by Eq. 8 for time series of reaching distances (Di) and reaching times (Ti) for the final “Testing” phases (Fig. 2C) of each experiment for the D/T task (A) and the D·T task (B). Shown are box plots of λ values computed across all 400 movements of the final “Testing” phase of each day of each experiment for all subjects. Data points for individual subjects are shown as single dots. Subjects exhibited somewhat greater statistical persistence for D than for T across both tasks (P = 0.006) but did not exhibit any differences between the D/T task (A) and the D·T task (B) (P = 0.613).
Conversely, for the GEM-related variables, there was a highly significant main effect difference between the δP and δT directions across both tasks (P = 1.8 × 10−13; Fig. 9). Follow-up ANOVA tests conducted on the data from each task separately demonstrated that these direction effects remained significant for both the D/T task (P = 5.4 × 10−6; Fig. 9A) and the D·T task (P = 5.2 × 10−9; Fig. 9B) individually. In contrast to the variability results (Fig. 7), however, the GEM × direction interaction effect here was not significant (P = 0.173). There were likewise no differences between day 1 and day 2 (P = 0.772). Although still positive, the λ values for fluctuations in the δP direction were also substantially smaller than those for either of the original movement variables, D and T (Fig. 8). Thus, from trial to trial, subjects corrected the GEM-relevant deviations in δP significantly more quickly than they did the irrelevant δT deviations along each GEM, and also more quickly than deviations in either D or T (Fig. 8). Additionally, they did so in a similar manner for both GEM tasks.
Fig. 9.

Statistical persistence (λ) as defined by Eq. 8 for the trial-to-trial fluctuations perpendicular (δP) vs. tangent (δT) to the D/T GEM (A) and the D·T GEM (B) from the final “Testing” phase (Fig. 2C) of each experiment. Shown are box plots of λ values computed across all 400 movements of the final “Testing” phase of each day of each experiment for all subjects. Data points for individual subjects are shown as single dots. Subjects exhibited significantly less statistical persistence (smaller λ) for task-relevant δP fluctuations than for task-irrelevant δT fluctuations across both tasks (B: P = 1.8 × 10−13). However, in contrast to the variability results (Fig. 7), the GEM × direction interaction effect here was not statistically significant (P = 0.173).
DISCUSSION
Determining how humans generate accurate and repeatable goal-directed movements despite inherent redundancy (Bernstein 1967; Scott 2004; Todorov 2004) and biological noise (Faisal et al. 2008; McDonnell and Ward 2011; Osborne et al. 2005; Stein et al. 2005) remains a fundamental quest in motor neuroscience research. Here we constructed a novel class of reaching tasks, defined by a generalized goal function (Eq. 1), for which the end goal of each task was itself inherently redundant. Adopting the formal goal function approach (Cusumano and Cesari 2006) allowed us to objectively and unambiguously define the GEM for each specific individual task, independently of the performance of any individual performer. Furthermore, we extended previous work on learning of redundant tasks (Cohen and Sternad 2009; Müller and Sternad 2004) by testing whether humans would exhibit consolidation of learning (Brashers-Krug et al. 1996; Shadmehr and Brashers-Krug 1997; Shadmehr and Holcomb 1997) and/or generalization of learning (Krakauer et al. 2006; Poggio and Bizzi 2004; Shadmehr and Moussavi 2000) from either task to the other. We tested the general hypothesis that humans would learn to exploit the task redundancies made available in each of these two tasks, even though there was no necessity to do so, nor any obvious performance advantage of doing so.
Most subjects in this study reduced their initial errors quickly after they were first exposed to each task (Fig. 3A), thus indicating that they did indeed learn to improve their performance in each task. Moreover, subjects' initial errors were lower at the beginning of the second day of training for both tasks (Fig. 4), indicating that there was significant consolidation of learning (Brashers-Krug et al. 1996). These findings extend previous work (Cohen and Sternad 2009; Cusumano and Cesari 2006; Müller and Sternad 2004) by demonstrating that humans can learn redundant tasks defined by more abstract "goals" that, unlike explicit instructions such as "hit the target," have no direct physical meaning. However, even though these two tasks could in theory be equally accomplished with the same, easily reachable movement strategy, clear differences in performance were also observed. The much higher initial errors for the D·T task (Fig. 3) suggest that this task was more difficult to learn, and/or more novel, than the D/T task. This might be because the D/T task corresponded to maintaining the same average movement speed over consecutive trials. Such a task may be more intuitively easy to learn and/or simply easier to accomplish biomechanically. Because muscle spindles provide feedback on velocity (Grill and Hallett 1995), the motor control system has an innate sense of velocity (Hwang and Shadmehr 2005). This likely partly explains the observed differences between the two GEMs tested here: humans can innately sense their performance relative to the D/T GEM much more readily than they can relative to the D·T GEM. However, the learning rates (Fig. 3B) were also faster for the D·T task. Therefore subjects were able to achieve roughly the same overall error rates for both tasks after only ∼30–60 trials (Fig. 3C). Thus, despite its novelty, subjects still learned the D·T task relatively rapidly.
Interestingly, learning the D/T task first helped facilitate (Poggio and Bizzi 2004) subsequent learning of the D·T task, whereas learning the D·T task first interfered with (Krakauer et al. 2006) subsequent learning of the D/T task (Fig. 5). This finding is fully consistent with the differences in initial exposure errors (Fig. 3A) and further reinforces the idea that even though both tasks could be readily accomplished with very similar and easily achievable movements, the D·T task clearly presented a more novel experience for our subjects. Once they had readjusted their normal patterns of both perception and action to accomplish the D·T task, it was then more difficult to go back to the “simpler” task to be performed with the D/T GEM (Fig. 5). This strongly suggests that even though humans can generally learn to exploit task redundancy to help maximize performance while minimizing control effort (Cusumano and Cesari 2006; Dingwell et al. 2010; Todorov and Jordan 2002), this ability can depend greatly on the specific task being performed. Thus neuromuscular (Grill and Hallett 1995; Hwang and Shadmehr 2005), biomechanical (Darainy et al. 2009; Dingwell et al. 2010; Valero-Cuevas et al. 2009), and perceptual (Faisal and Wolpert 2009; Friston 2011; Osborne et al. 2005) factors need to be strongly considered when trying to determine whether, and to what extent, humans can exploit inherent task redundancies to perform any specific task.
Time series analyses of the final learned behaviors also yielded substantial evidence that subjects directly exploited the inherent redundancy between D and T in each task. For both tasks, subjects exhibited greater variability along each GEM than perpendicular to it (Fig. 7), although this effect was significantly less pronounced for the D·T task (Fig. 7B). More importantly, for both tasks, subjects actively corrected the δP deviations perpendicular to each GEM more rapidly than the δT deviations along each GEM (Fig. 9). Remarkably, this effect was equally pronounced for both the D·T and D/T tasks (Fig. 9). This suggests that subjects exploited the redundancy presented by the D·T GEM just as much as that presented by the D/T GEM, despite the greater variance ratios observed for the D/T task (Fig. 7). The similarity in the control strength for both GEMs (Fig. 9) also occurred despite the fact that the D·T task seemed perceptually harder to learn than the D/T task (Figs. 3–5). This suggests that the observed differences in variability in the final learned performance (Fig. 7) were not related to the controller itself or how it was affected by this initial perceptual difference. The different variability structures for these two tasks might be due instead to differences in sensory/perceptual parameters that are used to define the desired action on each subsequent trial (Osborne et al. 2005), or could possibly reflect how subjects trade off sensory uncertainty and movement uncertainty (Faisal and Wolpert 2009).
Data analysis methods currently used to substantiate UCM (Schöner and Scholz 2007), MIP (Todorov 2004), and TNC (Cohen and Sternad 2009) predictions would not have captured these relevant features because they only consider how variability is structured in the data (e.g., Fig. 7). Thus quantifying variance ratios alone may provide incomplete insights into control (Dingwell et al. 2010; Valero-Cuevas et al. 2009). Our results demonstrate that these analyses should at least be supplemented with more detailed temporal analyses that directly quantify how fluctuations on each trial are corrected on the next (Dingwell et al. 2010; Gates and Dingwell 2008). This is consistent with many studies that highlight the importance of such trial-to-trial analyses (Cheng and Sabes 2007; Fine and Thoroughman 2007; Scheidt et al. 2001; Smith et al. 2006; Thoroughman et al. 2007; van Beers 2009; Verstynen and Sabes 2011).
Of critical importance to this study is the fact that there was no explicit requirement, either physical or experimental, to exploit the redundancy available in either task presented. Subjects could have learned to minimize their errors without being aware of the presence of either GEM or exploiting the redundancies presented (Dingwell et al. 2010). Subjects were never explicitly told where the GEM was for either task, or even that there was one. They were only given minimal indirect feedback (Fig. 2B) about which movements (i.e., combinations of D and T) led to what error magnitudes (larger vs. smaller). What they did with that information was entirely up to them. Subjects were never told how to reduce their errors, so they were free to reduce their errors using any strategy they chose. Thus there were no “constraints” imposed by either task on the possible solution strategies subjects could have chosen. Both tasks could easily have been achieved by using the same reaching movements with the same combinations of D and T (Fig. 1). Thus there were also no neuromuscular or biomechanical differences in what movements subjects could have made to achieve equal success in either task. Subjects could have chosen any number of alternative strategies that simply “ignored” the redundancy defined by the GEM.
One simple and intuitive alternative strategy not utilized by our subjects would have been to minimize some relevant physiological parameter like energy cost. This would imply choosing a single (energetically “optimal”) operating point, [T*,D*], along either GEM and then trying to minimize errors around that point (Dingwell et al. 2010). In that case, however, we would not expect to see the clear anisotropy in the variance distributions that we did (Fig. 7). Likewise, subjects could have done this and achieved the same ultimate task performance. Imagine, for example, taking either data set shown in Fig. 1 and compressing the data only along the respective êT direction, without making any changes in the corresponding êP direction. These new data would exhibit the exact same sequence of δP deviations from the GEM and thus the exact same sequence of errors. Had any subjects done this, they could have achieved the exact same performance in either or both tasks, except that they would have exhibited variance ratios much closer to 1 (or possibly even less than 1). Indeed, this type of behavior was observed in similar tasks where subjects were presented explicit line/arc-shaped targets (Schlerf and Ivry 2011). However, our subjects clearly did not do this for either task here (Fig. 7), most likely because to do so would have required additional control effort.
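This thought experiment can be made explicit with a small sketch (a purely hypothetical manipulation of the δT/δP series defined in methods, not something any subject actually did):

```python
import numpy as np

def compress_along_gem(delta_T, delta_P, factor=0.1):
    """Shrink only the goal-equivalent (delta_T) fluctuations. The sequence of
    goal-relevant deviations (delta_P), and hence of task errors, is unchanged,
    but the variance ratio sigma(delta_T)/sigma(delta_P) collapses toward
    factor * (its original value)."""
    delta_T = np.asarray(delta_T, dtype=float)
    delta_P = np.asarray(delta_P, dtype=float)
    # ratio_before = np.std(delta_T) / np.std(delta_P)
    # ratio_after  = np.std(factor * delta_T) / np.std(delta_P)  # ~ factor * ratio_before
    return factor * delta_T, delta_P.copy()
```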
A second very reasonable possibility is that the variance observed in the data (Fig. 7) could have been structured for biomechanical or other reasons not related to trial-to-trial control of these movements (Dingwell et al. 2010; Valero-Cuevas et al. 2009). For example, if the variance were structured for biomechanical reasons alone, we would expect those biomechanical factors to affect each reaching movement in the same way so that they would not vary from any one movement to the next. Indeed, one can create “surrogate” data (Dingwell and Cusumano 2000; Schreiber and Schmitz 2000; Theiler et al. 1992) to directly test this alternative hypothesis by taking the exact same set of [Ti, Di] data points for any given sequence of trials and shuffling them in random temporal order (Dingwell and Cusumano 2010). By construction, these surrogates would exhibit not only the exact same errors relative to the GEM, and thus the exact same performance with respect to the task, but also the exact same variance ratios as the original data (Fig. 7). However, if the data were temporally uncorrelated in this way, we would expect to find λ(δT) ≈ λ(δP) ≈ 0 (Dingwell and Cusumano 2010). However, subjects also clearly did not do this for either task (Fig. 9).
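A sketch of this surrogate construction (again hypothetical; applying the lag1_persistence estimator from the earlier sketch to the shuffled series would confirm that λ collapses toward zero):

```python
import numpy as np

def shuffled_surrogate(T, D, seed=0):
    """Randomly reorder the (T_i, D_i) pairs across trials, keeping each pair intact.
    Every trial's error relative to the GEM, and therefore the variance structure,
    is preserved exactly, but any trial-to-trial temporal correlation is destroyed,
    so lambda(delta_T) and lambda(delta_P) should both fall to ~0."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(T))
    return np.asarray(T)[order], np.asarray(D)[order]
```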
The nervous system estimates both motor errors and the sources of those errors to guide continued adaptation (Berniker and Körding 2008; Braun et al. 2009; Faisal et al. 2008). Exposing humans to tasks that share similar structural characteristics but vary randomly can sometimes help facilitate the ability to generalize to novel tasks (Braun et al. 2009). The neural structures involved in decision making may even deliberately insert noise into the process to enhance adaptation (Carpenter and Reddi 2001; Reddi and Carpenter 2000). Similar capacities were recently demonstrated even in highly learned (“crystallized”) adult bird song (Tumer and Brainard 2007), where residual variability in this skill represented “meaningful motor exploration” to enhance continued learning and performance optimization (Faisal et al. 2008; Grafton 2008; Tumer and Brainard 2007). The present study specifically sought to determine whether humans, when performing generalized, inherently redundant reaching tasks, would learn to exploit the redundancies available to selectively regulate the effects of neuromuscular noise and variability (Faisal et al. 2008; Harris and Wolpert 1998) in order to enhance task performance (Cusumano and Cesari 2006; Dingwell et al. 2010; Todorov 2004). Our work demonstrates that humans can and do actively exploit such redundancies, even for the very abstract tasks presented here. This suggests that the ability of humans to explore the space of possible solutions to a task, and to identify and exploit redundancies they encounter in those task solutions, is likely a more broadly general strategy used for movement control than previously believed.
GRANTS
Funding for this project was provided by National Institute of Child Health and Human Development Grant 1-R03-HD-058942-01 (to J. B. Dingwell) and by National Science Foundation Grant 0625764 (to J. P. Cusumano).
DISCLOSURES
No conflicts of interest, financial or otherwise, are declared by the author(s).
AUTHOR CONTRIBUTIONS
Author contributions: J.B.D. and J.P.C. conception and design of research; J.B.D., R.F.S., and J.P.C. interpreted results of experiments; J.B.D. and R.F.S. prepared figures; J.B.D. and R.F.S. drafted manuscript; J.B.D. and J.P.C. edited and revised manuscript; J.B.D., R.F.S., and J.P.C. approved final version of manuscript; R.F.S. performed experiments; R.F.S. and J.B.D. analyzed data.
ACKNOWLEDGMENTS
Present address of R. F. Smallwood: Joint Department of Biomedical Engineering, University of Texas Health Science Center San Antonio and University of Texas at San Antonio, San Antonio, TX 78229.
APPENDIX
Derivation of δT and δP equations for the D/T GEM.
Starting from the variance-normalized data and GEM (see Eq. 6a), the goal function that defines the D/T GEM becomes
$$e = f\!\left(\tilde{T}, \tilde{D}\right) = \left(\frac{\tilde{D}}{\tilde{T}}\right)^{m} - c^{m} \qquad (9)$$
We treat the trial-to-trial fluctuations in the data as small deviations, u = [T̃i′,D̃i′], away from the preferred operating point, [T*,D*], and then expand Eq. 9 in a Taylor series around [T*,D*] (Cusumano and Cesari 2006) to compute the errors, ei, at the goal:
$$e_i = f\!\left([T^*, D^*] + \mathbf{u}\right) \approx \mathbf{A}\,\mathbf{u}, \qquad \mathbf{A} = \left[\frac{\partial f}{\partial \tilde{T}} \;\; \frac{\partial f}{\partial \tilde{D}}\right]_{[T^*, D^*]} \qquad (10)$$
where the matrix A is the body-goal variability map that defines how small performance errors at the body level, u = [T̃i′,D̃i′], create errors at the goal level, ei (Cusumano and Cesari 2006). The unit vector tangent to the GEM at [T*,D*] (i.e., êT) can then be obtained by computing the null space of A, N(A) = {x|Ax = 0}, where x = [x1,x2], as follows:
$$\mathbf{A}\mathbf{x} = 0 \;\;\Rightarrow\;\; \frac{x_2}{x_1} = -\left.\frac{\partial f / \partial \tilde{T}}{\partial f / \partial \tilde{D}}\right|_{[T^*, D^*]} = \frac{D^*}{T^*} \;\;\Rightarrow\;\; \hat{e}_T = \frac{1}{\sqrt{1 + \left(D^*/T^*\right)^2}}\left[1, \;\; \frac{D^*}{T^*}\right]^{\mathsf{T}} \qquad (11)$$
Note that the exponent m drops out of the final expression because, by construction, the geometry of the GEM itself does not depend on m, which only determines how the magnitudes of the goal level errors (ei) are scaled relative to the magnitudes of the body level errors, u = [T̃i′,D̃i′]. Here, since the D/T GEM is a straight line, we must have (D*/T*) = c, for all possible values of [T*,D*]. Therefore, this implies
$$\hat{e}_T = \frac{1}{\sqrt{1 + c^2}}\left[1, \;\; c\right]^{\mathsf{T}} \qquad (12)$$
The unit vector perpendicular to the GEM at [T*,D*] (i.e., êP) is then given by the row space of A, R(A), which is orthogonal to N(A) (Cusumano and Cesari 2006):
$$\hat{e}_P = \frac{1}{\sqrt{1 + c^2}}\left[-c, \;\; 1\right]^{\mathsf{T}} \qquad (13)$$
These two unit vectors then yield the expression given in Eq. 7a for the δT and δP deviations of each trial away from the preferred operating point, [T*,D*], for the D/T GEM.
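As an informal numerical check of this derivation, the short sketch below assumes an illustrative goal function whose zero set is the D/T GEM (the specific form and the values of c, m, and [T*, D*] are placeholders, not those of Eq. 6a), estimates the body-goal map A by finite differences at a point on the GEM, and recovers êT and êP from the null space and row space of A.

```python
import numpy as np

# Illustrative goal function whose zero set is the D/T GEM; the exponent m is assumed
# to rescale error magnitudes without changing the GEM geometry (placeholder values).
c, m = 1.5, 2.0
f = lambda T, D: (D / T) ** m - c ** m

# Body-goal map A = gradient of f, estimated by central differences at a GEM point.
T_star = 1.2
D_star = c * T_star
h = 1e-6
fT = (f(T_star + h, D_star) - f(T_star - h, D_star)) / (2 * h)
fD = (f(T_star, D_star + h) - f(T_star, D_star - h)) / (2 * h)
A = np.array([fT, fD])

# The null space of the 1x2 map A is spanned by [fD, -fT]; the row space by A itself.
e_T = np.array([fD, -fT]) / np.hypot(fD, fT)    # tangent to the GEM
e_P = A / np.linalg.norm(A)                     # perpendicular to the GEM

print("e_T:", e_T, "expected:", np.array([1, c]) / np.sqrt(1 + c**2))
print("e_P:", e_P, "orthogonality check:", np.dot(e_T, e_P))
```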
Derivation of δT and δP equations for the D·T GEM.
The derivation for the D·T GEM follows the same steps as above. In this case, the goal function given in Eq. 6b becomes
$$e = f\!\left(\tilde{T}, \tilde{D}\right) = \left(\tilde{D}\,\tilde{T}\right)^{m} - c^{m} \qquad (14)$$
Again assuming small deviations, u = [T̃i′,D̃i′], away from the preferred operating point, we expand Eq. 14 in a Taylor series around [T*,D*] in the same manner as before to obtain
$$e_i = f\!\left([T^*, D^*] + \mathbf{u}\right) \approx \mathbf{A}\,\mathbf{u}, \qquad \mathbf{A} = \left[\frac{\partial f}{\partial \tilde{T}} \;\; \frac{\partial f}{\partial \tilde{D}}\right]_{[T^*, D^*]} \qquad (15)$$
The unit vector tangent to the GEM at [T*,D*] is again obtained by computing N(A) as follows:
$$\mathbf{A}\mathbf{x} = 0 \;\;\Rightarrow\;\; \frac{x_2}{x_1} = -\left.\frac{\partial f / \partial \tilde{T}}{\partial f / \partial \tilde{D}}\right|_{[T^*, D^*]} = -\frac{D^*}{T^*} \qquad (16)$$
Note that the exponent m again drops out of the final expression. Here, for the D·T GEM, D*/T* does not reduce to a constant because of the curved shape of this GEM. The relevant unit vectors (i.e., êT and êP) are then specified by the null space, N(A), and row space, R(A), of A, as follows:
$$\hat{e}_T = \frac{1}{\sqrt{T^{*2} + D^{*2}}}\left[T^*, \;\; -D^*\right]^{\mathsf{T}} \qquad (17)$$
$$\hat{e}_P = \frac{1}{\sqrt{T^{*2} + D^{*2}}}\left[D^*, \;\; T^*\right]^{\mathsf{T}} \qquad (18)$$
These two unit vectors then yield the expression given in Eq. 7b.
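For illustration, the following sketch uses the unit vectors above to decompose a hypothetical sequence of [Ti, Di] trials into δT and δP deviations relative to an assumed operating point on the D·T GEM; the operating point, c, and the simulated scatter are placeholders rather than experimental values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed operating point on a D*T = c GEM (hypothetical values for illustration).
c = 1.0
T_star = 1.25
D_star = c / T_star
norm = np.hypot(T_star, D_star)
e_T = np.array([T_star, -D_star]) / norm       # tangent to the D*T GEM (cf. Eq. 17)
e_P = np.array([D_star, T_star]) / norm        # perpendicular to the GEM (cf. Eq. 18)

# Hypothetical sequence of [T_i, D_i] trials scattered around the operating point.
trials = np.column_stack([T_star + rng.normal(0, 0.08, 200),
                          D_star + rng.normal(0, 0.08, 200)])

# Per-trial deviations along (delta_T) and perpendicular to (delta_P) the GEM.
u = trials - np.array([T_star, D_star])
delta_T = u @ e_T
delta_P = u @ e_P

print("variance ratio var(delta_T)/var(delta_P):", np.var(delta_T) / np.var(delta_P))
```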
REFERENCES
- Berniker M, Körding KP. Estimating the sources of motor errors for adaptation and generalization. Nat Neurosci 11: 1454–1461, 2008
- Bernstein N. The Coordination and Regulation of Movements. New York: Pergamon, 1967
- Berret B, Chiovetto E, Nori F, Pozzo T. Manifold reaching paradigm: how do we handle target redundancy? J Neurophysiol 106: 2086–2102, 2011
- Brashers-Krug T, Shadmehr R, Bizzi E. Consolidation in human motor memory. Nature 382: 252–255, 1996
- Braun DA, Aertsen A, Wolpert DM, Mehring C. Motor task variation induces structural learning. Curr Biol 19: 352–357, 2009
- Carpenter RH, Reddi BA. Reply to “Putting noise into neurophysiological models of simple decision making.” Nat Neurosci 4: 337, 2001
- Cheng S, Sabes PN. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics. J Neurophysiol 97: 3057–3069, 2007
- Cohen R, Sternad D. Variability in motor learning: relocating, channeling and reducing noise. Exp Brain Res 193: 69–83, 2009
- Collins JJ. The redundant nature of locomotor optimization laws. J Biomech 28: 251–267, 1995
- Cusumano JP, Cesari P. Body-goal variability mapping in an aiming task. Biol Cybern 94: 367–379, 2006
- Darainy M, Mattar AA, Ostry DJ. Effects of human arm impedance on dynamics learning and generalization. J Neurophysiol 101: 3158–3168, 2009
- Dingwell JB, Cusumano JP. Nonlinear time series analysis of normal and pathological human walking. Chaos 10: 848–863, 2000
- Dingwell JB, Cusumano JP. Re-interpreting detrended fluctuation analyses of stride-to-stride variability in human walking. Gait Posture 32: 348–353, 2010
- Dingwell JB, John J, Cusumano JP. Do humans optimally exploit redundancy to control step variability in walking? PLoS Comput Biol 6: e1000856, 2010
- Dingwell JB, Kang HG. Differences between local and orbital dynamic stability during human walking. J Biomech Eng 129: 586–593, 2007
- Engelbrecht SE. Minimum principles in motor control. J Math Psychol 45: 497–542, 2001
- Faisal AA, Selen LP, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci 9: 292–303, 2008
- Faisal AA, Wolpert DM. Near optimal combination of sensory and motor uncertainty in time during a naturalistic perception-action task. J Neurophysiol 101: 1901–1912, 2009
- Fine MS, Thoroughman KA. Trial-by-trial transformation of error into sensorimotor adaptation changes with environmental dynamics. J Neurophysiol 98: 1392–1404, 2007
- Friston K. What is optimal about motor control? Neuron 72: 488–498, 2011
- Ganesh G, Haruno M, Kawato M, Burdet E. Motor memory and local minimization of error and effort, not global optimization, determine motor behavior. J Neurophysiol 104: 382–390, 2010
- Gates DH, Dingwell JB. The effects of neuromuscular fatigue on task performance during repetitive goal-directed movements. Exp Brain Res 187: 573–585, 2008
- Grafton ST. Malleable templates: reshaping our crystallized skills to create new outcomes. Nat Neurosci 11: 248–249, 2008
- Grill SE, Hallett M. Velocity sensitivity of human muscle spindle afferents and slowly adapting type II cutaneous mechanoreceptors. J Physiol 489: 593–602, 1995
- Harris CM, Wolpert DM. Signal-dependent noise determines motor planning. Nature 394: 780–784, 1998
- Hsu WL, Scholz JP, Schöner G, Jeka JJ, Kiemel T. Control and estimation of posture during quiet stance depends on multijoint coordination. J Neurophysiol 97: 3024–3035, 2007
- Hwang EJ, Shadmehr R. Internal models of limb dynamics and the encoding of limb state. J Neural Eng 2: S266–S278, 2005
- John J, Cusumano JP. Inter-trial dynamics of repeated skilled movements. In: ASME Conference Proceedings: ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2007), September 4–7, 2007. Las Vegas, NV: ASME, 2007, p. 707–716 (paper no. DETC2007-35380)
- Körding KP, Wolpert DM. Bayesian integration in sensorimotor learning. Nature 427: 244–247, 2004
- Krakauer JW, Mazzoni P, Ghazizadeh A, Ravindran R, Shadmehr R. Generalization of motor learning depends on the history of prior action. PLoS Biol 4: e316, 2006
- Latash ML, Levin MF, Scholz JP, Schöner G. Motor control theories and their applications. Medicina (Kaunas) 46: 382–392, 2010
- Latash ML, Scholz JP, Schöner G. Motor control strategies revealed in the structure of motor variability. Exerc Sport Sci Rev 30: 26–31, 2002
- Liu D, Todorov E. Evidence for the flexible sensorimotor strategies predicted by optimal feedback control. J Neurosci 27: 9354–9368, 2007
- McDonnell MD, Ward LM. The benefits of noise in neural systems: bridging theory and experiment. Nat Rev Neurosci 12: 415–426, 2011
- Müller H, Sternad D. Decomposition of variability in the execution of goal-oriented tasks: three components of skill improvement. J Exp Psychol Hum Percept Perform 30: 212–233, 2004
- Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9: 97–113, 1971
- Osborne LC, Lisberger SG, Bialek W. A sensory source for motor variation. Nature 437: 412–416, 2005
- O'Sullivan I, Burdet E, Diedrichsen J. Dissociating variability and effort as determinants of coordination. PLoS Comput Biol 5: e1000345, 2009
- Pendt LK, Reuter I, Müller H. Motor skill learning, retention, and control deficits in Parkinson's disease. PLoS One 6: e21669, 2011
- Poggio T, Bizzi E. Generalization in vision and motor control. Nature 431: 768–774, 2004
- Ranganathan R, Newell KM. Motor learning through induced variability at the task goal and execution redundancy levels. J Mot Behav 42: 307–316, 2010a
- Ranganathan R, Newell KM. Influence of motor learning on utilizing path redundancy. Neurosci Lett 469: 416–420, 2010b
- Reddi BA, Carpenter RH. The influence of urgency on decision time. Nat Neurosci 3: 827–830, 2000
- Reisman DS, Scholz JP, Schöner G. Coordination underlying the control of whole body momentum during sit-to-stand. Gait Posture 15: 45–55, 2002
- Schaefer SY, Shelly IL, Thoroughman KA. Beside the point: motor adaptation without feedback-based error correction in task-irrelevant conditions. J Neurophysiol 107: 1247–1256, 2012
- Scheidt RA, Dingwell JB, Mussa-Ivaldi FA. Learning to move amid uncertainty. J Neurophysiol 86: 971–985, 2001
- Schlerf JE, Ivry RB. Task goals influence online corrections and adaptation of reaching movements. J Neurophysiol 106: 2622–2631, 2011
- Schöner G, Scholz JP. Analyzing variance in multi-degree-of-freedom movements: uncovering structure versus extracting correlations. Motor Control 11: 259–275, 2007
- Schreiber T, Schmitz A. Surrogate time series. Physica D 142: 346–382, 2000
- Scott SH. Optimal feedback control and the neural basis of volitional motor control. Nat Rev Neurosci 5: 532–546, 2004
- Shadmehr R, Brashers-Krug T. Functional stages in the formation of human long-term motor memory. J Neurosci 17: 409–419, 1997
- Shadmehr R, Holcomb HH. Neural correlates of motor memory consolidation. Science 277: 821–825, 1997
- Shadmehr R, Moussavi ZM. Spatial generalization from learning dynamics of reaching movements. J Neurosci 20: 7807–7815, 2000
- Smith MA, Ghazizadeh A, Shadmehr R. Interacting adaptive processes with different timescales underlie short-term motor learning. PLoS Biol 4: e179, 2006
- Stein RB, Gossen ER, Jones KE. Neuronal variability: noise or part of the signal? Nat Rev Neurosci 6: 389–397, 2005
- Sternad D, Abe MO, Hu X, Müller H. Neuromotor noise, error tolerance and velocity-dependent costs in skilled performance. PLoS Comput Biol 7: e1002159, 2011
- Strogatz SH. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. New York: Addison-Wesley, 1994
- Theiler J, Eubank S, Longtin A, Galdrikian B, Farmer JD. Testing for nonlinearity in time series: the method of surrogate data. Physica D 58: 77–94, 1992
- Thoroughman KA, Fine MS, Taylor JA. Trial-by-trial motor adaptation: a window into elemental neural computation. Prog Brain Res 165: 373–382, 2007
- Todorov E. Optimality principles in sensorimotor control. Nat Neurosci 7: 907–915, 2004
- Todorov E, Jordan MI. Optimal feedback control as a theory of motor coordination. Nat Neurosci 5: 1226–1235, 2002
- Tumer EC, Brainard MS. Performance variability enables adaptive plasticity of “crystallized” adult birdsong. Nature 450: 1240–1244, 2007
- Valero-Cuevas FJ, Venkadesan M, Todorov E. Structured variability of muscle activations supports the minimal intervention principle of motor control. J Neurophysiol 102: 59–68, 2009
- van Beers RJ. Motor learning is optimally tuned to the properties of motor noise. Neuron 63: 406–417, 2009
- Verstynen T, Sabes PN. How each movement changes the next: an experimental and theoretical study of fast adaptive priors in reaching. J Neurosci 31: 10050–10059, 2011
- Wulf G, Shea C. Principles derived from the study of simple skills do not generalize to complex skill learning. Psychon Bull Rev 9: 185–211, 2002
- Yang JF, Scholz J, Latash M. The role of kinematic redundancy in adaptation of reaching. Exp Brain Res 176: 54–69, 2007
- Yang JF, Scholz JP. Learning a throwing task is associated with differential changes in the use of motor abundance. Exp Brain Res 163: 137–158, 2005
- Yen JT, Chang YH. Rate-dependent control strategies stabilize limb forces during human locomotion. J R Soc Interface 7: 801–810, 2010
