Author manuscript; available in PMC: 2025 Apr 1.
Published in final edited form as: Nat Hum Behav. 2024 Jan 29;8(4):729–742. doi: 10.1038/s41562-023-01804-5

Motor cortex retains and reorients neural dynamics during motor imagery

Brian M Dekleva 1,2,3, Raeed H Chowdhury 3,4, Aaron P Batista 3,4, Steven M Chase 3,5,6, Byron M Yu 3,5,7, Michael L Boninger 1,2,4, Jennifer L Collinger 1,2,4
PMCID: PMC11089477  NIHMSID: NIHMS1987677  PMID: 38287177

Abstract

The most prominent characteristic of motor cortex is its activation during movement execution, but it is also active when we simply imagine movements in the absence of actual motor output. Despite decades of behavioral and imaging studies, it is unknown how the specific activity patterns and temporal dynamics within motor cortex during covert motor imagery relate to those during motor execution. Here we recorded intracortical activity from the motor cortex of two people who retain some residual wrist function following incomplete spinal cord injury as they performed both actual and imagined isometric wrist extensions. We found that we could decompose the population activity into three orthogonal subspaces, where one was similarly active during both action and imagery, and the others were only active during a single task type—action or imagery. Although they inhabited orthogonal neural dimensions, the action-unique and imagery-unique subspaces contained a strikingly similar set of dynamical features. Our results suggest that during motor imagery, motor cortex maintains the same overall population dynamics as during execution by reorienting the components related to motor output and/or feedback into a unique, output-null imagery subspace.

Introduction

As people prepare to execute a skilled action, they often pause beforehand to mentally rehearse and visualize it. For example, a tennis player might imagine hitting an upcoming serve or a pianist might imagine playing a difficult sequence prior to performance. This type of covert motor imagery is constrained to the same performance limits as exist for overt execution. One study showed that the speed with which people were able to imagine performing a sequence of finger movements was limited to their actual overt performance1. Motor imagery is similarly impacted by neurologic impairment; a lesion to motor cortex leads to an equal slowing of both executed and imagined movements2. Conversely, imagery-based practice can improve actual motor function, in some instances offering a performance benefit comparable to that of standard overt training3–6. This tight coupling between imagery and actual motor function suggests similar central mechanisms, so that information and experience gleaned from one modality can usefully inform the other.

Primary motor cortex, known mainly for its role in the generation of volitional movement, is also active during covert motor imagery7–11. In fact, many movement-related brain areas are also active during the mental rehearsal of movements. Premotor and supplementary motor cortices12,13, anterior cingulate areas13 and parietal areas13,14 all display modulated activity during covert motor imagery. Despite the clear link between imagery and action, we know little about how cortical population activity differs between the two. Previous studies in both monkeys15 and humans10,11 show evidence that motor cortex activity is somewhat consistent across volitional states but exhibits clear differences as well. In particular, the study in monkeys by Jiang et al. found that a portion of motor cortex activity could be partitioned into distinct subspaces containing unique responses during either overt hand control of an on-screen cursor or passive observation of cursor movements. This segmentation of activity into orthogonal subspaces appears to be a common motif within motor cortex, the most established example being that of movement preparation16,17. Preparation is certainly distinct from motor imagery; movement preparation involves “readying” for an imminent overt action, while motor imagery is the covert rehearsal of a complete action. However, during both imagery and preparation, the motor system faces a similar objective: to engage in movement-related processing while avoiding the activation of descending control pathways. In the case of movement preparation, the cortical implementation appears to be well explained by the coordination of a small number of population activity patterns16,17. Within this framework, activity in motor and/or premotor cortices activates orthogonal neural subspaces during preparation and execution.
This orthogonality allows preparatory activity to evolve while avoiding the dimensions that would engage descending pathways and cause overt movement.

Given that imagery, by definition, does not involve overt movement, it seems reasonable to assume that imagery activity in motor cortex, like preparatory activity, somehow avoids dimensions responsible for downstream control. One possibility is that imagery exists only in a subset of the neural dimensions active during overt action. Another possibility is that imagery exists in dimensions completely orthogonal to those for action. However, both of these possibilities are unlikely given the results from Jiang et al.15, which show strong evidence for a large degree of overlap across volitional states, plus additional subspaces specific to overt (arm movement) and covert (cursor observation) volitional states. Related work by Vargas-Irwin et al.11 and Rastogi et al.10 also show that despite broad similarities between attempted and imagined movements in the motor cortex of paralyzed humans, different volitional states are still highly discriminable. However, the population analyses used in these studies did not identify the underlying geometry driving the separation in volitional state. The present study sets out to characterize the population-level organization across volitional states, and to identify both the dynamical features that are shared, as well as those that are unique to either overt motor action or covert motor imagery.

Here we use an isometric wrist extension task to examine the relationship between imagery and action in motor cortex. Two participants with tetraplegia due to spinal cord injury participated in the study. Despite having no hand or lower extremity function, both retained residual proximal arm and wrist extension control. We recorded from intracortical microelectrode arrays implanted in the hand and arm areas of motor cortex as they performed either real or imagined isometric wrist extensions to achieve low and high force targets. After reducing the recorded population activity to a low dimensional manifold, we found that it contained three distinct subspaces: (1) a shared space, in which responses were nearly identical during action and imagery, (2) an action-unique subspace that contained significant modulation only during actual force production, and (3) an imagery-unique subspace that contained significant modulation only during imagery. Strikingly, we found that the neural dynamics within the imagery-unique subspace during imagery closely resembled those observed in the action-unique subspace during execution. These unique subspace dynamics also contained elements that did not exist within the shared subspace. From this, we conclude that motor cortex maintains the same overall neural dynamics during imagery as during overt action. However, since the population activity must avoid output dimensions (and also lacks modulation along feedback-related dimensions) during imagery, cortex reorients output or feedback-related responses—contained within the action-unique subspace—into an orthogonal, imagery-unique (output-null) subspace. We propose that the retention of overall neural dynamics structure during imagery provides the motor system with a useful proxy for overt practice.

Results

Motor cortex is active during actual and imagined force

We asked participants to perform isometric wrist extensions within an immobile frame affixed with a force sensor to control the height of a line trace displayed on a monitor in front of them (Figure 1a). During “action” trials, they observed a horizontal bar indicating the required force (either low: ~5N or high: ~40N) and then were required to apply the appropriate force such that the line trace matched the vertical position of the bar, holding the target force for approximately four seconds. During “imagery” trials, the participants kept their hand within the force-sensing apparatus and were asked to imagine producing the same wrist extension forces without actually doing so. On imagery trials, the line trace automatically increased to the target force and then returned to zero. For each session, we collected alternating twelve-trial blocks of action and imagery, resulting in thirty-six total trials of action and thirty-six trials of imagery. We then removed trials with force profiles that deviated significantly from the average (see Methods: Experimental Setup), resulting in approximately 31 ± 4 action trials and 28 ± 4 imagery trials per session (six sessions for P2 and three sessions for P3).
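
As an illustration of this screening step, the sketch below applies a simple RMS-deviation criterion to synthetic force traces. The threshold, the criterion itself, and the traces are assumptions for illustration; the paper's actual exclusion rule is described in Methods: Experimental Setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def exclude_deviant_trials(forces, z_max=1.5):
    """Drop trials whose force trace deviates strongly from the
    across-trial mean (illustrative criterion, not the paper's rule)."""
    mean_trace = forces.mean(axis=0)
    rms_dev = np.sqrt(((forces - mean_trace) ** 2).mean(axis=1))  # per trial
    z = (rms_dev - rms_dev.mean()) / rms_dev.std()
    return forces[z < z_max]

# 36 synthetic trials ramping to a ~40 N hold; 4 have deviant profiles.
t = np.linspace(0, 6, 100)
template = 40 * np.clip(t - 1, 0, 1)
trials = template + rng.normal(0, 1, (36, 100))
trials[:4] *= 0.3                       # grossly deviant force traces
kept = exclude_deviant_trials(trials)
```

A z-threshold on RMS deviation is just one simple choice; any robust distance from the mean trace would play the same screening role.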

Figure 1.


Motor cortex is active during both imagery and action. (a) Participants placed their hands on a board beneath a load cell and produced either real or imagined wrist extension forces. For all trials, they received visual feedback of either their actual produced force (action trials) or an automated proxy (imagery trials). (b) Locations of microelectrode arrays implanted in the motor cortices. C.S. = central sulcus. (c) Average low force (gray) and high force (black) traces for all sessions (P2: solid, P3: dashed) during action trials (top) and imagery trials (bottom). (d) Average population firing rate modulation in motor cortex (M1) during action trials (top) and imagery trials (bottom). Here “modulation” is calculated as the change in firing rate from trial start. Each trace corresponds to a single session. Lighter traces represent low force trials and darker traces represent high force trials. (e) Average activity for three example channels (P3) during high-force action (blue) and high-force imagery (red) trials. Shading represents 95% confidence bounds, calculated by bootstrapping with 1000 resamples. (f) Maximum modulation during low force (light) and high force (dark) action and imagery for all recorded channels and all sessions (left: P2, right: P3). Shaded bounds represent 95% confidence intervals, calculated by bootstrapping with 1000 resamples.

Both participants successfully achieved and maintained the requested force targets during action trials (Figure 1c, top and Supplementary Figures 1–2), and produced no appreciable force during imagery trials (Figure 1c, bottom and Supplementary Figures 1–2). Throughout the experimental sessions, we recorded activity from the hand and arm areas of motor cortex (Figure 1b). Despite the stark difference in force output between action and imagery trials, we observed modulation in the overall population-wide firing rates for both task types (Figure 1d, top vs. bottom; Supplementary Table 1). Individual channels displayed a wide variety of responses, including a mix of preferential activation during action or imagery (Figure 1e). Some channels displayed similar modulation during both action and imagery (e.g., Figure 1e, channel 183), while others appeared uniquely active during only one task type (e.g., Figure 1e, channels 188 and 19). For each channel, we calculated the maximum modulation during action and imagery by taking the difference between the 5th percentile and 95th percentile firing rates for each task type (imagery or action) and force level. We found that for P2, the average low-force imagery modulation was 75 ± 6% that of low-force action modulation (bounds represent 95% confidence intervals obtained via bootstrapping with 10000 resamples). High-force imagery modulation was 57 ± 4% that of action (Figure 1f, left). For P3, low-force imagery modulation was 59 ± 5% that of action and high-force imagery modulation was 53 ± 6% that of action (Figure 1f, right).
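
The per-channel modulation metric (the spread between the 5th and 95th percentile firing rate) can be sketched on synthetic rates; the response shapes and noise level below are assumptions, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_modulation(rates):
    """Spread between the 5th and 95th percentile of one channel's
    firing rate over the trial (computed per task type and force level)."""
    return np.percentile(rates, 95) - np.percentile(rates, 5)

# One synthetic channel: imagery modulates less than action (cf. Figure 1f).
t = np.linspace(0, 6, 300)
action_rate = 20 + 15 * np.exp(-(t - 2) ** 2) + rng.normal(0, 0.5, t.size)
imagery_rate = 20 + 9 * np.exp(-(t - 2) ** 2) + rng.normal(0, 0.5, t.size)

# Imagery-to-action modulation ratio, as reported per participant.
ratio = max_modulation(imagery_rate) / max_modulation(action_rate)
```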

Latent space contains distinct action and imagery subspaces

As a first step towards characterizing the differences between the neural representations of action and imagery, we asked whether a portion of the population activity could be separated into unique dimensions containing only action or imagery activity. For simple motor behaviors, the measured dimensionality of motor cortical activity is typically far lower than the number of recorded neurons18–23. Thus, as an initial step, we reduced the activity from our recorded populations (176 channels for P2, 192 channels for P3) to a lower dimensional (~36D) latent space. To do this, we first performed principal components analysis (PCA) separately on action and imagery trials, keeping enough dimensions to explain >99% variance for each task. We then combined them into a single, unified space (see Methods: Dimensionality reduction for details). Within this combined space, we observed an incomplete overlap between the dimensions containing action and imagery variance. For example, by performing singular value decomposition (SVD) on action data from P2, we found that the leading eleven dimensions explained >99% of action variance, but only 66% of imagery variance (Figure 2a). Similarly, the leading twelve imagery dimensions explained >99% of imagery variance, but only 70% of action variance (Figure 2b). This discrepancy in the set of dimensions containing meaningful variance for each task suggests that the neural subspaces involved in the two tasks were not fully aligned. We summarized this partial alignment in Figure 2d using the subspace alignment index metric16, which ranges from 0 (when subspaces are orthogonal) to 1 (when subspaces are fully aligned). The action and imagery tasks showed only moderate alignment, well below the value we would expect, given the trial-by-trial variability, if the subspaces were truly aligned (Figure 2d; alignments calculated across 1000 shuffled datasets were uniformly larger than for the true dataset; p<0.001, two-sided t-test).
These results suggest that some portion of the activity inhabited distinct orthogonal subspaces unique to either action or imagery.
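
The alignment index16 underlying Figure 2d can be sketched on toy latent data; the 6-D toy geometry below (two shared, two action-only, two imagery-only dimensions) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def alignment_index(X_a, X_b, d):
    """Fraction of X_a's top-d variance captured by X_b's top-d principal
    axes: 1 means fully aligned subspaces, 0 means orthogonal."""
    C_a = np.cov(X_a.T)
    _, _, Vt_b = np.linalg.svd(X_b - X_b.mean(axis=0), full_matrices=False)
    D_b = Vt_b[:d].T                                   # top-d axes of task b
    captured = np.trace(D_b.T @ C_a @ D_b)
    best = np.sort(np.linalg.eigvalsh(C_a))[::-1][:d].sum()
    return captured / best

# Toy 6-D latent data: dims 0-1 shared, 2-3 action-only, 4-5 imagery-only.
n = 500
shared = rng.normal(0, 3, (n, 2))
act = np.hstack([shared, rng.normal(0, 2, (n, 2)), rng.normal(0, 0.1, (n, 2))])
img = np.hstack([shared, rng.normal(0, 0.1, (n, 2)), rng.normal(0, 2, (n, 2))])

ai = alignment_index(act, img, d=4)   # moderate: only the shared part aligns
```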

Figure 2.


Population activity contains distinct action, imagery, and shared subspaces. (a) Percent of total action (blue) and imagery (red) variance explained by principal components computed on action trials, for an example session from P2. (b) Same as in (a), but for principal components computed on imagery trials. (c) The same space as in (a) and (b) following an orthogonal rotation to isolate distinct action and imagery variances into unique subspaces. (d) Bar plots showing the alignment index between action and imagery, as well as a label-shuffled control. The alignment index reflects the proportion of variance for one condition that is captured by the leading principal components computed from the opposite condition. Bounds represent 95% confidence intervals through bootstrapping with 1000 resamples. For both participants, the action/imagery alignments were significantly lower than the shuffled label control, p<0.001, two-sided t-test. (e) Percent action and imagery variance captured by the action-unique, imagery-unique and shared subspaces following the procedure in (c). (f) Estimates of the dimensionality of the full latent space, shared subspace, and action-unique subspace during action. (g) Estimates of the dimensionality of the full latent space, shared subspace, and imagery-unique subspace during imagery. Shaded vertical histograms in (f) and (g) represent the distribution of dimensionalities across 1000 threshold values from 0.5% to 2%.

Based on this observation, we developed a method to jointly identify the action-unique, imagery-unique, and shared (common) subspaces (see Methods: Subspace separation). The identified transformations from the latent space to the three subspaces were constrained to be fully orthogonal, such that the three subspaces fully spanned the original space (i.e. the percentages in Figure 2c sum to 100%). Thus, the final transformation simply provided a different view of the same underlying latent space such that activity clustered into discrete subspaces with unique task-related variance characteristics (Figure 2e). The action-unique subspace contained high variance responses only during the action task, the imagery-unique subspace contained high variance only during the imagery task, and the shared subspace contained both action and imagery variance. The subspace splitting procedure can be considered an extension of the alignment index concept16—the proportion of variance explained by the shared subspace is essentially equivalent to the alignment index value. In line with Jiang et al.15, the subspaces did not correspond to distinct subpopulations within the recordings (unimodal distribution of channel contributions to each subspace, P2: p=0.99, P3: p = 0.99, Hartigan’s dip test; Supplementary Figure 3). Rather, individual channels exhibited mixed selectivity, with the clear condition-specific subspace structure only appearing at the broader population level24.
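
The paper's full splitting procedure is described in Methods: Subspace separation; as a simplified stand-in, the sketch below uses an eigendecomposition of the covariance difference, which likewise yields mutually orthogonal action-unique, imagery-unique and shared bases (the toy geometry and all names are assumptions, not the paper's method).

```python
import numpy as np

rng = np.random.default_rng(3)

def split_subspaces(X_act, X_img, n_unique=2):
    """Illustrative heuristic: eigenvectors of C_act - C_img with the most
    positive eigenvalues capture action-dominant variance, the most
    negative capture imagery-dominant variance, and the remainder serves
    as a shared basis. The three bases are mutually orthogonal because
    they come from one symmetric matrix."""
    C_diff = np.cov(X_act.T) - np.cov(X_img.T)
    vals, vecs = np.linalg.eigh(C_diff)       # eigenvalues in ascending order
    Q_img = vecs[:, :n_unique]                # most negative eigenvalues
    Q_act = vecs[:, -n_unique:]               # most positive eigenvalues
    Q_shared = vecs[:, n_unique:-n_unique]
    return Q_act, Q_img, Q_shared

# Toy geometry: dims 0-1 shared, 2-3 action-only, 4-5 imagery-only.
n = 2000
shared = rng.normal(0, 3, (n, 2))
act = np.hstack([shared, rng.normal(0, 2, (n, 2)), rng.normal(0, 0.1, (n, 2))])
img = np.hstack([shared, rng.normal(0, 0.1, (n, 2)), rng.normal(0, 2, (n, 2))])

Q_act, Q_img, Q_shared = split_subspaces(act, img)
# Action variance should avoid the imagery-unique basis almost entirely.
var_act_in_img = np.var(act @ Q_img, axis=0).sum()
```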

In addition to total variance explained, we also examined the dimensionality of each subspace (Figure 2f,g). To do this, we first performed a varimax rotation on the activity within each subspace and ranked the resulting components by variance. Compared to PCA, varimax provided a more severe “elbow” in the ranked variances, which in turn gave a more consistent estimate of dimensionality. We then counted the number of dimensions that accounted for more than one percent of the total task variance on all 1000 bootstrapped resamples across trials (see Methods: Feature extraction and subspace dimensionality estimation). We found that the estimated dimensionalities of the action and imagery subspaces were mostly larger than the dimensionality of the shared subspace for both P2 (action-unique vs. shared, p=0.122, imagery-unique vs. shared, p < 0.001; bootstrap with 1000 resamples) and P3 (action-unique vs. shared, p=0.002, imagery-unique vs. shared, p < 0.001; bootstrap with 1000 resamples). Additionally, the estimated dimensionalities of the full space during each task were significantly less than the sum of the shared and unique dimensionalities (P2 action: p < 0.001; P2 imagery: p = 0.001; P3 action: p < 0.001; P3 imagery: p = 0.013). This is because the unique subspaces are characterized by their cross-task variance properties. However, within a single task, there exists a more compact representation of the total population response.
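
A sketch of the varimax-based dimensionality count on synthetic subspace activity (three strong latent signals plus three faint ones). The iteration below is the standard Kaiser varimax update applied to latent activity, and the threshold sweep mirrors the 0.5–2% range used for the dimensionality histograms; the data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def varimax(X, n_iter=100, tol=1e-8):
    """Kaiser varimax: an orthogonal rotation that concentrates variance
    into fewer components, sharpening the 'elbow' used to count dims."""
    n, k = X.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(n_iter):
        L = X @ R
        u, s, vh = np.linalg.svd(
            X.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / n))
        R = u @ vh
        crit_old, crit = crit, s.sum()
        if crit_old != 0 and crit < crit_old * (1 + tol):
            break
    return X @ R

def count_dims(X, threshold=0.01):
    """Number of components holding more than `threshold` of total variance."""
    v = np.var(X, axis=0)
    return int(np.sum(v / v.sum() > threshold))

# Subspace activity with three strong latent signals and three faint ones.
n = 1000
X = np.hstack([rng.normal(0, 3, (n, 3)), rng.normal(0, 0.1, (n, 3))])

Xr = varimax(X)
dims = [count_dims(Xr, th) for th in np.linspace(0.005, 0.02, 5)]
```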

Common dynamics within the shared subspace

The previous analysis of the neural latent space revealed a shared subspace containing activity for both tasks, and two separate subspaces that were differentially active depending on whether the participant exerted force or imagined exerting force. Next, we turn to an examination of how neural activity evolved in time within each of these subspaces.

Projecting trial-averaged activity from both tasks into the shared subspace revealed visually similar temporal profiles between action and imagery for each shared dimension (Figure 3b). To assess the extent of this correlation irrespective of the chosen dimension, we used a Monte Carlo approach (see Methods: Monte Carlo sampling), sampling 10,000 random unit vectors from the shared subspace (Figure 3a, top). On each draw, we computed the correlation between the action and imagery activity along that dimension. Across all sampled dimensions, we found median correlations of 0.93 (P2; 95% CI: [0.75, 0.97]) and 0.88 (P3; 95% CI: [0.64, 0.94]), indicating that the shared subspace activity was universally well matched between the two task types.
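
The Monte Carlo comparison can be sketched as follows; the three-dimensional "shared" traces below are synthetic stand-ins for the trial-averaged activity in Figure 3b.

```python
import numpy as np

rng = np.random.default_rng(5)

# Trial-averaged traces (time x dims) in a toy 3-D shared subspace:
# imagery is a slightly scaled, noisy copy of action.
t = np.linspace(0, 6, 120)
action = np.stack([np.sin(t), np.exp(-(t - 2) ** 2), t / 6], axis=1)
imagery = 0.9 * action + rng.normal(0, 0.05, action.shape)

corrs = []
for _ in range(10_000):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)             # random unit vector in the subspace
    a, b = action @ v, imagery @ v     # 1-D projection of each task
    corrs.append(np.corrcoef(a, b)[0, 1])

median_corr = np.median(corrs)
```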

Figure 3.


Temporal components of action and imagery are similar within the shared subspace and across unique subspaces. (a) Top: Correlations between action and imagery responses for 10,000 random dimensions selected from the shared subspace. Arrows indicate median correlations for each participant. Bottom: Correlations between aligned action-unique and imagery-unique components for 10,000 randomly selected dimensions. Arrows indicate median correlations for each participant. Gray distributions represent the maximal possible correlations if the corresponding aligned imagery dimension is removed on each random draw. (b) Activity within the shared subspace during action (left) and imagery (right). (c) Activity within the action-unique subspace during action (left) and imagery (right). (d) Activity within the imagery-unique subspace during action (left) and imagery (right), following an orthogonal transformation to align to the action-unique responses (see Methods: Action-Imagery subspace alignment). For b-d, thin traces correspond to individual sessions (participant P2), thick lines to cross-session averages. Light and dark traces represent low and high force conditions, respectively. Action-imagery correlation values in c-d correspond to the correlations between action-unique dimensions during action and imagery-unique dimensions during imagery.

Common dynamics across action- and imagery-unique subspaces

The comparison of temporal components between the action-unique and imagery-unique subspaces is less straightforward than for the shared subspace. By construction, the unique subspaces are orthogonal to each other and only contain meaningful activity during their respective tasks, making it futile to compare activity across tasks along a single dimension. Instead, we first found a rotation of the imagery subspace axes that aligned the multidimensional responses observed in the imagery-unique subspace during imagery to those in the action-unique subspace during action (see Methods: Action-Imagery subspace alignment). Following this alignment procedure, we observed that the action and imagery spaces appeared to comprise a similar set of temporal components (Figure 3c,d), despite existing in orthogonal subspaces of the population space.

We quantified the overall similarity between the multidimensional action-unique and imagery-unique responses by calculating correlations between the action-unique activity on randomly chosen dimensions in the action-unique subspace and the imagery-unique activity along the corresponding (aligned) imagery-unique dimensions (Figure 3a, bottom). Across 10000 randomly sampled dimensions, we found median correlations of 0.93 (P2; 95% CI: [0.84 0.97]) and 0.86 (P3; 95% CI: [0.76 0.94]). To provide context for these values, we also found, for each randomly selected action-unique dimension, the maximally correlated imagery-unique dimension that did not incorporate the aligned dimension (Figure 3a, bottom; gray histograms). These secondary dimensions represent the “next best correlation”, indicating the degree of triviality in the original alignment. Temporal components reflecting nonspecific task timing, for example, could be fairly ubiquitous, and correlates might exist on multiple dimensions. However, we found that removing the aligned imagery-unique dimension consistently and significantly reduced the maximum possible correlation that could be achieved from all other dimensions (P2: p<0.001, P3: p<0.001, paired t-test on 10000 randomly chosen dimensions). As such, activity on aligned imagery dimensions correlated non-trivially with the corresponding action-unique dimension activity. This suggests that the same set of temporal components found in the action-unique subspace also exist during mental imagery, but within an orthogonal imagery-unique subspace.
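
One standard way to find such an aligning rotation is orthogonal Procrustes; whether this matches the paper's exact procedure (Methods: Action-Imagery subspace alignment) is an assumption, and the toy responses below are synthetic.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(6)

# Toy responses: imagery-unique activity is a rotated, noisy copy of the
# action-unique activity, each expressed in its own subspace coordinates.
t = np.linspace(0, 6, 120)
action_unique = np.stack([np.exp(-(t - 2) ** 2), np.sin(t), t / 6], axis=1)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
imagery_unique = action_unique @ R_true + rng.normal(0, 0.05, (120, 3))

# Rotation of the imagery-unique axes that best matches action-unique.
R, _ = orthogonal_procrustes(imagery_unique, action_unique)
aligned = imagery_unique @ R

# Correlations along random dimensions, as in Figure 3a (bottom).
corrs = []
for _ in range(1000):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    corrs.append(np.corrcoef(action_unique @ v, aligned @ v)[0, 1])
median_corr = np.median(corrs)
```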

Distinct unique subspace dynamics

Examining the components across subspaces in Figure 3 revealed that, in addition to the high correlation between action-unique and imagery-unique subspaces, there appeared to be some similarity between responses in the unique subspaces and those in the shared subspace. A strong similarity between these subspaces could mean that the unique subspaces simply recapitulate responses from the shared subspace. Thus, we set out to determine the degree of this dynamic similarity between the unique and shared subspaces.

To assess the degree to which temporal components of activity in the unique subspaces simply recapitulated those from the shared subspace, we used linear regression to reconstruct unique subspace activity from the shared subspace (Figure 4a). Similarly, we found a linear reconstruction of the shared subspace activity from the unique subspaces (Figure 4b). If the unique subspaces were merely reflections of the shared activity, we would expect the quality of these reconstructions to be fairly equivalent. Instead, we found that the reconstructions of unique subspaces were uniformly worse than reconstructions of shared subspace activity from the unique subspaces (Figure 4c). This argues against the possibility of the unique spaces being simple “readouts” of the shared subspace; rather, they contain novel dynamic components.
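
The asymmetry in Figure 4c can be illustrated with a toy example in which the "unique" activity contains the shared component plus a novel one; the signals and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def r2_linear(source, target):
    """Fraction of target variance explained by the best least-squares
    linear readout (with intercept) of the source components."""
    A = np.hstack([source, np.ones((len(source), 1))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return 1 - np.var(target - A @ coef) / np.var(target)

t = np.linspace(0, 6, 200)
shared = np.sin(t)[:, None]                      # shared-subspace component
novel = 2 * np.exp(-2 * (t - 3) ** 2)[:, None]   # absent from shared space
unique = np.hstack([shared + 0.1 * rng.normal(size=(200, 1)), novel])

r2_shared_from_unique = r2_linear(unique, shared)  # high: shared is contained
r2_unique_from_shared = r2_linear(shared, unique)  # lower: novel part missed
```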

Figure 4.


Unique subspaces contain novel dynamical features. (a) Average action-unique subspace components for P2 (left) and the corresponding best linear reconstruction of each component from the shared subspace responses (right). (b) Average shared subspace components for P2 (left) and the corresponding best linear reconstruction of each component from the action-unique subspace responses. (c) x-axis: the percentage of unique subspace variance that can be explained via linear combinations of shared subspace components. y-axis: the percentage of shared subspace variance that can be explained via linear combinations of unique subspace components. Each dot corresponds to an individual session. (d) left: action-unique dimensions ordered by increasing maximal correlation within the shared subspace during action. right: shared dimensions ordered by increasing maximal correlation within the action-unique subspace during action. Top plots show ranked correlations for both participants, and bottom plots show actual component traces for P2. For each subplot below, the colored traces on the left show the target subspace component and the gray traces on the right show the maximally correlated response from the opposite subspace. (e) As in (d) for the imagery-unique and shared subspaces during imagery. For the top sections of (d) and (e), shaded regions represent 95% confidence intervals obtained via bootstrapping with 1000 resamples. In the bottom sections of (d) and (e), thin traces correspond to individual sessions, thick lines to cross-session averages. Light and dark traces represent low and high force averages, respectively.

To examine the novel dynamics within the unique subspaces further, we performed a second analysis that identified specific components exemplifying this dynamic novelty (Methods: Assessing dynamic novelty). The result of this analysis showed that for each unique subspace, there existed at least one dimension that had no correlate within the shared subspace (Figure 4d,e). However, even the most novel response from the shared subspace could be fit relatively well from unique-subspace activity.

Unique subspace activity exhibits more complex dynamics

From the analysis in Figure 4, it appeared that the unique subspace components that were least correlated with the shared subspace tended to display a large force-dependent effect. We suspected that the discrepancy in dynamics between the shared and unique subspaces arose largely from differences in the force-related information (Figure 3 also suggests more pronounced and varied force-dependent effects within the unique subspaces than the shared). To test this, we first isolated the force-dependent response within each subspace by subtracting the mean response across both force levels. We then estimated the dimensionality of this resulting force-dependent response within each subspace (see Methods: Force-specific responses) and found that the dimensionality of the force-dependent activity within the shared subspace was lower than in the unique spaces (Figure 5a,b)—though the difference was statistically significant in only one out of the four cases based on bootstrapping with 1000 resamples (P2 action-unique vs. shared, p=0.064; P2 imagery-unique vs. shared, p = 0.17; P3 action-unique vs. shared, p = 0.11; P3 imagery-unique vs. shared, p < 0.001). The unique subspaces appeared to contain both transient and tonic force-dependent responses, whereas the shared space seemingly only exhibited a single tonic response throughout the entirety of the trial duration (Figure 5c,d).
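
Isolating the force-dependent response by subtracting the cross-force mean can be sketched as follows; the tonic and transient profiles are synthetic stand-ins for the components in Figure 5c,d.

```python
import numpy as np

rng = np.random.default_rng(8)

# Trial-averaged traces (time x dims) for low- and high-force conditions.
t = np.linspace(0, 6, 150)
common = np.stack([np.sin(t), t / 6], axis=1)       # force-independent
tonic = (t > 1).astype(float)[:, None]              # sustained, force-scaled
transient = np.exp(-4 * (t - 1) ** 2)[:, None]      # phasic, force-scaled

low = np.hstack([common, 1.0 * tonic, 1.0 * transient])
high = np.hstack([common, 2.0 * tonic, 2.5 * transient])
low += rng.normal(0, 0.01, low.shape)
high += rng.normal(0, 0.01, high.shape)

# Force-dependent response: subtract the mean across force levels.
mean_resp = (low + high) / 2
force_dep = np.vstack([low - mean_resp, high - mean_resp])

# Count components holding >1% of the force-dependent variance.
_, s, _ = np.linalg.svd(force_dep - force_dep.mean(axis=0),
                        full_matrices=False)
dim = int(np.sum(s ** 2 / np.sum(s ** 2) > 0.01))
```

Here the force-dependent activity spans two dimensions (one tonic, one transient), while the force-independent components cancel out, mirroring the richer force coding found in the unique subspaces.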

Figure 5.


Action-unique and imagery-unique subspaces contain more complex force-specific responses (a) Estimates of force-related dimensionality in the shared and action-unique subspaces during action. (b) Estimates of force-related dimensionality in the shared and imagery-unique subspaces during imagery. (c) Leading force-dependent components (mean-subtracted) in the action-unique subspace (top) and shared subspace (bottom) during action (P2). (d) Leading force-dependent components (mean-subtracted) in the imagery-unique subspace (top) and shared subspace (bottom) during imagery (P2). Shaded vertical histograms in (a) and (b) represent the distribution of dimensionalities for 1000 threshold values from 0.5% to 2%.

Action-unique activity contains downstream motor commands

Based on the core property of the action-unique subspace—that it is active only during overt action—we hypothesized that it at least in part reflected communication of descending control signals. The second component displayed in Figure 3c, for example, resembles the recorded forces across the action and imagery conditions. To more directly test the relationship between each subspace and the executed force, we attempted to decode moment-by-moment force from both the action-unique and shared subspaces (see Methods: Force decoding). The example traces in Figure 6a show the recorded force (black) and predictions (on held-out data) from the action-unique (blue) and shared subspace (purple) decoders for eight consecutive action trials from P2 (session 5). Across all trials from all sessions, the action-unique subspace decoder outperformed the shared subspace decoder (p<0.001; bootstrap across trials with 1000 resamples), providing further evidence that the action-unique subspace is more closely linked to motor execution.
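
The decoders here are Wiener cascades (a linear filter over lagged neural features followed by a static nonlinearity). Below is a minimal sketch of that decoder family on synthetic data; the lag count, polynomial degree and latent-to-force mapping are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def lagged(X, n_lags):
    """Stack current and past samples as features (zero-padded history)."""
    Xp = np.vstack([np.zeros((n_lags, X.shape[1])), X])
    return np.hstack([Xp[n_lags - k: len(Xp) - k] for k in range(n_lags + 1)])

def fit_wiener_cascade(X, y, n_lags=4, deg=3):
    """Linear filter on lagged features, then a polynomial output stage."""
    F = np.hstack([lagged(X, n_lags), np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    p = np.polyfit(F @ w, y, deg)    # static output nonlinearity
    return w, p

def predict(X, w, p, n_lags=4):
    F = np.hstack([lagged(X, n_lags), np.ones((len(X), 1))])
    return np.polyval(p, F @ w)

# Toy problem: force is a saturating function of a 2-D latent drive.
X = rng.normal(size=(800, 2))
force = np.tanh(X @ np.array([1.0, 0.5])) + rng.normal(0, 0.05, 800)

w, p = fit_wiener_cascade(X[:600], force[:600])
pred = predict(X[600:], w, p)
r2 = 1 - np.var(force[600:] - pred) / np.var(force[600:])
```

The polynomial stage lets the decoder capture the saturating force relationship that a purely linear readout would miss.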

Figure 6.


Moment-by-moment force can be decoded more accurately from the action-unique subspace than from the shared subspace. (a) Eight successive example action trials from P2's fifth session. Black traces correspond to the actual recorded wrist extension force. Blue and purple traces show cross-validated predictions from decoders (Wiener cascade) trained on action-unique and shared subspace responses, respectively. (b) Total R2 values for the action-unique and shared force predictions. Error bars represent the 95% confidence intervals computed across all trials/sessions.

Discussion

In this study, we examined the relationship between population activity in motor cortex during isometric force production and corresponding covert motor imagery. We found that the low-dimensional manifold activity comprised three orthogonal subspaces: a shared subspace, an action-unique subspace, and an imagery-unique subspace. Activity within the shared subspace accounted for approximately half of the total variance and was nearly identical during both action and imagery. Activity in the action and imagery subspaces, though constructed from completely orthogonal correlation patterns, also contained well-matched sets of temporal responses. Further, the action-unique and imagery-unique activity contained dynamically novel components over the shared space activity, and these novel components appear related to actual or imagined motor output (Supplementary Figure 4).

Because the action-unique subspace we identified only modulates during action (and not during motor imagery), we hypothesize that it is directly involved in generating motor output and possibly receiving sensory feedback. During imagery, output dimensions must be avoided, and there is no incoming somatosensory feedback. In theory, motor cortex could satisfy the constraint of avoiding output dimensions by restricting imagery activity to a lower-dimensional subspace, such that there existed only a shared subspace and an action-unique subspace. However, we instead found that imagery engaged an additional, separate subspace containing temporal components equivalent to those in the action-unique subspace. This suggests that covertly imagining an action does not simply suppress output activity, but instead reorients it into dimensions that do not generate muscle activity (Figure 7). There are multiple potential explanations for why motor imagery should involve creating "dummy" output-related responses in motor cortex. One possibility is that during imagery, motor cortex rehearses generating output commands, even though those commands are never transmitted downstream. Even for this one-dimensional task, the action-unique subspace contained approximately six dimensions (Figure 2f), suggesting an output-related space that is more complex than the eventual muscle activity25. In addition to muscle-like responses (Figure 3c, second component), the action-unique subspace also contains transient responses (Figure 3c, first component), which could reflect indirect control through subcortical areas. There is evidence that downstream motor structures can integrate brief, transient activity from cortex to generate sustained muscle output26,27.
The ability to practice producing this multidimensional control signal within a motor-output-null space before generating the actual output commands might explain why mental rehearsal improves subsequent performance on overt motor tasks3,4,6,28–30.

Figure 7.

Hypothesized population-level architecture of motor cortex that enables both action and imagery.

A separate possible reason for the existence of output-like components during imagery is that they are necessary for maintaining the dynamical structure of the entire motor cortical ensemble. There is ample evidence that motor cortex activity operates as a dynamical system23,31,32. The multidimensional population response unfolds predictably from an initial neural state, often dictated by preparatory activity in premotor cortex16, presumably reflecting intrinsic motor cortical or broader synaptic connectivity. From our results, it appears that activity in the action-unique subspace is dynamically distinct from activity in the shared subspace, and thus is likely to be important for maintaining the dynamical structure of neural population activity (Figure 4). Simply suppressing action-unique activity entirely during imagery would lead to the loss of this dynamic structure in motor cortex. However, recapitulating those components in an orthogonal subspace suppresses output while preserving its dynamical properties, which might be important for stabilizing the behavior of the broader sensorimotor network across different volitional states.

The orthogonality between action and imagery subspaces presumably functions similarly to the orthogonality observed between movement preparation and movement execution16,17. For both preparation and imagery, restricting the population activity to uniquely non-output dimensions prevents unwanted movement. However, the two processes are distinct in their dynamic relationship to the action. If enough time is allowed, preparatory activity in premotor areas appears to settle into a static neural state33. From the dynamical systems perspective, this represents a set point, which dictates how the subsequent multidimensional response will unfold during movement execution34. The process of motor imagery, on the other hand, is not an imminent preparation for movement, but rather rehearsal of the entire action. We did not observe strong preparatory responses in this experiment (perhaps due to array placement, Figure 1b), so we do not know how preparation for imagery might relate to preparation for action. Uncovering the full population-level organization of preparation-, imagery-, and action-related activity could help elucidate the processes by which cortex uses both overt and covert processes to improve motor skill.

The concept of covert motor imagery also invokes the related function of action observation. When people or animals observe others performing a motor action, it engages motor-related brain areas in a way similar to self-initiated movement15,35,36–42. This correlation between observation and action exists even in the activity of individual cortical neurons12,35,42–44, which supports the notion of a "mirror neuron" network through which the motor system can presumably learn new skills by observing others45. It is tempting to assume that observation and imagery/rehearsal are equivalent processes, and that observing an action triggers a person (or animal) to imagine performing the action themselves. Since the vast majority of intracortical observation-based experiments are performed with monkeys, it is often impossible to resolve the degree to which the animal is actively engaged in motor imagery. However, recent work in humans with tetraplegia found that observation and imagery are actually not equivalent10,11, and that it is possible to distinguish those two volitional states from population-level activity in motor cortex. Our observation of orthogonal subspaces containing action-unique and imagery-unique activity mirrors results from Jiang et al.15, who observed separate subspaces containing action-unique and observation-unique activity. This suggests that although observation and imagery can be considered distinct volitional states10,11, they might employ similar population-level mechanisms (i.e. orthogonal subspaces for output and non-output conditions). Because the task used in our study was isometric, we could not include a meaningful observation-only condition. In the future, including kinematic limb movements could help identify the degree of overlap (common versus unique subspaces) across a larger range of volitional conditions, including observation, imagery, action, and perhaps even replay during sleep46.

While the task-dependent nature of the action and imagery subspaces provides clear insight into their functional roles (e.g. the action-unique subspace includes output-related activity), interpretation of the shared subspace is more difficult. Activity in this subspace was nearly identical for both tasks, suggesting that it represents some higher-level, abstracted task objective. The separation of force levels within the shared subspace argues against the interpretation that it is highly nonspecific and reflects broad subject-state processes like arousal or engagement47,48. The shared responses also do not seem driven by visual feedback, as they consistently lead the executed force (and subsequent visual feedback; Supplementary Figures 5–6). Instead, the shared subspace appears to contain information related to the specific task goal (i.e. force level). We speculate that activity in the shared subspace corresponds to goal-oriented or "task-intention" signals from higher-order brain areas.

The shared subspace in particular may also be responsible for allowing the transfer of learning from covert practice (imagery) to actual motor performance. Work in monkeys found that activity within preparatory subspaces shared across volitional states can indeed help facilitate covert-to-overt learning of visuomotor rotations49. Similarly, a recent study in humans showed that movement preparation alone (without overt execution) can drive motor adaptation50. From an ethological perspective, the ability to imagine movements is only useful if it can meaningfully inform or assist overt motor control. Our results suggest that such transfer may be made possible in motor cortex by maintaining the full repertoire of population responses during imagery as is present during action. The responses related to motor output are reoriented into an orthogonal subspace, which allows the system to suppress actual motor output without changing the overarching dynamics. Further investigation of these faux-output responses—and their interactions with shared components—may give insight into how covert imagery can be used to drive skill learning or enhance rehabilitation following injury or disease.

Methods

Participants

Two participants (P2 and P3) took part in this study. The participants provided informed consent prior to performing any study-related procedures and were each compensated for the time committed to the ongoing clinical trial. The study was conducted under an Investigational Device Exemption from the Food and Drug Administration, approved by the Institutional Review Board at the University of Pittsburgh (Pittsburgh, PA), and registered at ClinicalTrials.gov (NCT01894802). The clinical trial is an early feasibility study with a primary outcome of evaluating the safety of an intracortical brain-computer interface for long-term neural recording and stimulation. The work presented here is a scientific effort to understand how neural activity during motor imagery, the basis of brain-computer interface devices, relates to that of overt movement. P2 is a 35-year-old man with tetraplegia caused by a C5 motor/C6 sensory ASIA B spinal cord injury. P3 is a 30-year-old man with tetraplegia caused by an incomplete C6/C7 ASIA B spinal cord injury. Both participants retain some residual upper arm and wrist control, but no hand function. The manual muscle test scores for wrist extension51 were 4− for both participants (full range of motion against gravity/mild resistance). Using ASIA exam sensory testing of the right side52, P2 had normal sensation from C2–C4, altered sensation for C5, no sensation for C6, and altered sensation for C7–T1. P3 had normal sensation from C2–C6 and altered sensation from C7–T2.

Both participants had two microelectrode arrays (Blackrock Microsystems, Salt Lake City, UT) implanted in the hand and arm areas of motor cortex (P2: two 88-channel arrays, P3: two 96-channel arrays). They also had two 64-channel arrays implanted in somatosensory cortex53, which were not used for this study. Data collection for P2 occurred approximately four years post-implant, and collection for P3 approximately one year post-implant.

Experimental Setup

For each experimental session, the participants placed their pronated right hands on a board on their laps. We then secured a load cell in a frame attached to the board, positioning it so that it made gentle contact with the top of the hand. At the start of each session, we asked participants to perform one maximal voluntary contraction (MVC). We then set initial low and high force targets based on the peak force observed during MVC (10% and 60%) and asked participants to practice by attempting each force level a few times. If a participant reported that the high force target was too high and that he would be unable to perform the task without significant pain or fatigue, we lowered it until it reached a comfortable level. This resulted in average low forces of 5%, 8%, 4%, 4%, 4%, and 4% (P2) and 11%, 11%, and 7% (P3), and high forces of 50%, 66%, 50%, 47%, 48%, and 48% (P2) and 51%, 52%, and 38% (P3) MVC. Once the force targets were set, we began the experiment, alternating blocks (12 trials each for P2, 10 trials each for P3) of action and imagery. Participant P2 performed three blocks of action and three blocks of imagery for all sessions. Participant P3 performed four blocks of each for sessions one and two, and five blocks for session three. During both action and imagery blocks, the participants' hands remained positioned within the force-sensing apparatus (Figure 1a). Within each block, we randomly interleaved low and high forces. An audible cue ("gentle" or "firm") indicated the upcoming target, and then at the "go" time, a red bar appeared at the target force level. The participants then attempted to achieve and maintain the target force, using the line trace of exerted force as feedback. Each session always began with action to provide a reference for the subsequent imagery. We excluded trials whose force traces significantly deviated from the cued profile.
On average, we excluded 6 ± 2 action trials and 8 ± 4 imagery trials (due to non-zero force output) per session.

Data acquisition

We collected neural data via digital NeuroPlex E headstages connected via fiber optic cable to two synced Neural Signal Processors (Blackrock Microsystems, Salt Lake City, UT). The neural signals were filtered using a 4th order 250 Hz high-pass filter, logged as threshold crossings (−4.5 RMS) and subsequently binned at 50 Hz. These binned counts were then convolved offline with a Gaussian kernel (σ = 200 ms) to provide a smoothed estimate of firing rate.
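This smoothing step can be sketched as follows (an illustrative reimplementation, not the study's pipeline code; it assumes counts arranged time × channels at the 50 Hz bin rate):

```python
import numpy as np

def smooth_rates(binned_counts, bin_width_s=0.02, sigma_s=0.2):
    """Convolve binned threshold-crossing counts (time x channels) with a
    Gaussian kernel (sigma = 200 ms) and convert counts/bin to rates in Hz."""
    sigma_bins = sigma_s / bin_width_s            # 200 ms sigma at 50 Hz bins = 10 bins
    half_width = int(np.ceil(4 * sigma_bins))     # truncate kernel at +/- 4 sigma
    t = np.arange(-half_width, half_width + 1)
    kernel = np.exp(-0.5 * (t / sigma_bins) ** 2)
    kernel /= kernel.sum()                        # unit area preserves total counts
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, binned_counts)
    return smoothed / bin_width_s                 # counts per bin -> spikes/s
```

With `mode="same"`, the edges of each trial are attenuated by the truncated kernel; the interior values match the causal-plus-anticausal Gaussian estimate.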

Dimensionality reduction

We sought to reduce the dimensionality of the neural population recordings by projecting the activity into a low-dimensional space. Principal components analysis (PCA) is a common approach for reducing dimensionality, but simply applying PCA to the combined action+imagery dataset could bias later results; the leading dimensions would preferentially capture action-related variance, since the action task contained higher overall variance. To ensure that the low-dimensional space encompassed both action and imagery responses equally, we first performed PCA separately on the condition-averaged firing rates for each dataset, which resulted in two orthonormal weight matrices, $W_{\text{action}} \in \mathbb{R}^{N \times D_{\text{action}}}$ and $W_{\text{imagery}} \in \mathbb{R}^{N \times D_{\text{imagery}}}$. Here $N$ is the number of channels ($N$ = 176 for P2, $N$ = 192 for P3) and $D_{\text{action}}$ and $D_{\text{imagery}}$ are the dimensions needed to capture at least 99% of the corresponding task-related variance. For all sessions of P2, $D_{\text{action}}$ = [17, 16, 16, 15, 15] and $D_{\text{imagery}}$ = [20, 19, 16, 21, 22, 20]. For P3, $D_{\text{action}}$ = [19, 18, 17] and $D_{\text{imagery}}$ = [31, 22, 31]. We then concatenated these two matrices into a new matrix $W_{\text{action+imagery}} \in \mathbb{R}^{N \times (D_{\text{action}} + D_{\text{imagery}})}$ and performed singular value decomposition,

$W_{\text{action+imagery}} = U \Sigma V^{T}$    (Equation 1)

This procedure provided $U \in \mathbb{R}^{N \times (D_{\text{action}} + D_{\text{imagery}})}$, an orthonormal basis spanning both $W_{\text{action}}$ and $W_{\text{imagery}}$. The resulting space overestimated the actual dimensionality of the combined action and imagery tasks, since there was a great deal of overlap between the two (Figure 2). This approach represented the most conservative choice, allowing for mild dimensionality reduction (which aided cross-session alignment and reduced computation in later analyses) while ensuring that the space contained all (>99%) meaningful variance for both tasks. We projected all data from both tasks into this space by multiplying the firing rate estimates by $U$.
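The basis construction above can be sketched in a few lines (an illustrative reimplementation under the stated 99% cutoff; the variable names are ours, and rates are assumed to be condition-averaged, time × channels):

```python
import numpy as np

def leading_pcs(X, cutoff=0.99):
    """Leading PCA axes (channels x d) capturing at least `cutoff` of the
    variance of condition-averaged rates X (time x channels)."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    d = int(np.searchsorted(np.cumsum(var), cutoff)) + 1
    return Vt[:d].T

def combined_task_basis(rates_action, rates_imagery, cutoff=0.99):
    """Orthonormal basis U spanning the leading PCs of both tasks, via SVD of
    the concatenated weight matrices (Equation 1)."""
    W = np.hstack([leading_pcs(rates_action, cutoff),
                   leading_pcs(rates_imagery, cutoff)])
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U          # orthonormal columns spanning both tasks' leading PCs
```

Latent activity is then obtained as `rates @ U`, and the same $U$ is applied to both tasks.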

Alignment index shuffled control

To provide a comparison for the alignment index values computed between action and imagery, we performed a shuffled control analysis. For all trials of a given force condition (low or high), we randomly reassigned the action/imagery labels and recomputed the alignment index between the new scrambled "action" and "imagery" conditions. We repeated this 10,000 times for each dataset. Without exception (100% of resamples), the shuffled control alignments were higher than the true action/imagery alignments, suggesting that the amount of overlap between action and imagery was lower than could be trivially explained by trial-by-trial variability.
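A sketch of this control is below. The alignment index itself is computed here in the style of ref. 16 (variance of one dataset captured in the top principal components of the other, normalized by the maximum variance any equal number of dimensions could capture); the trial-array layout and dimensionality are illustrative assumptions:

```python
import numpy as np

def alignment_index(X_a, X_b, d=10):
    """Fraction of X_a variance captured in the top-d PCs of X_b, normalized
    by the most variance any d dimensions could capture. X_* are
    trial-averaged responses, time x dimensions."""
    C_a = np.cov((X_a - X_a.mean(0)).T)
    _, _, Vt_b = np.linalg.svd(X_b - X_b.mean(0), full_matrices=False)
    U_b = Vt_b[:d].T                                  # top-d PCs of X_b
    top_eigs = np.sort(np.linalg.eigvalsh(C_a))[::-1][:d]
    return np.trace(U_b.T @ C_a @ U_b) / top_eigs.sum()

def shuffled_alignment(trials_a, trials_b, n_shuffles=10_000, d=10, seed=0):
    """Randomly reassign task labels across trials and recompute the alignment
    index between the scrambled condition averages (trials are arrays shaped
    n_trials x time x dims)."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([trials_a, trials_b])
    n_a = len(trials_a)
    out = np.empty(n_shuffles)
    for i in range(n_shuffles):
        idx = rng.permutation(len(pooled))
        out[i] = alignment_index(pooled[idx[:n_a]].mean(0),
                                 pooled[idx[n_a:]].mean(0), d)
    return out
```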

Cross-session alignment

For each dataset, we performed principal component analysis to reduce the dimensionality from channel space (176 channels for P2, 192 channels for P3) to a lower-dimensional latent space (see Methods: Dimensionality reduction). There is evidence that such low-dimensional (manifold) representations of the population activity remain consistent for a single behavior20, but to combine datasets from different sessions, it is necessary to first align the low-dimensional spaces54. We chose to align the low-dimensional latent activity across sessions for each participant using Generalized Procrustes Analysis (GPA)55. GPA iterates to find a multidimensional response common to all datasets and returns the axis transformation (an orthonormal rotation with uniform scaling) necessary to align each dataset with that common response. We found that this approach successfully aligned the responses, achieving a high degree of correlation across sessions (see Supplementary Figure 7). We used the cross-session average response to perform the subspace separation as described below.
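A minimal GPA sketch is shown below, assuming equal-shape latent trajectories per session; the formulation of ref. 55 may differ in details such as the scaling constraint:

```python
import numpy as np

def procrustes_rotation(X, ref):
    """Orthogonal matrix R minimizing ||X @ R - ref||_F (SVD solution)."""
    U, _, Vt = np.linalg.svd(X.T @ ref)
    return U @ Vt

def generalized_procrustes(datasets, n_iter=20):
    """Iteratively align a list of latent trajectories (each time x dims,
    same shape) to their evolving mean via orthonormal rotation with
    uniform scaling."""
    aligned = [X - X.mean(axis=0) for X in datasets]
    for _ in range(n_iter):
        reference = np.mean(aligned, axis=0)        # current consensus response
        for i, X in enumerate(aligned):
            XR = X @ procrustes_rotation(X, reference)
            scale = np.sum(XR * reference) / np.sum(XR * XR)  # uniform scaling
            aligned[i] = scale * XR
    return aligned
```

When two sessions differ only by a rotation of the latent axes, one pass already brings them into exact agreement; noisy data converge over the remaining iterations.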

Subspace separation

We aimed to identify, if possible, subspaces within the population activity that contained wholly task-specific variance (only variance during action or imagery). To achieve this, we implemented an optimization method that extends the concept of the alignment index16 to identify orthogonal subspaces containing the “unaligned” responses (i.e. variance during one task that appears in the trailing principal components of the opposite task).

The intuition behind the approach is as follows. Given a dataset containing two tasks (e.g. A and B), we can perform PCA on the neural data from just one of the tasks, $Z_A \in \mathbb{R}^{M \times N}$ ($M$ time points, $N$ latent dimensions), and identify the leading $D_{\text{potent}}^{A}$ dimensions that capture the vast majority of task A variance (we chose 99% as the cutoff, but other reasonable choices provide nearly equivalent results; see Supplementary Figure 8). This also yields $D_{\text{null}}^{A} = N - D_{\text{potent}}^{A}$ dimensions, which combined contain insignificant (e.g. <1%) variance for task A and can be considered task A null. However, we can also project activity from task B into this task A null space. If the null dimensions of task A contain a significant proportion of task B variance, we consider the activity within that subspace to be task B-unique. We can perform an equivalent procedure starting with PCA on task B activity to identify task A-unique activity.

The main challenge with this approach is that the null spaces from each task—i.e. Anull (containing B-unique activity) and Bnull (containing A-unique activity)—are computed by performing PCA separately on data from different task conditions, and so will not be mathematically orthogonal (due to noise introduced by low-variance components). However, we know that they are in fact functionally orthogonal. This is because A-unique activity resides in the potent space of task A (since it contains significant variance during task A), but the null space of task B. Likewise, B-unique activity resides in the null space of task A, but the potent space of task B. Therefore, since A-unique exists in the potent space of A and B-unique exists in the null-space of A, they must be orthogonal. We were able to obtain orthogonal subspaces containing the unique responses through a simple optimization, as described below for our specific action/imagery case.

We begin with latent activity $L \in \mathbb{R}^{M \times D_{\text{latent}}}$, containing $M$ time points and $D_{\text{latent}}$ dimensions (see Methods: Dimensionality reduction). For convenience, we also split this activity into task-specific data matrices $L_{\text{action}} \in \mathbb{R}^{M_{\text{action}} \times D_{\text{latent}}}$ ($M_{\text{action}}$ time points from the action task) and $L_{\text{imagery}} \in \mathbb{R}^{M_{\text{imagery}} \times D_{\text{latent}}}$ ($M_{\text{imagery}}$ time points from the imagery task).

We identified the imagery-null subspace $U_{\text{imagery-null}} \in \mathbb{R}^{D_{\text{latent}} \times D_{\text{imagery-null}}}$ by performing PCA on $L_{\text{imagery}}$ and keeping only the $D_{\text{imagery-null}}$ trailing dimensions, which together contained <1% of $L_{\text{imagery}}$ variance. We then performed PCA on $L_{\text{action}} U_{\text{imagery-null}}$ to obtain the subspace $V_{\text{action,imagery-null}} \in \mathbb{R}^{D_{\text{imagery-null}} \times D_{\text{action-unique}}}$, keeping the leading $D_{\text{action-unique}}$ dimensions (discarding trailing dimensions that together accounted for <1% of $L_{\text{action}}$ variance). We then multiplied $U_{\text{imagery-null}}$ and $V_{\text{action,imagery-null}}$ to obtain a single orthonormal subspace $Z_{\text{action-unique}} \in \mathbb{R}^{D_{\text{latent}} \times D_{\text{action-unique}}}$, which contained meaningful variance during action and no meaningful variance during imagery. Projecting activity from both tasks, $L$, into this subspace gave the data matrix $Y_{\text{action-unique}} \in \mathbb{R}^{M \times D_{\text{action-unique}}}$, a representation of the action-unique responses. We then used the same method to obtain the imagery-unique responses, $Y_{\text{imagery-unique}} \in \mathbb{R}^{M \times D_{\text{imagery-unique}}}$.
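The two-stage projection can be sketched as follows (illustrative; the per-dimension 1% retention criterion here approximates the cumulative-variance cutoff described above):

```python
import numpy as np

def pca_basis(X):
    """PCA axes (dims x dims, variance-ranked columns) and per-axis variance
    fractions of mean-centered X (time x dims)."""
    _, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt.T, s ** 2 / np.sum(s ** 2)

def task_unique_subspace(L_task, L_other, cutoff=0.01):
    """Orthonormal dimensions carrying meaningful variance during L_task but
    lying in the null space (trailing PCs) of L_other."""
    V_other, var_other = pca_basis(L_other)
    cum = np.cumsum(var_other)
    null_mask = (cum - var_other) >= (1 - cutoff)   # trailing low-variance dims
    U_null = V_other[:, null_mask]                  # other-task null space
    Lc = L_task - L_task.mean(axis=0)
    proj = Lc @ U_null                              # task activity in that null space
    V_in, var_in = pca_basis(proj)
    # keep dims whose variance is meaningful relative to the FULL task activity
    var_abs = var_in * np.sum(proj ** 2) / np.sum(Lc ** 2)
    return U_null @ V_in[:, var_abs > cutoff]       # D_latent x D_unique
```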

We used gradient descent via the Manopt toolbox56 to identify two orthonormal unique subspaces $Q_{\text{action-unique}} \in \mathbb{R}^{D_{\text{latent}} \times D_{\text{action-unique}}}$ and $Q_{\text{imagery-unique}} \in \mathbb{R}^{D_{\text{latent}} \times D_{\text{imagery-unique}}}$, which minimized the sum of squared residuals between $[Y_{\text{action-unique}},\ Y_{\text{imagery-unique}}]$ and $[L Q_{\text{action-unique}},\ L Q_{\text{imagery-unique}}]$, subject to $Q_{\text{action-unique}} \perp Q_{\text{imagery-unique}}$ (brackets indicate concatenation across dimensions). This optimization reconstructed the action-unique and imagery-unique responses while ensuring that they were contained within orthogonal subspaces.

The optimization above resulted only in subspaces containing task-unique responses. We defined the remaining subspace not spanned by the combination of these two unique subspaces to be the shared space, i.e.

$Q_{\text{shared}} \in \mathbb{R}^{D_{\text{latent}} \times (D_{\text{latent}} - D_{\text{action-unique}} - D_{\text{imagery-unique}})}$
$Q_{\text{shared}} \perp [Q_{\text{action-unique}},\ Q_{\text{imagery-unique}}]$

This shared subspace necessarily contains only dimensions for which meaningful variance exists during both tasks, or no meaningful variance for either task. Together, all of the subspaces $Q$ can be concatenated to form a single transformation $Q \in \mathbb{R}^{D_{\text{latent}} \times D_{\text{latent}}}$ that represents an orthonormal transformation of the original space. This approach to subspace identification contrasts with existing methods (e.g. demixed PCA57) that identify subspaces containing cross-condition variance (Supplementary Figure 9).

Feature extraction and subspace dimensionality estimation

With the latent responses split into shared and unique subspaces, we then found a compact representation of the underlying temporal components within each subspace using a varimax rotation. That is, we performed a varimax rotation on the baseline-centered multidimensional common response (cross-session average) for each subspace. We defined “baseline” as the average response during a 200 ms window at the start of the trial. We then ranked the varimax-rotated version of the multidimensional response by variance. As an orthogonal transformation, this varimax procedure—like PCA—does not change the underlying nature of the multidimensional responses, but rather highlights the separable temporal features within each subspace. For example, the first, second, and fifth dimensions in the action subspace (Figure 3c) are readily interpretable as onset, sustained, and offset responses, respectively.
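The varimax step and the subsequent variance ranking can be sketched with the standard SVD-based iteration (a generic implementation, not the authors' code; it assumes a baseline-centered response matrix, time × dims):

```python
import numpy as np

def varimax(X, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation: find an orthogonal R so that the columns of X @ R
    concentrate variance into distinct, interpretable temporal components.
    Returns the rotated components ranked by decreasing variance, plus R."""
    p, k = X.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = X @ R
        # standard varimax update: SVD of the gradient of the varimax criterion
        u, s, vt = np.linalg.svd(
            X.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0))))
        R = u @ vt
        d_new = np.sum(s)
        if d_new < d_old * (1 + tol):
            break
        d_old = d_new
    rotated = X @ R
    order = np.argsort(rotated.var(axis=0))[::-1]   # rank components by variance
    return rotated[:, order], R[:, order]
```

Because the rotation is orthogonal, the total variance of the response is unchanged; only its distribution across components is reshaped.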

For dimensionality estimation, as in Figure 2f–g, we calculated the number of dimensions within each subspace that explained more than 1% of variance during the appropriate task (action or imagery) for all of 1000 bootstrapped resamples of the trials. That is, for each bootstrapping run, we re-calculated the condition means using the bootstrapped selection of trials (all sessions), aligned the within-subject averages, and identified unique and shared subspaces. We performed a varimax rotation on each subspace response (we found that the ranked variances following the varimax rotation displayed a greater discontinuity in slope, i.e. "elbow", than did those resulting from PCA) and ranked the dimensions in decreasing order of variance. The reported dimensionality represents the number of dimensions for which the condition (action or imagery) variance exceeded 1% for all bootstrapped runs. True dimensionality is difficult to estimate and may differ significantly from the values reported here; however, we were mostly interested in a qualitative comparison of dimensionalities across subspaces rather than in the actual values. Different cutoffs and dimensionality estimation approaches provided similar main results, namely that the unique subspaces consistently displayed higher dimensionality than did the shared subspace.

Action-Imagery subspace alignment

Just as the low-dimensional subspace responses had to be aligned to make cross-session comparisons, so too did the action and imagery subspaces. We wanted to compare the similarity of the multidimensional temporal responses between the action and imagery subspaces. However, we could not simply compare, e.g., the first dimension of the action subspace with the first dimension of the imagery subspace, since the two sets of responses existed in orthogonal subspaces (see Methods: Subspace separation).

To compare the imagery and action subspace responses, we found an orthonormal transformation, Zim-act, of the imagery subspace that maximized the sum of the squared covariance between action responses in the action subspace and imagery responses in the imagery subspace.

$\max_{Z_{\text{im-act}}} \operatorname{Tr}\left(\left(X_{\text{action}}^{T} X_{\text{imagery}} Z_{\text{im-act}}\right)^{2}\right)$    (Equation 2)

We performed this optimization using the Manopt toolbox in Matlab56. Unlike the more commonly used canonical correlation analysis for aligning temporal components20,58, this rotation is orthonormal and represents a middle ground between maximizing correlation and returning leading components that explain a large amount of within-subspace variance.

Monte Carlo sampling

For the temporal response comparisons in Figure 3a, we wanted to quantify the similarity of the multidimensional responses in a way that did not depend on the chosen coordinate frame. For example, we provide in Figure 3b–d the specific correlation values for the displayed dimensions, but an orthogonal rotation of the space (which preserves the actual multidimensional relationship) would result in a different set of correlations.

To provide a coordinate-frame-agnostic quantification of two multidimensional responses, we performed a Monte Carlo sampling-based procedure in which we calculated the correlation between responses on randomly selected dimensions.

For multidimensional data $X_1$ and $X_2$ (dimensionality $d$):

  1. Generate a random $d$-dimensional unit vector $u_{\text{rand}}$.

  2. Project $X_1$ and $X_2$ onto $u_{\text{rand}}$ and calculate the correlation between the resulting projections:
    $c_i = \operatorname{corr}(X_1 u_{\text{rand}},\ X_2 u_{\text{rand}})$    (Equation 3)

  3. Repeat for 10,000 random vectors.

The resulting distribution of correlations $c$ provides an overall picture of the multidimensional correspondence between $X_1$ and $X_2$.
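The sampling procedure amounts to a short loop (an illustrative sketch applying Equation 3 per random direction; inputs are assumed to be time × dims response matrices of equal dimensionality):

```python
import numpy as np

def random_projection_correlations(X1, X2, n_samples=10_000, seed=0):
    """Coordinate-frame-agnostic comparison of two multidimensional responses:
    correlate their projections onto many random unit directions."""
    rng = np.random.default_rng(seed)
    d = X1.shape[1]
    cs = np.empty(n_samples)
    for i in range(n_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                  # random unit vector u_rand
        cs[i] = np.corrcoef(X1 @ u, X2 @ u)[0, 1]
    return cs
```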

Action-Imagery correlation control distributions

The Monte Carlo sampling-based method provided a quantification of the overall correlation between responses in the action and imagery subspaces following the imagery-action alignment (Methods: Action-Imagery subspace alignment). However, we also wanted to include an additional reference distribution that would help provide context for the resulting distribution of correlations (Figure 3a, bottom). Ideally, we aimed to clarify whether the relatively high correlations observed between the action and imagery subspaces were unique to the specific alignment (indicating a true alignment of similar components), or if they simply reflected broad, nonspecific modulation throughout the multidimensional space as a result of e.g. task timing.

The core question that we addressed with our control procedure was: for any given dimension, what is the maximum possible correlation between action and imagery if we remove the aligned imagery dimension? To implement this, we performed an additional optimization on each draw of the Monte Carlo routine (Methods: Monte Carlo sampling).

  1. Project the imagery subspace activity $X_{\text{imagery}}$ into the $(d-1)$-dimensional space orthogonal to $u_{\text{rand}}$ (i.e. the nullspace of $u_{\text{rand}}$) to obtain $X_{\text{im-}V\text{-null}}$.

  2. Find the unit vector $m_{\text{im-}V\text{-null}}$ that maximizes the correlation between $X_{\text{im-}V\text{-null}} m_{\text{im-}V\text{-null}}$ and $X_{\text{action}} u_{\text{rand}}$:
    $\max_{m_{\text{im-}V\text{-null}}} \operatorname{corr}\left(X_{\text{im-}V\text{-null}} m_{\text{im-}V\text{-null}},\ X_{\text{action}} u_{\text{rand}}\right)^{2}$    (Equation 4)

  3. Save the resulting (positive) correlation:
    $c_{\text{null},i} = \operatorname{corr}\left(X_{\text{im-}V\text{-null}} m_{\text{im-}V\text{-null}},\ X_{\text{action}} u_{\text{rand}}\right)$    (Equation 5)

The distribution of $c_{\text{null}}$ is therefore a strong control, since (unlike $c$) each element results from an independent optimization routine. However, even with that additional freedom, the values of $c_{\text{null}}$ were consistently lower than those in $c$. This indicates that the temporal structure of action activity along any given action dimension is uniquely mirrored by imagery activity along the corresponding matched imagery dimension.

Assessing dynamic novelty

To determine the dynamic overlap between unique and shared subspaces (Figure 4), we compared the extent to which the responses within one subspace could be fit via linear combinations of the responses in the other subspace. For example, in Figure 4a, we performed a simple linear regression to find a best fit to the action-unique components from shared subspace responses during action. The condition-specific variance explained by those fits (action variance for action-shared fits, imagery variance for imagery-shared fits) is reported for each session in Figure 4c. We obtained 95% confidence intervals by bootstrapping over trials with 1000 resamples.

To provide a more in-depth analysis of the novel dynamics within the action- and imagery-unique subspaces, we ranked the components within each unique subspace in order of increasing maximal correlation with the shared subspace (most novel first). To do this, we first performed the following optimization:

$\max_{u} \dfrac{\left\| X_{\text{shared}} \left( X_{\text{shared}}^{T} X_{\text{shared}} \right)^{-1} X_{\text{shared}}^{T} X_{\text{act}} u - X_{\text{act}} u \right\|^{2}}{\left\| X_{\text{act}} u \right\|^{2}}$    (Equation 8)

Intuitively, this finds the dimension u within the action-unique subspace activity Xact (mean-centered) that results in the poorest linear fit from Xshared to Xactu. The normal equation within the numerator identifies the optimal fit from Xshared to each projected component Xactu, and the maximization finds u for which that fit is least successful (lowest variance accounted for). We then projected the action-unique activity Xact into the nullspace of u and repeated. By doing this, we were able to assemble a full orthonormal transformation of the action-unique activity in which each successive transformed dimension reflected the next most novel dynamic response (poorest fit from Xshared). We performed this entire procedure for both unique subspaces as above, as well as for the shared subspace during action and imagery, optimizing with respect to the action-unique and imagery-unique subspaces, respectively.
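Because the objective in Equation 8 is a generalized Rayleigh quotient (residual energy after the shared-subspace fit, divided by total energy along $u$), the full novelty ranking can equivalently be obtained from a single generalized eigendecomposition. This numerical shortcut is ours, not the deflation procedure described above, and the resulting directions are $B$-orthogonal rather than orthonormal:

```python
import numpy as np

def rank_by_novelty(X_shared, X_act):
    """Rank directions of X_act (time x d) from most to least dynamically
    novel, i.e. worst linearly explained by X_shared. Returns novelty-ranked
    directions (columns) and the unexplained-variance fraction along each."""
    Xs = X_shared - X_shared.mean(axis=0)
    Xa = X_act - X_act.mean(axis=0)
    P = Xs @ np.linalg.pinv(Xs)                 # projector onto col(X_shared)
    A = Xa.T @ (np.eye(len(Xa)) - P) @ Xa       # residual energy after the fit
    B = Xa.T @ Xa                               # total energy along a direction
    # generalized symmetric eigenproblem A v = lambda B v via Cholesky whitening
    Linv = np.linalg.inv(np.linalg.cholesky(B))
    evals, W = np.linalg.eigh(Linv @ A @ Linv.T)
    order = np.argsort(evals)[::-1]             # largest unexplained ratio first
    return Linv.T @ W[:, order], evals[order]
```

The eigenvalues lie in [0, 1]: a value near 1 marks a direction whose dynamics the shared subspace cannot explain at all, and a value near 0 marks a direction fully captured by the shared responses.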

Force-specific responses

To identify the force-specific responses within each subspace, we first subtracted the mean response across force conditions and then performed varimax on the resulting trajectories. We estimated the dimensionality in a similar way as for the full responses (see Feature extraction and subspace dimensionality estimation). However, instead of using a cutoff based on the percentage of force-specific variance explained, we instead used the same actual variance cutoff as identified from the full condition response. That is, for the action condition we calculated the number of force-specific action-unique and shared dimensions that explained at least 1% of total action variance.

Force decoding

To probe the relationship between the action-unique/shared subspaces and executed force, we employed a simple Wiener cascade decoding model59. The Wiener cascade consists of a linear model followed by a static nonlinearity, and has been used previously to decode force and EMG responses60. The predictions were fully cross-validated using a leave-one-out approach for each session; i.e. for each trial, the model used to predict force was trained on all other trials from that session. We then concatenated all of the cross-validated predictions to report total R².
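One common construction of such a decoder is a least-squares linear stage over lagged neural features followed by a fitted polynomial nonlinearity. The sketch below is illustrative; the lag count, polynomial degree, and fitting details of the actual decoder used in the study may differ:

```python
import numpy as np

class WienerCascade:
    """Linear filter over lagged neural features followed by a static
    polynomial nonlinearity, both fit by least squares."""

    def __init__(self, n_lags=5, poly_degree=3):
        self.n_lags = n_lags
        self.poly_degree = poly_degree

    def _lagged(self, X):
        # stack current and past bins: (time, dims * n_lags), zero-padded history
        lags = [np.vstack([np.zeros((k, X.shape[1])), X[:len(X) - k]])
                for k in range(self.n_lags)]
        return np.hstack(lags)

    def fit(self, X, y):
        L1 = np.hstack([self._lagged(X), np.ones((len(X), 1))])   # bias column
        self.w = np.linalg.lstsq(L1, y, rcond=None)[0]            # linear stage
        lin = L1 @ self.w
        self.poly = np.polyfit(lin, y, self.poly_degree)          # static nonlinearity
        return self

    def predict(self, X):
        L1 = np.hstack([self._lagged(X), np.ones((len(X), 1))])
        return np.polyval(self.poly, L1 @ self.w)
```

In a leave-one-out scheme, `fit` would be called on all trials but one and `predict` on the held-out trial, concatenating predictions across trials before computing R².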

Supplementary Material

Supplementary Material

Acknowledgements

We would like to thank Nathan Copeland and Mr. Dom for their continued efforts and commitment to this study. We would also like to thank the research team, especially Debbie Harrington for regulatory management, as well as Caroline Schoenewald, Jordyn Ting, Devapratim Sarma, Amit Sethi, and Jeffrey Weiss for their help with data collection. Research reported in this publication was supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under award numbers UH3NS107714 and U01NS108922. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Footnotes

Competing Interests

The authors declare no competing interests.

Code Availability

Code central to the results presented in this manuscript is publicly available at https://github.com/pitt-rnel/action_imagery

Data Availability

Given potential sensitivity concerns, deidentified data from this study are posted on DABI, a repository for data related to the NIH Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Data for this specific sub-project can be found at https://doi.org/10.18120/70gm-a975 and are available upon request. A portion of the data included in this paper (action conditions only) was used in a previous publication61.

References

  • 1. Sirigu A et al. The Mental Representation of Hand Movements after Parietal Cortex Damage. Science 273, 1564–1568 (1996).
  • 2. Sirigu A et al. Congruent unilateral impairments for real and imagined hand movements. Neuroreport 6, 997–1001 (1995).
  • 3. Clark LV. Effect of Mental Practice on the Development of a Certain Motor Skill. Research Quarterly. American Association for Health, Physical Education and Recreation 31, 560–569 (1960).
  • 4. Frank C, Land WM, Popp C & Schack T. Mental Representation and Mental Practice: Experimental Investigation on the Functional Links between Motor Memory and Motor Imagery. PLOS ONE 9, e95175 (2014).
  • 5. Ladda AM, Lebon F & Lotze M. Using motor imagery practice for improving motor performance – A review. Brain and Cognition 150, 105705 (2021).
  • 6. Yue G & Cole KJ. Strength increases from the motor program: comparison of training with maximal voluntary and imagined muscle contractions. Journal of Neurophysiology 67, 1114–1123 (1992).
  • 7. Hotz-Boendermaker S et al. Preservation of motor programs in paraplegics as demonstrated by attempted and imagined foot movements. NeuroImage 39, 383–394 (2008).
  • 8. Jeannerod M. Motor Cognition: What Actions Tell the Self (OUP Oxford, 2006).
  • 9. Kilteni K, Andersson BJ, Houborg C & Ehrsson HH. Motor imagery involves predicting the sensory consequences of the imagined movement. Nature Communications 9, 1617 (2018).
  • 10. Rastogi A et al. Neural Representation of Observed, Imagined, and Attempted Grasping Force in Motor Cortex of Individuals with Chronic Tetraplegia. Scientific Reports 10, 1429 (2020).
  • 11. Vargas-Irwin CE et al. Watch, Imagine, Attempt: Motor Cortex Single-Unit Activity Reveals Context-Dependent Movement Encoding in Humans With Tetraplegia. Frontiers in Human Neuroscience 12 (2018).
  • 12. Cisek P & Kalaska JF. Neural correlates of mental rehearsal in dorsal premotor cortex. Nature 431, 993–996 (2004).
  • 13. Stephan KM et al. Functional anatomy of the mental representation of upper extremity movements in healthy subjects. Journal of Neurophysiology 73, 373–386 (1995).
  • 14. Aflalo T et al. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348, 906–910 (2015).
  • 15. Jiang X, Saggar H, Ryu SI, Shenoy KV & Kao JC. Structure in Neural Activity during Observed and Executed Movements Is Shared at the Neural Population Level, Not in Single Neurons. Cell Reports 32, 108006 (2020).
  • 16. Elsayed GF, Lara AH, Kaufman MT, Churchland MM & Cunningham JP. Reorganization between preparatory and movement population responses in motor cortex. Nature Communications 7, 13239 (2016).
  • 17. Kaufman MT, Churchland MM, Ryu SI & Shenoy KV. Cortical activity in the null space: permitting preparation without movement. Nat Neurosci 17, 440–448 (2014).
  • 18. Elsayed GF & Cunningham JP. Structure in neural population recordings: an expected byproduct of simpler phenomena? Nature Neuroscience 20, 1310–1318 (2017).
  • 19. Gallego JA, Perich MG, Miller LE & Solla SA. Neural Manifolds for the Control of Movement. Neuron 94, 978–984 (2017).
  • 20. Gallego JA, Perich MG, Chowdhury RH, Solla SA & Miller LE. Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience 23, 260–270 (2020).
  • 21. Remington ED, Narain D, Hosseini EA & Jazayeri M. Flexible Sensorimotor Computations through Rapid Reconfiguration of Cortical Dynamics. Neuron 98, 1005–1019.e5 (2018).
  • 22. Sadtler PT et al. Neural constraints on learning. Nature 512, 423–426 (2014).
  • 23. Shenoy KV, Sahani M & Churchland MM. Cortical Control of Arm Movements: A Dynamical Systems Perspective. Annual Review of Neuroscience 36, 337–359 (2013).
  • 24. Kaufman MT et al. The implications of categorical and category-free mixed selectivity on representational geometries. Current Opinion in Neurobiology 77, 102644 (2022).
  • 25. Russo AA et al. Motor Cortex Embeds Muscle-like Commands in an Untangled Population Response. Neuron 97, 953–966.e8 (2018).
  • 26. Shalit U, Zinger N, Joshua M & Prut Y. Descending systems translate transient cortical commands into a sustained muscle activation signal. Cerebral Cortex 22, 1904–1914 (2012).
  • 27. Albert ST, Hadjiosif A, Jang J, Krakauer JW & Shadmehr R. Holding the arm still through subcortical mathematical integration of cortical commands. bioRxiv 556282 (2019).
  • 28. Ryan ED & Simons J. Efficacy of Mental Imagery in Enhancing Mental Rehearsal of Motor Skills. Journal of Sport and Exercise Psychology 4, 41–51 (1982).
  • 29. Schack T, Essig K, Frank C & Koester D. Mental representation and motor imagery training. Frontiers in Human Neuroscience 8 (2014).
  • 30. Sheahan HR, Ingram JN, Žalalytė GM & Wolpert DM. Imagery of movements immediately following performance allows learning of motor skills that interfere. Sci Rep 8, 14330 (2018).
  • 31. Churchland MM et al. Neural population dynamics during reaching. Nature 487, 51–56 (2012).
  • 32. Perich MG et al. Motor cortical dynamics are shaped by multiple distinct subspaces during naturalistic behavior. bioRxiv 2020.07.30.228767 (2020) doi: 10.1101/2020.07.30.228767.
  • 33. Cisek P & Kalaska JF. Neural correlates of reaching decisions in dorsal premotor cortex: specification of multiple direction choices and final selection of action. Neuron 45, 801–814 (2005).
  • 34. Churchland MM, Cunningham JP, Kaufman MT, Ryu SI & Shenoy KV. Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron 68, 387–400 (2010).
  • 35. Dushanova J & Donoghue J. Neurons in primary motor cortex engaged during action observation. European Journal of Neuroscience 31, 386–398 (2010).
  • 36. Hari R et al. Activation of human primary motor cortex during action observation: A neuromagnetic study. Proceedings of the National Academy of Sciences 95, 15061–15065 (1998).
  • 37. Holmes P, Collins D & Calmels C. Electroencephalographic functional equivalence during observation of action. Journal of Sports Sciences 24, 605–616 (2006).
  • 38. Muthukumaraswamy SD & Johnson BW. Primary motor cortex activation during action observation revealed by wavelet analysis of the EEG. Clinical Neurophysiology 115, 1760–1766 (2004).
  • 39. Papadourakis V & Raos V. Neurons in the Macaque Dorsal Premotor Cortex Respond to Execution and Observation of Actions. Cerebral Cortex 29, 4223–4237 (2019).
  • 40. Rizzolatti G, Fadiga L, Gallese V & Fogassi L. Premotor cortex and the recognition of motor actions. Cognitive Brain Research 3, 131–141 (1996).
  • 41. Stefan K et al. Formation of a Motor Memory by Action Observation. J. Neurosci 25, 9339–9346 (2005).
  • 42. Tkach D, Reimer J & Hatsopoulos NG. Congruent Activity during Action and Action Observation in Motor Cortex. J. Neurosci 27, 13241–13250 (2007).
  • 43. Mazurek KA, Rouse AG & Schieber MH. Mirror Neuron Populations Represent Sequences of Behavioral Epochs During Both Execution and Observation. J. Neurosci 38, 4441–4455 (2018).
  • 44. Vigneswaran G, Philipp R, Lemon RN & Kraskov A. M1 Corticospinal Mirror Neurons and Their Role in Movement Suppression during Action Observation. Current Biology 23, 236–243 (2013).
  • 45. Rizzolatti G, Fogassi L & Gallese V. Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2, 661–670 (2001).
  • 46. Rubin DB et al. Learned Motor Patterns Are Replayed in Human Motor Cortex during Sleep. J. Neurosci 42, 5007–5020 (2022).
  • 47. Hennig JA et al. Learning is shaped by abrupt changes in neural engagement. Nat Neurosci 24, 727–736 (2021).
  • 48. Kaufman MT et al. The Largest Response Component in the Motor Cortex Reflects Movement Timing but Not Movement Type. eNeuro 3 (2016).
  • 49. Vyas S et al. Neural Population Dynamics Underlying Motor Learning Transfer. Neuron 97, 1177–1186.e3 (2018).
  • 50. Kim OA, Forrence AD & McDougle SD. Motor learning without movement. Proceedings of the National Academy of Sciences 119, e2204379119 (2022).
  • 51. Kendall FP, McCreary EK & Provance PG. Muscles: Testing and Function 4th edn (Williams & Wilkins, 1993).
  • 52. Kirshblum SC et al. International standards for neurological classification of spinal cord injury (Revised 2011). J Spinal Cord Med 34, 535–546 (2011).
  • 53. Flesher SN et al. Intracortical microstimulation of human somatosensory cortex. Science Translational Medicine 8, 361ra141 (2016).
  • 54. Degenhart AD et al. Stabilization of a brain–computer interface via the alignment of low-dimensional spaces of neural activity. Nat Biomed Eng 4, 672–685 (2020).
  • 55. Gower JC. Generalized procrustes analysis. Psychometrika 40, 33–51 (1975).
  • 56. Boumal N, Mishra B, Absil P-A & Sepulchre R. Manopt, a Matlab Toolbox for Optimization on Manifolds. Journal of Machine Learning Research 15, 1455–1459 (2014).
  • 57. Kobak D et al. Demixed principal component analysis of neural population data. eLife 5, 1–36 (2016).
  • 58. Jude J, Perich MG, Miller LE & Hennig MH. Robust alignment of cross-session recordings of neural population activity by behaviour via unsupervised domain adaptation. Preprint at https://doi.org/10.48550/arXiv.2202.06159 (2022).
  • 59. Glaser JI et al. Machine Learning for Neural Decoding. eNeuro 7 (2020).
  • 60. Westwick DT, Pohlmeyer EA, Solla SA, Miller LE & Perreault EJ. Identification of Multiple-Input Systems with Highly Coupled Inputs: Application to EMG Prediction from Multiple Intracortical Electrodes. Neural Computation 18, 329–355 (2006).
  • 61. Balasubramanian K, Arce-McShane FI, Dekleva BM, Collinger JL & Hatsopoulos NG. Propagating motor cortical patterns of excitability are ubiquitous across human and non-human primate movement initiation. iScience 26 (2023).
