eLife. 2021 May 11;10:e60988. doi: 10.7554/eLife.60988

Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice

Yu Takagi 1,2,3, Laurence Tudor Hunt 2,4, Mark W Woolrich 2,4, Timothy EJ Behrens 4,5,†, Miriam C Klein-Flügge 1,4,†
Editors: J Matias Palva6, Joshua I Gold7
PMCID: PMC8143794  PMID: 33973522

Abstract

Choices rely on a transformation of sensory inputs into motor responses. Using invasive single neuron recordings, the evolution of a choice process has been tracked by projecting population neural responses into state spaces. Here, we develop an approach that allows us to recover similar trajectories on a millisecond timescale in non-invasive human recordings. We selectively suppress activity related to three task-axes, relevant and irrelevant sensory inputs and response direction, in magnetoencephalography data acquired during context-dependent choices. Recordings from premotor cortex show a progression from processing sensory input to processing the response. In contrast to previous macaque recordings, information related to choice-irrelevant features is represented more weakly than choice-relevant sensory information. To test whether this mechanistic difference between species is caused by extensive over-training common in non-human primate studies, we trained humans on >20,000 trials of the task. Choice-irrelevant features were still weaker than relevant features in premotor cortex after over-training.

Research organism: Human

Introduction

For many decades, neuroscientists have studied task-dependent response properties of individual neurons, and this has laid the groundwork for our current understanding of brain function. More recently, a major shift from looking at individual neurons to studying population responses has begun to shed light on larger-scale neural dynamics, thus providing insight into previously hidden circuit mechanisms (Cunningham et al., 2014; Yuste, 2015). Usually, in this approach, the major axes of variation of a neural population are defined and the population activity is projected into this neural state space. This new way of looking at neural firing rates has begun to revolutionize our understanding of various neural processes, including the evolution of choice in parietal and frontal cortices (Harvey et al., 2012; Mante et al., 2013; Morcos and Harvey, 2016; Raposo et al., 2014), the critical stages of movement preparation and reaching in premotor and motor cortices (Churchland et al., 2012; Kaufman et al., 2014; Li et al., 2016), and the mechanisms underlying working memory in prefrontal cortex (PFC) (Murray et al., 2017).

Yet deriving neural population trajectories requires invasive recordings because the population vector is constructed from the firing rates of individual neurons. As a consequence, no comparable non-invasive technique for use in healthy human participants has been established to date. Here, we asked whether we could recover ‘population-like’ trajectories in humans, and thus gain insight into the unfolding of choice on a millisecond basis, by using repetition suppression in magnetoencephalography (MEG) recordings to selectively suppress the responses of neural populations tuned to different features.

Repetition suppression (or ‘adaptation’) takes advantage of a feature first observed over 50 years ago (Gross et al., 1967), showing that the activity of a neuron will be suppressed when it is repeatedly exposed to features it is sensitive to. This phenomenon has since been widely replicated and shown to be a robust property of neurons in single-unit recordings (for review, see Barron et al., 2016a). Importantly, the bulk signal measured from thousands or millions of neurons using techniques such as electroencephalography (EEG), MEG, or functional magnetic resonance imaging (fMRI) will also show suppression if a subset of these neurons is sensitive to a repeated feature. Thus, repetition suppression provides insight into the activity of specific subpopulations of neurons in these non-invasive recordings available for use in humans and, in that way, allows us to examine the underlying neural mechanisms.

In this study, we focus on choice processes unfolding in dorsal premotor cortex (PMd). PMd is the key region for choosing hand digit responses (Dum and Strick, 2002; Rushworth et al., 2003), the response modality in our choice task. Thus, the neural representations of interest are located within one brain region. Given this spatial focus, repetition suppression provides the best resolution achievable using non-invasive MEG: a single sensor or voxel is sufficient to reveal feature-processing using repetition suppression. By contrast, multivariate approaches rely on spatial patterns detected across sensors, which would not offer the required spatial scale.

Here, we extend the repetition suppression framework in one crucial way: we suppress the MEG signal to multiple different features within the same experiment. Adaptation along each feature can be conceptualised as ‘squashing’ the neural response along one task dimension, or task-axis. This resembles the task-axes of multi-dimensional state spaces derived from recordings of many neurons, but uses an experimental manipulation of a univariate, rather than multivariate, signal. Thus, we ask whether repetition suppression along multiple features can mimic projections onto multiple task-axes. If so, this would be the closest we can get to measuring multiple cellular-level representations within a single brain region in humans, with the millisecond temporal resolution afforded by MEG.

Our first key result shows that it is possible, using repetition suppression, to simultaneously induce adaptation along multiple task-axes in non-invasive human recordings. The recovered human choice traces show a progression from a processing of choice inputs to a processing of the motor response, just like in invasive recordings from non-human primates (NHPs) (Mante et al., 2013).

Once the feasibility of our non-invasive adaptation approach was established, we asked whether the mechanisms of choice uncovered in NHPs also generalize to humans. More precisely, we examined whether the mechanisms for input selection during choice were comparable between the two species. A large body of evidence in humans and non-human primates shows that selection of relevant sensory inputs occurs through top-down modulation from prefrontal and parietal regions onto early sensory regions (Buschman and Miller, 2007; de Lange et al., 2010; Desimone and Duncan, 1995; Egner and Hirsch, 2005; Everling et al., 2002; Friston, 2005; Friston and Kiebel, 2009; Gazzaley et al., 2007; Kok et al., 2016; Michalareas et al., 2016; Moore and Zirnsak, 2017; Noudoost et al., 2010; Pezzulo and Cisek, 2016; Rao and Ballard, 1999; Richter et al., 2017; Squire et al., 2013; van Wassenhove et al., 2005; Wyart et al., 2015). However, two recent perceptual decision-making studies in macaques found that irrelevant sensory inputs are not filtered out before the integration stage (Mante et al., 2013; Siegel et al., 2015). To address this discrepancy, we first had naive human participants perform context-dependent choices in the same perceptual decision-making task used in macaques. This showed that information about both relevant and irrelevant input dimensions was present in premotor cortex, but irrelevant inputs were represented more weakly than relevant inputs, consistent with top-down suppression.

One major difference in terms of how human and animal studies are conducted, however, is that animals are trained for thousands of trials before neural recordings are performed. Differences in input selection mechanisms could therefore be a consequence of brain circuit reorganization caused by extensive over-training. In an attempt to reconcile our findings with those obtained in macaques, we next mirrored NHP training conditions, and our human participants underwent extensive training on over 20,000 trials of the task. However, this did not change the input selection mechanisms evident in the MEG recordings taken afterwards. Irrelevant inputs were present, but still more weakly than relevant inputs. This was true when examining information processed in premotor cortex, or when decoding from whole-brain activity.

Results

Participants (n = 22) performed a modified version of a dynamic random-dot-motion (RDM) decision task, in which stimuli simultaneously contained information about colour (red vs. green) and motion (left vs. right, as in Mante et al., 2013). An additional flanker stimulus, presented 150 ms before the RDM stimulus, instructed participants about the relevant stimulus dimension: arrows indicated that the choice in the current trial was about the dominant motion direction, whereas coloured dots instructed participants to respond based on the dominant colour (Figure 1A).

Figure 1. Experimental task involving manipulation of specific choice features.


(A) Human participants performed a perceptual choice task adapted from the macaque version in Mante et al., 2013 while magnetoencephalography (MEG) data were recorded. Each trial involved two random-dot-motion stimuli – an adaptation stimulus (AS) and a test stimulus (TS). A flanker cue (coloured dots or arrows) instructed which choice dimension, colour or motion, to attend to for making a choice. Responses were given using the right-hand index or middle finger for green/left and red/right stimuli, respectively. By presenting two choices with varying features in rapid succession, we selectively suppressed the subset of neurons sensitive to repeated features. To maximize suppression effects, AS colour and motion were strong compared to the TS (70% compared to 25.6 or 12.8% motion coherence or colour dominance). Feedback at the end of each trial related to performance on the TS. In total, there were 64 conditions: 4 AS × 2 contexts × 2 directions × 2 colours × 2 coherence levels. The rationale for the selective suppression of choice features is illustrated for two examples in (B) and (C). (B) The top and bottom rows show two combinations of AS and TS that were compared to extract colour suppression. At the time of the TS (the focus of all analyses), the stimulus is identical, containing predominantly red colour and left-ward motion. If preceded by a red AS (top row), any red-coding neurons will show a reduced signal at the time of the TS (red dots), but any other neurons will show the same response (grey dots). Thus, the overall MEG signal will be reduced compared to a situation where the preceding AS stimulus does not share any features with the TS (green; bottom row). This suppression effect, i.e., the difference in the MEG signal for two identical TS as a function of their preceding AS (arrow), can be captured in a time-resolved manner, thus showing not only whether but also when colour is being processed.
This experimental repetition suppression manipulation can be conceptualized as a projection (or ‘squashing’) of the MEG signal onto the axis that captures variation in colour processing. (C) The sequence of stimuli shown in (B) can also be used to probe response suppression. When participants are attending to colour at the time of the TS, a middle finger response will be repeated in the top example but not in the bottom. If they are attending to motion, an index finger response will be repeated in the bottom but not in the top example. The respective differences (arrows) thus provide a time-resolved measure of response suppression, analogous to projections of neurons onto a response axis.
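As a quick sanity check, the factorial design described in the caption can be enumerated in a few lines (the condition labels below are illustrative, not taken from the study's code):

```python
from itertools import product

# Hypothetical labels for the factorial design: 4 single-feature adaptation
# stimuli crossed with 2 contexts x 2 motion directions x 2 colours x 2
# coherence levels of the test stimulus.
adaptation_stimuli = ["red", "green", "left", "right"]
contexts = ["colour", "motion"]
directions = ["left", "right"]
colours = ["red", "green"]
coherences = [25.6, 12.8]

conditions = list(product(adaptation_stimuli, contexts,
                          directions, colours, coherences))
print(len(conditions))  # 4 x 2 x 2 x 2 x 2 = 64 conditions
```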

MEG activity can track the temporal evolution of a choice process

When recording activity from premotor cortex in NHPs performing the same task, neural activity shows a clear evolution from an initial processing of the choice input (e.g., colour in a colour trial) to a later processing of the choice direction (motor response: right or left; Mante et al., 2013). Our first analysis, therefore, aimed to establish whether the time-resolved nature of MEG data would allow us, in a similar way, to watch how a choice evolved from the processing of inputs to the processing of the motor response in human premotor cortex.

In order to get a handle on neurons that process colour, motion, or response in MEG recordings measuring the summed activity of many neurons, we incorporated an additional task manipulation. The main RDM stimulus on each trial (‘test stimulus’: TS) was preceded by another RDM stimulus, the ‘adaptation stimulus’ (AS), so that each trial contained two RDM stimuli in quick succession (Figure 1A). Unlike the TS, the AS contained only one input feature (strong red or strong green colour, or strong leftward or strong rightward motion).

Participants responded to both stimuli using the corresponding finger of the right hand (index finger for leftward motion or green colour, middle finger for rightward motion or red colour). We hypothesized that, because of the preceding AS, MEG activity recorded at the time of the TS would show suppressed responses to features already engaged at the time of the AS. For example, when a red and leftward TS was preceded by a red AS, MEG activity measured at the time of the TS would still contain the activity of neurons responding to leftward motion, but neurons responding to the red colour input would be suppressed, compared to a situation where the AS was green (Figure 1B). Furthermore, if participants were asked to attend to the colour of the TS, they would produce a middle finger response twice in quick succession (to AS and TS, both red), and thus the ‘response-direction (middle finger)’ selective neurons would also be suppressed (Figure 1C). This would not be the case for the identical stimuli when leftward motion was attended at the time of the test stimulus. In this case, only colour-sensitive neurons, but not response-selective neurons, would be selectively suppressed at the time of the TS. In summary, by creating and comparing situations with and without suppression for input (colour or motion) and response, we aimed to establish whether we could measure the transition of premotor cortex activity from processing the input to processing the choice output using non-invasive human MEG.
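The suppression logic described above can be sketched as a small labelling function (all names are hypothetical; the feature-to-finger mapping follows the task description):

```python
def suppression_flags(as_feature, ts_colour, ts_motion, ts_context):
    """Label which neural populations should be suppressed at TS time.

    as_feature: the single strong feature of the AS, e.g. 'red' or 'left'.
    ts_colour/ts_motion: dominant TS colour and motion direction.
    ts_context: 'colour' or 'motion' (which TS dimension is attended).
    Names are illustrative, not taken from the study's analysis code.
    """
    # Right-hand finger mapping from the task: index = left/green, middle = right/red.
    finger = {"left": "index", "green": "index",
              "right": "middle", "red": "middle"}

    colour_suppressed = as_feature == ts_colour
    motion_suppressed = as_feature == ts_motion
    # The AS response uses the finger for its single feature; the TS response
    # uses the finger for whichever TS dimension the context makes relevant.
    ts_relevant = ts_colour if ts_context == "colour" else ts_motion
    response_suppressed = finger[as_feature] == finger[ts_relevant]
    return colour_suppressed, motion_suppressed, response_suppressed

# Example from the text: red AS, then a red/leftward TS in the colour context
# -> colour-coding and middle-finger populations suppressed, motion not.
print(suppression_flags("red", "red", "left", "colour"))  # (True, False, True)
# Same stimuli with motion attended -> only colour suppression remains.
print(suppression_flags("red", "red", "left", "motion"))  # (True, False, False)
```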

We focused this analysis on the time-course of activity from an area in left premotor cortex (PMd). PMd was identified by first projecting the MEG scalp signal back into source space, which relies on linearly combining MEG sensors to create ‘virtual sensors’ reflecting the signal from specific cortical locations (linearly constrained minimum variance (LCMV) beamforming; see 'Materials and methods'). At the source level, we then computed a contrast between TS trials with repetition suppression to any input (colour or motion) or response at any time point between [−250,750] ms around the test stimulus and TS trials containing no such suppression (see 'Materials and methods'). This comparison made use of all trials in the experiment and matched the two sides of the contrast in terms of the visual stimulus properties of the TS and the required motor response. The only difference between the two sides of the contrast was which AS preceded a given TS and whether, as a result, we expected repetition suppression to a shared feature or not. Consequently, this contrast was agnostic to any differences between input and response suppression, any differences between relevant and irrelevant input suppression, and any timing differences between these effects. PMd, the premotor region responsible for selecting hand motor responses, was our a priori region of interest for this study. Its role in selecting finger responses is equivalent to the role of the frontal eye fields (FEF) for guiding eye movement responses, the region where NHP recordings were performed (Mante et al., 2013). Indeed, left PMd, in a cluster together with left M1, was the strongest peak at the whole-brain level for this contrast (p(FWE)=0.018 cluster-level corrected; left M1: z = 6.10, peak MNI coordinate x=-36, y=-18, z = 48; left PMd: z = 5.02, peak MNI coordinate x=-37, y=-6, z = 55; right PMd: z = 3.11 at x = 37, y = -6, z = 55; Figure 2A).
A further analysis performed on a 38-region parcellation using beamforming with orthogonalization (Colclough et al., 2016; Colclough et al., 2015) confirmed that the linear regression coefficients for both input and response suppression were strongest in the parcel containing premotor cortex (Figure 2—figure supplement 1).
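As a rough illustration of how a ‘virtual sensor’ is formed, the textbook LCMV beamformer combines sensors with weights w = C⁻¹l / (lᵀC⁻¹l), where C is the sensor covariance and l the leadfield of the target location. The sketch below uses random stand-in data, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_times = 30, 500
data = rng.standard_normal((n_sensors, n_times))  # stand-in MEG sensor data
leadfield = rng.standard_normal(n_sensors)        # stand-in leadfield for one source

# Standard LCMV weights: w = C^{-1} l / (l^T C^{-1} l). This minimises
# output variance subject to the unit-gain constraint w^T l = 1.
C = np.cov(data)
Cinv_l = np.linalg.solve(C, leadfield)
w = Cinv_l / (leadfield @ Cinv_l)

virtual_sensor = w @ data  # source-level ('virtual sensor') timeseries
print(np.isclose(w @ leadfield, 1.0))  # unit-gain constraint holds: True
```

The unit-gain constraint is what makes the output interpretable as the signal from the targeted cortical location while variance from elsewhere is suppressed.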

Figure 2. The evolution of choice in human premotor cortex.

(A) Dorsal premotor cortex (PMd) was the a priori region of interest for this study. Indeed, using source localization and a contrast between trials containing any input or response adaptation, compared to no adaptation, revealed a cluster involving M1 and left PMd (group contrast, whole-brain cluster-level FWE-corrected at p<0.05 after initial thresholding at p<0.001; PMd peak coordinate: x=-37, y=-6, z = 55). Importantly, this contrast was agnostic and orthogonal to any differences between input and response adaptation, relevant and irrelevant input adaptation, or pre- vs. post-training effects (including any timing differences, as beamforming was performed in the time window [−250, 750] ms). (B) The time-resolved nature of magnetoencephalography (MEG), combined with a selective suppression of different choice features, allowed us to track the evolution of the choice on a millisecond timescale. Linear regression coefficients from a regression performed on data from left premotor cortex (PMd) demonstrate an early representation of context (motion or colour) and switch (attended dimension same or different from TS) from ~8 ms (sliding window centred on 8 ms contains 150 ms data from [−67,83] ms). The representation of inputs, as indexed using repetition suppression (RS), emerged from around 108 ms (whether or not the input feature, colour, or motion was repeated). Finally, the motor response, again indexed using RS (same finger used to respond to adaptation stimulus [AS] and test stimulus [TS] or not), and TS choice direction (left or right, unrelated to RS) were processed from 275 ms. *p<0.001; error bars denote SEM across participants; black line denotes group average. (C) The distribution of individual peak times across the 22 participants directly reflects this evolution of the choice process. In particular, it shows significant differences in the processing of input and response, consistent with premotor cortex transforming sensory inputs into a motor response.

Figure 2—source data 1. Contains 'pre' and 'post' [Time x Regressors x Subjects] for Figure 2B.
Figure 2—source data 2. Contains 'dat' [Regressors x Subjects] for Figure 2B.


Figure 2—figure supplement 1. Premotor cortex processes choice inputs and outputs.


Dorsal premotor cortex (PMd) was the a priori region of interest for this study. As shown in Figure 2A, beamforming for source localization identified a cluster involving left PMd from a contrast probing any input or response adaptation across both pre- and post-training sessions. Here we show an additional parcellation into 38 regions using beamforming, which confirmed that the linear regression coefficients for both input and response suppression were strongest in the parcel containing premotor cortex (PMd(M1-S1)-L). Colours denote the maximum linear regression coefficient in the range [−500,1150] ms. Unlike in Figure 6, this analysis used the same regressors used to examine PMd responses in Figures 2–4, with input and response regressors capturing repetition suppression effects.
Figure 2—figure supplement 1—source data 1. Contains 'pre' and 'post' [ROIs x Regressors].
Figure 2—figure supplement 2. Simulations show sufficient parameter recovery given the experimental design.


To test whether we could recover linear regression coefficients using the same design matrix as used in the experiment, we first simulated magnetoencephalography (MEG) signals using y = Xb + εp, where y is a vector of size T × 1, X is the design matrix of a specific subject in the experiment of size T × M, b is the vector of true linear regression coefficients of size M × 1, T is the number of trials, M is the number of regressors, ε is Gaussian noise drawn from N(0,1), and p is the noise multiplication factor. We set b, the true linear regression coefficients, to [250, 150, 100, 150, 150, 200]. These are analogous to the six regressors in the main analyses (Context, Relevant adaptation [RelvAdpt], Irrelevant adaptation [IrrelAdpt], Response adaptation [RespAdpt], Switch, and Choice). We varied the noise multiplication factor between [1, 10, 50]. We used a regularized regression to mimic the regression procedure used for the real data. We varied the regularization term (λ) between [0.00001, 0.00005, 0.0001, 0.0005] and used the optimal one in each case. We repeated this analysis for all subjects and averaged the estimated linear regression coefficients across subjects. We found that for all noise levels (A for p=1 [λ = 0.00001]; B for p=10 [λ = 0.0001]; C for p=50 [λ = 0.0005]), we could accurately recover the true values. The horizontal line indicates the median. The bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The circles denote outliers and the whiskers extend to the most extreme data points not considered as outliers.
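The simulation recipe described here can be reproduced in outline. Note the design matrix below is random noise rather than a subject's actual design matrix, so this only illustrates the estimator, not the recovery result reported in the figure:

```python
import numpy as np

rng = np.random.default_rng(0)

# T is a stand-in trial count; the real X came from each subject's design.
T, M = 2000, 6
betas_true = np.array([250.0, 150.0, 100.0, 150.0, 150.0, 200.0])
p, lam = 1.0, 1e-5  # noise multiplier and ridge penalty, as in the text

X = rng.standard_normal((T, M))                   # stand-in design matrix
y = X @ betas_true + rng.standard_normal(T) * p   # y = Xb + eps * p

# Ridge (L2-regularised) estimate: (X^T X + lambda*I)^{-1} X^T y
betas_est = np.linalg.solve(X.T @ X + lam * np.eye(M), X.T @ y)
print(np.round(betas_est))  # close to [250, 150, 100, 150, 150, 200]
```

With correlated regressors, as in the real design matrix, recovery degrades gracefully with noise, which is what the parameter-recovery simulations in this supplement verify.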
Figure 2—figure supplement 2—source data 1. Contains 'a_est', 'b_est', 'c_est' [Subjects x Regressors], and 'betas_true' [Regressors x 1].
Figure 2—figure supplement 3. Simulations show sufficient independence between estimated linear regression coefficients.


In addition to showing sufficient parameter recovery (Figure 2—figure supplement 2), we probed the correlation among estimated linear regression coefficients. Even if the true coefficients were not correlated, estimated coefficients may show artificial dependencies due to correlations present in the design matrix. To investigate this possibility, we generated synthetic data using y = Xb + εp, as before. This time, we set all values in b to the same value of 100. Critically, we added Gaussian noise drawn from N(0,5) to each linear regression coefficient separately. We repeated this procedure 100 times for each subject and calculated the correlation between estimated linear regression coefficients for the five regressors within a given subject. We set p and λ to 50 and 0.0005, respectively, as in the previous simulation. We averaged the obtained correlation matrices across all subjects. We found that the correlation across estimated linear regression coefficients (B) was lower than expected, given the correlations present in the design matrix (A). Again, we confirmed that linear regression coefficients were recoverable for all regressors (C; scatter plots showing results from all subjects; R corresponds to Pearson’s R between estimated and true linear regression coefficients).
Figure 2—figure supplement 3—source data 1. Contains 'a', 'b' [Regressors x Regressors], 'c_betas_true' and 'c_betas_pred' [Repetition x Regressors x Subjects].

To examine how the choice evolved over time in PMd, we extracted the timeseries from left PMd (x=-37, y=-6, z = 55) and used an L2-regularized linear regression (ridge regression) containing regressors that described which task variables were being suppressed on a trial-by-trial basis (see 'Materials and methods'). The regression was applied to each time point around the presentation of the TS ([−500,1200] ms) using a sliding-window approach (window size: 150 ms). Simulations showed that all regressors were estimable despite dependencies between some (maximum shared variance: r2 = 0.34; Figure 2—figure supplements 2 and 3). Figure 2B demonstrates the temporal evolution of the choice for each of the regressors. The two contextual variables ‘context’ (motion or colour instruction flankers) and ‘switch’ (attended dimension same or different from AS) showed the earliest significant representation, and this response was sustained throughout the TS presentation (both first significant at 8 ms). This is unsurprising as contextual information became available 150 ms before the TS. Both of these variables were unrelated to the repetition suppression manipulation. Slightly later, starting from 108 ms, the input representation emerged (probing whether or not the same input feature was already shown in the AS and thus suppressed). The regressors represented latest in PMd were the motor response (probing whether the same finger was/was not used to respond to the AS and thus suppressed) and the choice direction (left or right; both starting from 275 ms). Input and response regressors infer sensory and motor processing using the repetition suppression manipulation; choice direction, by contrast, simply captures the TS response direction, independent of any repetition.
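A minimal sketch of the sliding-window ridge regression follows. The window length and penalty values follow the text; the data and design matrix are random stand-ins, so only the mechanics are illustrated:

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_regressors = 500, 6
times = np.arange(-500, 1201)  # ms around TS onset, 1 ms resolution
X = rng.standard_normal((n_trials, n_regressors))      # trial-wise design matrix
signal = rng.standard_normal((n_trials, times.size))   # PMd timeseries per trial

lam, half_win = 1e-4, 75  # ridge penalty; 150 ms sliding window

def sliding_betas(X, signal, times, lam, half_win):
    """Ridge-regress the windowed mean signal onto X at each time point."""
    betas = np.empty((times.size, X.shape[1]))
    XtX = X.T @ X + lam * np.eye(X.shape[1])  # fixed across time points
    for i, t in enumerate(times):
        mask = (times >= t - half_win) & (times <= t + half_win)
        y = signal[:, mask].mean(axis=1)      # average within the 150 ms window
        betas[i] = np.linalg.solve(XtX, X.T @ y)
    return betas

betas = sliding_betas(X, signal, times, lam, half_win)
print(betas.shape)  # (1701, 6): one coefficient timecourse per regressor
```

Each row of the output is one time point; plotting a column against time gives the coefficient timecourses shown in Figure 2B.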

Statistical tests performed on the peaks of the linear regression coefficients obtained for the five regressors showed a significant difference in processing latency (one-way repeated-measures ANOVA: F(4, 84)=128.71, p=5.5154e-35, η2p = 0.86; Figure 2C), and a pairwise post-hoc test between the two critical regressors, input and response repetition suppression, confirmed a significant timing difference, with the choice input representations preceding the response representations (pairwise t-test on peak linear regression coefficients for input [429.5 ± 35 ms; mean ± SD] and response [531.1 ± 41.6 ms]: t(21)=11.07, p=3.1654e-9, after Bonferroni familywise error correction; Hedges’ g = 2.577, 95% CI = [1.8647, 3.4166]; see 'Materials and methods'; Figure 2C). Thus, activity recorded using MEG repetition suppression in human premotor cortex can track the evolution of a choice from sensory input to motor response.
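The peak-latency comparison can be illustrated on synthetic coefficient timecourses with the reported means and spreads (stand-in data only; this reproduces the logic of the test, not the published result):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)

# Synthetic per-subject coefficient timecourses: input coefficients peak
# earlier than response coefficients, as reported (~430 ms vs. ~531 ms).
n_subjects = 22
times = np.arange(-500, 1201)
input_peaks, response_peaks = [], []
for _ in range(n_subjects):
    t_in = rng.normal(430, 35)
    t_resp = rng.normal(531, 42)
    beta_in = np.exp(-0.5 * ((times - t_in) / 100) ** 2)
    beta_resp = np.exp(-0.5 * ((times - t_resp) / 100) ** 2)
    input_peaks.append(times[np.argmax(beta_in)])      # per-subject peak latency
    response_peaks.append(times[np.argmax(beta_resp)])

# Paired t-test on the per-subject peak latencies (as in Figure 2C).
t_stat, p_val = ttest_rel(response_peaks, input_peaks)
print(t_stat > 0, p_val < 0.05)
```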

This transition from input-to-choice processing can also be plotted as a continuous trace that progresses along the two task-axes. This is achieved by plotting the linear regression coefficients capturing input suppression, as a marker of input processing, on the first axis and the linear regression coefficients capturing response suppression, as a marker of motor response processing, on the second axis (Figure 3A). The obtained choice trace mimics the state space trajectories derived from neuronal population responses in previous NHP recordings (Mante et al., 2013). This way of plotting the data does not add information but helps visualize the choice dynamics in the way usually done when projecting neural populations into state space. A key difference, however, is that our approach did not rely on invasively recorded firing rates but was possible using non-invasive repetition suppression along multiple features.

Figure 3. Choice trace: progression along task-axes shows filtering out of irrelevant inputs in PMd.


(A) By plotting the processing of input on one and response on the other axis (both as in Figure 2B), we can derive a choice trace that shows the signal progression along the two task-axes, thus mimicking state space trajectories obtained from non-human primate (NHP) recordings (Mante et al., 2013). (B) Separation of input suppression into relevant and irrelevant inputs shows slightly diminished processing of irrelevant inputs in dorsal premotor cortex (PMd) (p<0.001; black * shows the difference between relevant and irrelevant inputs, significant between 375–442 and 675–875 ms; green and purple * indicates significance separately for relevant and irrelevant inputs). (C) This can also be seen in the choice trace split for relevant and irrelevant inputs. We observed partial but incomplete filtering of irrelevant inputs in PMd.

Figure 3—source data 1. Contains 'pre.relirrel' and 'post.relirrel' [Time x Regressors x Subjects].
Figure 3—source data 2. Contains structs of 'pre' and 'post'.
These structs have four regressors [Time x 1].

Irrelevant inputs are processed less relative to relevant inputs in human PMd

Having established that different components of the choice computation can be tracked in a temporally resolved manner using MEG, we next examined whether PMd processes inputs equally when they are relevant compared to when they are irrelevant for the choice at hand. Accounts of top-down attentional filtering would predict reduced processing of inputs that are irrelevant for making a choice (e.g., colour when the choice is about motion). By contrast, recent evidence provided from recordings in NHPs suggests the passing forward of both relevant and irrelevant inputs, with selection occurring at the output stage (Mante et al., 2013). Comparing the linear regression coefficients capturing the processing of relevant and irrelevant inputs, both indexed using repetition suppression, showed that both were processed significantly from 208 ms, but importantly, when the suppressed sensory dimension was relevant to the choice at hand, this had a stronger effect on the signal compared to when it was irrelevant (p<0.001 between 375–442 and 675–875 ms; Figure 3B). In other words, repetition suppression effects were greater for relevant compared to irrelevant sensory information, suggesting prioritized processing of choice-relevant over choice-irrelevant sensory inputs in PMd. This can also be appreciated in the alternative visualization as two-dimensional choice traces for relevant and irrelevant inputs (Figure 3C). Thus, our data show partial but incomplete filtering of irrelevant inputs in PMd. Feature selection did not occur solely at the motor output stage of the decision process, nor was irrelevant information entirely filtered out by this stage.

In trials with irrelevant input suppression, by design, participants had to switch context between AS and TS, for example, from attending colour to attending motion. Context switches, however, also sometimes occurred in the absence of irrelevant input suppression. Importantly, simulation analyses showed that linear regression coefficients could be estimated accurately despite some shared variance between switch and irrelevant input regressors (simulations; Figure 2—figure supplements 2 and 3).

Over-training does not abolish the filtering out of irrelevant inputs

We speculated that one potential cause for seeing attenuated irrelevant sensory input processing in our human participants, who spontaneously performed the task, but unattenuated irrelevant input processing in recordings from NHPs may be rooted in the fact that monkeys have been trained to perform this task for many months and thousands of trials. This may have caused plasticity in the neural pathways that process and pass forward the input information. To test this hypothesis, we trained our human participants for 4 weeks on over 20,000 trials of the task (see 'Materials and methods'; Figure 4—figure supplement 1A). Unsurprisingly, this led to a speeding up of reaction times and higher performance scores (2×2×3 repeated-measures ANOVA with factors context [motion vs. colour], training [pre vs. post], and coherence [70% (AS) vs. 25.6% vs. 12.8%]; effect of training on log-reaction times (RTs): F(1,20)=9.23, p=6.46e-3, η2p = 0.32, mean RT difference 29 ± 10 ms; effect of training on % correct: F(1,20)=36.39, p=6.75e-6, η2p = 0.65, mean difference 4.76 ± 0.8%; Figure 4A and Figure 4—figure supplement 1B,C). Upon completion of the training, we again recorded MEG data during performance of the task in the same way as done pre-training. As before, the unfolding of the choice computation was evident in signals recorded from PMd (Figure 4C–D). Crucially, however, when we repeated the analysis focussing on any differences between the processing of relevant and irrelevant inputs, the difference was retained (Figure 4E–G). Even after having performed thousands of trials on the task, irrelevant inputs were, therefore, still filtered out from PMd activity. There was no interaction between input processing and training (two-way repeated-measures ANOVA with factors input [relevant vs. irrelevant] and training [pre vs. post]: all p>0.05 for the input × training effect; Bonferroni correction for familywise error rate; a two-way repeated-measures Bayesian ANOVA provided further evidence in favour of the null hypothesis: the input × training interaction was not part of the winning model at any time point).

Figure 4. Filtering out of irrelevant inputs remains after over-training on >20,000 trials.

(A) Over a period of 4 weeks, participants performed >1000 trials of training on the random-dot-motion task for 5 or 6 days a week before returning for two further post-training magnetoencephalography (MEG) sessions. Comparison of pre- and post-training task performance showed significant reaction time (RT) speeding and performance improvements (% correct choices) in both colour and motion contexts. (B) Performance improvements are also clear in psychometric curves plotted across training sessions (blue = early, purple = late; see also Figure 5A and Figure 5—figure supplement 1A). (C) The evolution of a choice can be tracked in PMd post-training (as shown in Figure 2B for the pre-training sessions). (D) However, peak timings of the majority of regressors occur earlier in the post- compared to the pre-training data, suggesting a faster and possibly more efficient coding of the choice process. (E) The choice trace extracted from the repetition suppression regressors for input and response in PMd shows a transformation from a processing of inputs to a processing of outputs, similar to Figure 3A, for the data acquired pre-training. (F) The difference between the processing of relevant versus irrelevant inputs is preserved even after extensive training, thus suggesting some filtering out of irrelevant inputs. (G) This can be visualized in two divergent choice traces, separately showing relevant and irrelevant input adaptation.

Figure 4—source data 1. Contains 'dat' [Regressors x Subjects].


Figure 4—figure supplement 1. Timeline, performance progression during training sessions, and behavioural effects of relevant and irrelevant inputs and AS.


(A) The timeline of the experiment is illustrated. The manuscript focusses on the neural data acquired from the two PRE-training and the two POST-training magnetoencephalography (MEG) scans (black). The training in between was performed in three stages, starting with random-dot-motion (RDM) stimuli containing only one dimension (colour or motion: 1D), progressing onto RDM stimuli containing both dimensions (colour and motion: 2D), followed by three sessions that contained 2D stimuli pairing the adaptation stimulus (AS) and the test stimulus (TS). Not all participants completed all 7 1D + 12 2D + 3 (AS/TS) = 22 training sessions. Everyone performed the screening, the four MEG sessions, and all seven 1D sessions. Of the 12 2D sessions, participants completed between 5 and 12 (mean 10.7), and of the final three full task sessions, they completed between 1 and 3 (mean 2.5) before coming back for the two post-training MEG sessions. In total, everyone completed >20,000 trials before the final MEG sessions and, on average, 26,594 trials (minimum 20,288; maximum 28,688). (B) Choice performance (% correct) is shown for the sessions completed by all participants, split by motion and colour (left), by coherence level for motion (middle) and for colour (right). (C) The same progression as in (B) is shown for reaction times (RTs).
Figure 4—figure supplement 2. Transition from choice to response processing is specific to PMd.


All key analyses were repeated for four other regions: primary visual cortex (V1), higher-order visual cortices (V4 + V5), medial prefrontal cortex (mPFC), and lateral intraparietal area (LIP). We found that dorsal premotor cortex (PMd) (rightmost column) showed the strongest evidence for a decision computation transitioning from input to response processing. We used the same parcellation as in Figure 2—figure supplement 1 to extract source data (see 'Materials and methods': 'Region of interest') for all regions, including PMd. Error bars denote SEM across participants; blue lines and green lines indicate pre- and post-training, respectively.
Figure 4—figure supplement 2—source data 1. Contains 'pre' and 'post' [Time x Regressors x Subjects] for LIP.
Figure 4—figure supplement 2—source data 2. Contains 'pre' and 'post' [Time x Regressors x Subjects] for PMd.
Figure 4—figure supplement 2—source data 3. Contains 'pre' and 'post' [Time x Regressors x Subjects] for V1.
Figure 4—figure supplement 2—source data 4. Contains 'pre' and 'post' [Time x Regressors x Subjects] for V4.
Figure 4—figure supplement 2—source data 5. Contains 'pre' and 'post' [Time x Regressors x Subjects] for mPFC.
Figure 4—figure supplement 3. Task feature processing independent of repetition suppression in PMd.


The same statistical analysis procedures were used as in Figures 2–4 but this time the generalized linear model (GLM) only contains regressors that are independent of any repetition suppression; all regressors relate to the stimulus properties of the test stimulus (TS) alone and are used to explain the univariate signal measured across trials in dorsal premotor cortex (PMd). Even though the task was not optimized for this analysis, this analysis confirmed our key effects, albeit with less specificity than the main repetition suppression analysis (e.g., here we cannot rule out the influence of confounding factors such as attention or difficulty). (A) All task variables are processed in PMd. (B) Relevant input strength is processed more strongly than irrelevant input strength, both pre- and post-training. (C) Timings indicate a transition from input strength to response direction processing.
Figure 4—figure supplement 3—source data 1. Contains 'pre' and 'post' [Time x Regressors x Subjects].

Consistent with faster behavioural reaction times, we did observe changes in peak processing latency of the different components of the choice process after completion of an extensive training regime (Figure 4D; two-way repeated-measures ANOVA with factors regressors [Context, Switch, Input repetition suppression, Response repetition suppression, Choice] and training [pre vs. post]: effect of regressors [F(4,84) = 39.35, p=1.5824e-18, η2p = 0.65]; effect of training [F(1,84)=75.81, p=2.0638e-08, η2p = 0.78]; effect of regressors x training [F(4,84)=35.38, p=2.6436e-17, η2p = 0.63]). Post-hoc tests confirmed that peak timings changed for four out of five regressors following the training, including the processing of context switches, input adaptation, response adaptation, and choice direction (pairwise t-test between pre vs. post on the peak timings of context: t(21) = 0.18, p=1.00, Hedges’ g = 0.046, 95% CI = [−0.481, 0.575]; switch: t(21) = 5.92, p=3.5641e-05, Hedges’ g = 1.98, 95% CI = [−0.481, 0.575]; input repetition suppression: t(21) = 5.91, p=3.6398e-05, Hedges’ g = 1.57, 95% CI = [0.94, 2.27]; response repetition suppression: t(21) = 12.15, p=2.8914e-10, Hedges’ g = 3.08, 95% CI = [2.28, 4.02]; choice: t(21) = 11.09, p=1.5246e-09, Hedges’ g = 3.28, 95% CI = [2.35, 4.37]; Bonferroni correction for familywise error rate; Figure 4D).
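For readers wishing to reproduce this style of pre- vs. post-training comparison, the paired effect size can be computed as follows. This sketch assumes one common convention for a paired-samples Hedges' g (Cohen's d of the difference scores with a small-sample bias correction); the paper's exact convention is not specified here, and the latency values below are invented for illustration.

```python
import numpy as np

def hedges_g_paired(x, y):
    """Paired-samples Hedges' g: mean difference divided by the SD of the
    differences, multiplied by the usual small-sample bias correction."""
    diff = np.asarray(x) - np.asarray(y)
    n = diff.size
    d = diff.mean() / diff.std(ddof=1)
    return d * (1 - 3 / (4 * (n - 1) - 1))   # approximate bias correction

# Hypothetical peak latencies (s) for 22 participants, pre vs. post training
rng = np.random.default_rng(1)
pre = rng.normal(0.60, 0.05, 22)
post = pre - rng.normal(0.08, 0.02, 22)      # peaks shift earlier after training

g = hedges_g_paired(pre, post)               # positive: pre-training peaks later
t = (pre - post).mean() / ((pre - post).std(ddof=1) / np.sqrt(22))
print(round(g, 1), round(t, 1))
```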

Behavioural effects predicted by neural adaptation mechanisms

We have shown that repeated exposure to sensory inputs or motor responses has an impact on the MEG signal recorded at the time of the TS. However, if repetition has such a clear effect on neural representations, it is possible that it might also impact participants’ behaviour. While the precise neuronal mechanisms underlying repetition suppression are not fully understood, one possibility is that a fatigue mechanism attenuates the inputs of the repeated feature (Barron et al., 2016a; Grill-Spector et al., 2006; Vidyasagar et al., 2010). This would affect the neural information content that is being processed and might translate into behavioural change.

To test this, we repeated the regression conducted at every time step in PMd on log-reaction times (RTs) and response accuracies (% correct) of TS choices (see 'Materials and methods'). We hypothesized that suppression of a relevant sensory feature or motor representation would slow RTs and reduce accuracy. For example, if a green AS meant that green inputs were attenuated at the time of the TS, we would predict TS reaction times and accuracy to be impacted when green is the relevant TS feature to attend to.

Consistent with our predictions, participants were slower and less accurate when the relevant input had already been processed at the time of the AS and was therefore suppressed at the time of the TS (RT: t(21)=6.80, p=1.00e-6, Hedges’ g = 2.00, 95% CI = [1.18, 2.92]; accuracy: t(21)=-8.65, p=2.28e-8, Hedges’ g = −2.55, 95% CI = [−3.60,–1.62]). Similarly, they were less accurate when the motor response was repeated (accuracy; t(21)=-2.28, p=0.03, Hedges’ g = −0.67, 95% CI = [−1.33,–0.05]; Figure 5; see Figure 5—figure supplement 1 for separated results of pre- and post-training), akin to choice history biases that tend to prefer alternating choices (Urai et al., 2019). However, neither RTs nor choice accuracies were affected by repetition of the irrelevant sensory feature (RT: t(21)=0.13, p=0.90, Hedges’ g = 0.04, 95% CI = [−0.57, 0.64]; accuracy: t(21)=0.03, p=0.98, Hedges’ g = 0.01, 95% CI = [−0.60, 0.61]). Other main effects included context (RT: t(21)=-2.31, p=0.03, Hedges’ g = −0.68, 95% CI = [−1.34,–0.06]; accuracy: t(21)=2.48, p=0.02, Hedges’ g = 0.73, 95% CI = [0.10, 1.39]) and context-switch (RT: t(21)=7.72, p=1.46e-7, Hedges’ g = 2.27, 95% CI = [1.40, 3.25]; accuracy: t(21)=-9.58, p=4.10e-9, Hedges’ g = −2.82, 95% CI = [−3.95,–1.82]).
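The relationship between signed input strength and choices can be illustrated with a minimal psychometric-model sketch. The coherence levels, trial count, and weights below are invented; the point is only that a logistic choice model with a strong relevant weight and a weak irrelevant weight reproduces the qualitative pattern of steep and shallow psychometric curves, and that both weights can be recovered by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
levels = np.array([-0.5, -0.25, -0.12, 0.12, 0.25, 0.5])

# Hypothetical trials: the relevant feature drives choice strongly,
# the irrelevant feature only weakly.
coh_rel = rng.choice(levels, n)
coh_irr = rng.choice(levels, n)
true_w = np.array([0.0, 8.0, 1.0])                  # bias, relevant, irrelevant
X = np.column_stack([np.ones(n), coh_rel, coh_irr])
choice = rng.random(n) < 1 / (1 + np.exp(-X @ true_w))

# Fit the logistic psychometric model by Newton's method (IRLS).
w = np.zeros(3)
for _ in range(10):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - choice)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    w -= np.linalg.solve(hess, grad)
print(np.round(w, 1))                               # relevant weight >> irrelevant
```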

Figure 5. Manipulating information content through repetition suppression changes behaviour.

(A) Psychometric curves show a strong influence of the relevant feature (colour in a colour trial and motion in a motion trial) and a weak influence of the irrelevant feature (colour in a motion trial and motion in a colour trial), consistent with data in non-human primates (Mante et al., 2013). In addition, suppressing motion inputs affects choices based on motion and suppressing colour inputs affects choices based on colour, but not vice versa. (B) Relevant, but not irrelevant, input adaptation slowed log-reaction times (RTs), and both relevant input adaptation and response adaptation reduced choice accuracies (% correct). This is consistent with a potential neuronal fatigue mechanism whereby repeated exposure to a feature reduces the received input, which would be expected to affect behaviour.


Figure 5—figure supplement 1. Pre- vs. post-training effects of repetition suppression on behavioural performance.


(A) Psychometric curves are shown using the same conventions as in Figure 5A but separately for pre- and post-training. (B) Regression analyses performed on log-reaction times (RTs) and choice accuracies, as shown in Figure 5B, shown separately for pre- and post-training. While effects of relevant sensory input adaptation became more pronounced with training, response adaptation effects diminished.

Task feature processing independent of repetition suppression in PMd

The results described thus far have relied on the use of repetition suppression to manipulate the MEG signals recorded at the time of the TS. As outlined in 'Introduction', this is what our experiment was optimized for. Suppression along multiple task features allows us to measure multiple cellular-level representations within the same brain region. By contrast, multivariate approaches akin to those used in Mante et al., 2013 would in humans rely on spatial patterns detected across MEG sensors, which would have a lower spatial resolution, not allowing us to examine processes within one brain region.

Nevertheless, across trials, we can ask whether univariate signal variance in PMd relates to task features such as sensory input strength or choice direction using an encoding-style regression analysis, which mimics the approach used in Mante et al., 2013 (for details, see 'Materials and methods'). This is not an optimal analysis approach, given our experimental design (see 'Materials and methods'), and, unlike our repetition suppression (RS) approach, it may capture confounding variables (e.g., attention, difficulty). Still, the results are broadly consistent with our original findings, showing that PMd’s univariate signal contains information related to all task features (context, sensory input strength, choice direction). The strength of sensory inputs is processed earlier in time than the choice direction (motor response), and the strength of irrelevant inputs is processed more weakly than the strength of relevant inputs both before and after extensive training (Figure 4—figure supplement 3).
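The logic of such an encoding-style analysis can be sketched on synthetic data. This is not the analysis code used in the paper: the signal model, feature codings, amplitudes, and timings below are invented to show how regressing a univariate time course on all task features at once yields one coefficient time course per feature, with input coefficients peaking before choice coefficients and the irrelevant input attenuated relative to the relevant one.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_times = 400, 50
t = np.linspace(0, 1, n_times)

# Hypothetical trial features (arbitrary units)
rel = rng.choice([-2., -1., 1., 2.], n_trials)      # relevant input strength
irr = rng.choice([-2., -1., 1., 2.], n_trials)      # irrelevant input strength
choice = np.sign(rel + rng.normal(0, 1.5, n_trials))

# Simulated univariate signal: inputs drive an early response (relevant
# more strongly than irrelevant), choice drives a late response.
early = np.exp(-((t - 0.3) ** 2) / 0.01)
late = np.exp(-((t - 0.7) ** 2) / 0.01)
signal = (np.outer(rel, early) + 0.3 * np.outer(irr, early)
          + np.outer(choice, late) + rng.normal(0, 1, (n_trials, n_times)))

# Encoding model: regress the signal at every time point on all features
# simultaneously, giving one coefficient time course per feature.
X = np.column_stack([np.ones(n_trials), rel, irr, choice])
betas, *_ = np.linalg.lstsq(X, signal, rcond=None)  # shape (4, n_times)
print(t[np.abs(betas[1]).argmax()], t[np.abs(betas[3]).argmax()])
```

Because all features enter one model, shared variance (e.g., between relevant input and choice) is partitioned rather than double-counted, which is what allows peak timings to be compared across regressors.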

Spatial specificity of PMd effects

PMd was our a priori region of interest for this study and the strongest peak at the whole-brain level (contrast probing input and response adaptation). To further show the specificity of our findings, we repeated all key analyses for a few other a priori regions known to be implicated in visual feature selection and choice: primary visual cortex (V1), higher-order visual cortices (V4 and V5), medial prefrontal cortex (mPFC), and lateral intraparietal area (LIP). None of these regions showed the transition from processing choice input to processing the motor response observed in PMd (Figure 2—figure supplement 1 for all regions; Figure 4—figure supplement 2 for a priori regions).

Filtering out of irrelevant inputs is present across the brain

Finally, it is possible that our targeted focus on PMd as the ‘output’ stage of the decision process might have obscured the fact that other MEG sensors processed irrelevant input information more strongly. To investigate this, we performed a multivariate decoding analysis on the data from 38 parcels covering the whole brain in source space. This analysis focused on stimulus properties of the TS, disregarding any repetition suppression. It also accounted for shared variance between relevant sensory inputs and choice direction by working on the residuals of the MEG signal after regressing out variance related to choice direction (see 'Materials and methods' for details). Separate decoders were constructed to decode relevant and irrelevant sensory input strength (coded as [−2,–1, 1, 2]) from these residuals. Consistent with earlier analyses, decoding across all parcels showed weaker decoding of irrelevant compared to relevant inputs (Figure 6). There was some significant decoding of irrelevant inputs (p<0.001 from 455 to 708.3 ms post-TS) but it was again less extended in time and weaker compared to the decoding of relevant inputs (p<0.001 from 201.7 to 961.7 ms post-TS; Figure 6A). There was again no difference between the data acquired pre- vs. post-training (two-way repeated-measures ANOVA with factors input [relevant vs. irrelevant] and training [pre vs. post]: all p>0.5; minimum p=0.52, F(1,21) = 5.49, η2p = 0.21, for effect of training; Bonferroni correction for familywise error rate; a two-way repeated-measures Bayesian ANOVA provided further evidence in favour of the null hypothesis: the input x training interaction was not part of the winning model at any time point) apart from differences in peak timing (Figure 6B; pairwise t-test between pre vs. post on the peak timings of input repetition suppression: t(21) = 3.12, p=0.026, Hedges’ g = 1.00, 95% CI = [0.33, 1.73]; context: t(21) = 2.35, p=0.14, Hedges’ g = 0.62, 95% CI = [0.08, 1.20]; switch: t(21) = −0.60, p=1.00, Hedges’ g = −0.19, 95% CI = [−0.86, 0.46]; response repetition suppression: t(21) = 1.59, p=0.63, Hedges’ g = 0.52, 95% CI = [−0.15, 1.22]; choice: t(21) = 2.30, p=0.16, Hedges’ g = 0.67, 95% CI = [0.07, 1.30]), as observed in PMd (Figure 4D). This further supports an interpretation of increased efficiency as a result of extensive over-training. Thus, while this analysis cannot rule out that there may be a brain region that represents irrelevant inputs more strongly, we confirmed that even after extensive training and when considering the activity across the whole brain, the processing of irrelevant inputs was attenuated compared to the processing of relevant inputs when making a choice.
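The two-step logic of this analysis, residualizing with respect to choice direction and then decoding input strength with cross-validation, can be sketched on synthetic data. Apart from the 38 parcels mentioned above, everything below (trial counts, signal amplitudes, noise, decoder) is invented for illustration; the paper's actual decoders may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_parcels = 600, 38

# Hypothetical parcel data: relevant input strength is encoded strongly,
# irrelevant strength weakly; choice adds variance shared with the
# relevant input that we remove before decoding.
rel = rng.choice([-2., -1., 1., 2.], n_trials)
irr = rng.choice([-2., -1., 1., 2.], n_trials)
choice = np.sign(rel + rng.normal(0, 3.0, n_trials))
data = (np.outer(rel, rng.normal(0, 1, n_parcels))
        + 0.3 * np.outer(irr, rng.normal(0, 1, n_parcels))
        + np.outer(choice, rng.normal(0, 1, n_parcels))
        + rng.normal(0, 3, (n_trials, n_parcels)))

# Step 1: regress choice direction out of every parcel's signal.
C = np.column_stack([np.ones(n_trials), choice])
resid = data - C @ np.linalg.lstsq(C, data, rcond=None)[0]

# Step 2: cross-validated linear decoding of input strength from residuals.
def decode(y, X, k=5):
    folds = np.arange(len(y)) % k
    scores = []
    for f in range(k):
        tr, te = folds != f, folds == f
        w, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        scores.append(np.corrcoef(X[te] @ w, y[te])[0, 1])
    return float(np.mean(scores))

print(round(decode(rel, resid), 2), round(decode(irr, resid), 2))
```

Decoding accuracy for the weakly encoded irrelevant input remains above chance but below that for the relevant input, mirroring the qualitative pattern reported in Figure 6.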

Figure 6. Filtering of irrelevant inputs is also present in whole-brain decoding.


(A) Decoding performed on the source reconstructed signal in 38 parcels covering the whole brain showed a significant difference in decoding accuracy between relevant and irrelevant inputs, suggesting an attenuation of information about irrelevant choice features. Thus, a filtering out of irrelevant information is apparent even when considering data from the whole brain, and it is not affected by extensive training on the task (compare top vs. bottom rows). As in Figure 3B, black * denotes significance at p<0.001 for the difference between relevant and irrelevant input coding; green and purple * denote significance for relevant and irrelevant input decoding, respectively. (B) Consistent with the encoding approach focused on PMd, peak decoding times are faster following the training when considering data from the whole brain.

Figure 6—source data 1. Contains structs of 'pre' and 'post'.
Each contains two structs, 'relv' and 'irrel', each of which holds two matrices, 'motion' and 'colour' [Subjects x Time].
Figure 6—source data 2. Contains 'pre' and 'post' [1 x Subjects].

Discussion

Here we have shown that non-invasively recorded MEG activity can be used to track a choice process on a millisecond timescale. As decisions unfolded, MEG activity in premotor cortex transitioned from processing sensory inputs to processing the motor response. Watching these dynamics has previously only been possible by projecting high-dimensional neural population recordings onto a low-dimensional set of axes or ‘neural state space’. Here, we took a different approach to assimilate this state space, in which we selectively suppressed the processing of sensory inputs or response features in the average neural data recorded with MEG. This allowed us to selectively index (or ‘squash’) neural activity along the different task-axes that would define the neural state space in direct neuronal recordings. We found that in human premotor cortex, sensory inputs irrelevant to the current choice were processed more weakly compared to relevant choice inputs. This partial filtering of irrelevant choice features was observed even after extensive over-training and when considering activity present across the brain.

Studying neural population responses, as opposed to the responses of individual neurons, has received increasing attention because it provides a window into larger-scale neural dynamics (Yuste, 2015). It has provided crucial insights, for example, into the evolution of choice (Harvey et al., 2012; Hunt et al., 2018; Mante et al., 2013; Morcos and Harvey, 2016; Raposo et al., 2014), movement preparation and execution (Churchland et al., 2012; Kaufman et al., 2014; Li et al., 2016), and the mechanisms underlying working memory (Murray et al., 2017). The approach we used to derive representations linked to multiple features, or task-axes, draws on the observation that electrophysiological responses are suppressed when repeatedly exposed to features to which they are sensitive. In MEG data recorded non-invasively from human participants, each sensor’s signal is driven by the summed postsynaptic potentials of millions of neurons. This bulk signal can be manipulated by repeated exposure to a specific feature, causing suppression in the subset of neurons sensitive to this feature and thus reducing the average MEG signal that is measured (Figure 1). This repetition approach has greater spatial specificity than multivariate approaches because it manipulates activity in subsets of neurons and can thus be measured within a single sensor, rather than relying on variations in the patterns of activity observed across sensors. While this insight has been exploited for understanding a wide range of cognitive processes, with both fMRI (Barron et al., 2013; Barron et al., 2016b; Chong et al., 2008; Doeller et al., 2010; Garvert et al., 2017; Garvert et al., 2015; Jenkins et al., 2008; Klein-Flügge et al., 2019; Klein-Flügge et al., 2013; Piazza et al., 2007) and MEG/EEG (Fritsche et al., 2020; Henson, 2016; Henson et al., 2004; Todorovic and de Lange, 2012), here we extended this framework in one unusual way.
As in previous work, using MEG allowed us to measure temporally resolved repetition suppression effects, thus giving insight into precisely when particular features (e.g., input versus choice direction) are processed. This has been exploited previously, for example, to characterize the temporal tuning of repetition suppression (Fritsche et al., 2020) or to temporally dissociate stimulus repetition from stimulus expectation effects (Todorovic and de Lange, 2012). However, importantly, here we suppressed the signal to multiple different features within the same experiment (colour and motion – when relevant or irrelevant – and motor direction). This can be conceptualized as squashing the neural response onto multiple task-axes, with each feature representing one axis, and allowed us to visualize choice traces in a two-dimensional task space.

The obtained trajectories are comparable to population traces from invasive recordings in NHPs (Figures 3 and 4; Mante et al., 2013) in that they show a progression from a processing of choice inputs to a processing of the motor response. However, in Mante et al., 2013, the influence of input returned to baseline before choice execution. This may be because monkeys were only allowed to indicate a response after a delay, once the motion stimulus had been turned off. By contrast, we allowed participants to respond at any time while the stimulus was presented, and thus the trial could end before input processing returned to zero. Indeed, in our human choice trajectories, input representations started to return to baseline just before the time of response but did not fully return to zero by the time participants responded. Overall, we believe our approach provides an exciting new opportunity, allowing researchers to measure something like the state space trajectories obtained from direct recordings but in tasks that might be difficult for NHPs to do (e.g., involving language), or in human disease.

The neural mechanisms underlying repetition suppression are not fully understood to date. Hypothesized mechanisms include neuronal sharpening, fatigue, and facilitation, with current evidence favouring the fatigue model, according to which suppression is caused by attenuated synaptic inputs (Barron et al., 2016b; Grill-Spector et al., 2006). Another possibility for the observed signal modulations might be related to change detection or prediction-error-like processes caused by novelty or unexpectedness. However, we observe adaptation of not just sensory inputs but also the motor response, and of not just attended but also unattended sensory inputs (Larsson and Smith, 2012). This lends support to an interpretation as a suppression rather than a prediction-error-like effect. The temporal dynamics of repetition suppression (e.g., the influence of time-lag) can vary between regions and are likely determined by the neural dynamics and recurrent processing of a given region. Importantly, however, the key effects reported here all come from within the same region. This eliminates inter-regional differences in neural dynamics as a possible explanation for the timing differences we observed between sensory and motor suppression effects. Ultimately, the arbitrary nature of the orientation of the MEG dipoles fitted means that we cannot be certain about whether feature repetition causes a suppression or enhancement in the underlying neural signal. For that reason, we report absolute linear regression coefficients throughout. However, single-unit studies provide strong and consistent evidence for a suppression, rather than enhancement, of neural responses after repeated exposure to a sensory feature (Barron et al., 2016b; Grill-Spector et al., 2006).

Our behavioural results are consistent with the interpretation that inputs are suppressed as a result of repeated exposure to a feature. If repeated exposure attenuates the received neuronal inputs, thus affecting the processed information content, this might translate into behavioural change because suppressed information cannot be processed as efficiently for making a choice. However, this should only be the case if the relevant sensory dimension is repeated. Our behavioural analyses confirmed this prediction (Figure 5 and Figure 5—figure supplement 1). Repetition of the relevant sensory feature or response, but not the irrelevant sensory feature, reduced choice accuracies, and repetition of relevant, but not irrelevant, sensory features slowed RTs. These results provide further evidence that the repetition suppression approach employed here was effective at manipulating the inputs to the decision process. It seems likely that the same adaptation process that changes the MEG signal in PMd is causing the changes in behaviour we identified. In prior work, choice biases were shown to depend on the precise choice history, and in the majority of participants, choices were biased away from the previous response (Urai et al., 2019). This finding could relate to effects observed here, showing improved performance for response alternation and reduced performance for response repetition.

Our data speak to an important controversy about the mechanisms underlying feature-based, or context-dependent, choice selection. A substantial body of evidence both in human and non-human primates has shown that selection of relevant sensory features occurs through top-down modulations from prefrontal and parietal regions onto early sensory regions (Buschman and Miller, 2007; de Lange et al., 2010; Desimone and Duncan, 1995; Everling et al., 2002; Friston, 2005; Friston and Kiebel, 2009; Kok et al., 2016; Michalareas et al., 2016; Moore and Zirnsak, 2017; Noudoost et al., 2010; Pezzulo and Cisek, 2016; Rao and Ballard, 1999; Richter et al., 2017; Squire et al., 2013; Summerfield et al., 2014; van Wassenhove et al., 2005; Wyart et al., 2015). By contrast, the study by Mante et al., 2013 found that irrelevant sensory features were not filtered out but passed forward to the output stage (see also Siegel et al., 2015). The authors provided a compelling neural network model that solved feature selection and evidence integration within a single recurrent network. However, the monkeys in Mante et al., 2013 were extensively over-trained at performing the task and this might have caused the circuits to reorganize. We hypothesized that top-down influences might be more important when someone is naive at doing the task. Indeed, in a task with only relevant features, training changes neuronal responses responsible for interpreting sensory evidence, but not those processing the sensory evidence itself (Law and Gold, 2010; Law and Gold, 2008). We, therefore, compared participants before and after >20,000 trials of training on the task. To our surprise, the only difference identified between naive and extensively over-trained participants was a shift in peak timings (Figures 4 and 5). We did not observe any changes in feature selection. 
In other words, even following the training, irrelevant features were present but weaker than relevant features in premotor cortex and across the brain. Our data thus suggest partial but incomplete filtering of irrelevant inputs in PMd. This is broadly in line with the effects present in participants’ behaviour, where we observed a weak but measurable influence of the irrelevant feature (Figure 5A and Figure 5—figure supplement 1A). Consistent with predictive coding principles and theories of top-down control, the processing of irrelevant inputs was diminished, but, partly consistent with Mante et al., 2013, irrelevant inputs were not completely filtered out. Thus, it seems plausible that some but not all of the feature selection happens at the output stage, in PMd. Overall, the strength of top-down control, or the extent to which task-irrelevant information was filtered out, seemed unaffected by the amount of prior experience on the task.

One important difference between studies in humans and NHPs is the brain area under study. We focused on the premotor area responsible for selecting digit responses, the response modality in our task. By contrast, a large number of studies in NHPs, including Mante et al., 2013, record from FEF because choices are indicated using saccades. In terms of their anatomical connections, FEF and PMd can be considered at similar levels of the processing hierarchy for saccadic and digit-motor responses, respectively. FEF has direct projections to the saccade-initiating superior colliculus, while PMd directly projects to the region of M1 that controls hand motor responses. However, based on pyramidal neuron spine count, a good indicator of the hierarchical position of a cortical area, FEF may be at a lower level of the hierarchy than PMd (Chaudhuri et al., 2015). Whether or not the positions of PMd and FEF in the processing hierarchy are comparable, this potential difference is unlikely to explain the discrepant findings. Other work in NHPs has shown attentional filtering in regions as ‘low’ in the processing hierarchy as V4/MT (Noudoost et al., 2010 and references therein), which would imply sensory information is filtered out before FEF/PMd, consistent with our work.

This leaves outstanding the question as to why equally strong processing of relevant and irrelevant features was observed in NHP state space trajectories (Mante et al., 2013), but not in our data. It has been proposed that the site of feature selection may depend on the level of detail afforded by the prediction (de Lange et al., 2018; Hochstein and Ahissar, 2002). One difference between tasks was that the relevant dimension changed from trial to trial in our experiment but was blocked in Mante et al., 2013. There is evidence that FEF shows long-term selection history effects (Bichot et al., 1996; Bichot and Schall, 1999), which may be promoted by blocking trials. However, Siegel et al., 2015 do not mention any blocking of trials and report results consistent with Mante et al., 2013, making this an unlikely possibility. Recent work has shown attenuation of expected stimulus features when they are attended (Richter and de Lange, 2019). However, attenuation of expected information is usually thought to help filter out predictable objects (Duncan et al., 1997; Everling et al., 2002), for example, via representations of pre-stimulus sensory templates (Kok et al., 2017). Indeed, generally, processing is biased in favour of behaviourally relevant input (for review, see Desimone and Duncan, 1995; Stokes and Duncan, 2014). There is also a discrepancy in terms of the methods used here and in Mante et al., 2013. Mante et al. used multivariate approaches across cells, while we used a repetition suppression manipulation on non-invasive univariate data. However, again, this difference is unlikely to fully account for the discrepancy in findings. A large body of work in NHPs is consistent with our findings but used similar recording, analysis, and training approaches as in Mante et al., 2013 (Noudoost et al., 2010). 
Furthermore, when we implemented a comparable analysis approach to Mante et al., 2013, albeit using the univariate signal of a single sensor and ignoring the repetition suppression manipulation our design was optimized for (Figure 4—figure supplement 3), we were able to confirm that signal variation related to sensory stimulus strength was weaker for irrelevant compared to relevant features. Ultimately, the discrepancy between different findings remains to be addressed and highlights a general need for a better understanding of decision-making in environments that require dynamic changes (Glaze et al., 2015; Gold and Stocker, 2017; Ossmy et al., 2013).

Overall, our results reinforce the importance of inter-species translational research, whereby tasks and techniques are used across species, e.g., by using comparable analysis pipelines (Hunt et al., 2015; Kriegeskorte et al., 2008), by obtaining direct recordings from humans when possible (e.g., neurosurgical patients: Ekstrom et al., 2003; Fell et al., 2001; Miller et al., 2013; Rutishauser et al., 2010; Watrous et al., 2018), or by recording whole-brain fMRI in NHP species (Bongioanni et al., 2021; Chau et al., 2015; Fouragnan et al., 2019). Our work also emphasizes the importance of developing more mechanistic approaches in human neuroscience, and it shows that the generalizability from NHPs to humans can and should be tested but not assumed (Passingham, 2009).

Materials and methods

Participants

Twenty-five participants (10 male, 15 female, age range 19–32, mean age 25 ± 0.68) with no history of neurological or psychiatric disorder, with normal or corrected-to-normal vision and who fulfilled screening criteria for undergoing MRI and MEG scanning, took part in this study. The sample size was determined based on the reported sample sizes in previous related studies. One participant dropped out before completing the experiment and two participants’ data sets were too noisy even after rigorous data clean-up. The final sample thus included 22 participants (10 male, 12 female, age range 19–32, mean age 25 ± 0.74). There was a problem with processing the MEG data from two sessions (session 2 and session 4 in two different individuals), so only data from three instead of four MEG sessions were included for two participants. The study was approved by the University College London (UCL) Research Ethics Committee (reference 1825/005) and all participants gave written informed consent.

Experimental procedure

Participants agreed to take part in an initial screening session that ensured that they were safe to undergo MRI and MEG scanning, and that they were able to do the task to a basic standard (e.g., they were not colour blind). No participants were excluded after the screening session. Following the screening, they then took part in a short structural MRI scan and four MEG sessions, two at the beginning and two at the end of the study, spaced 4 weeks apart. They also agreed to complete an hour of training (including short breaks: 1200 trials split into 12 blocks) in the laboratory or on their own computers for 5 or 6 days per week for a total of 22 training sessions spread across 4 weeks (Figure 4—figure supplement 1A). Participants who performed their training at home (9 out of 22) agreed to pass on the data to the experimenter on the same day to enable monitoring of progress and to ensure daily completion, and they agreed to perform the task in a quiet environment without interruptions. Personal laptop screens were colour-calibrated to ensure matched stimulus appearance. The four MEG sessions were identical and lasted ~1.5 hr (1024 trials). Participants were reimbursed £250 for their time. Half of the money (£125) was paid out in smaller chunks after each MEG session and each week of behavioural training; the other half was paid upon completion of the entire experiment to discourage drop-out, given the time-intensive nature of this study.

Experimental task

The task was adapted from Mante et al., 2013 and used the same RDM stimuli, which, in addition to a dominant motion direction, also contained colour information, here varying from predominantly green via grey (neutral) to predominantly red. Each trial contained two sequentially presented coloured RDM stimuli, the AS and the TS. Two separate instruction cues, presented shortly before the AS and TS, signalled whether participants had to judge the direction of motion or the colour dominance of the AS and TS, respectively. This determined the relevant input dimension to focus on. More precisely, within a given trial, the order of presentation was as follows (Figure 1A): (1) an instruction cue presented for 150 ms showed a green and red dot to signal that colour was relevant or a left and right arrow to signal that motion was the relevant dimension to attend to for the AS; (2) the AS was presented for 500 ms, either with random motion and 70% colour dominance for green or red, or with non-dominant colour (grey shades) and 70% motion coherence to the left or right; (3) a fixation cross was shown for the 300 ms inter-stimulus interval (ISI); (4) a second instruction cue signalled the relevant dimension for the TS (150 ms); (5) the TS was presented for 500 ms; (6) a further 500 ms of fixation followed; and (7) feedback was presented for 300 ms (‘green tick’ or ‘red cross’). Participants had to respond to both AS and TS using a button press with their right-hand index (left) or middle (right) finger. Because the response to the AS was trivial (dominance level: 70%; accuracy 95 ± 2% during screening), the feedback at the end of the trial related to their response to the TS. TS colour and motion dominance were modulated according to two difficulty levels during the MEG sessions: 12.8 or 25.6%. During the training, we also included two other difficulty levels corresponding to 3.2 and 6.4%.

During the screening session, the four MEG sessions, and the last 3 days of training, this precise task was used. During the screening session, participants performed six blocks of 128 trials (n = 768 trials) of the task. During MEG sessions, they performed 8 blocks of 128 trials and thus a total of 1024 trials each, allowing a total of 2048 trials from each participant to enter the pre- vs. post-training MEG analyses. During the first 7 days of behavioural testing following the first two MEG scans, a simpler version of the task was used. Participants were only given one stimulus at a time (coherences: 3.2, 6.4, 12.8, 25.6, and 70%) and it only contained either colour or motion (‘1-dimensional’ stimuli, 1D; Figure 4—figure supplement 1A). There was feedback after every stimulus, and even though it was easy to know which feature to attend to (when all dots were grey, it was motion; when they were coloured and static, it was colour), the instruction cue was presented 150 ms prior to stimulus onset. Participants performed 12 blocks of 100 trials (n = 1200) per day, for a total of 8400 trials across the 7 days. Following the 1D task, participants moved on to individual 2D stimuli that simultaneously included colour and motion and performed this 2D task for 12 days (coherences were identical to the 1D task). Again, participants performed 12 × 100 = 1200 trials per day, for a total of 14,400 trials of this 2D version of the task. Finally, the last 3 days of training used the task described above, containing two stimuli presented in quick succession (AS = 70% coherence and TS = 12.8 or 25.6% coherence), identical to the one used during the MEG scans (8 blocks of 128 trials per day and thus 3 × 1024 = 3072 trials in total). Thus, all participants were expected to complete a total of 7 + 12 + 3 = 22 training sessions.
They were told not to take more than 1 day off in a row, but due to sickness, some sessions were missing in some participants (mean number of completed sessions: 21.2). Overall, by the time they came for their third and fourth MEG sessions, participants were expected to have completed 768 (screening) + 2048 (2 MEG) + 8400 (7 days 1D stimuli) + 14,400 (12 days 2D stimuli) + 3072 (3 days full task) = 28,688 trials. Everyone performed the screening, the four MEG sessions, and all seven 1D sessions. Of the 12 2D sessions, participants completed between 5 and 12 (mean: 10.7), and of the final three full-task sessions, between 1 and 3 (mean: 2.5), before coming back for the two post-training MEG sessions. In total, everyone completed >20,000 trials before the final MEG sessions and on average 26,594 trials (minimum: 20,288, maximum: 28,688).

Repetition suppression procedure and trial types

All analyses focus on the time of the TS. Importantly, however, the purpose of the AS was to selectively manipulate neurons responding to particular input and response features. For instance, presenting a green AS followed by a predominantly green TS meant that at the time of the TS, any MEG sensors influenced by neurons responding to green colour, or by neurons responding to leftward hand motor responses, should show suppressed responses compared to a situation where a red AS was followed by the same predominantly green TS. In a similar way, we could selectively adapt to green or red colour inputs, right or left motion inputs, and middle-/index-finger hand motor responses, and we could do so when a given input was relevant or irrelevant. For example, a green AS followed by a predominantly green, left-moving TS judged in the motion context produced adaptation to green colour while colour was irrelevant at the time of the TS. Finally, response adaptation could be obtained, e.g., by showing a red AS (leading to a right and thus middle-finger response) followed by a right-moving TS (also leading to a response with the middle finger). The full table of conditions can be seen in Supplementary file 1. In total, there were 64 conditions: 4 AS × 2 TS contexts (colour/motion) × 2 TS directions (right/left) × 2 TS colours (green/red) × 2 TS coherence levels (12.8 and 25.6%). Trials of all types were interleaved and shown in a random order.
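The condition grid and the adaptation labels derived from it can be sketched as follows. This is a hypothetical reconstruction from the description above, not the authors' code; the response mapping (green/left → index finger, red/right → middle finger) follows the task rules stated in the text:

```python
from itertools import product

AS_TYPES = ["green", "red", "left", "right"]      # the 70%-dominant feature of the AS
RESPONSE = {"green": "index", "left": "index",    # green / left  -> left button (index finger)
            "red": "middle", "right": "middle"}   # red / right   -> right button (middle finger)

conditions = []
for a, ctx, direc, col, coh in product(AS_TYPES, ["motion", "colour"],
                                       ["left", "right"], ["green", "red"],
                                       [12.8, 25.6]):
    relevant_ts = direc if ctx == "motion" else col     # feature judged at TS
    irrelevant_ts = col if ctx == "motion" else direc   # feature ignored at TS
    conditions.append({
        "AS": a, "context": ctx, "TS_dir": direc, "TS_col": col, "coh": coh,
        # AS feature repeats the TS's relevant / irrelevant feature
        "rel_adapt": int(a == relevant_ts),
        "irrel_adapt": int(a == irrelevant_ts),
        # AS response finger repeats the correct TS response finger
        "resp_adapt": int(RESPONSE[a] == RESPONSE[relevant_ts]),
    })

print(len(conditions))  # 64 conditions in total
```

In this grid, each of the 16 TS configurations is preceded by exactly one relevant-adapting and one irrelevant-adapting AS, while two of the four AS types share the TS's response finger.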

Stimulus generation

Custom-written MATLAB (The MathWorks, Inc, Natick, MA) code was used to produce a randomized stimulus order for each session and subject, with balanced trial numbers for each of the 64 conditions. For each RDM stimulus, a new random dot placement was generated and assigned the appropriate level of colour dominance and motion coherence. The RDM stimuli were coded using three interleaved streams of stimuli, one per screen refresh (17 ms), with the following parameters: speed of dots 4 degrees/second; temporal displacement 50 ms (three screen refreshes); spatial displacement 0.2 degrees; unmasked area 10 × 10 degrees; dot diameter 0.3 degrees; and number of dots 100. The stimulus presentation was programmed in MATLAB and performed using the Psychophysics Toolbox (Brainard, 1997).

Behavioural analysis

We recorded choice (left or right button press) and RT to both AS and TS in each trial. To examine training improvements, average RT and % correct from the two pre-training MEG sessions were compared with those obtained in the two post-training MEG sessions (which used the same stimuli/schedule; black in Figure 4—figure supplement 1A; see also Figure 4—figure supplement 1B–E). We used an ANOVA with factors coherence (70% = AS, 25.6% = easy TS, and 12.8% = hard TS), context (colour or motion), and training (pre vs. post) to assess statistical significance (Figure 4A).

MEG and MRI data acquisition

MEG data were recorded continuously at a sampling rate of 600 samples per second using a whole-head 275-channel axial gradiometer system (CTF Omega, VSM MedTech). Participants were seated upright in the scanner and their head location was monitored using three fiducial locations (nasion, left and right pre-auricular points). The distance to the screen was measured to adjust the size of the stimuli and the lights were turned off. Eye movements were recorded (EyeLink software), which required a brief calibration and validation procedure. During each MEG session, participants then performed eight blocks of 7 min of the task, with short breaks in between. Responses were indicated using a keypad with their right-hand index and middle finger. All four MEG sessions (two pre-training and two post-training) were identical in terms of the difficulty, trial structure, and procedure. One of the MEG sessions was followed by a short MRI session, during which a structural T1-weighted MPRAGE scan was acquired on a 3T Magnetom TIM Trio scanner (Siemens Healthcare, Erlangen, Germany) with 176 slices; slice thickness = 1 mm; TR = 7.92 ms; TE = 2.48 ms; voxel size = 1 × 1 × 1 mm.

MEG data preprocessing

MEG data were preprocessed using SPM12 (http://www.fil.ion.ucl.ac.uk/spm/) and custom-written MATLAB code. Data were converted to SPM12 format, a notch filter was applied, and eyeblinks were removed based on the electro-oculogram channel using a regression procedure based on the principal component of the average eye blink as previously explained in Hunt et al., 2012. The data were downsampled to 300 Hz, epoched at [−1500, 2000] ms around TS onset, and baseline corrected between [−1500, −1100] ms (thus using a pre-AS baseline). Where necessary, timings were corrected for one frame (1/60 s) between trigger and image refresh, which was based on timings recorded with an in-scanner photodiode. Trials containing artefacts were rejected visually using Fieldtrip’s spm_eeg_ft_artefact_visual. Prior to source localization, data were low-pass filtered at 40 Hz and the blocks from each session were merged.

MEG source reconstruction

Source reconstruction was performed in SPM12. The structural scans were segmented and normalized to the MNI template. A subject-specific mesh was created using inverse normalization and the three recorded scalp locations were registered to the head model mesh. A forward headmodel was estimated for each session and subject (EEG BEM, single shell). An LCMV beamformer was applied in the window [−250, 750] ms around TS to estimate whole-brain power images on a grid of 5 mm and for source data (virtual timecourse) extraction, using PCA dimensionality reduction to regularize the data covariance estimation (Woolrich et al., 2011). Although beamforming has proven to be powerful at reconstructing source signals in electromagnetic imaging, it can be limited in the presence of highly correlated source signals, such as those that can occur bilaterally across hemispheres. To overcome this, a bilateral implementation of the LCMV beamformer was employed, in which the beamformer spatial filtering weights for each dipole were estimated together with the dipole's contralateral counterpart (Brookes et al., 2007). Beamformed power images from the two pre-training sessions and the two post-training sessions were smoothed, log-transformed, and averaged, respectively.

Region of interest

The a priori region of interest for this study was dorsal premotor cortex (PMd). Two analyses performed on our data justified the choice of PMd. First, we ran a broad inclusive beamforming contrast that compared TS trials containing any adaptation (whether colour or motion or response, relevant or irrelevant) with TS trials not containing any adaptation, averaged across pre- and post-training MEG sessions to avoid bias in subsequent analyses. Note that this contrast is entirely balanced for visual TS features and motor responses. For example, equal numbers of right-moving TS trials are on both sides of the contrast. There is a response to each TS and equally many left- and middle-finger responses are present on both sides of the contrast. We identified PMd within the peak cluster of this contrast (p<0.05, familywise error [FWE] cluster-corrected across the whole brain after initial thresholding at p<0.001). Left PMd (x = −37, y = −6, z = 55) was then used for extraction of time courses and further analyses on PMd source data (all subsequent statistical tests were orthogonal to region of interest (ROI) selection). Second, we used an established parcellation that included orthogonalization (to remove spatial leakage between parcels) to extract source data from 38 parcels obtained from an independent component analysis (ICA) decomposition on resting-state functional magnetic resonance imaging data from the Human Connectome Project (Colclough et al., 2016; Colclough et al., 2015) and confirmed that the strongest task-related effects were present in the parcel that contained left PMd (Figure 2—figure supplement 1).

Linear regression on MEG source data in PMd

We fitted an L2-regularized linear regression (ridge regression) to the raw source data extracted from PMd, which contained the following six regressors capturing task events and repetition suppression effects:

  1. Context [1/0; Motion/Colour]

  2. Switch instruction [1/0; Switch/No-switch]

  3. Relevant input adaptation [1/0; Motion or Colour relevant input adaptation/No relevant input adaptation]

  4. Irrelevant input adaptation [1/0; Motion or Colour irrelevant input adaptation/No irrelevant input adaptation]

  5. Response adaptation [1/0; Response adaptation/No adaptation]

  6. Choice [1/0; Right/Left]

Thus, regressors (3–5) capture the critical repetition suppression manipulation and depend on the preceding AS; regressor 2 captures whether the context changed from the AS to the TS but does not directly relate to repetition suppression; regressors (1, 6) capture features related to the TS alone. The regression was applied to each time point around the presentation of the TS ([−500, 1350] ms) for each subject. While some dependencies between regressors were present by design, the shared variance was below 0.4 in all cases (Switch with Rel input: r² = −0.34; Switch with Irrel input: r² = 0.34; Rel input with Resp: r² = 0.34). To increase sensitivity, we used a sliding-window approach, averaging time points within 150 ms around each time point, with a step size of 33.3 ms. For each time point, we sub-sampled 90% of trials from all correct trials, repeated 10 times. We then fitted ridge regression (MATLAB’s fitrlinear function) to each sub-sample to obtain linear regression coefficients. Because ridge regression has a hyper-parameter λ (the regularization coefficient), we tuned λ from {10⁻⁵, 10⁻³, 10⁻¹, 10¹, 10³, 10⁵} using threefold cross-validation for each sub-sample; that is, we used the λ that performed best in the threefold cross-validation when estimating the linear regression coefficients for that sub-sample. We averaged linear regression coefficients across the 10 sub-samples to obtain the estimates used for the following statistical analyses. To test whether linear regression coefficients were significant, we generated a null distribution by shuffling the trials within subject. More precisely, we kept rows consistent but shuffled the order of the rows in the design matrix.
This preserved the covariance structure of the design matrix in all control shuffles and thus accounted for the possibility that any effects could have been caused by existing correlations between regressors (see also simulation results in Figure 2—figure supplements 2 and 3). We generated n = 1000 permutations. We estimated linear regression coefficients using exactly the same procedure as above using these shuffled data.
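For illustration, the fitting-and-permutation procedure for a single time point might be sketched as follows. This is a minimal Python/NumPy sketch on synthetic data, using a closed-form ridge solution in place of MATLAB's fitrlinear; all variable names and trial counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
LAMBDAS = (1e-5, 1e-3, 1e-1, 1e1, 1e3, 1e5)  # the lambda grid used in the paper

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def fit_ridge_cv(X, y, n_folds=3):
    """Tune lambda by threefold cross-validation, then refit on all trials."""
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    cv_err = []
    for lam in LAMBDAS:
        errs = []
        for k in range(n_folds):
            tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            b = ridge_fit(X[tr], y[tr], lam)
            errs.append(np.mean((X[folds[k]] @ b - y[folds[k]]) ** 2))
        cv_err.append(np.mean(errs))
    return ridge_fit(X, y, LAMBDAS[int(np.argmin(cv_err))])

# Synthetic source signal at one time point: 500 trials, 6 binary regressors,
# with a true effect planted only on regressor 3 (index 2)
n_trials = 500
X = rng.integers(0, 2, (n_trials, 6)).astype(float)
y = 0.8 * X[:, 2] + rng.normal(0, 1, n_trials)

# Observed coefficients: average over 10 sub-samples of 90% of correct trials
betas = np.zeros(6)
for _ in range(10):
    idx = rng.permutation(n_trials)[: int(0.9 * n_trials)]
    betas += fit_ridge_cv(X[idx], y[idx]) / 10

# Null distribution: shuffle whole rows of the design matrix relative to the
# signal, preserving the regressor covariance structure
# (200 permutations here for brevity; the paper used 1000)
null = np.array([np.abs(fit_ridge_cv(X[rng.permutation(n_trials)], y))
                 for _ in range(200)])
p_val = np.mean(null[:, 2] >= abs(betas[2]))
```

Shuffling rows rather than individual cells is the key design choice: each permuted design matrix has exactly the same inter-regressor correlations as the real one, so any spurious effect arising from those correlations is reproduced in the null.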

We used a conservative time window to correct for multiple comparisons across time. Because the context cues came on at −150 ms and responses were on average at around 500 ms, we chose a window of nearly 1 s duration from [−133.3, 950] ms around the TS. At our sampling resolution, this window contained 29 data points, and we corrected all statistical tests on these data across these 29 data samples. This correction was used to establish significance of individual effects (e.g., response adaptation) or differences between two effects (e.g., relevant versus irrelevant input adaptation). Note that, with alpha set at 0.05, and 29 time points, the p-value required for significance would be 0.05/29 = 0.0017 after Bonferroni familywise error rate correction. Therefore, we used a threshold of p<0.001 in the main figures.

We used the absolute values of the linear regression coefficients for statistical analyses and figures because source-localized MEG data have an arbitrary sign as a consequence of the ambiguity of the source polarity. As beamforming is done for each session separately, the sign of the reconstructed dipoles risks being inconsistent across subjects and sessions. We aligned the signs within subject for the two pre-training and the two post-training sessions separately by calculating Pearson’s correlation coefficient between average event-related potentials (ERPs) ([−200, 1500]) for sessions 1 and 2, and 3 and 4. In case of a negative correlation, we flipped one session’s signals before estimating linear regression coefficients. This ensured that pre- and post-training sessions each used sign-aligned data in a given participant.
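The sign-alignment step can be sketched like this (synthetic ERPs; the variable names and waveform are illustrative, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)

def sign_flip_factor(erp_a, erp_b):
    """Return -1 if session B's average ERP anticorrelates with session A's, else +1."""
    r = np.corrcoef(erp_a, erp_b)[0, 1]
    return -1.0 if r < 0 else 1.0

# Two sessions of a simulated source-space ERP; session 2's reconstructed
# dipole happens to have the opposite polarity
t = np.linspace(-0.2, 1.5, 256)
template = np.sin(6 * t) * np.exp(-t ** 2)
erp1 = template + rng.normal(0, 0.05, t.size)
erp2 = -template + rng.normal(0, 0.05, t.size)

flip = sign_flip_factor(erp1, erp2)
aligned_erp2 = flip * erp2  # flip session 2 before estimating regression coefficients
```

After this step, the two sessions' source time courses share a consistent polarity within each participant, so coefficients can be averaged across sessions without sign cancellation.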

To establish whether peak timings between input and response adaptation differed, the peak time for the parameter estimates was established in each participant for both regressors using an out-of-sample procedure. The average peak time across all participants except the left-out participant was determined, and the left-out participant’s peak was taken as the highest linear regression coefficient in a window of size [−66.7, 66.7] ms around that group peak. We confirmed that using a wider window of size [−133.6, 133.6] ms did not change our conclusion. This procedure was repeated for all participants and all regressors, and peak times were subjected to a one-way repeated-measures ANOVA. We conducted post-hoc pairwise t-tests between the peak timings of linear regression coefficients for input and response adaptation. For Figure 2C, a probability distribution of the peak time for each regressor was estimated using MATLAB’s fitdist function, for visualization purposes.
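The leave-one-out peak extraction might be sketched as follows. One assumption is made explicit here: the "group peak" is taken as the peak of the group-average coefficient time course of the remaining participants (the text is compatible with this reading, but the authors' exact implementation may differ); the data are synthetic, with known per-subject peaks:

```python
import numpy as np

def loo_peak_times(betas, half_win=2):
    """betas: (n_subjects, n_timepoints) array of regression coefficients.
    For each left-out subject, find the peak of the average time course of
    the remaining subjects, then take the left-out subject's own maximum
    within +/- half_win samples of that group peak."""
    n_sub, n_t = betas.shape
    peaks = np.zeros(n_sub, dtype=int)
    for s in range(n_sub):
        group = np.delete(betas, s, axis=0).mean(axis=0)
        g_peak = int(np.argmax(group))
        lo, hi = max(0, g_peak - half_win), min(n_t, g_peak + half_win + 1)
        peaks[s] = lo + int(np.argmax(betas[s, lo:hi]))
    return peaks

# Nine simulated subjects with individual coefficient peaks at samples 9, 10, or 11
t = np.arange(30)
betas = np.stack([np.exp(-0.5 * ((t - (9 + s % 3)) / 3) ** 2) for s in range(9)])
peaks = loo_peak_times(betas, half_win=2)
print(peaks)  # each subject's peak, constrained to lie near the group peak
```

Constraining each subject's peak to a window around the left-out group peak keeps the estimate robust to spurious maxima far from the effect of interest, without letting a subject's own data bias the window placement.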

Choice traces showing the signal progression in a two-dimensional task space spanned by input and response dimensions to mimic NHP population traces were generated by plotting the linear regression coefficient obtained for input and response adaptation against each other within the same plot.

To investigate training effects on relevant versus irrelevant input processing, we used a two-way ANOVA on the estimated linear regression coefficients with factors training (pre vs. post) and input adaptation (relevant vs. irrelevant) for each time point in the window [−133.3, 950] ms. Bayesian ANOVAs were performed in JASP (JASP Team (2018), https://jasp-stats.org) and were JZS Bayes factor ANOVAs (Love et al., 2019; Rouder et al., 2012) with default prior scales, which enabled measuring the likelihood of the null hypothesis. Across time points, the largest P(M|data) for the model including the input × training interaction in PMd was P(M|data) = 0.166. The winning models were either the Null model (17 time points) or a model with only a factor of input (relevant vs. irrelevant; nine time points), and their P(M|data) was >0.4 across time (mean 0.52).

Behavioural analysis of RT and accuracy

The six regressors described above were also used in a linear and logistic regression of TS RTs and TS accuracies (1 = correct, 0 = incorrect). For each regressor, a t-test was performed on the linear/logistic regression coefficients obtained across all participants.

Task feature processing independent of repetition suppression in PMd

Our experiment was optimized for the analyses described above, which capitalize on the suppression of the MEG signal along multiple task features (or ‘axes’) induced by repeated exposure (repetition suppression). This is because multivariate approaches, like those performed across neurons in Mante et al., 2013 would have to be performed across sensors when dealing with MEG data. However, this would not offer the required spatial resolution for studying modulations within a single brain region, here PMd. Nevertheless, in an additional analysis, we tried to implement an analysis approach more similar to the encoding analysis used in Mante et al., 2013 which did not rely on the repetition suppression manipulation. This analysis was performed across trials on the univariate signal from the virtual PMd sensor. We used the following regressors, which are independent of the adaptation stimulus and only pertain to stimulus and response properties of the TS (Figure 4—figure supplement 3):

  1. Context [1/0; Motion/Colour]

  2. Relevant input strength [–2/–1/+1/+2; indicating the level of positive or negative sensory evidence. The sign of the motion and colour coherence is defined such that positive coherence values correspond to evidence pointing towards a right choice, and negative coherence values correspond to evidence pointing towards a left choice.]

  3. Irrelevant input strength [–2/–1/+1/+2; same as the relevant input regressor but pertaining to the sensory information that is irrelevant on a given trial, e.g. colour input strength on a motion trial]

  4. Choice direction [1/0; Right/Left]

Therefore, unlike in the first GLM above, in this GLM, regressors are related to stimulus properties of the TS, and unrelated to the AS and the repetition suppression manipulation. The regression was applied to each time point of the time series extracted from PMd using the same fitting and statistical procedures described above (e.g., including regularized regression, multiple comparison correction, extraction of peak timings). We note that Mante et al.’s regressors were identical to ours with the only difference that we merged colour and motion trials for the two input strength regressors to maximize the power of our design.

While we are able to confirm our key effects, there are two important caveats that mean that this analysis is not as sensitive as our main analysis. First, this analysis approach ignores the adaptation stimulus that we know affects the MEG signal at the time of the test stimulus. Second, our experimental design was not optimized for this analysis, and high correlations were present between the relevant input strength and choice direction regressors (r = 0.95). Unlike Mante et al., 2013 we did not have three levels of coherence and we did not randomize the colour-response mappings, which would have reduced the design correlations. We did not optimize the design for this analysis but instead for a repetition suppression analysis because repetition suppression has higher within-sensor spatial sensitivity.

Decoding from whole-brain MEG scalp data

Finally, to rule out that we were overlooking potential representations of the irrelevant sensory inputs by focusing solely on PMd, we repeated a similar analysis to the above control analysis on the whole-brain source-reconstructed MEG signal in 38 parcels (‘virtual sensors’) (Colclough et al., 2016; Colclough et al., 2015). Again, this analysis did not code regressors in reference to the adaptation stimulus/feature suppression but focused on the properties of the TS. A decoder was constructed separately for each time point around the presentation of the TS ([−506.7,1416.7] ms) for each subject and each session. To increase sensitivity, we used a sliding-window approach by averaging time points within 150 ms around the time point and used a step size of 63.3 ms. The regressor used to predict current sensory evidence took values from [–2,–1,+1,+2], indicating the level of positive or negative sensory evidence, such that positive coherence values point towards a right choice, and negative coherence values indicate a left choice. Because of the high correlations between sensory evidence and choice direction (see above), we took a conservative approach. First, for each time point and virtual sensor/parcel, choice direction (right or left) was regressed out of the signal. A decoding analysis was then performed on the residuals of each parcel, having accounted for variance explained by choice direction.

For each context (motion and colour), we constructed a decoder for relevant input (e.g., motion input in the motion context) and irrelevant input (e.g., motion input in the colour context) separately. We used ridge regression, and decoding performance was evaluated using a nested cross-validation procedure as follows. We first split all correct trials into 10 sets of trials (tenfold outer-CV). We then split all trials except the one held-out set into three sets of trials (threefold inner-CV). We tuned the hyper-parameter λ from {10⁻⁵, 10⁻³, 10⁻¹, 10¹, 10³, 10⁵} in this threefold inner-CV. Finally, we fitted the model with the best-performing λ from the inner-CV to all trials except the held-out set and obtained predictions for the held-out set. This procedure was repeated 10 times. Here, we chose a window of nearly 1 s duration from [−190, 1036.7] ms around the TS for statistical testing. At our sampling resolution, this window contained 18 time points.

Again, Bayesian ANOVAs were performed in JASP. Across time points, the largest P(M|data) for the model including the input × training interaction in the scalp data was P(M|data) = 0.273. The winning models were either the Null model (two time points), a model with only a factor of input (relevant vs. irrelevant; 10 time points), or one with training and input factors but not their interaction (four time points). Their P(M|data) was >0.3 across time (mean 0.56).

Acknowledgements

YT was funded by Grants-in-Aid for Scientific Research on Innovative Areas from the JSPS (23118001, 23118002) and Uehara Memorial Foundation. MCKF was funded by a Sir Henry Wellcome Fellowship (103184/Z/13/Z). TEJB was funded by Wellcome Senior Research Fellowship (104765/Z/14/Z), Wellcome Principal Research Fellowship (219525/Z/19/Z), JS McDonnell Foundation award (JSMF220020372), and Wellcome Collaborator award (214314/Z/18/Z). We would like to thank Gareth Barnes and Vladimir Litvak for advice on initial data analyses, the whole support team at the FIL for help with data acquisition, and MaryAnn Noonan, Nick Myers, and Lev Tankelevich for helpful discussions on the manuscript.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Yu Takagi, Email: yutakagi322@gmail.com.

Miriam C Klein-Flügge, Email: miriam.klein-flugge@psy.ox.ac.uk.

J Matias Palva, University of Helsinki, Finland.

Joshua I Gold, University of Pennsylvania, United States.

Funding Information

This paper was supported by the following grants:

  • Japan Society for the Promotion of Science 23118001 to Yu Takagi.

  • Uehara Memorial Foundation to Yu Takagi.

  • Wellcome 103184/Z/13/Z to Miriam C Klein-Flügge.

  • Wellcome 104765/Z/14/Z to Timothy EJ Behrens.

  • Wellcome Principal Research Fellowship 219525/Z/19/Z to Timothy EJ Behrens.

  • James S. McDonnell Foundation JSMF220020372 to Timothy EJ Behrens.

  • Wellcome 214314/Z/18/Z to Timothy EJ Behrens.

  • Japan Society for the Promotion of Science 23118002 to Yu Takagi.

Additional information

Competing interests

Senior/Deputy editor, eLife.

No competing interests declared.

Author contributions

Formal analysis, Visualization, Methodology, Writing - original draft, Writing - review and editing.

Methodology, Writing - review and editing.

Methodology, Writing - review and editing.

Conceptualization, Supervision, Writing - review and editing.

Conceptualization, Data curation, Formal analysis, Supervision, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing.

Ethics

Human subjects: The study was approved by the University College London (UCL) Research Ethics Committee (reference 1825/005) and all participants gave written informed consent.

Additional files

Supplementary file 1. Task conditions and the corresponding regressors.

The list of task conditions and corresponding regressors of the experiment are shown. The four bold lines are illustrated as examples in Figure 1B.

elife-60988-supp1.docx (36.5KB, docx)
Transparent reporting form

Data availability

The code used in the current study and the datasets generated and/or analyzed during the current study are available at the OSF repository (https://doi.org/10.17605/OSF.IO/RJY3Z).

The following dataset was generated:

Takagi Y, Hunt LT, Woolrich MW, Behrens TEJ, Klein-Flügge MC. 2021. Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice. Open Science Framework.

References

  1. Barron HC, Dolan RJ, Behrens TE. Online evaluation of novel choices by simultaneous representation of multiple memories. Nature Neuroscience. 2013;16:1492–1498. doi: 10.1038/nn.3515. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Barron HC, Garvert MM, Behrens TE. Repetition suppression: a means to index neural representations using BOLD? Philosophical transactions of the Royal Society of London. Series B, Biological Sciences. 2016a;371:20150355. doi: 10.1098/rstb.2015.0355. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Barron HC, Vogels TP, Emir UE, Makin TR, O'Shea J, Clare S, Jbabdi S, Dolan RJ, Behrens TE. Unmasking latent inhibitory connections in human cortex to reveal dormant cortical memories. Neuron. 2016b;90:191–203. doi: 10.1016/j.neuron.2016.02.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bichot NP, Schall JD, Thompson KG. Visual feature selectivity in frontal eye fields induced by experience in mature macaques. Nature. 1996;381:697–699. doi: 10.1038/381697a0. [DOI] [PubMed] [Google Scholar]
  5. Bichot NP, Schall JD. Effects of similarity and history on neural mechanisms of visual selection. Nature Neuroscience. 1999;2:549–554. doi: 10.1038/9205. [DOI] [PubMed] [Google Scholar]
  6. Bongioanni A, Folloni D, Verhagen L, Sallet J, Klein-Flügge MC, Rushworth MFS. Activation and disruption of a neural mechanism for novel choice in monkeys. Nature. 2021;591:270–274. doi: 10.1038/s41586-020-03115-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Brainard DH. The psychophysics toolbox. Spatial Vision. 1997;10:433–436. doi: 10.1163/156856897X00357. [DOI] [PubMed] [Google Scholar]
  8. Brookes MJ, Stevenson CM, Barnes GR, Hillebrand A, Simpson MI, Francis ST, Morris PG. Beamformer reconstruction of correlated sources using a modified source model. NeuroImage. 2007;34:1454–1465. doi: 10.1016/j.neuroimage.2006.11.012. [DOI] [PubMed] [Google Scholar]
  9. Buschman TJ, Miller EK. Top-down versus bottom-up control of attention in the prefrontal and posterior parietal cortices. Science. 2007;315:1860–1862. doi: 10.1126/science.1138071. [DOI] [PubMed] [Google Scholar]
  10. Chau BK, Sallet J, Papageorgiou GK, Noonan MP, Bell AH, Walton ME, Rushworth MF. Contrasting Roles for Orbitofrontal Cortex and Amygdala in Credit Assignment and Learning in Macaques. Neuron. 2015;87:1106–1118. doi: 10.1016/j.neuron.2015.08.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Chaudhuri R, Knoblauch K, Gariel MA, Kennedy H, Wang XJ. A Large-Scale Circuit Mechanism for Hierarchical Dynamical Processing in the Primate Cortex. Neuron. 2015;88:419–431. doi: 10.1016/j.neuron.2015.09.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Chong TT, Cunnington R, Williams MA, Kanwisher N, Mattingley JB. fMRI adaptation reveals mirror neurons in human inferior parietal cortex. Current Biology: CB. 2008;18:1576–1580. doi: 10.1016/j.cub.2008.08.068. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487:51–56. doi: 10.1038/nature11129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Colclough GL, Brookes MJ, Smith SM, Woolrich MW. A symmetric multivariate leakage correction for MEG connectomes. NeuroImage. 2015;117:439–448. doi: 10.1016/j.neuroimage.2015.03.071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Colclough GL, Woolrich MW, Tewarie PK, Brookes MJ, Quinn AJ, Smith SM. How reliable are MEG resting-state connectivity metrics? NeuroImage. 2016;138:284–293. doi: 10.1016/j.neuroimage.2016.05.070. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Cunningham JP, Yu BM. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience. 2014;17:1500–1509. doi: 10.1038/nn.3776. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. de Lange FP, Jensen O, Dehaene S. Accumulation of evidence during sequential decision making: the importance of top-down factors. Journal of Neuroscience. 2010;30:731–738. doi: 10.1523/JNEUROSCI.4080-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. de Lange FP, Heilbron M, Kok P. How Do Expectations Shape Perception? Trends in Cognitive Sciences. 2018;22:764–779. doi: 10.1016/j.tics.2018.06.002. [DOI] [PubMed] [Google Scholar]
  19. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annual Review of Neuroscience. 1995;18:193–222. doi: 10.1146/annurev.ne.18.030195.001205. [DOI] [PubMed] [Google Scholar]
  20. Doeller CF, Barry C, Burgess N. Evidence for grid cells in a human memory network. Nature. 2010;463:657–661. doi: 10.1038/nature08704. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Dum RP, Strick PL. Motor areas in the frontal lobe of the primate. Physiology & Behavior. 2002;77:677–682. doi: 10.1016/s0031-9384(02)00929-0. [DOI] [PubMed] [Google Scholar]
  22. Duncan J, Humphreys G, Ward R. Competitive brain activity in visual attention. Current Opinion in Neurobiology. 1997;7:255–261. doi: 10.1016/s0959-4388(97)80014-1. [DOI] [PubMed] [Google Scholar]
  23. Egner T, Hirsch J. Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information. Nature Neuroscience. 2005;8:1784–1790. doi: 10.1038/nn1594. [DOI] [PubMed] [Google Scholar]
  24. Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, Fried I. Cellular networks underlying human spatial navigation. Nature. 2003;425:184–188. doi: 10.1038/nature01964. [DOI] [PubMed] [Google Scholar]
  25. Everling S, Tinsley CJ, Gaffan D, Duncan J. Filtering of neural signals by focused attention in the monkey prefrontal cortex. Nature Neuroscience. 2002;5:671–676. doi: 10.1038/nn874. [DOI] [PubMed] [Google Scholar]
  26. Fell J, Klaver P, Lehnertz K, Grunwald T, Schaller C, Elger CE, Fernández G. Human memory formation is accompanied by rhinal-hippocampal coupling and decoupling. Nature Neuroscience. 2001;4:1259–1264. doi: 10.1038/nn759. [DOI] [PubMed] [Google Scholar]
  27. Fouragnan EF, Chau BKH, Folloni D, Kolling N, Verhagen L, Klein-Flügge M, Tankelevitch L, Papageorgiou GK, Aubry JF, Sallet J, Rushworth MFS. The macaque anterior cingulate cortex translates counterfactual choice value into actual behavioral change. Nature Neuroscience. 2019;22:797–808. doi: 10.1038/s41593-019-0375-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Friston K. A theory of cortical responses. Philosophical transactions of the Royal Society of London. Series B, Biological Sciences. 2005;360:815–836. doi: 10.1098/rstb.2005.1622. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Friston K, Kiebel S. Predictive coding under the free-energy principle. Philosophical transactions of the Royal Society of London. Series B, Biological sciences. 2009;364:1211–1221. doi: 10.1098/rstb.2008.0300. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Fritsche M, Lawrence SJD, de Lange FP. Temporal tuning of repetition suppression across the visual cortex. Journal of Neurophysiology. 2020;123:224–233. doi: 10.1152/jn.00582.2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Garvert MM, Moutoussis M, Kurth-Nelson Z, Behrens TE, Dolan RJ. Learning-induced plasticity in medial prefrontal cortex predicts preference malleability. Neuron. 2015;85:418–428. doi: 10.1016/j.neuron.2014.12.033. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Garvert MM, Dolan RJ, Behrens TE. A map of abstract relational knowledge in the human hippocampal-entorhinal cortex. eLife. 2017;6:e17086. doi: 10.7554/eLife.17086. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Gazzaley A, Rissman J, Cooney J, Rutman A, Seibert T, Clapp W, D'Esposito M. Functional interactions between prefrontal and visual association cortex contribute to Top-Down modulation of visual processing. Cerebral Cortex. 2007;17:i125–i135. doi: 10.1093/cercor/bhm113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Glaze CM, Kable JW, Gold JI. Normative evidence accumulation in unpredictable environments. eLife. 2015;4:e08825. doi: 10.7554/eLife.08825. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Gold JI, Stocker AA. Visual Decision-Making in an uncertain and dynamic world. Annual Review of Vision Science. 2017;3:227–250. doi: 10.1146/annurev-vision-111815-114511. [DOI] [PubMed] [Google Scholar]
  36. Grill-Spector K, Henson R, Martin A. Repetition and the brain: neural models of stimulus-specific effects. Trends in Cognitive Sciences. 2006;10:14–23. doi: 10.1016/j.tics.2005.11.006. [DOI] [PubMed] [Google Scholar]
  37. Gross CG, Schiller PH, Wells C, Gerstein GL. Single-unit activity in temporal association cortex of the monkey. Journal of Neurophysiology. 1967;30:833–843. doi: 10.1152/jn.1967.30.4.833. [DOI] [PubMed] [Google Scholar]
  38. Harvey CD, Coen P, Tank DW. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature. 2012;484:62–68. doi: 10.1038/nature10918. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Henson RN, Rylands A, Ross E, Vuilleumeir P, Rugg MD. The effect of repetition lag on electrophysiological and haemodynamic correlates of visual object priming. NeuroImage. 2004;21:1674–1689. doi: 10.1016/j.neuroimage.2003.12.020. [DOI] [PubMed] [Google Scholar]
  40. Henson RN. Repetition suppression to faces in the fusiform face area: a personal and dynamic journey. Cortex. 2016;80:174–184. doi: 10.1016/j.cortex.2015.09.012. [DOI] [PubMed] [Google Scholar]
  41. Hochstein S, Ahissar M. View from the top: hierarchies and reverse hierarchies in the visual system. Neuron. 2002;36:791–804. doi: 10.1016/s0896-6273(02)01091-7. [DOI] [PubMed] [Google Scholar]
  42. Hunt LT, Kolling N, Soltani A, Woolrich MW, Rushworth MF, Behrens TE. Mechanisms underlying cortical activity during value-guided choice. Nature Neuroscience. 2012;15:470–476. doi: 10.1038/nn.3017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Hunt LT, Behrens TE, Hosokawa T, Wallis JD, Kennerley SW. Capturing the temporal evolution of choice across prefrontal cortex. eLife. 2015;4:e11945. doi: 10.7554/eLife.11945. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Hunt LT, Malalasekera WMN, de Berker AO, Miranda B, Farmer SF, Behrens TEJ, Kennerley SW. Triple dissociation of attention and decision computations across prefrontal cortex. Nature Neuroscience. 2018;21:1471–1481. doi: 10.1038/s41593-018-0239-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Jenkins AC, Macrae CN, Mitchell JP. Repetition suppression of ventromedial prefrontal activity during judgments of self and others. PNAS. 2008;105:4507–4512. doi: 10.1073/pnas.0708785105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Kaufman MT, Churchland MM, Ryu SI, Shenoy KV. Cortical activity in the null space: permitting preparation without movement. Nature Neuroscience. 2014;17:440–448. doi: 10.1038/nn.3643. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Klein-Flügge MC, Barron HC, Brodersen KH, Dolan RJ, Behrens TE. Segregated encoding of reward-identity and stimulus-reward associations in human orbitofrontal cortex. Journal of Neuroscience. 2013;33:3202–3211. doi: 10.1523/JNEUROSCI.2532-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Klein-Flügge MC, Wittmann MK, Shpektor A, Jensen DEA, Rushworth MFS. Multiple associative structures created by reinforcement and incidental statistical learning mechanisms. Nature Communications. 2019;10:1–15. doi: 10.1038/s41467-019-12557-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Kok P, Bains LJ, van Mourik T, Norris DG, de Lange FP. Selective Activation of the Deep Layers of the Human Primary Visual Cortex by Top-Down Feedback. Current Biology: CB. 2016;26:371–376. doi: 10.1016/j.cub.2015.12.038. [DOI] [PubMed] [Google Scholar]
  50. Kok P, Mostert P, de Lange FP. Prior expectations induce prestimulus sensory templates. PNAS. 2017;114:10473–10478. doi: 10.1073/pnas.1705652114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Kriegeskorte N, Mur M, Ruff DA, Kiani R, Bodurka J, Esteky H, Tanaka K, Bandettini PA. Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron. 2008;60:1126–1141. doi: 10.1016/j.neuron.2008.10.043. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Larsson J, Smith AT. fMRI repetition suppression: neuronal adaptation or stimulus expectation? Cerebral Cortex. 2012;22:567–576. doi: 10.1093/cercor/bhr119. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Law CT, Gold JI. Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area. Nature Neuroscience. 2008;11:505–513. doi: 10.1038/nn2070. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Law CT, Gold JI. Shared mechanisms of perceptual learning and decision making. Topics in Cognitive Science. 2010;2:226–238. doi: 10.1111/j.1756-8765.2009.01044.x. [DOI] [PubMed] [Google Scholar]
  55. Li N, Daie K, Svoboda K, Druckmann S. Robust neuronal dynamics in premotor cortex during motor planning. Nature. 2016;532:459–464. doi: 10.1038/nature17643. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Love J, Selker R, Marsman M, Jamil T, Dropmann D, Verhagen J, Ly A, Gronau QF, Smíra M, Epskamp S, Matzke D, Wild A, Knight P, Rouder JN, Morey RD, Wagenmakers E-J. JASP: Graphical Statistical Software for Common Statistical Designs. Journal of Statistical Software. 2019;88:1–17. doi: 10.18637/jss.v088.i02. [DOI] [Google Scholar]
  57. Mante V, Sussillo D, Shenoy KV, Newsome WT. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature. 2013;503:78–84. doi: 10.1038/nature12742. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Michalareas G, Vezoli J, van Pelt S, Schoffelen JM, Kennedy H, Fries P. Alpha-Beta and Gamma Rhythms Subserve Feedback and Feedforward Influences among Human Visual Cortical Areas. Neuron. 2016;89:384–397. doi: 10.1016/j.neuron.2015.12.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Miller JF, Neufang M, Solway A, Brandt A, Trippel M, Mader I, Hefft S, Merkow M, Polyn SM, Jacobs J, Kahana MJ, Schulze-Bonhage A. Neural activity in human hippocampal formation reveals the spatial context of retrieved memories. Science. 2013;342:1111–1114. doi: 10.1126/science.1244056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Moore T, Zirnsak M. Neural mechanisms of selective visual attention. Annual Review of Psychology. 2017;68:47–72. doi: 10.1146/annurev-psych-122414-033400. [DOI] [PubMed] [Google Scholar]
  61. Morcos AS, Harvey CD. History-dependent variability in population dynamics during evidence accumulation in cortex. Nature Neuroscience. 2016;19:1672–1681. doi: 10.1038/nn.4403. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Murray JD, Bernacchia A, Roy NA, Constantinidis C, Romo R, Wang XJ. Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex. PNAS. 2017;114:394–399. doi: 10.1073/pnas.1619449114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Noudoost B, Chang MH, Steinmetz NA, Moore T. Top-down control of visual attention. Current Opinion in Neurobiology. 2010;20:183–190. doi: 10.1016/j.conb.2010.02.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Ossmy O, Moran R, Pfeffer T, Tsetsos K, Usher M, Donner TH. The timescale of perceptual evidence integration can be adapted to the environment. Current biology: CB. 2013;23:981–986. doi: 10.1016/j.cub.2013.04.039. [DOI] [PubMed] [Google Scholar]
  65. Passingham R. How good is the macaque monkey model of the human brain? Current Opinion in Neurobiology. 2009;19:6–11. doi: 10.1016/j.conb.2009.01.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Pezzulo G, Cisek P. Navigating the Affordance Landscape: Feedback Control as a Process Model of Behavior and Cognition. Trends in Cognitive Sciences. 2016;20:414–424. doi: 10.1016/j.tics.2016.03.013. [DOI] [PubMed] [Google Scholar]
  67. Piazza M, Pinel P, Le Bihan D, Dehaene S. A magnitude code common to numerosities and number symbols in human intraparietal cortex. Neuron. 2007;53:293–305. doi: 10.1016/j.neuron.2006.11.022. [DOI] [PubMed] [Google Scholar]
  68. Rao RP, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience. 1999;2:79–87. doi: 10.1038/4580. [DOI] [PubMed] [Google Scholar]
  69. Raposo D, Kaufman MT, Churchland AK. A category-free neural population supports evolving demands during decision-making. Nature Neuroscience. 2014;17:1784–1792. doi: 10.1038/nn.3865. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Richter CG, Thompson WH, Bosman CA, Fries P. Top-Down Beta Enhances Bottom-Up Gamma. Journal of Neuroscience. 2017;37:6698–6711. doi: 10.1523/JNEUROSCI.3771-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Richter D, de Lange FP. Statistical learning attenuates visual activity only for attended stimuli. eLife. 2019;8:e47869. doi: 10.7554/eLife.47869. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Rouder JN, Morey RD, Speckman PL, Province JM. Default bayes factors for ANOVA designs. Journal of Mathematical Psychology. 2012;56:356–374. doi: 10.1016/j.jmp.2012.08.001. [DOI] [Google Scholar]
  73. Rushworth MF, Johansen-Berg H, Göbel SM, Devlin JT. The left parietal and premotor cortices: motor attention and selection. NeuroImage. 2003;20:S89–S100. doi: 10.1016/j.neuroimage.2003.09.011. [DOI] [PubMed] [Google Scholar]
  74. Rutishauser U, Ross IB, Mamelak AN, Schuman EM. Human memory strength is predicted by theta-frequency phase-locking of single neurons. Nature. 2010;464:903–907. doi: 10.1038/nature08860. [DOI] [PubMed] [Google Scholar]
  75. Siegel M, Buschman TJ, Miller EK. Cortical information flow during flexible sensorimotor decisions. Science. 2015;348:1352–1355. doi: 10.1126/science.aab0551. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Squire RF, Noudoost B, Schafer RJ, Moore T. Prefrontal contributions to visual selective attention. Annual Review of Neuroscience. 2013;36:451–466. doi: 10.1146/annurev-neuro-062111-150439. [DOI] [PubMed] [Google Scholar]
  77. Stokes M, Duncan J. Dynamic brain states for preparatory attention and working memory. In: The Oxford Handbook of Attention. Oxford University Press; 2014. [DOI] [Google Scholar]
  78. Summerfield C, de Lange FP. Expectation in perceptual decision making: neural and computational mechanisms. Nature Reviews Neuroscience. 2014;15:745–756. doi: 10.1038/nrn3838. [DOI] [PubMed] [Google Scholar]
  79. Todorovic A, de Lange FP. Repetition suppression and expectation suppression are dissociable in time in early auditory evoked fields. Journal of Neuroscience. 2012;32:13389–13395. doi: 10.1523/JNEUROSCI.2227-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Urai AE, de Gee JW, Tsetsos K, Donner TH. Choice history biases subsequent evidence accumulation. eLife. 2019;8:e46331. doi: 10.7554/eLife.46331. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. van Wassenhove V, Grant KW, Poeppel D. Visual speech speeds up the neural processing of auditory speech. PNAS. 2005;102:1181–1186. doi: 10.1073/pnas.0408949102. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Vidyasagar R, Stancak A, Parkes LM. A multimodal brain imaging study of repetition suppression in the human visual cortex. NeuroImage. 2010;49:1612–1621. doi: 10.1016/j.neuroimage.2009.10.020. [DOI] [PubMed] [Google Scholar]
  83. Watrous AJ, Miller J, Qasim SE, Fried I, Jacobs J. Phase-tuned neuronal firing encodes human contextual representations for navigational goals. eLife. 2018;7:e32554. doi: 10.7554/eLife.32554. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Woolrich M, Hunt L, Groves A, Barnes G. MEG beamforming using Bayesian PCA for adaptive data covariance matrix regularization. NeuroImage. 2011;57:1466–1479. doi: 10.1016/j.neuroimage.2011.04.041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Wyart V, Myers NE, Summerfield C. Neural mechanisms of human perceptual choice under focused and divided attention. Journal of Neuroscience. 2015;35:3485–3498. doi: 10.1523/JNEUROSCI.3276-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Yuste R. From the neuron doctrine to neural networks. Nature Reviews. Neuroscience. 2015;16:487–497. doi: 10.1038/nrn3962. [DOI] [PubMed] [Google Scholar]

Decision letter

Editor: J Matias Palva1
Reviewed by: Lucas C Parra2

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This study used a novel approach that combined measures of human brain activity with high spatial and temporal resolution (using magnetoencephalography, or MEG) and repetition suppression to identify neural representations of task-specific information processing related to the stimulus, task context, and/or motor response during decision-making. The primary finding, which runs counter to many related studies in non-human primates, is that in premotor cortex, neural activity encodes task-relevant features more strongly than task-irrelevant stimuli. The clever approach, and the use of that approach to draw interesting and well-grounded conclusions about information processing in the human brain, were considered particularly noteworthy and likely to inform future studies of human decision-making.

Decision letter after peer review:

Thank you for submitting your article "Projections of non-invasive human recordings into state space show unfolding of spontaneous and over-trained choice" for consideration by eLife. Your article has been reviewed by 2 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Joshua Gold as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Lucas C Parra (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

The editors have judged that your manuscript is of interest, but as described below, extensive revisions are required before it can be considered for publication. The editors and reviewers agree that no further data are required, but major conceptual and data-analysis clarifications are essential.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

In this paper the authors use a novel technique to disentangle neural representations of distinct choice-relevant variables in MEG data which leverages the phenomenon of repetition suppression. By presenting an 'adaptation stimulus' before each perceptual choice the authors were able to selectively suppress neural activity for different sensory and response modalities and used these suppression signatures to isolate distinct activity components reflecting task context (instruction to attend to motion vs colour), relevant sensory input, irrelevant sensory input, motor response (index vs middle finger) and choice. Repetition suppression was used here with clever experimental design to determine from MEG whether premotor cortex (PMd) "encodes" different properties of the stimulus, task or response during a decision making task. The premise is that if adaptation is observed for a specific feature, then this feature must have been "encoded" in PMd. The main finding is that stimulus, task and response features are all "encoded" in PMd with increasing delay, and that task-irrelevant stimulus properties are "encoded" less strongly. Whereas prior NHP studies found little difference in the representation of relevant vs irrelevant sensory inputs at the final integration stage, the present study found that irrelevant representations were significantly weaker. A follow-up study in which humans were extensively trained to mimic the task exposure experienced by NHPs replicated these results.

The study is interesting on many fronts and relevant to a wide audience. It is, however, essential to revise the manuscript extensively to address several issues, including methodological challenges associated with inferring neural computations underpinning decision making from non-invasive recordings and the more general question of the degree to which irrelevant sensory inputs are subject to top-down filtering.

Essential revisions:

1. The rationale for relying on repetition suppression to isolate a neural readout of the decision process needs to be more clearly articulated. Given the emphasis the authors place on comparing their results to previous NHP studies (Mante et al. 2013, Siegel et al. 2015), the authors should explain why they could not have applied a similar decoding approach. The task design means that task context, sensory modality, sensory input strength and choice are already nicely orthogonalised so why is the adaptation step necessary?

2. The query above relates to a more substantive question regarding the degree to which the authors approach can allow us to draw firm conclusions regarding the relative timing with which these distinct variables are represented. For example the authors highlight that sensory representations precede choice representations.

2a. For starters, this is contrary to what Siegel et al. (2015) found – they reported that choice representations emerged before the stimulus even appeared.

2b. More importantly, to what extent can it be assumed that the relative timing of adaptation effects on sensory vs motor components necessarily translates directly to differences in the time at which these variables influence the decision process?

2c. The authors note the well-established fact that stimulus and response repetition is associated with decreased BOLD/EEG/MEG activity in the relevant brain regions but what do we know about the timing of these effects? Can we assume comparable dynamics underpinning sensory and motor adaptation?

2d. For example, recent studies of choice history biases (e.g. Urai et al. 2019, eLife) suggest that responses on trial N-1 cause a bias in the rate of evidence accumulation for repeated choices, suggesting that the neural dynamics associated with repetition may be more complex than a simple attenuation. More pertinently, can we assume that these adaptation/attenuation effects have any impact on the information content for the decision process?

Please clarify these aspects in relevant sections of the manuscript and address with data analyses how repetition impacts sensory encoding versus motor preparation signals.

3a. The contrast between the present study and the aforementioned NHP studies on the point of filtering of irrelevant sensory inputs is striking and interesting. The authors have, however, used a different analysis strategy to that of Siegel and Mante, which could conceivably contribute to this difference. This does not necessarily undermine the novelty and importance of the results but points to some additional possibilities. Please clarify this and consider corroborating the results by implementing a more comparable analysis approach.

3b. The authors could further examine the difference in human versus monkey behaviour. In Siegel et al. (2015), the monkeys exhibited quite strong cross-over effects (i.e., RT for motion choices being impacted by stimulus colour). How strong are the cross-over effects in the present study? Please quantify and clarify this issue. This would be helpful to know as it would point to a more fundamental cross-species difference and perhaps rule out the possibility that the cause of the discrepancy lies more in the differences in analysis strategy or neural recording methods.

4. A key difficulty with the narrative of this work is the notion that adaptation = encoding. If we have understood correctly, what is actually quantified here is whether a change in experimental condition (from one repeat to the next) drives variance in the PMd signals. The analysis therefore treats PMd as the output of a change detector. But the readout of a change detector does not necessarily need to encode the feature itself. The claim that the presence of adaptation (a weaker response to the stimulus) equals good encoding of that stimulus was therefore found confusing by both reviewers and the reviewing editor. A lot of the language in the result sections equates the two and makes it very hard to parse. Please clarify and justify the rationale.

5. Another central terminological confusion pertains to "Projection into state space" in the title and much of the introduction. This gives the impression of a multivariate analysis of MEG data, which is largely not the case in this study. Until Figure 5, all the analysis is on univariate neural signals and nothing is "projected", nor is there any use of "subspace", "decoding" or "encoding". It is clear that the investigators see "adaptation" conceptually as a way to quash the neural response in some dimensions, and in that sense the term "projection" may be justified. This is, however, a very unusual use of terminology and may be confusing to many readers. Please reformulate the title and introduction of the paper to more accurately reflect the content of the paper and better set the expectations for the reader.

eLife. 2021 May 11;10:e60988. doi: 10.7554/eLife.60988.sa2

Author response


Essential revisions:

1. The rationale for relying on repetition suppression to isolate a neural readout of the decision process needs to be more clearly articulated. Given the emphasis the authors place on comparing their results to previous NHP studies (Mante et al. 2013, Siegel et al. 2015), the authors should explain why they could not have applied a similar decoding approach. The task design means that task context, sensory modality, sensory input strength and choice are already nicely orthogonalised so why is the adaptation step necessary?

We agree that both analysis approaches are interesting. However, MEG signals measured non-invasively in humans capture neural activity at a very different spatial scale compared to those obtained from direct recordings in NHPs. This limits the use of multivariate techniques in ways that are not applicable when analysing data from invasive neuronal recordings. To respond to this comment, we will:

1. First explain why, when using MEG, repetition suppression (RS, or “adaptation”) provides the only way to measure the activity of specific neuronal populations within a given brain area. We will explain that multivariate approaches are sensitive to differences across sensors, but importantly will not be able to assess representational differences manifested within a single brain area.

2. Second, we will nevertheless show the results from conducting the suggested analysis on the univariate signal from PMd. Our task was not optimized for this analysis and is less well suited to distinguishing between two of the key variables (relevant sensory inputs and choice direction) in our experiment compared with Mante et al. or Siegel et al. Nevertheless, this new analysis confirms our key findings and has now been included as a supplementary figure.

1. In the study by Mante et al., the decoding approach was successful because it relied upon an analysis approach that looked for multivariate information across cells. With MEG, such an approach would have to look across sensors, rather than cells, which provides a very different spatial resolution, not sufficient for looking at responses within a single brain region, which was the focus of the present study. Indeed, it has been shown that we may not be sensitive to cellular-level information using multivariate decoding at the level of voxels or sensors when dealing with neuroimaging data (e.g. Dubois…Tsao, 2015). However, RS has been proposed as a technique to target cellular-level information by experimental design, rather than by multivariate analysis. As such, it is a univariate rather than a multivariate approach, giving us specificity to neural responses within each individual sensor and trial.

This is possible because the adaptation stimulus causes suppression, at the time of the test stimulus, only in those neurons selective to the specific features contained in the adaptation stimulus. For example, when the adaptation stimulus is a right-motion stimulus, this means that the MEG signal at the time of the test stimulus presented shortly after is less influenced by the subpopulation of neurons sensitive to right-ward motion. In other words, we can infer the contribution of subpopulations of neurons within a given trial and brain region (or sensor), which multivariate methods do not afford given the spatial scale of MEG. Because Mante and colleagues also studied neural responses within a single brain region, repetition suppression is the closest we can get to their approach using a non-invasive technique.

We have now introduced this rationale more clearly in the Introduction:

“In this study, we focus on choice processes unfolding in dorsal premotor cortex (PMd). PMd is the key region for choosing hand digit responses (Dum and Strick, 2002; Rushworth et al., 2003), the response modality in our choice task. Thus, the neural representations of interest are located within one brain region. Given this spatial focus, repetition suppression provides the best resolution achievable using non-invasive MEG: a single sensor or voxel is sufficient to reveal feature-processing using repetition suppression. By contrast, multivariate approaches rely on spatial patterns detected across sensors, which would not offer the required spatial scale.”

Before conducting the suggested analysis on the univariate PMd signal below, we note that we had already included a multivariate decoding analysis in the original manuscript, albeit not specifically for PMd (for the reasons outlined above), but across 38 parcels covering the whole brain (original Figure 5, now Figure 6 in the updated manuscript). We took a conservative approach given the high correlations between the regressors coding for choice direction and relevant input strength. Namely, we first regressed out choice direction from the signal in all 38 parcels and then asked whether we could still decode relevant input strength from the residuals. We found that this was possible. We also found that, in spite of the high correlations between relevant input strength and choice direction, relevant inputs were coded more strongly than irrelevant inputs across the 38 parcels. We introduced this analysis as a whole-brain control to test whether irrelevant inputs were encoded more strongly anywhere else. Even across all regions, the filtering out of irrelevant inputs was present both before and after training.
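The regress-out-then-test logic of that whole-brain control can be sketched as follows. This is a simplified, regression-based stand-in for the decoding step, with simulated data; the trial counts and effect sizes are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 20_000   # large n only to make the illustration stable

choice = rng.choice([-1.0, 1.0], n_trials)
relevant = choice * rng.choice([1.0, 2.0], n_trials)   # highly correlated with choice

# Simulated parcel signal carrying both choice and input-strength information
signal = 1.0 * choice + 1.0 * relevant + rng.standard_normal(n_trials)

# Step 1: regress choice direction out of the signal
Xc = np.column_stack([np.ones(n_trials), choice])
resid = signal - Xc @ np.linalg.lstsq(Xc, signal, rcond=None)[0]

# Step 2: test whether relevant input strength still explains the residuals
Xr = np.column_stack([np.ones(n_trials), relevant])
beta = np.linalg.lstsq(Xr, resid, rcond=None)[0][1]
# beta > 0: input-strength information survives removal of choice direction
```

A clearly positive `beta` here corresponds to the finding that relevant input strength could still be recovered after choice direction had been regressed out.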

2. Nevertheless, as suggested by the reviewers, we have now also conducted an analysis similar to those conducted in the Mante and Siegel papers on the univariate PMd signal. We cannot run decoding on a single timecourse, but we can run an encoding-style analysis using regressors comparable to those of Mante and Siegel, in other words, regressors that are unrelated to any repetition suppression effects and instead simply code the features of the test stimulus. This can only tell us whether PMd’s univariate signal variance reflects the strength of sensory input or choice direction, and we note that such variation could be caused by other confounding processes, such as attention or choice difficulty. By contrast, the repetition suppression approach captures the precise sensory or motor features that are repeated and is therefore more sensitive (and thus more like the across-cell multivariate approaches employed in Mante and Siegel).

We coded the regressors as

1. Context [-1, 1]

2. Relevant input strength [-2, -1, 1, 2]

3. Irrelevant input strength [-2, -1, 1, 2]

4. Choice direction [-1, 1]

Mante et al.’s regressors were identical to these with the only difference that in addition, they split up the relevant and irrelevant inputs into colour and motion trials. In other words, here under ‘relevant input strength’, we collapse two of their regressors that assess colour input strength in colour trials and motion input strength in motion trials. Under ‘irrelevant input strength’, we collapse across motion input strength in colour trials and colour input strength in motion trials. This seems appropriate to maximise the power of our design given our main question is about the difference between relevant and irrelevant input representations, and not colour and motion inputs, and given we have a smaller number of trials.
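As a sketch of what such an encoding-style analysis looks like in practice, the following hypothetical snippet builds the four regressors with the codings listed above and fits one OLS model per timepoint to a simulated univariate timecourse. The trial counts, data and the assumption that choice follows the relevant input are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_times = 400, 120

# Trial-wise regressors, coded as listed above (trial draws are simulated)
context = rng.choice([-1, 1], n_trials)
relevant = rng.choice([-2, -1, 1, 2], n_trials)
irrelevant = rng.choice([-2, -1, 1, 2], n_trials)
choice = np.sign(relevant)          # choice follows the relevant input

# The design-matrix problem discussed in the text: relevant input
# strength and choice direction are correlated at roughly r = 0.95
r = np.corrcoef(relevant, choice)[0, 1]

X = np.column_stack([np.ones(n_trials), context, relevant, irrelevant, choice])

# Simulated univariate PMd timecourse (trials x timepoints)
Y = rng.standard_normal((n_trials, n_times))

# Time-resolved encoding: one OLS fit across trials per timepoint, giving
# a (regressors x timepoints) matrix of betas to compare across variables
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Comparing the timecourses of the `relevant`, `irrelevant` and `choice` rows of `betas` is the encoding-style analogue of the latency comparisons in Mante et al. and Siegel et al.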

However, it is unfortunately not the case that relevant sensory inputs (e.g., right-ward motion) are fully orthogonal to the choice direction (e.g., right button press) in our task (and this would not change if we split up trials into colour and motion trials). At the time of the test stimulus, the two are correlated at r=0.95. The two levels of coherence are the only feature that stops them from being fully correlated. In our repetition suppression analysis, the adaptation stimulus (which could, for example, involve the same-hand response but a different input) was instrumental in helping us to decorrelate these two dimensions.

There are several small but key differences that meant that Mante's and Siegel's experiments were better able to decorrelate these variables. First, Mante and colleagues had three coherence levels in each direction. Because we collected fewer trials in our human participants and included the adaptation stimulus manipulation, we simplified our task and included only two coherence levels. However, as can be seen from the black line in Author response image 1, which shows the correlation between relevant input strength and choice direction at multiple coherence levels, the additional coherence level only reduces the correlation a little. Second, Mante and colleagues introduced another important manipulation. For trials in the colour context, the location of the saccade targets was randomized, so green random-dot-motion stimuli sometimes required a right saccade to a green target located on the right-hand side of the screen and sometimes required a left saccade to a green target located on the left-hand side of the screen. This was signalled to the animals using flanker stimuli (which we also used here, but kept in fixed locations, meaning that colour-direction mappings were fixed across our experiment). This scenario with randomized colour-choice mappings is simulated by the blue line in Author response image 1. Because colour trials made up half the trials, this manipulation substantially reduces the correlation between the relevant input strength regressor and the choice direction regressor, from >0.9 to <0.5. We note, however, that when splitting up the relevant input strength regressor into motion and colour trials as done in Mante et al., this manipulation only helps to decorrelate one of the two input regressors (colour input strength in colour trials) from the choice direction regressor. The high correlation remains between the regressor capturing motion input strength in motion trials and choice direction. It would be around 0.9 even in Mante et al.'s study, which is probably why they projected the population response onto orthogonal axes before examining the temporal evolution of input and choice signals. This is what we need repetition suppression for here.
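These correlations follow directly from the regressor codings and can be reproduced numerically. The following sketch is our own illustration, mirroring the black (fixed mapping) and blue (randomized colour-choice mapping) lines of Author response image 1 under simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Two coherence levels in each direction, as in our task
relevant = rng.choice([-2, -1, 1, 2], n)

# Fixed input-choice mapping: choice is the sign of the relevant input
choice_fixed = np.sign(relevant)
r_fixed = np.corrcoef(relevant, choice_fixed)[0, 1]   # ≈ 0.95

# Randomized colour-choice mapping on half the trials (colour context):
# on those trials the sign of the mapping flips at random, which
# decorrelates the input-strength and choice-direction regressors
flip = np.where(rng.random(n) < 0.5, rng.choice([-1, 1], n), 1)
choice_rand = np.sign(relevant) * flip
r_rand = np.corrcoef(relevant, choice_rand)[0, 1]     # < 0.5
```

With only a fixed mapping the correlation sits near 0.95, and randomizing the mapping on half the trials brings it below 0.5, matching the two regimes discussed above.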

Author response image 1.


Siegel et al. used 100% coherent stimuli but had seven motion directions and colour levels (see their Figure S1). Their task therefore probed a slightly different type of categorization choice.

In summary, not only is our method (MEG) not well suited for conducting multivariate analyses that require the spatial scale of subpopulations of neurons within a single brain region (see the first part of the response above), but our task design is also not optimized to dissociate relevant sensory information from choice direction at the time of the TS, when disregarding the AS manipulation. Because relevant input and choice direction are highly correlated, it is difficult to make inferences about their relative timings. The reason for not including the manipulations necessary to decorrelate these variables was that our experiment was optimised for measuring cellular representations via repetition suppression.

Nevertheless, we conducted the analysis suggested by the reviewers. The results from this new analysis, conducted on the timeseries extracted from PMd (the same one used in the central repetition suppression analyses), confirm our key findings. They show that PMd signals contain information related to all the different task variables, that sensory inputs are processed earlier in time than motor responses, and that irrelevant inputs are represented more weakly than relevant inputs both before and after extensive training (see new Figure 4 —figure supplement 3). One caveat of this analysis is that it ignores the adaptation stimulus, which we know influences activity at the time of the test stimulus.

Overall, we conclude that this is an interesting analysis for drawing parallels between existing non-human primate and human datasets, but not one that our task was optimized for, and not one that gives us the sensitivity that the RS analysis approach can provide. We have included the results from this new analysis as a new supplementary figure (see above), added methodological details to the Methods, and summarize it in a new section in the main text as follows:

Results:

“Task feature processing independent of repetition suppression in PMd

The results described thus far have relied on the use of repetition suppression to manipulate the MEG signals recorded at the time of the TS. […] The strength of sensory inputs is processed earlier in time than the choice direction (motor response), and the strength of irrelevant inputs is processed more weakly than the strength of relevant inputs both before and after extensive training (Figure 4 —figure supplement 3).”

Methods:

“Task feature processing independent of repetition suppression in PMd

Our experiment was optimized for the analyses described above which capitalize on the suppression of the MEG signal along multiple task features (or “axes”) induced by repeated exposure (repetition suppression). […] We did not optimize the design for this analysis but instead for a repetition suppression analysis because repetition suppression has higher within-sensor spatial sensitivity.”

2. The query above relates to a more substantive question regarding the degree to which the authors' approach can allow us to draw firm conclusions regarding the relative timing with which these distinct variables are represented. For example, the authors highlight that sensory representations precede choice representations.

2a. For starters, this is contrary to what Siegel et al. (2015) found – they reported that choice representations emerged before the stimulus even appeared.

We are not sure if we fully follow the inconsistency that is being pointed out. First, our results are consistent, in terms of the observed timings, with those reported by Mante et al. where sensory input representations precede choice representations. This can be appreciated in their population trajectories which deflect in the direction of colour or motion before they begin to deflect in the direction of choice.

Siegel et al. (e.g. in their Figure 1E and 1I, which shows the average spiking activity across all units and brain regions) also find that colour and motion information precedes choice information in terms of latency. This is consistent with the significant timing difference they report: “Motion and color information rose after stimulus onset with a significantly shorter latency for color (98 ± 2 ms) as compared with motion (108 ± 2 ms) information (P < 0.001). Last, choice information rose (193 ± 1 ms) before the motor responses (270 ± 3 ms) and significantly later than motion and color information (both P < 0.0001).”

We wonder if the reviewers and reviewing editor are referring to the spontaneous pre-trial fluctuations of activity that Siegel et al. report, and which occur before the time of stimulus presentation and predict choice. This is a very interesting result but nevertheless, the strong stimulus-evoked choice activity which we investigated here follows a similar time-course in Siegel et al. to the one we observe in our data and is preceded by the encoding of the sensory properties of the stimulus (e.g. colour and motion).

2b. More importantly, to what extent can it be assumed that the relative timing of adaptation effects on sensory vs motor components necessarily translates directly to differences in the time at which these variables influence the decision process?

2c. The authors note the well-established fact that stimulus and response repetition is associated with decreased BOLD/EEG/MEG activity in the relevant brain regions but what do we know about the timing of these effects? Can we assume comparable dynamics underpinning sensory and motor adaptation?

We think these two points (2b/2c) are related and can be answered together.

We agree with the reviewer that there is uncertainty about the relationship between repetition suppression and neuronal representation. However, we are not aware of any additional uncertainty about the timing of repetition suppression relative to the timing of neuronal representation. Other studies that have examined the timing of RS effects in EEG/MEG have found that the sequential timecourse of RS is consistent with what we know about the timecourse of processing from other techniques (e.g. Stefanics, … Stephan, EJN, 2018; see their Figure 3). Notably, the timings and dynamics observed in our data are exactly what we would have predicted from the direct recording results in Mante et al. And in our response above we have now shown that these timings are similar to those obtained using an encoding approach. Therefore, we can be confident that the relative timing of adaptation effects translates to differences in the time at which these variables influence the decision process.

We also note that, importantly, all effects of interest come from within the same brain region. This eliminates the possibility that inter-regional differences in adaptation dynamics could be confounding our results. Since the temporal properties of repetition suppression are thought to be determined by the neural dynamics and recurrent processing of a given region, within the same brain region, here PMd, these features can be assumed to be constant.

We think therefore that we can have as much confidence as in any non-invasive study. We have included the following section in the Discussion about the relationship between repetition suppression and neuronal representation:

“The neural mechanisms underlying repetition suppression are not fully understood to date. Hypothesized mechanisms include neuronal sharpening, fatigue and facilitation, with current evidence favouring the fatigue model according to which suppression is caused by attenuated synaptic inputs (Barron et al., 2016; Grill-Spector et al., 2006). […] The temporal dynamics of repetition suppression (e.g., the influence of time-lag) can vary between regions and are likely determined by the neural dynamics and recurrent processing of a given region. Importantly, however, the key effects reported here all come from within the same region. This eliminates inter-regional differences in neural dynamics as a possible explanation for the timing differences we observed between sensory and motor suppression effects.”

2d. For example, recent studies of choice history biases (e.g. Urai et al. 2019, eLife) suggest that responses on trial N-1 cause a bias in the rate of evidence accumulation for repeated choices suggesting that the neural dynamics associated with repetition may more complex than a simple attenuation. More pertinently, can we assume that these adaptation/attenuation effects have any impact on the information content for the decision process?

We agree: it is likely that repetition suppression affects the information content. While the mechanisms of repetition suppression are still under debate, there is strong evidence favouring a fatigue model whereby suppression occurs due to an attenuation of the received inputs (see Grill-Spector, Henson, Martin, TICS, 2006; Vidyasagar, Parkes, Neuroimage, 2010; Barron, Garvert, Behrens, 2016).

However, if this is the case, and RS affects neural inputs, this manipulation of the neural dynamics is likely to translate into behavioural change as well. We tested if this could be directly shown in a new behavioural analysis. In other words, we tested if, by manipulating subpopulations of neurons via RS, we might as a result have also manipulated the behaviour in specific and predictable ways.

This analysis was conducted on log-RTs and response accuracies (% correct) of the test stimulus choices. We hypothesized that suppression (i.e. repetition) of the relevant feature and of the relevant response finger representation might slow RTs and reduce accuracy (note, however, that all t-tests are two-sided), but that irrelevant input suppression should not impact behaviour. In other words, we reasoned that if presentation of, e.g., a green adaptation stimulus means that green inputs are attenuated at the time of the test stimulus, we would predict reaction times and accuracy to be affected when green is the relevant feature to attend to at the time of the test stimulus. This is consistent with what we found – the plot in Author response image 2 summarizes these effects.

Author response image 2.


RTs: We found that participants were faster to respond to motion than colour stimuli (main effect of context; t(21)=-2.31, P=0.03), and slower when the relevant input had already been processed at the time of the AS and had thus been suppressed/attenuated at the time of the TS (t(21)=6.80, P=1.00e-6). RTs were also slowed by a context-switch (t(21)=7.72, P=1.46e-7) and slower for the left compared to right finger (t(21)=3.06, P=0.016).

Accuracy: Participants were less accurate when the relevant input was colour compared to motion (t(21)=2.48, P=0.02), when the relevant input was suppressed (t(21)=-8.65, P=2.28e-8), when the response was suppressed (t(21)=-2.28, P=0.03) or when a context-switch was required (t(21)=-9.58, P=4.10e-9).
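A minimal sketch of this style of group analysis, assuming per-subject OLS on ±1-coded trial factors followed by two-sided one-sample t-tests on the betas across subjects. The data are simulated and the factor coding is hypothetical; only the analysis structure mirrors what is described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subj, n_trials, n_factors = 22, 300, 4

group_betas = np.empty((n_subj, n_factors))
for s in range(n_subj):
    # Hypothetical ±1 coding of four trial factors, e.g. context,
    # relevant-input suppression, response suppression, context switch
    factors = rng.choice([-1, 1], (n_trials, n_factors))
    X = np.column_stack([np.ones(n_trials), factors])
    # Simulated log-RTs, slowed here only by factor 1 (illustrative effect)
    log_rt = 0.05 * factors[:, 1] + 0.2 * rng.standard_normal(n_trials)
    b, *_ = np.linalg.lstsq(X, log_rt, rcond=None)
    group_betas[s] = b[1:]          # drop the intercept

# Two-sided one-sample t-tests across subjects, one per factor
t, p = stats.ttest_1samp(group_betas, 0.0, axis=0)
```

Each row of `group_betas` holds one participant's regression weights, and the group-level t and P values per factor correspond to the t(21) statistics quoted above.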

These data are consistent with the hypothesis that repetition suppression attenuates the processing of information at the level of the synapse (fatigue hypothesis) causing the suppressed information to be processed less efficiently, thus causing slower and less accurate choices. It is likely that the same adaptation process that changes the neural signal at the time of the TS is causing the changes in behaviour identified here. These new results therefore provide additional evidence that the repetition suppression approach was effective at manipulating the inputs to the decision process.

We note that a subset of these behavioural effects was originally presented in Suppl Figure 2E (now Figure 4 —figure supplement 1, for relevant and irrelevant input suppression but not response suppression). However, we had not given it a prominent place in the manuscript. Moreover, the linear/logistic regression analysis above is more consistent with our neural analysis and therefore hopefully more intuitive for the reader to understand.

We now present this result in the main Results section (Behavioural effects predicted by neural adaptation mechanisms), explain the methodological details in the Methods, and have added a new behavioural figure (new Figure 5) and Discussion section summarizing these effects.

“Behavioural effects predicted by neural adaptation mechanisms

We have shown that repeated exposure to sensory inputs or motor responses has an impact on the MEG signal recorded at the time of the TS. […] However, neither RTs nor choice accuracies were affected by repetition of the irrelevant sensory feature (RT: t(21)=0.13, P=0.90, Hedges’ g = 0.04, 95%CI = [-0.57, 0.64]; accuracy: t(21)=0.03, P=0.98, Hedges’ g = 0.01, 95%CI = [-0.60, 0.61]). Other main effects included context (RT: t(21)=-2.31, P=0.03, Hedges’ g = -0.68, 95%CI = [-1.34, -0.06]; accuracy: t(21)=2.48, P=0.02, Hedges’ g = 0.73, 95%CI = [0.10, 1.39]), and context-switch (RT: t(21)=7.72, P=1.46e-7, Hedges’ g = 2.27, 95%CI = [1.40, 3.25]; accuracy: t(21)=-9.58, P=4.10e-9, Hedges’ g = -2.82, 95%CI = [-3.95, -1.82]).”
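For readers unfamiliar with the effect size quoted above: Hedges' g is Cohen's d scaled by a small-sample bias correction. A minimal sketch of one common form of the computation, on illustrative data rather than the study's:

```python
import numpy as np

def hedges_g(x):
    """Hedges' g for a one-sample (e.g. paired-difference) contrast:
    Cohen's d with Hedges' small-sample bias correction."""
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x.mean() / x.std(ddof=1)            # Cohen's d (one-sample)
    correction = 1 - 3 / (4 * (n - 1) - 1)  # approximate Hedges' factor
    return d * correction

# Example: 22 paired differences (deterministic illustrative data)
diffs = np.arange(22.0)
g = hedges_g(diffs)   # ≈ 1.56
```

With n=22 subjects, as here, the correction shrinks d by roughly 4%, which is why g is reported rather than d for samples of this size.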

Finally, we think the relationship to Urai et al.’s findings might be interesting, but there are also important differences between our study and theirs. For example, trials lasted more than 5 seconds on average in Urai et al.’s behavioural paradigms, meaning choices were more spread out in time (their Figure 2). By comparison, the interval between the AS and TS choice was only 300ms here (our Figure 1) to facilitate repetition suppression analyses. While repetition suppression effects can still occur after several seconds, they would be significantly weaker, as they scale with the time-lag. Nevertheless, there is some overlap and consistency between our findings and theirs. Urai et al. focus on the history of the choice made (not the history of sensory inputs received). This is equivalent to the adaptation to the responding hand (‘response adaptation’) we report here, which we show above has an impact on behavioural performance. Indeed, Urai et al. find that the most consistent effect is an alternation or shift away from the previous response, which could also be interpreted in light of an adaptation effect. Although this is not the interpretation the authors put forward, and their focus is specifically on the accumulation or drift-diffusion process, our results are broadly in line with what they find.

We have added a reference to this study to the results and discussion of the new behavioural results as follows:

“Our behavioural results are consistent with the interpretation that inputs are suppressed as a result of repeated exposure to a feature. If repeated exposure attenuates the received neuronal inputs, thus affecting the processed information content, this might translate into behavioural change because suppressed information cannot be processed as efficiently for making a choice. However, this should only be the case if the relevant sensory dimension is repeated. Our behavioural analyses confirmed this prediction (Figure 5 and Figure 5 —figure supplement 1). Repetition of the relevant sensory feature or response, but not the irrelevant sensory feature, reduced choice accuracies, and repetition of relevant, but not irrelevant, sensory features slowed RTs. These results provide further evidence that the repetition suppression approach employed here was effective at manipulating the inputs to the decision process. It seems likely that the same adaptation process that changes the MEG signal in PMd is causing the changes in behaviour we identified. In other work, choice biases were shown to depend on the precise choice history, and in the majority of participants, choices were biased away from the previous response (Urai et al., 2019). This could relate to effects observed here, showing better performance for response alternation, and worse performance for response repetition.”

Please clarify these aspects in relevant sections of the manuscript and address with data analyses how repetition impacts sensory encoding versus motor preparation signals.

We have addressed these points one by one above and hope the additional analyses have satisfied the reviewer’s and reviewing editor’s concerns.

3a. The contrast between the present study and the aforementioned NHP studies on the point of filtering of irrelevant sensory inputs is striking and interesting. The authors have, however, used a different analysis strategy to that of Siegel and Mante, which could conceivably contribute to this difference. This does not necessarily undermine the novelty and importance of the results but points to some additional possibilities. Please clarify this and consider corroborating the results by implementing a more comparable analysis approach.

As mentioned in reply to point 1, we have now implemented the suggested analysis. We have outlined above why, overall, our data and task design are more suited to a repetition suppression analysis strategy. This is because of the low spatial resolution achieved with MEG, because of high correlations between relevant sensory input and choice direction in a Mante-style encoding approach, and because the AS would be ignored in such an analysis although we know it has an impact on the MEG signal observed at the time of the TS. Despite these caveats, we can replicate our key conclusions in the suggested analysis: (1) relevant inputs are represented more strongly than irrelevant inputs, both pre- and post-training, (2) all decision variables are processed in PMd and (3) the peak timings suggest a transition from a representation of inputs to a representation of choice, with choice representations emerging slightly after input representations (new Figure 4 —figure supplement 3). There are quantitative differences: timings seem slightly less robust and more variable across individuals in the Mante-style regression analysis, and irrelevant inputs are represented even less strongly in PMd. Both of these are likely due to the less sensitive and less well powered analysis with higher correlations in the design matrix.

As mentioned above, we have added a new paragraph to the main manuscript summarizing these new results, and we have added a new supplementary figure (Figure 4 —figure supplement 3) and refer to it in the main part of the manuscript as corroborating our key conclusions. We have decided to focus on the repetition suppression analysis in the main part of the manuscript because this is the analysis strategy our task was designed and optimized for.

We have also added a few sentences to the discussion to briefly summarize the differences in analysis strategy and data types used here and by Siegel and Mante, leaving open the possibility that these differences might have contributed to the different conclusions drawn. We also note that this discrepancy in findings is present even within the macaque literature, where similar direct recording techniques and training regimes are employed. Recordings from several laboratories including those of Robert Desimone, Tirin Moore, Pascal Fries, John Reynolds, John Maunsell and Stefan Treue (see review by Noudoost and Moore 2010 and ~20 relevant references therein) show that inputs are filtered via top-down attentional processes, consistent with our findings and other work in humans.

Discussion:

“There is also a discrepancy in terms of the methods used here and in (Mante et al., 2013a). Mante et al. used multivariate approaches across cells, while we used a repetition suppression manipulation on non-invasive univariate data. However, again, this is unlikely to fully account for the discrepancy in findings. A large body of work in NHPs is consistent with our findings but used similar recording, analysis and training approaches as in Mante et al. (Noudoost et al., 2010). Furthermore, when we implemented a comparable analysis approach to (Mante et al., 2013a), albeit using the univariate signal of a single sensor and ignoring the repetition suppression manipulation our design was optimized for (Figure 4 —figure supplement 3), we were able to confirm that signal variation related to sensory stimulus strength was weaker for irrelevant compared to relevant features. Ultimately, the discrepancy between different findings remains to be addressed and highlights a general need for a better understanding of decision-making in environments that require dynamic changes (Glaze et al., 2015; Gold and Stocker, 2017; Ossmy et al., 2013).”

3b. The authors could further examine the difference in human versus monkey behaviour. In Siegel et al. (2015), the monkeys exhibited quite strong cross-over effects (ie. RT for motion choices being impacted by stimulus colour). How strong are the cross-over effects in the present study? Please quantify and clarify this issue. This would be helpful to know as it would point to a more fundamental cross-species difference and perhaps rule out the possibility that the cause of the discrepancy lies more in the differences in analysis strategy or neural recording methods.

We believe the reviewers and reviewing editor are referring to the following effect reported in Siegel et al. (Figure 1D), which was similarly shown in Mante et al. (Figure 1c-f). We had provided an analogous figure in the supplement (previous Suppl Figure 2D):

This shows that cross-over effects look very comparable between the three studies and across species. In our data, cross-over effects were slightly more pronounced pre-training, and for motion influencing colour choices compared to colour influencing motion choices. Importantly, however, the effects do not seem qualitatively different between the two species.

We are not aware of equivalent plots for RTs in Mante or Siegel. Taken together, it does not seem like the difference in behaviour between monkeys and humans is a likely explanation for the differences in the filtering out of irrelevant input observed.

We have now moved the behavioural plot from the supplement to a main figure (new Figure 5) to make this information more easily accessible. We also extended it to include trials with/without adaptation to further illustrate the behavioural impact of repeating a stimulus feature (see question 2d above):

4. A key difficulty with the narrative of this work is the notion that adaptation = encoding. If we have understood correctly, what is actually quantified here is whether a change in experimental condition (from one repeat to the next) drives variance in the PMd signals. Therefore, the analysis treats PMd as the output of a change detector. But the readout of a change detector does not necessarily need to encode the feature itself. So the claim that presence of adaptation (a weaker response to the stimulus) equals good encoding of that stimulus was found confusing by both reviewers and the reviewing editor. A lot of the language in the results sections equates the two and makes it very hard to parse. Please clarify and justify the rationale.

Thank you for this comment which seems important as many readers would probably find the terminology and rationale similarly confusing. Apologies for not conveying our message more clearly.

We think the central question here is whether the difference in the MEG signal when a feature was versus was not repeated is due to (a) a suppression effect at the synapses caused by a fatigue mechanism, as outlined in our response to point 1 above, or (b) a change detection or prediction error (PE) signal caused by novelty or unexpectedness. Perhaps repetition leads to a smaller signal because of the absence of a prediction error signalling unexpected change. While this is an interesting question that affects the choice of our terminology and that future work should address, a PE-based explanation would not have a large impact on how we interpret our results.

Nevertheless, we think the fact that we see adaptation of not just sensory inputs but also the motor response, and of not just relevant but also irrelevant sensory features, lends support to an interpretation as a suppression effect rather than a PE-like effect. At least it is difficult to know how a motor suppression effect could be discussed in terms of a prediction error. Therefore, we would prefer to continue discussing our findings in terms of “information encoding”. Nevertheless, we have now removed the term “encoding” from the manuscript and talk about adaptation, neural representations or information processing instead. We have tried to make the rationale clearer from the beginning and we have also added a section to the discussion that talks about the differences between prediction errors and adaptation.

Discussion:

“The neural mechanisms underlying repetition suppression are not fully understood to date. Hypothesized mechanisms include neuronal sharpening, fatigue and facilitation, with current evidence favouring the fatigue model, according to which suppression is caused by attenuated synaptic inputs (Barron et al., 2016; Grill-Spector et al., 2006). Another possible explanation for the observed signal modulations is a change-detection or prediction-error-like process caused by novelty or unexpectedness. However, we observe adaptation not just of sensory inputs but also of the motor response, and not just of attended but also of unattended sensory inputs (Larsson and Smith, 2012). This lends support to an interpretation as a suppression rather than a prediction-error-like effect.”

5. Another central terminological confusion pertains to "projection into state space", which appears in the title and much of the introduction. This gives the impression of a multivariate analysis of MEG data, which is largely not the case in this study. Until Figure 5, all the analysis is on univariate neural signals and nothing is "projected", nor is there any use of "subspace", "decoding" or "encoding". It is clear that the investigators see "adaptation" conceptually as a way to quash the neural response in some dimensions, and in that sense the term "projection" may be justified. This is, however, a very unusual use of terminology and may confuse many readers. Please reformulate the title and introduction of the paper to more accurately reflect its content and better set the reader's expectations.

Thank you – we have changed the title and introduction as requested. It now says:

Title: “Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice”

Introduction:

“Here, we extend the repetition suppression framework in one crucial way: we suppress the MEG signal to multiple different features within the same experiment. Adaptation along each feature can be conceptualised as “squashing” the neural response along one task dimension, or task axis. This resembles the task axes of the multi-dimensional state spaces derived from recordings of many neurons, but uses an experimental manipulation of a univariate, rather than a multivariate, signal. Thus, we ask whether repetition suppression along multiple features can mimic projections onto multiple task axes. If so, this would be the closest we can get to measuring multiple cellular-level representations within a single brain region in humans, at the millisecond temporal resolution afforded by MEG.”
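The logic described above — one suppression effect per task feature, each acting like a projection onto a task axis — can be illustrated with a minimal sketch. This is not the authors' analysis code; the data, feature names, and the simple non-repeat-minus-repeat contrast are all assumptions made for illustration (the paper's actual analysis uses regression on source-reconstructed MEG).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-region MEG signal: trials x timepoints.
n_trials, n_time = 200, 100
signal = rng.standard_normal((n_trials, n_time))

# For each task feature ("axis"), a boolean per trial: was that
# feature repeated from the previous trial? (Labels are illustrative.)
features = {
    "relevant_input": rng.integers(0, 2, n_trials).astype(bool),
    "irrelevant_input": rng.integers(0, 2, n_trials).astype(bool),
    "response": rng.integers(0, 2, n_trials).astype(bool),
}

def adaptation_timecourse(signal, repeated):
    """Suppression effect along one task axis: mean signal on
    non-repeat trials minus mean signal on repeat trials."""
    return signal[~repeated].mean(axis=0) - signal[repeated].mean(axis=0)

# One millisecond-resolved adaptation time course per task axis.
effects = {name: adaptation_timecourse(signal, rep)
           for name, rep in features.items()}
```

With random data the effects hover around zero; in real recordings, a positive deflection along one axis at a given latency would indicate that the region is processing that feature at that moment.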

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

    1. Takagi Y, Hunt LT, Woolrich MW, Behrens TEJ, Klein-Flügge MC. 2021. Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice. Open Science Framework.

    Supplementary Materials

    Figure 2—source data 1. Contains 'pre' and 'post' [Time x Regressors x Subjects] for Figure 2B.
    Figure 2—source data 2. Contains 'dat' [Regressors x Subjects] for Figure 2B.
    Figure 2—figure supplement 1—source data 1. Contains 'pre' and 'post' [ROIs x Regressors].
    Figure 2—figure supplement 2—source data 1. Contains 'a_est', 'b_est', 'c_est' [Subjects x Regressors], and 'betas_true' [Regressors x 1].
    Figure 2—figure supplement 3—source data 1. Contains 'a', 'b' [Regressors x Regressors], 'c_betas_true' and 'c_betas_pred' [Repetition x Regressors x Subjects].
    Figure 3—source data 1. Contains 'pre.relirrel' and 'post.relirrel' [Time x Regressors x Subjects].
    Figure 3—source data 2. Contains structs of 'pre' and 'post'.

    These structs have four regressors [Time x 1].

    Figure 4—source data 1. Contains 'dat' [Regressors x Subjects].
    Figure 4—figure supplement 2—source data 1. Contains 'pre' and 'post' [Time x Regressors x Subjects] for LIP.
    Figure 4—figure supplement 2—source data 2. Contains 'pre' and 'post' [Time x Regressors x Subjects] for PMd.
    Figure 4—figure supplement 2—source data 3. Contains 'pre' and 'post' [Time x Regressors x Subjects] for V1.
    Figure 4—figure supplement 2—source data 4. Contains 'pre' and 'post' [Time x Regressors x Subjects] for V4.
    Figure 4—figure supplement 2—source data 5. Contains 'pre' and 'post' [Time x Regressors x Subjects] for mPFC.
    Figure 4—figure supplement 3—source data 1. Contains 'pre' and 'post' [Time x Regressors x Subjects].
    Figure 6—source data 1. Contains structs of 'pre' and 'post'.

    These structs have two structs 'relv' and 'irrel', having two matrices of 'motion' and 'colour' [Subjects x Time].

    Figure 6—source data 2. Contains 'pre' and 'post' [1 x Subjects].
    Supplementary file 1. Task conditions and the corresponding regressors.

    The list of task conditions and corresponding regressors of the experiment are shown. The four bold lines are illustrated as examples in Figure 1B.

    elife-60988-supp1.docx (36.5KB, docx)
    Transparent reporting form

    Data Availability Statement

    The code used in the current study and the datasets generated and/or analyzed during the current study are available at the OSF repository (https://doi.org/10.17605/OSF.IO/RJY3Z).

    The following dataset was generated:

    Takagi Y, Hunt LT, Woolrich MW, Behrens TEJ, Klein-Flügge MC. 2021. Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice. Open Science Framework.
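The source data files above share a common layout, e.g. 'pre' and 'post' arrays of shape [Time x Regressors x Subjects]. The sketch below shows how such arrays could be indexed and averaged once loaded; the array contents here are randomly generated stand-ins, and the dimensions and the meaning of 'pre'/'post' should be taken from the corresponding figure legends, not from this example.

```python
import numpy as np

# Stand-in arrays mirroring the stated [Time x Regressors x Subjects]
# layout of, e.g., Figure 2—source data 1 (values are random, not real data).
n_time, n_reg, n_sub = 100, 4, 20
pre = np.random.default_rng(1).standard_normal((n_time, n_reg, n_sub))
post = np.random.default_rng(2).standard_normal((n_time, n_reg, n_sub))

# Group-level time course per regressor: average over the subject axis.
group_pre = pre.mean(axis=2)    # [Time x Regressors]
group_post = post.mean(axis=2)  # [Time x Regressors]

# Contrast the two conditions per regressor and timepoint.
contrast = group_post - group_pre
```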

