eLife. 2021 Jun 14;10:e64431. doi: 10.7554/eLife.64431

Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant

Stijn Adriaan Nuiten 1,2, Andrés Canales-Johnson 1,2,3,4, Lola Beerendonk 1,2,3, Nutsa Nanuashvili 1,2, Johannes Jacobus Fahrenfort 1,2, Tristan Bekinschtein 3,5, Simon van Gaal 1,2
Editors: Michael J Frank6, Nicole C Swann7
PMCID: PMC8294845  PMID: 34121657

Abstract

Conflict detection in sensory input is central to adaptive human behavior. Perhaps unsurprisingly, past research has shown that conflict may even be detected in the absence of conflict awareness, suggesting that conflict detection is an automatic process that does not require attention. To test the possibility of conflict processing in the absence of attention, we manipulated task relevance and response overlap of potentially conflicting stimulus features across six behavioral tasks. Multivariate analyses on human electroencephalographic data revealed neural signatures of conflict only when at least one feature of a conflicting stimulus was attended, regardless of whether that feature was part of the conflict or overlapped with the response. In contrast, neural signatures of basic sensory processes were present even when a stimulus was completely unattended. These data reveal an attentional bottleneck at the level of objects, suggesting that object-based attention is a prerequisite for cognitive control operations involved in conflict detection.

Research organism: Human

eLife digest

Focusing your attention on one thing can leave you surprisingly unaware of what goes on around you. A classic experiment known as ‘the invisible gorilla’ highlights this phenomenon. Volunteers were asked to watch a clip featuring basketball players, and count how often those wearing white shirts passed the ball: around half of participants failed to spot that someone wearing a gorilla costume wandered into the game and spent nine seconds on screen.

Yet, things that you are not focusing on can sometimes grab your attention anyway. Take for example, the ‘cocktail party effect’, the ability to hear your name among the murmur of a crowded room. So why can we react to our own names, but fail to spot the gorilla? To help answer this question, Nuiten et al. examined how paying attention affects the way the brain processes input.

Healthy volunteers were asked to perform various tasks while the words ‘left’ or ‘right’ played through speakers. The content of the word was sometimes consistent with its location (‘left’ being played on the left speaker), and sometimes opposite (‘left’ being played on the right speaker). Processing either the content or the location of the word is relatively simple for the brain; however, detecting a discrepancy between these two properties is challenging, requiring the information to be processed in a brain region that monitors conflict in sensory input.

To manipulate whether the volunteers needed to pay attention to the words, Nuiten et al. made their content or location either relevant or irrelevant for a task. By analyzing brain activity and task performance, they were able to study the effects of attention on how the word properties were processed.

The results showed that the volunteers’ brains were capable of dealing with basic information, such as location or content, even when their attention was directed elsewhere. But discrepancies between content and location could only be detected when the volunteers were focusing on the words, or when their content or location was directly relevant to the task.

The findings by Nuiten et al. suggest that while performing a difficult task, our brains continue to react to basic input but often fail to process more complex information. This, in turn, has implications for a range of human activities such as driving. New technology could potentially help to counteract this phenomenon, aiming to direct attention towards complex information that might otherwise be missed.

Introduction

Every day we are bombarded with sensory information from the environment, and we often face the challenge of selecting the relevant information and ignoring irrelevant – potentially conflicting – information to maximize performance. These selection processes require much effort and our full attention, sometimes rendering us deceptively oblivious to irrelevant sensory input (e.g., chest-banging apes), as illustrated by the famous inattentional blindness phenomenon (Simons and Chabris, 1999). However, unattended events that are not relevant for the current task might still capture our attention or interfere with ongoing task performance, for example, when they are inherently relevant to us (e.g., our own name). This is illustrated by another famous psychological phenomenon: the cocktail party effect (Cherry, 1953; Moray, 1959). Thus, under specific circumstances, task-irrelevant information may capture attentional resources and be subsequently processed with different degrees of depth.

It is currently a matter of debate which processes require top-down attention (Dehaene et al., 2006; Koch and Tsuchiya, 2007; Koelewijn et al., 2010; Lamme, 2003; Lamme and Roelfsema, 2000; Rousselet et al., 2004; VanRullen, 2007). It was long thought that only basic physical stimulus features or very salient stimuli are processed in the absence of attention (Treisman and Gelade, 1980) due to an ‘attentional bottleneck’ at higher levels of analysis (Broadbent, 1958; Deutsch and Deutsch, 1963; Lachter et al., 2004; Wolfe and Horowitz, 2004). However, there is now solid evidence that several tasks may in fact still unfold in the (near) absence of attention, including perceptual integration (Fahrenfort et al., 2017), the processing of emotional valence (Sand and Wiens, 2011; Stefanics et al., 2012), semantic processing of written words (Schnuerch et al., 2016), and visual scene categorization (Li et al., 2002; Peelen et al., 2009). Although one should be cautious in claiming complete absence of attention (Lachter et al., 2004), these and other studies have pushed the boundaries of how deeply task-irrelevant (unattended) input can be processed, and may even question the existence of an attentional bottleneck at all, at least for relatively low-level information. Conceivably, the attentional bottleneck is only present at higher, more complex, levels of cognitive processing, like cognitive control functions.

Over the years, various theories have been proposed with regard to this attentional bottleneck, among which are the load theory of selective attention and cognitive control (Lavie et al., 2004), the multiple resources theory (Wickens, 2002), and the hierarchical central executive bottleneck theory and formalizations thereof in a cortical network model for serial and parallel processing (Sigman and Dehaene, 2006; Zylberberg et al., 2010; Zylberberg et al., 2011). These theories all hinge on the idea that resources for the processing of information are limited and that the brain therefore has to allocate resources to processes that are currently most relevant via selective attention (Broadbent, 1958; Treisman, 1969). Resource (re-)allocation, and thus flexible behavior, is thought to be governed by an executive network, most prominently involving the prefrontal cortex (Goldman-Rakic, 1995; Goldman-Rakic, 1996). Information that is deemed task-irrelevant has fewer resources at its disposal and is therefore processed to a lesser extent. When more resources are necessary for processing the task-relevant information, for example, under high perceptual load, processing of task-irrelevant information diminishes (Lavie et al., 2003; Lavie et al., 2004). Yet even under high perceptual load, task-irrelevant features can be processed when they are part of an attended object (when object-based attention is present) (Chen, 2012; Chen and Cave, 2006; Cosman and Vecera, 2012; Kahneman et al., 1992; O'Craven et al., 1999; Schoenfeld et al., 2014; Wegener et al., 2014). There is currently no consensus on which types of information can be processed in parallel by the brain and which attentional mechanisms determine what information passes the attentional bottleneck.
One unresolved issue is that most empirical work has investigated the bottleneck with regard to sensory features; however, it is unknown if the bottleneck and the distribution of processing resources also take place for more complex, cognitive processes. Here, we test whether such a high-level attentional bottleneck indeed exists in the human brain.

Specifically, we aim to test whether cognitive control operations, necessary to identify and resolve conflicting sensory input, are operational when that input is irrelevant for the task at hand (and hence unattended) and what role object-based attention may have in conflict detection. Previous work has shown that the brain has dedicated networks for the detection and resolution of conflict, in which the medial frontal cortex (MFC) plays a pivotal role (Ridderinkhof et al., 2004). Conflict detection and subsequent behavioral adaptation are central to human cognitive control, and, hence, it may not be surprising that past research has shown that conflict detection can even occur unconsciously (Atas et al., 2016; D'Ostilio and Garraux, 2012a; Huber-Huber and Ansorge, 2018; van Gaal et al., 2008), suggesting that the brain may detect conflict fully automatically and that it may even occur without paying attention (e.g., Rahnev et al., 2012). Moreover, it has been shown that this automaticity can be enhanced by training, resulting in more efficient processing of conflict (Chen et al., 2013; MacLeod and Dunbar, 1988; van Gaal et al., 2008).

Conclusive evidence regarding the claim that conflict detection is fully automatic has, to our knowledge, not been provided, and therefore, the necessity of attention for cognitive control operations remains open for debate. Previous studies have shown that cognitive control processes are operational when to-be-ignored features from either a task-relevant or a task-irrelevant stimulus overlap with the behavioral response to be made to the primary task, causing interference in performance (Mao and Wang, 2008; Padrão et al., 2015; Zimmer et al., 2010). In these circumstances, the interfering stimulus feature carries information related to the primary task and is therefore de facto not task-irrelevant. Consequently, it is currently unknown whether cognitive control operations are active for conflicting sensory input that is not related to the task at hand. Given the immense stream of sensory input we encounter in our daily lives, conflict between two (unattended) sources of perceptual information is inevitable.

Here, we investigated whether conflict between two features of an auditory stimulus (its content and its spatial location) would be detected by the brain under varying levels of task relevance of these features. The main aspect of the task was as follows. We presented auditory spoken words (‘left’ and ‘right’ in Dutch) through speakers located on the left and right side of the body. By presenting these stimuli through either the left or the right speaker, content-location conflict arises on specific trials (e.g., the word ‘left’ from the right speaker) but not on others (e.g., the word ‘right’ from the right speaker) (Buzzell et al., 2013; Canales-Johnson et al., 2020). A wealth of previous studies has revealed that conflict arises between task-relevant and task-irrelevant features of the stimulus in this type of task (similar to the Simon task and Stroop task; Egner and Hirsch, 2005; Hommel, 2011). Here, these potentially conflicting auditory stimuli were presented during six different behavioral tasks, divided over two separate experiments, multiple experimental sessions, and different participant groups (both experiments N = 24). In all tasks, we focus on the processing of content-location conflict of the auditory stimulus. There were several critical differences between the behavioral tasks: (1) task relevance of a conflicting feature of the stimulus, (2) task relevance of a non-conflicting feature that was part of a conflicting stimulus, and (3) whether the response to be given mapped onto a conflicting feature of the stimulus. Note that in all tasks only one feature could be task-relevant and that all other features had to be ignored. The systematic manipulation of task relevance and response-mapping allowed us to explore the full landscape of possibilities of how varying levels of attention affect sensory and conflict processing.
Electroencephalography (EEG) was recorded and multivariate analyses on the EEG data were used to extract any neural signatures of conflict detection (i.e., theta-band neural oscillations; Cavanagh and Frank, 2014; Cohen and Cavanagh, 2011) and sensory processing for any of the features of the auditory stimulus. Furthermore, in both experiments we measured behavioral and neural effects of task-irrelevant conflict before and after training on conflict-inducing tasks, aiming to investigate the role of automaticity in the detection of (task-irrelevant) conflict.

Results

Experiment 1: can the brain detect fully task-irrelevant conflict?

In the first experiment, 24 human participants performed two behavioral tasks (Figure 1A). In the auditory conflict task (from hereon: content discrimination task I), the feature ‘sound content’ was task-relevant. Participants were instructed to respond according to the content of the auditory stimulus (‘left’ vs. ‘right’), ignoring its spatial location that could conflict with the content response (presented from the left or right side of the participant). For the other behavioral task, participants performed a demanding visual random dot-motion (RDM) task in which they had to discriminate the direction of vertical motion (from hereon: vertical RDM task), while being presented with the same auditory stimuli – all features of which were thus fully irrelevant for task performance. Behavioral responses on this visual task were orthogonal to the response tendencies potentially triggered by the auditory features, excluding any task- or response-related interference (Figure 1B). Under this manipulation, all auditory features are task-irrelevant and are orthogonal to the response-mapping. To maximize the possibility of observing conflict detection when conflicting features are task-irrelevant and explore the effect of task automatization on conflict processing, participants performed the tasks both before and after extensive training, which may increase the efficiency of cognitive control (Figure 1C; van Gaal et al., 2008).

Figure 1. Experimental design of experiment 1.


(A, B) Schematic representation of the experimental design for auditory content discrimination task I (A) and the vertical random dot-motion (RDM) task (B). In both tasks, the spoken words ‘left’ and ‘right’ were presented through a speaker located on either the left or the right side of the participant. Note that auditory stimuli are only task-relevant in auditory content discrimination task I and not in the vertical RDM task. In this figure, sounds are only depicted as originating from the right, whereas in the experiment the sounds could also originate from the left speaker. (A) In content discrimination task I, participants were instructed to report the content (‘left’ or ‘right’) of an auditory stimulus via a button press with their left or right hand, respectively, and to ignore the spatial location at which the auditory stimulus was presented. (B) During the vertical RDM task, participants were instructed to report the overall movement direction of the dots (up or down) via a button press with their right hand, whilst still being presented with the auditory stimuli, which were therefore task-irrelevant. In both tasks, the content of the auditory stimuli could be congruent or incongruent with its location of presentation (50% congruent/incongruent trials). (C) Overview of the sequence of the four experimental sessions of this study. Participants performed two electroencephalography sessions during which they first performed the vertical RDM task followed by auditory content discrimination task I. Each session consisted of 1200 trials, divided over 12 blocks, allowing participants to rest in between blocks. In between experimental sessions, participants were trained on auditory content discrimination task I in two training sessions of 1 hr each.

Experiment 1: conflicting information induces slower responses and decreased accuracy only for task-relevant sensory input

For content discrimination task I, mean error rates (ERs) were 2.6% (SD = 2.7%) and mean reaction times (RTs) 477.2 ms (SD = 76.1 ms), averaged over all four sessions. For the vertical RDM, mean ERs were 19.2% (SD = 6.6%) and mean RTs were 711.4 ms (SD = 151.3 ms). The mean ER of vertical RDM indicates that our staircasing procedure was effective (see Materials and methods for details on staircasing performance on the RDM). To investigate whether our experimental design was apt to induce conflict effects for task-relevant sensory input and to test whether conflict effects were still present when sensory input was task-irrelevant, we performed repeated measures (rm-)ANOVAs (2 × 2 × 2 factorial) on mean RTs and ERs gathered during the EEG recording sessions (session 1, ‘before training’; session 4, ‘after training’). This allowed us to include (1) task relevance (yes/no), (2) training (before/after), and (3) congruency of auditory content with location of auditory source (congruent/incongruent). Note that congruency is always defined based on the relationship between two features of the auditorily presented stimuli, also when participants performed the visual task (and therefore the auditory features were task-irrelevant).
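As a rough illustration of the logic of this test: the critical task × congruency interaction on RTs is equivalent to comparing per-participant conflict effects (incongruent minus congruent) between the two tasks. A minimal sketch on synthetic data (all RT values, effect sizes, and noise levels below are invented for illustration and are not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24                                    # participants
base = rng.normal(480, 50, n)             # per-subject baseline RT (ms), invented

# Simulate a ~40 ms conflict effect only when auditory features are task-relevant
rt_rel_con = base + rng.normal(0, 15, n)         # task-relevant, congruent
rt_rel_inc = base + 40 + rng.normal(0, 15, n)    # task-relevant, incongruent
rt_irr_con = base + 230 + rng.normal(0, 25, n)   # task-irrelevant (RDM), congruent
rt_irr_inc = base + 230 + rng.normal(0, 25, n)   # task-irrelevant (RDM), incongruent

conflict_rel = rt_rel_inc - rt_rel_con    # conflict effect when task-relevant
conflict_irr = rt_irr_inc - rt_irr_con    # conflict effect when task-irrelevant

# Task x congruency interaction == paired test on the two conflict effects
t, p = stats.ttest_rel(conflict_rel, conflict_irr)
print(f"interaction: t({n - 1}) = {t:.2f}, p = {p:.4f}")
```

In the full design this comparison is one interaction term of a 2 × 2 × 2 rm-ANOVA that also includes the training factor.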

Detection of conflict is typically associated with behavioral slowing and increased ERs. Indeed, we observed that, across both tasks, participants were slower and made more errors on incongruent trials as compared to congruent trials (the conflict effect, RT: F(1,23) = 52.83, p<0.001, ηp2 = 0.70; ER: F(1,23) = 9.13, p=0.01, ηp2 = 0.28). This conflict effect was modulated by task relevance of the auditory features (RT: F(1,23) = 152.76, p<0.001, ηp2 = 0.87; ER: F(1,23) = 11.15, p=0.01, ηp2 = 0.33) and post-hoc ANOVAs (see Materials and methods) showed that the conflict effect was present when the auditory feature content was task-relevant (RTcont(I): F(1,23) = 285.00, p<0.001, ηp2 = 0.93; ERcont(I): F(1,23) = 23.85, p<0.001, ηp2 = 0.51; Figure 2A, left panel), but not when all auditory features were task-irrelevant (RTVRDM: F(1,23) = 1.96, p=0.18, ηp2 = 0.08, BF01 = 5.41; ERVRDM: F(1,23) = 0.26, p=0.62, ηp2 = 0.01, BF01 = 4.55; Figure 2A, right panel). Because responses in the vertical RDM were made with the right hand only, we subsequently tested whether the auditory features in isolation affected the speed and accuracy of right-hand responses. For example, the spoken word ‘left’ may slow down responses made with the right hand more so than the spoken word ‘right’ (the same logic holds for stimulus location). However, this was not the case. A 2 × 2 × 2 factorial rm-ANOVA on mean RTs with session (before/after training), stimulus content ('left'/'right'), and stimulus location (left/right) showed that RTs were unaffected by sound content (F(1,23) = 0.01, p=0.92, ηp2 = 0.00, BF01 = 6.16) and sound location (F(1,23) = 0.49, p=0.49, ηp2 = 0.02, BF01 = 6.36).

Figure 2. Behavioral and multivariate decoding results of experiment 1.

(A, B) All results depicted here are from the merged data of both experimental sessions. The left column of plots shows the results for content discrimination task I, where auditory stimuli and conflicting features were task-relevant. The right column of plots shows the results for the vertical random dot-motion (RDM), where neither the auditory stimulus nor its conflicting features were task-relevant. (A) The behavioral results are plotted as conflict effects (incongruent – congruent). Effects of conflict were present in content discrimination task I, with longer reaction times (RTs) (left bar) and increased error rates (ERs) (right bar) for incongruent compared to congruent trials. For the vertical RDM task, no significant effects of conflict were found in behavior. Dots represent individual participants. The behavioral data that is shown here can be found in Figure 2—source data 1. (B) Multivariate classifier accuracies for different stimulus features. We trained classifiers on three stimulus features: auditory congruency, auditory content, and auditory location. Classifier accuracies (area under the curve [AUC]) are plotted across a time-frequency window of −100 ms to 1000 ms and 2–30 Hz. Classifier accuracies are thresholded (cluster-based corrected, one-sided: X¯>0.5, p<0.05), and significant clusters are outlined with a solid black line. The dotted box shows the predefined ROI on which we performed a hypothesis-driven analysis. The classifier accuracies within this ROI were not significantly greater than chance for the vertical RDM task. Note that conflicting features of the auditory stimulus, content and location, could be decoded from neural data regardless of attention to the auditory stimulus. Information related to auditory congruency was present in a theta-band cluster, but only when the auditory stimulus was attended. *** p<0.001, n.s.: p>0.05.

Figure 2—source data 1. Behavioral results of experiment 1.


Figure 2—figure supplement 1. Effects of behavioral training on behavioral effects of conflict and decoding performance in experiment 1.


(A, B) We performed 2 × 2 repeated measures (rm-)ANOVAs on (A) reaction times (RTs) and (B) error rates (ERs) in content discrimination task I, with session and congruency of the auditory stimulus as factors. In (A, B), data are plotted as conflict effects (incongruent – congruent) and for separate sessions. The top horizontal line shows significance of the interaction between session and congruency, and markers above the bars indicate significance of paired sample t-tests comparing incongruent and congruent trials for each run (shown data and results of t-tests can be found in Figure 2—figure supplement 1—source data 1). Effects of conflict on RTs (A) and ERs (B) significantly decreased after behavioral training on this task, suggesting more efficient processing of conflict. Effects of conflict on RTs and ERs were nonetheless present during both sessions. (C) There were no clusters for which the difference in congruency decoding between the two sessions in content discrimination task I was significant (left panel), although decoding accuracies within the preselected ROI did decrease with training, suggesting more efficient conflict resolution, in line with the behavioral results plotted in (A, B). Classifier accuracies for sound content (middle panel) were higher in a delta-theta band cluster after behavioral training, showing that the task-relevant feature was processed better. Location decoding accuracy was not affected by behavioral training: we observed no significant clusters, and classifier accuracies within the ROI did not differ between sessions. (D) Behavioral training on the content discrimination task did not affect neural processing of auditory features in the vertical random dot-motion task (no significant clusters, and no significant effects within the predefined ROI). Thresholded (cluster-based corrected, p<0.05) accuracies are depicted across the frequency range (2–30 Hz), and significant clusters are outlined with a solid black line. ***p<0.001, **p<0.01, n.s.: p>0.05.
Figure 2—figure supplement 1—source data 1. Behavioral results of experiment 1 - before and after training.

Participants performed both behavioral tasks before and after extensive training of the content discrimination task to be able to investigate the role of training on conflict processing (Figure 1C). RTs and ERs in the vertical RDM task were not modulated by behavioral training (RTVRDM: F(1,23) = 2.07, p=0.16, ηp2 = 0.08, BF01 = 0.32; ERVRDM: F(1,23) = 0.24, p=0.63, ηp2 = 0.01, BF01 = 3.79). Training did result in a decrease of overall RT on content discrimination task I, although ERs were not affected (RTcont(I): F(1,23) = 45.05, p<0.001, ηp2 = 0.66; ERcont(I): F(1,23) = 1.77, p=0.20, ηp2 = 0.07, BF01 = 0.89). Moreover, the effect of conflict on RTs and ERs in this task decreased after behavioral training (RTcont(I): F(1,23) = 29.86, p<0.001, ηp2 = 0.57; ERcont(I): F(1,23) = 9.76, p=0.005, ηp2 = 0.30; Figure 2—figure supplement 1A, B), suggesting increased efficiency of within-trial conflict resolution mechanisms. All other effects were not reliable (p>0.05).

Experiment 1: neural signatures of conflict detection only for task-relevant stimuli

The observation that conflicting task-irrelevant stimuli had no effect on RTs and ERs, even after substantial training, whereas task-relevant conflicting stimuli did, may not come as a surprise because manual responses on the visual task (motion up/down with index and middle finger of the right hand) were fully orthogonal to the potentially conflicting nature of the auditory features (i.e., left/right). Further, content discrimination task I and the vertical RDM task were independent tasks, requiring different cognitive processes. For example, mean RTs on the vertical RDM task were on average 267 ms longer than mean RTs for content discrimination task I. However, caution is required in concluding that conflict detection is absent for task-irrelevant stimuli based on these behavioral results alone, as neural and/or behavioral effects can sometimes be observed in isolation (one is observed but not the other, e.g., Canales-Johnson et al., 2020; van Gaal et al., 2014). Therefore, to test whether unattended conflict is detected by the brain, we turned to multivariate pattern analysis (MVPA) of our neural data.

Plausibly, the neural dynamics of conflict processing for task-irrelevant sensory input are different – in physical (across electrodes) and frequency space – from those related to the processing of conflict when sensory input is task-relevant. Therefore, we applied multivariate decoding techniques in the frequency domain to inspect whether and – if so – to what extent certain stimulus features were processed. These multivariate approaches have some advantages over traditional univariate approaches, for example, they are less sensitive to individual differences in spatial topography, because decoding accuracies are derived at a single participant level (Fahrenfort et al., 2018; Grootswagers et al., 2017; Haxby et al., 2001). Therefore, group statistics do not critically depend on the presence of effects in specific electrodes or clusters of electrodes. Further, although a wealth of studies have shown that conflict processing is related to an increase in power of theta-band neural oscillations (~4–8 Hz) after stimulus presentation (Cohen and Cavanagh, 2011; Jiang et al., 2015a; Nigbur et al., 2012), it is unknown whether this is also the case for task-irrelevant conflict. By performing our MVPA in frequency space, we could potentially find neural signatures in non-theta frequency bands related to the processing of task-irrelevant conflict. However, due to the temporal and frequency space that has to be covered, strict multiple comparison corrections have to be performed (across time and frequency, see Materials and methods). Therefore, we adopted an additional hypothesis-driven analysis, which also allowed us to obtain evidence for the absence of effects. Throughout this paper, we will discuss our neural data in the following order. First, the MVPAs in the frequency domain are presented for all critical features of the task (congruency, content, location, corrected for multiple comparisons). 
Then, we report results from the additional hypothesis-driven analysis, where we extracted classifier accuracies from a predefined time-frequency region of interest (ROI) (100–700 ms, 2–8 Hz) on which we performed (Bayesian) tests (see Materials and methods). This ROI was selected based on previous observations of conflict-related theta-band activity (Cohen and Cavanagh, 2011; Cohen and van Gaal, 2014; Jiang et al., 2015b; Nigbur et al., 2012). Specifically, for every task and every stimulus feature (i.e., congruency, content, location), we extracted average decoding accuracies from the ROI per participant and performed analyses on these values.
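The ROI step amounts to a mask-and-average over the time-frequency decoding matrix: per participant, classifier accuracies inside the 100–700 ms, 2–8 Hz window are collapsed into a single value. A minimal sketch with mock AUC values (the grid resolution and the injected theta-band cluster are invented for illustration):

```python
import numpy as np

# Mock classifier output: AUC per frequency (2-30 Hz) x time point (-100 to 1000 ms)
freqs = np.arange(2, 32, 2)               # 15 frequencies, 2 Hz steps
times = np.arange(-100, 1001, 10)         # 111 time points (ms)
auc = np.full((freqs.size, times.size), 0.5)

# Inject a mock conflict-related theta-band effect (illustration only)
f_sig = (freqs >= 4) & (freqs <= 8)
t_sig = (times >= 200) & (times <= 600)
auc[np.ix_(f_sig, t_sig)] += 0.05

# Predefined ROI: 2-8 Hz, 100-700 ms, as in the hypothesis-driven analysis
f_roi = (freqs >= 2) & (freqs <= 8)
t_roi = (times >= 100) & (times <= 700)
roi_auc = auc[np.ix_(f_roi, t_roi)].mean()
print(f"mean AUC in ROI: {roi_auc:.3f}")
```

One such value per participant would then be compared against chance (0.5) with frequentist and Bayesian tests.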

First, we trained a classifier on data from all EEG electrodes to distinguish between congruent versus incongruent trials, for both content discrimination task I and the vertical RDM task. Above-chance classification accuracies imply that relevant information about the decoded stimulus feature is present in the neural data, meaning that some processing of that feature occurred (Hebart and Baker, 2018). We performed our main analysis on the combined data from both EEG sessions, thereby maximizing power to establish effects in our crucial comparisons. We also performed similar analyses on the session-specific data to investigate the role of behavioral training on processing of conflict. These results are discussed more in depth below and are shown in Figure 2—figure supplement 1C, D.
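In spirit, each time-frequency point is decoded with a linear classifier on single-trial data from all electrodes and scored with AUC under cross-validation. A stripped-down sketch of one such classification step using scikit-learn on synthetic data (trial counts, channel layout, and the injected midfrontal effect are invented; this is not the study's exact pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_trials, n_channels = 400, 64
y = rng.integers(0, 2, n_trials)          # 0 = congruent, 1 = incongruent

# Mock single-trial theta-band power per channel; incongruent trials get
# extra power on a few "midfrontal" channels so there is signal to decode
X = rng.normal(0, 1, (n_trials, n_channels))
X[y == 1, :8] += 0.5                      # assumption: first 8 channels are midfrontal

clf = make_pipeline(StandardScaler(), LogisticRegression())
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
print(f"mean cross-validated AUC: {auc:.2f}")
```

AUC above 0.5 here plays the same role as in the paper: it indicates that information about the decoded stimulus feature is present in the neural data.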

Congruency decoding revealed that stimulus congruency was represented in neural data only when conflict was task-relevant (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–12 Hz, peak frequency: 4 Hz, time range: 234–609 ms, peak time: 438 ms; Figure 2B, left panel). The conflict effect roughly falls in the theta-band (4–8 Hz), consistent with a firm body of literature linking conflict detection to post-conflict modulations in theta-band oscillatory dynamics (Cavanagh and Frank, 2014; Cohen and Cavanagh, 2011; Cohen and van Gaal, 2014; Nigbur et al., 2012). Activation patterns that were calculated from classifier weights within the predefined time-frequency theta-band ROI (2–8 Hz, 100–700 ms) revealed a clear midfrontal distribution of conflict-related activity (Figure 5—figure supplement 1A). No significant time-frequency cluster was found for the vertical RDM task (Figure 2B, right panel). To quantify the absence of this effect, we followed up this hypothesis-free (with respect to frequency and time) MVPA with a hypothesis-driven analysis focused on the post-stimulus theta-band. This more restricted analysis showed no significant effect (t(23) = −0.50, p=0.69, d = −0.10), and an additional Bayesian analysis revealed moderate evidence for the null hypothesis (i.e., no effect of conflict on theta-band power) over the alternative hypothesis (BF01 = 6.53).
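The cluster-based correction used here can be sketched as: threshold the group-level t-map, find contiguous supra-threshold time-frequency clusters, and compare the largest observed cluster mass against a sign-flip permutation null. A toy version on synthetic decoding accuracies (array sizes, noise levels, and the injected effect are invented; the authors' actual implementation may differ in its details):

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(3)
n_sub = 24
# subjects x freqs x times: decoding accuracy minus chance (0.5), invented
data = rng.normal(0, 0.02, (n_sub, 15, 50))
data[:, 2:5, 15:30] += 0.03               # mock theta-band effect

def max_cluster_mass(x, thresh):
    """Largest summed-t mass over contiguous supra-threshold clusters."""
    t = stats.ttest_1samp(x, 0, axis=0).statistic
    labels, n = ndimage.label(t > thresh)  # contiguous time-frequency clusters
    if n == 0:
        return 0.0
    return ndimage.sum(t, labels, index=range(1, n + 1)).max()

thresh = stats.t.ppf(0.95, n_sub - 1)     # one-sided cluster-forming threshold
obs = max_cluster_mass(data, thresh)

# Null distribution: randomly flip the sign of each subject's whole map
null = np.array([
    max_cluster_mass(data * rng.choice([-1, 1], n_sub)[:, None, None], thresh)
    for _ in range(500)
])
p = (null >= obs).mean()
print(f"largest cluster mass = {obs:.1f}, p = {p:.3f}")
```

Because cluster significance is evaluated against the permutation null, this procedure controls the family-wise error rate across the whole time-frequency plane.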

Similar to our observation of decreased behavioral effects of conflict after behavioral training (Figure 2—figure supplement 1A, B), decoding accuracies in the content discrimination task I were also lower after training (t(23) = −3.01, p=0.01, d = −0.63; Figure 2—figure supplement 1C), suggesting more efficient conflict resolution, as reflected in neural theta oscillations as well. In the vertical RDM, behavioral training did not affect decoding accuracies of sound congruency (t(23) = −1.24, p=0.23, d = −0.25, BF01 = 2.36; Figure 2—figure supplement 1D).

Experiment 1: stimulus features are processed in parallel, independent of task relevance

Thus, cognitive control networks, or possible substitute networks, are seemingly not capable of detecting conflict when sensory features are task-irrelevant. However, the question remains whether this observation is specific to the conflicting nature of the auditory stimuli or whether the auditory stimuli are not processed whatsoever when attention is reallocated to the visually demanding task. To address this question, we trained classifiers on two other features of the auditory stimuli, that is, location and content, to test whether these features were processed by the brain regardless of task relevance. Indeed, the content of auditory stimuli was processed both when the stimuli were task-relevant (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–30 Hz, peak frequency: 4 Hz, time range: 47–1000 ms, peak time: 422 ms; Figure 2B, left panel) and task-irrelevant (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–20 Hz, peak frequency: 6 Hz, time range: 78–547 ms, peak time: 297 ms; Figure 2B, right panel).

Similarly, the location of auditory stimuli could also be decoded from neural data for both content discrimination task I (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–18 Hz, peak frequency: 6 Hz, time range: 63–672 ms, peak time: 203 ms; Figure 2B, left panel) and the vertical RDM task (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–12 Hz, peak frequency: 6 Hz, time range: 156–484 ms, peak time: 281 ms; Figure 2B, right panel). The above-chance performance of the classifiers for the auditory stimulus features demonstrates that location and content information were processed, even when these features were task-irrelevant. Processing of task-irrelevant stimulus features was, however, more transient in time and more narrowband in frequency as compared to processing of the same features in a task-relevant setting. Further, content decoding revealed a much broader frequency spectrum than any of the other comparisons in content discrimination task I. In the next experiment, we show that this is related to the fact that this feature was response-relevant and that this effect therefore partially reflects response preparation and response execution processes. In summary, when (conflicting) features of an auditory stimulus are truly and consistently task-irrelevant, the conflict between them is no longer detected by – nor relevant to – the conflict monitoring system, but the features (content and location) are still processed in isolation.

To investigate if, and how, behavioral training affects processing of sound content and location, we tested whether decoding accuracies for these features were different between the two experimental sessions. Decoding accuracies for the task-relevant feature of content discrimination task I (i.e., sound content) were significantly increased after behavioral training in a delta- to theta-band cluster (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–10 Hz, peak frequency: 2 Hz, time range: 234–484 ms, peak time: 344 ms; Figure 2—figure supplement 1C). This suggests that processing of task-relevant information (i.e., sound content) is improved as a result of training. Decoding accuracies for sound location in the content discrimination task were not different before and after behavioral training (no significant clusters; predefined ROI: t(23) = 0.12, p=0.91, d = 0.02, BF01 = 4.63). In the vertical RDM task, behavioral training did not affect the decoding accuracies within the predefined ROI for sound content (t(23) = 0.75, p=0.46, d = 0.15, BF01 = 3.61) and location (t(23) = 0.04, p=0.97, d = 0.01, BF01 = 4.66, also no other significant clusters; Figure 2—figure supplement 1D). This suggests that processing of sound content and location (Figure 2B), both task-irrelevant auditory features in the vertical RDM task, is automatic and not dependent on training.

In conclusion, we observed neural signatures of the processing of sensory stimulus features (i.e., location and content of an auditory stimulus) regardless of the task relevance of these features, but a lack of integration of these features to form conflict when the auditory stimulus was fully task-irrelevant. Considerable training in content discrimination task I resulted in more efficient conflict processing (i.e., decreased behavioral conflict effects and theta-band activity after training; Figure 2—figure supplement 1) when the auditory stimulus was task-relevant, but this increased automaticity did not lead to detection of conflict when the auditory stimulus was fully task-irrelevant.

Experiment 2: does detection of conflict depend on task relevance of the stimulus or its individual features?

The experimental design of the first experiment placed the auditory features at the extreme ends of the task-relevance spectrum: either the conflicting features were task-relevant and consistently mapped to specific responses, or they were task-irrelevant and not mapped to any response. To further understand the relationship between the relevance of the conflicting features and their overlap with responses, we performed a second experiment containing four behavioral tasks, for which we recruited 24 new participants. We included two auditory conflict tasks, similar to content discrimination task I. In one of the auditory tasks (from hereon: content discrimination task II, Figure 3A), participants again had to respond according to the content of the auditory stimulus, whereas in the other auditory task (from hereon: location discrimination task, Figure 3B) they were instructed to report from which side the auditory stimulus was presented (i.e., left or right speaker).

Figure 3. Experimental design of experiment 2.

Figure 3.

(A–D) Schematic representation of the experimental design for auditory content discrimination task II (A), location discrimination task (B), volume oddball task (C), and horizontal random dot-motion (RDM) task (D). In all tasks, the spoken words ‘left’ and ‘right’ were presented through either a speaker located on the left or right side of the participant. (A) In auditory content discrimination task II, participants were instructed to report the content (‘left’ or ‘right’) of an auditory stimulus via a button press with their left or right hand, respectively, and to ignore the location of the auditory stimulus that was presented. (B) In the auditory location discrimination task, participants were instructed to report the location (left or right speaker) of an auditory stimulus via a button press with their left or right hand, respectively, and to ignore the content of the auditory stimulus that was presented. (C) During the volume oddball task, participants were instructed to detect auditory stimuli that were presented at a lower volume than the majority of the stimuli (i.e., oddballs) by pressing the spacebar with their right hand. (D) In the horizontal RDM, participants were instructed to report the overall movement of dots (left or right) via a button press with their left and right hands, respectively, whilst still being presented with the auditory stimuli. In all four tasks, content of the auditory stimuli could be congruent or incongruent with its location of presentation (50% congruent/incongruent trials). (E) Order of behavioral tasks in experiment 2. Participants always started with the volume oddball task, followed by the location discrimination task, content discrimination task, and horizontal RDM, in randomized order. Participants ended with another run of the volume oddball task.

Furthermore, we included two new tasks in which the conflicting features (location and content) were not task-relevant: participants either responded to a non-conflicting feature of the conflicting stimulus (from hereon: volume oddball detection task, Figure 3C), or the auditory stimulus was task-irrelevant but its features (location and content) overlapped with the responses to be given (from hereon: horizontal RDM task, Figure 3D). The horizontal RDM task was similar to the vertical RDM task of experiment 1, except that the dots now moved along the horizontal axis. In other words, participants were instructed to classify the overall movement of the dots as either leftward or rightward. As this is a visual paradigm, the simultaneously presented auditory stimuli were fully task-irrelevant. However, both conflicting features, the content (i.e., ‘left’ and ‘right’) and the location (i.e., left and right speaker) of the auditory stimuli, could potentially interfere with participants’ responses on the visual task, thereby inducing a crossmodal Stroop-like conflict (Stroop, 1935).

In the volume oddball detection task, participants were presented with the same auditory stimuli as before; however, one out of eight stimuli (12.5%) was presented at a lower volume. Participants were instructed to detect these volume oddballs by pressing the spacebar with their right hand as fast as possible, and to withhold their response if they did not hear an oddball. In this task, theoretically, the selection of one of an object’s features (e.g., volume) could lead to the selection of all of its features (e.g., sound content, location), as suggested by theories of object-based attention (Chen, 2012). This in turn may lead to conflict detection, even if the conflicting features are task-irrelevant. Similar to experiment 1, we included behavioral training on conflict-inducing tasks to test whether enhanced automaticity of conflict processing would affect conflict detection under task-irrelevant sensory input. Participants performed 500 trials of the volume oddball detection task twice: at the very beginning and at the end of a session (Figure 3E). During the first run of the task, neither sound content nor sound location was related to any behavioral response, whereas during the second run these features might have acquired some intrinsic relevance through training on the other tasks. Furthermore, repeated exposure to conflict may prime the conflict monitoring system to exert more control over the sensory inputs necessary for more efficient conflict detection, even when these sensory inputs are not task-relevant within the context of the task the participant is performing at that time.

In order to keep sensory input similar, moving dots (coherence: 0) were presented on the monitor during content discrimination task II, the location discrimination task, and the volume oddball detection task, but these could be ignored. Again, EEG was recorded while participants performed these tasks in order to see if auditory conflict was detected when the auditory stimulus or its conflicting features (i.e., location and content) were task-irrelevant. We performed the same artifact rejection procedure as in experiment 1. For one participant, on average 64.5% (SD = 9.9%) of all epochs within each task were removed in this procedure, which is 3.9 standard deviations from the average ratio of removed epochs in this experiment (M = 10.3%, SD = 13.9%). Therefore, this participant was excluded from the EEG analysis of experiment 2, resulting in N = 23 for the analysis of EEG data.

Experiment 2: behavioral effects of conflict only for task-relevant auditory sensory input

Mean RT in the location discrimination task was 338.2 ms (SD = 112.7 ms) and mean ER was 4.5% (SD = 3.2%). For content discrimination task II, RTs were on average 364.0 ms (SD = 127.4 ms) and ERs were 5.5% (SD = 5.8%). For the horizontal RDM, RTs were on average 362.5 ms (SD = 116.5 ms) and ERs were 27.7% (SD = 5.2%). Mean RT in the volume oddball task, calculated over correct trials in which a response was made (i.e., hit trials), was 504.5 ms (SD = 178.7 ms). On average, participants had 40.5 hits (SD = 12.8) out of 61.8 oddball trials (SD = 8.6) per run of 500 trials. We first discuss the behavioral results of the content discrimination, location discrimination, and horizontal RDM tasks, because behavioral performance for the volume oddball task is expressed as perceptual sensitivity (d’) rather than ER.

rm-ANOVAs (3 × 2 factorial) were performed on mean RTs and ERs from these three tasks, with the factors (1) task and (2) congruency of the auditory features. Again, congruency always relates to the combination of the auditory stimulus features sound content ('left' vs. 'right') and sound location (left speaker vs. right speaker). We observed that participants were slower and made more errors on incongruent trials as compared to congruent trials (RT: F(1,23) = 75.41, p<0.001, ηp2 = 0.77; ER: F(1,23) = 68.00, p<0.001, ηp2 = 0.75; Figure 4A). This conflict effect was modulated by task (RT: F(1.55,35.71) = 22.80, p<0.001, ηp2 = 0.50; ER: F(1.58, 36.36) = 10.18, p<0.001, ηp2 = 0.31), and post-hoc paired samples t-tests (incongruent – congruent) showed that conflict effects were only present in tasks where one of the conflicting features was task-relevant (location discrimination task: RTloc: t(23) = 5.03, p<0.001, d = 1.03; ERloc: t(23) = 6.25, p<0.001, d = 1.28; content discrimination task II: RTcont(II): t(23) = 8.95, p<0.001, d = 1.83; ERcont(II): t(23) = 5.93, p<0.001, d = 1.21; horizontal RDM task: RTHRDM: t(23) = 1.44, p=0.16, d = 0.29, BF01 = 1.88; ERHRDM: t(23) = 1.65, p=0.11, d = 0.34, BF01 = 1.44). Although conflict between sound content and location did not affect the speed of responses in the horizontal RDM task, stimulus content and location in isolation could have potentially interfered with behavioral performance, given the overlap of these features with both the plane of dot motion (left/right) and the response scheme (left/right hand). Indeed, trials containing conflict between sound location and dot direction resulted in slower RTs and increased ERs (RT: t(23) = 2.12, p=0.045, d = 0.44; ER: t(23) = 5.94, p<0.001, d = 1.21). Similar effects were observed for trials where sound content conflicted with the dot direction, but only in ERs (RT: t(23) = 1.72, p=0.10, d = 0.35, BF01 = 2.85; ER: t(23) = 5.55, p<0.001, d = 1.13).
This shows that sound content and location in isolation, even though both features were task-irrelevant, interfered with task performance.

Figure 4. Behavioral and multivariate decoding results for experiment 2.

(A, B) The four columns show data belonging to, from left to right, content discrimination task II, the location discrimination task, the volume oddball detection task, and the horizontal random dot-motion (RDM) task. (A) Behavioral results are plotted as conflict effects (incongruent – congruent). Effects of conflict were present in all tasks where the auditory stimulus was task-relevant (content discrimination task II, location discrimination task, and volume oddball). In both auditory discrimination tasks, we observed longer reaction times (RTs) (left bar) and increased error rates (right bar) for incongruent compared to congruent trials. For the volume oddball, we did not observe an effect in RT, but increased sensitivity (d’) on incongruent compared to congruent trials. Dots represent individual participants. The data that is shown here can be found in Figure 4—source data 1. (B) Multivariate classifier accuracies for different stimulus features (auditory congruency, auditory content, and auditory location). Classifier accuracies (area under the curve [AUC]) are plotted across a time-frequency window of −100 ms to 1000 ms and 2–30 Hz. Classifier accuracies are thresholded (cluster-based corrected, one-sided: X¯>0.5, p<0.05), and significant clusters are outlined with a solid black line. The dotted box shows the predefined ROI on which we performed a hypothesis-driven analysis. Note that the data shown for the volume oddball task was merged over both runs. *p<0.05, **p<0.01, ***p<0.001; n.s.: p>0.05.

Figure 4—source data 1. Behavioral results of experiment 2.

Figure 4.

Figure 4—figure supplement 1. Effects of exposure to conflict inducing task on behavioral effects of conflict and decoding performance in the volume oddball task of experiment 2.

Figure 4—figure supplement 1.

(A–D) We performed 2 × 2 repeated measures ANOVAs on (A) reaction times (RTs), (B) perceptual sensitivity (d’), (C) hit rates, and (D) false alarm rates with the factors run number and congruency of the auditory stimulus. In (A–D), data are plotted as conflict effects (incongruent – congruent) and for separate runs. The top horizontal line shows significance of the interaction between session and congruency, and markers above the bars indicate significance of paired sample t-tests comparing incongruent and congruent for each run (shown data and results of t-tests can be found in Figure 4—figure supplement 1—source data 1). (A) RTs were unaffected by auditory congruency and run number (statistics in Results). (B) There was no interaction effect between congruency and run number on perceptual sensitivity (d’; statistics in Results), and post-hoc paired sample t-tests (incongruent – congruent) revealed that the effect of congruency on d’ was present during both runs. (C) The interaction between congruency and run number was not significant (F(1,23) = 2.99, p=0.10, ηp2 = 0.12, BF01 = 1.64), showing that the effect of conflict on hit rate was not different for both runs, although the effect of conflict was present during the first, but not second run. (D) False alarm rates were not modulated by the interaction between congruency and run number (F(1,23) = 0.00, p=0.99, ηp2 = 0.00, BF01 = 3.45), showing that the effects of conflict were not different between runs. This conflict effect was present during both runs. (E) There were no clusters for which the difference in decoding of all features between the two runs of the volume oddball task was significant, and there were also no differences within the preselected ROI (congruency: t(22) = 0.07, p=0.95, d = 0.01, BF01 = 4.56; content: t(22) = 0.64, p=0.53, d = 0.13, BF01 = 3.81; location: t(22) = –1.25, p=0.22, d = –0.26, BF01 = 2.29), suggesting that processing of these features was not affected by training. 
Thresholded (cluster-based corrected, p<0.05) accuracies are depicted across the frequency range (2–30 Hz). Plots show the difference in classifier accuracy between the two runs (run 2 – run 1) of stimulus congruency (left panel), stimulus content (middle panel), and stimulus location (right panel) in the volume oddball task. n.s.: p>0.05.
Figure 4—figure supplement 1—source data 1. Behavioral results of the volume oddball task - first and second run.
Figure 4—figure supplement 2. Sensory feature decoding in the time-domain.

Figure 4—figure supplement 2.

We trained classifiers on either sound content (A) or sound location (B) in order to see how neural representations of sensory processing were affected by our manipulation of task relevance of these features. (A) Sound content could be decoded from most tasks (except the horizontal random dot-motion [RDM] task), and decoding accuracies were highest for the task in which sound content was the task-relevant feature, that is, content discrimination task II. Decoding accuracies were higher for content discrimination task II as compared to the other three tasks (onset of the difference vs. the location discrimination task: 328 ms; vs. the horizontal RDM task: 313 ms; vs. the volume oddball task: 344 ms). (B) Sound location could be decoded from all tasks. Again, decoding accuracies were highest for the task in which the decoded feature was task-relevant, that is, the location discrimination task, with differences starting from 250 ms (vs. content discrimination task II), 234 ms (vs. the horizontal RDM task), and 266 ms (vs. the volume oddball task). Shaded areas represent the SEM. Bold traces indicate that feature decoding was significantly (cluster-corrected, one-sided t-test, X¯>0.5, p<0.05) above chance. Horizontal black lines at the bottom depict where feature decoding is significantly different (cluster-corrected, two-sided t-test, p<0.05) between the task in which the feature was task-relevant versus the tasks in which it was task-irrelevant. CD II: content discrimination task II; HRDM: horizontal RDM task; LD: location discrimination task; VO: volume oddball detection task.

For the volume oddball task, we tested the effect of auditory congruency on RTs, which, by virtue of the task instructions, covers only oddball trials in which a correct response was made (i.e., hits). An rm-ANOVA (2 × 2 factorial) with the factors run number and feature congruency revealed no effect of auditory conflict on RT (F(1,23) = 2.78, p=0.11, ηp2 = 0.10, BF01 = 3.34; Figure 4A; no effects of training, see Figure 4—figure supplement 1A). To test whether individual features of the auditory stimulus interfered with right-hand responses, we performed an additional 2 × 2 factorial rm-ANOVA with sound content and location as factors. Auditory content significantly affected RTs, whereas sound location did not (content: F(1,23) = 25.41, p<0.001, ηp2 = 0.53; location: F(1,23) = 0.99, p=0.33, ηp2 = 0.04, BF01 = 3.90). Specifically, RTs were slower for the spoken word 'left' (incongruent with the responding hand, M = 528.6 ms, SD = 193.1 ms) as compared to the spoken word 'right' (congruent with the responding hand, M = 493.3 ms, SD = 174.9 ms), revealing interference of sound content in isolation on right-hand responses, similar to the horizontal RDM task. Although conflict between the auditory features was not present in RTs, we did observe that sensitivity (d’) increased for incongruent (M = 2.66, SD = 0.77) compared to congruent trials (M = 2.11, SD = 0.81; F(1,23) = 45.62, p<0.001, ηp2 = 0.67; Figure 4A). These results show that volume oddball detection performance increases on trials that contain conflict between sound content and location. This effect of conflict on behavioral performance was already present in the first run, when sound content and location had not yet been related to any responses or task and were thus fully task-irrelevant (t(23) = 5.71, p<0.001, d = 1.17).
There was no significant interaction between run number and auditory stimulus congruency (F(1,23) = 1.40, p=0.25, ηp2 = 0.06, BF01 = 2.13; Figure 4—figure supplement 1B; hit rates and false alarms are plotted in this figure supplement as well).
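Perceptual sensitivity (d’), the measure used to score the volume oddball task, is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch, assuming a log-linear correction for extreme rates (the exact correction used in this study is not specified here, and the false-alarm count below is illustrative, not a reported value):

```python
# Perceptual sensitivity (d') from hit and false-alarm counts. The
# log-linear (Hautus) correction is one common guard against infinite
# z-scores; the authors' exact correction may differ.
from scipy.stats import norm

def d_prime(hits, n_signal, fas, n_noise):
    hr = (hits + 0.5) / (n_signal + 1.0)   # corrected hit rate
    far = (fas + 0.5) / (n_noise + 1.0)    # corrected false-alarm rate
    return norm.ppf(hr) - norm.ppf(far)

# Counts in the range reported for the oddball task: ~40 hits out of ~62
# oddballs per 500-trial run; the false-alarm count is a placeholder.
print(round(d_prime(40, 62, 5, 438), 2))
```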

Experiment 2: detection of conflict occurs when any feature of a stimulus is related to the response on the primary task or is task-relevant (even a non-conflicting feature)

We again trained multivariate classifiers on single-trial time-frequency data to test whether the auditory stimulus features (i.e., content, location, and congruency) were processed when (1) the auditory conflicting features were task-relevant and overlapped with the response-mapping (content and location discrimination tasks), (2) the auditory conflicting features were task-irrelevant and another feature of the conflicting stimulus was task-relevant (volume oddball task), or (3) the auditory stimulus was task-irrelevant, but its conflicting features overlapped with the response-mapping of the task (horizontal RDM task).

Cluster-based analyses across the entire T-F range revealed neural signatures of conflict processing in the theta-band when the content of the auditory stimulus was task-relevant (content discrimination task II: p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 4–10 Hz, peak frequency: 4 Hz, time range: 328–953 ms, peak time: 438 ms; Figure 4B) and when the volume of the auditory stimulus was task-relevant (volume oddball task: p=0.03, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–6 Hz, peak frequency: 2 Hz, time range: 234–516 ms, peak time: 438 ms; Figure 4B, see Figure 4—figure supplement 1E: no training effects for the volume oddball task). Both observations of congruency decoding are in line with the presence of conflict-related behavioral effects in these tasks (Figure 4A). No significant clusters of above-chance classifier accuracy were found after correcting for multiple comparisons in the location discrimination task and the horizontal RDM task (Figure 4B). However, a hypothesis-driven analysis focused on the post-stimulus theta-band (2–8 Hz, 100–700 ms) revealed that congruency decoding accuracies within this ROI were significantly above chance for both tasks as well (location discrimination: t(22) = 2.00, p=0.03, d = 0.42; horizontal RDM: t(22) = 2.89, p=0.004, d = 0.60). Thus, we observed that conflict of the auditory stimulus is detected when one of the auditory conflicting features is task-relevant (content and location discrimination tasks), when one of its non-conflicting features is task-relevant (volume oddball task), and when none of the auditory features is task-relevant but these features overlap with the response-mapping of the task (horizontal RDM task). To qualify the differences between tasks, we combine the data from all experiments and compare effect sizes across tasks at the end of this section (Figure 5).
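The cluster-based correction applied throughout these analyses controls for multiple comparisons by comparing the summed statistic of contiguous supra-threshold points against a permutation null distribution. A minimal one-dimensional sketch (a single frequency row over time, synthetic accuracies); the cluster-forming threshold, cluster statistic, and permutation count are simplified relative to the actual two-dimensional time-frequency analysis:

```python
# Minimal 1-D cluster-based permutation test of decoding accuracy vs.
# chance (0.5). Synthetic data; the real analysis runs over a 2-D
# time-frequency plane with more permutations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def cluster_masses(tvals, thresh):
    """Sum t-values within contiguous supra-threshold runs."""
    masses, run = [], 0.0
    for t in tvals:
        if t > thresh:
            run += t
        elif run:
            masses.append(run)
            run = 0.0
    if run:
        masses.append(run)
    return masses

n_subj, n_time = 24, 50
acc = rng.normal(0.5, 0.05, size=(n_subj, n_time))
acc[:, 20:30] += 0.04                      # injected above-chance window

thresh = stats.t.ppf(0.95, df=n_subj - 1)  # cluster-forming threshold
t_obs = stats.ttest_1samp(acc, 0.5, axis=0).statistic
obs_mass = max(cluster_masses(t_obs, thresh), default=0.0)

# Null distribution: randomly flip each subject's deviation from chance.
null = []
for _ in range(500):
    flips = rng.choice([-1, 1], size=(n_subj, 1))
    perm = 0.5 + flips * (acc - 0.5)
    t_perm = stats.ttest_1samp(perm, 0.5, axis=0).statistic
    null.append(max(cluster_masses(t_perm, thresh), default=0.0))

p_cluster = np.mean(np.array(null) >= obs_mass)
print(p_cluster < 0.05)
```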

Figure 5. Processing of sensory and conflict features for different levels of task relevance.

(A) Summary of the decoding results of all behavioral tasks, sorted by congruency decoding effect size (Cohen’s dz) in a preselected time-frequency ROI. The data in these plots are identical to the ones shown in Figures 2 and 4. (B) Effect sizes are shown for all task/feature combinations derived from a predefined ROI (2–8 Hz and 100–700ms) and sorted according to effect size of congruency decoding. Effect sizes for congruency decoding were dependent on behavioral task (downward slope of the green line), whereas this was not the case, or less so, for the decoding of content and location. The data can be found in Figure 5—source data 1. CD II: content discrimination task II; CD I: content discrimination task I; HRDM: horizontal RDM task; VO: volume oddball detection task; LD: location discrimination task; VRDM: vertical RDM task; n.s.: p>0.05.

Figure 5—source data 1. Decoding results within ROI for all tasks.

Figure 5.

Figure 5—figure supplement 1. Topographic maps of reconstructed activation patterns and effect sizes for alternative ROIs.

Figure 5—figure supplement 1.

(A) Decoding weights were extracted from the ROI that was used in the final analysis (100–700 ms, 2–8 Hz). These weights were then transformed into activation patterns by multiplying them with the covariance of the electroencephalography data. The topographical maps of the two behavioral tasks in which congruency decoding was most accurate (content discrimination tasks I and II) reveal a clear midfrontal distribution, which is commonly found in the literature (Cohen and Cavanagh, 2011; Cohen and van Gaal, 2014; Jiang et al., 2015a; Nigbur et al., 2012). (B–D) The reported results from our ROI analyses (Figure 5B) could in principle have depended on the specific ROI, which was chosen on the basis of previous results. In order to exclude this possibility, we performed the exact same analyses (repeated measures ANCOVA with the factors task and feature) on data extracted from three different ROIs. Crucially, we found interaction effects between behavioral task and stimulus feature for all ROIs (ROI 2: F(10,402) = 20.31, p<0.001, ηp2 = 0.34; ROI 3: F(10,402) = 12.25, p<0.001, ηp2 = 0.23; ROI 4: F(10,402) = 14.47, p<0.001, ηp2 = 0.27). Behavioral tasks on the x-axis are sorted by the magnitude of the effect size of congruency decoding, similar to Figure 5B. Note that in all figures the VRDM task is the one with the lowest effect size for congruency. Interestingly, for all ROIs the same pattern as in Figure 5B is visible, namely that congruency decoding deteriorates to a point where accuracies are no longer significant, whereas this is not the case for decoding of sensory stimulus features. CD II: content discrimination task II; CD I: content discrimination task I; HRDM: horizontal RDM task; VO: volume oddball detection task; LD: location discrimination task; VRDM: vertical RDM task. ± p<0.06, n.s.: p>0.06.

Experiment 2: sensory features are processed in all tasks

Next, we trained classifiers to distinguish trials based on sound location and content in order to inspect sensory processing. We found neural signatures of the processing of sound content in all four tasks: content discrimination task II (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–30 Hz, peak frequency: 4 Hz, time range: 203–1000 ms, peak time: 469 ms; Figure 4B), the location discrimination task (p=0.01, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–6 Hz, peak frequency: 2 Hz, time range: 313–641 ms, peak time: 563 ms; Figure 4B), and the horizontal RDM task (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 4–16 Hz, peak frequency: 4 Hz, time range: 78–328 ms, peak time: 281 ms; Figure 4B). For the volume oddball task, we observed a delta/theta-band cluster and a late beta-band cluster (delta/theta: p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–8 Hz, peak frequency: 4 Hz, time range: 94–797 ms, peak time: 281 ms; late beta: p=0.01, one-sided: X¯>0.5, cluster-corrected; frequency range: 12–20 Hz, peak frequency: 20 Hz, time range: 672–953 ms, peak time: 828 ms; Figure 4B).

Furthermore, sound location could be decoded from the content discrimination task II (delta/theta: p=0.02, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–6 Hz, peak frequency: 2 Hz, time range: 453–688 ms, peak time: 609 ms; alpha: p=0.03, one-sided: X¯>0.5, cluster-corrected; frequency range: 10–12 Hz, peak frequency: 12 Hz, time range: 531–750 ms, peak time: 578 ms; Figure 4B), the location discrimination task (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–30 Hz, peak frequency: 2 Hz, time range: 109–1000 ms, peak time: 453 ms; Figure 4B), and the volume oddball task (p<0.001, one-sided: X¯>0.5, cluster-corrected; frequency range: 2–22 Hz, peak frequency: 10 Hz, time range: −47 ms to 891 ms, peak time: 469 ms; Figure 4B). Initially, we did not observe a significant cluster of location decoding in the horizontal RDM, although the hypothesis-driven analysis revealed that accuracies within the predefined theta-band ROI were significantly above chance level as well (t(23) = 2.47, p=0.01, d = 0.51).

One aspect of these results is worth highlighting. When participants responded to the location of the auditory stimulus, location decoding revealed a broadband power spectrum, similar to sound content decoding when sound content was task-relevant (content discrimination tasks). This broad frequency decoding may be due to the fact that these features were task-relevant, but these results may also partially reflect response preparation and response execution processes as these auditory features were directly associated with a specific response. In order to test whether the earliest sensory responses were already modulated by task relevance and to link this to previous event-related potential (ERP) studies (Alilović et al., 2019; Molloy et al., 2015; Woldorff et al., 1993), we performed an additional time-domain multivariate analysis on these sensory features (T-F analyses are not well suited to address questions about the timing of processes). Because we were interested in the earliest sensory responses, we performed this analysis on data from experiment 2 only as all task parameters were best matched (e.g., in all tasks, a visual stimulus was presented, no training, etc.). We observed increased decoding for task-relevant sensory features compared to task-irrelevant features, starting ~250 ms (sound location RT: M = 338 ms) and ~330 ms (sound content RT: M = 364 ms) after stimulus presentation (Figure 4—figure supplement 2). The onset of these differences starts before a response is made, which may suggest that sensory processing of these features is indeed affected by task relevance; however, processes building up towards motor execution, such as decision-making and response preparation processes, cannot be excluded as potential factors driving the higher decoding accuracies in tasks where specific features are task-relevant and hence correlated with decision and motor processes. These results are elaborated upon in the Discussion.

In conclusion, in line with the behavioral results, we observed that the processing of conflict between two stimulus features (i.e., location and content of an auditory stimulus) was present in all tasks of experiment 2. This indicates that conflict can be detected when one of the auditory conflicting features is task-relevant (content and location discrimination tasks), when one of its non-conflicting features is task-relevant (volume oddball task), and when there is overlap in the response-mapping with any of its task-irrelevant conflicting features (horizontal RDM task). Overall, this reveals that when the conflicting stimulus itself is attended or when its conflicting features overlap with the response scheme, all of its features seem to be processed and integrated to elicit conflict.

All experiments: decreasing task relevance hampers cognitive control operations, but not sensory processing

The neural data from all six tasks across the two experiments suggest that if sensory input is task-irrelevant, processing of that information is preserved, while cognitive control operations are strongly hampered (Figures 2B, 4B, and 5B). To quantify this observation, we calculated Cohen's dz for all tasks and features (based on the preselected ROI), sorted tasks according to the effect sizes of congruency decoding, and plotted Cohen's dz across tasks for all features (conflict, content, and location; Figure 5). For each feature and task, we extracted individual classifier area under the curve (AUC) values and performed an analysis of covariance (ANCOVA) on these accuracies, with task and stimulus feature as fixed effects. We found main effects of behavioral task and stimulus feature (task: F(5,402) = 17.25, p<0.001, ηp2 = 0.18; stimulus feature: F(2,402) = 44.61, p<0.001, ηp2 = 0.18). Crucially, the interaction between task and stimulus feature was also significant (F(10,402) = 18.80, p<0.001, ηp2 = 0.32), showing that the accuracy of conflict decoding decreased more across tasks than that of content and location decoding. We next performed one-sided t-tests against chance level (0.5) on ROI accuracies for every task/feature combination to assess decoding performance for all stimulus features under the different task-relevance manipulations (see Figure 5—source data 1). We observed that congruency decoding accuracies were strongly influenced by task, whereas decoding accuracies of stimulus content and location were not (Figure 5B). Note that these results were robust and did not depend on the specific ROI that was selected, because other ROI windows yielded similar patterns of results, that is, decreased congruency decoding for task-irrelevant input but relatively stable sensory feature decoding (Figure 5—figure supplement 1C, D).
Classifier weights were extracted from the ROI for all tasks and features, transformed to activation patterns and plotted in topomaps, to show the patterns of activity underlying the decoding results (Figure 5—figure supplement 1A).
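The effect-size and chance-level tests described above can be illustrated in a few lines. The values below are simulated, not the study's data; the sketch only shows the computations (Cohen's dz against chance, one-sided one-sample t-test).

```python
# Minimal sketch (simulated AUCs for 24 hypothetical subjects): Cohen's dz
# for one-sample data and a one-sided t-test against chance level (0.5).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
auc_congruency = 0.5 + rng.normal(0.04, 0.05, 24)

diff = auc_congruency - 0.5
dz = diff.mean() / diff.std(ddof=1)     # Cohen's dz: mean difference / SD of differences
t, p = stats.ttest_1samp(auc_congruency, 0.5, alternative="greater")
print(f"dz = {dz:.2f}, t(23) = {t:.2f}, one-sided p = {p:.4f}")
```

For one-sample data the t statistic and dz are linked by t = dz·√n, so either quantity can be recovered from the other given the sample size.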

Discussion

Although it has long been hypothesized that only basic physical properties of task-irrelevant sensory input are processed (Treisman and Gelade, 1980), over the past few years a wealth of processes has been found to be preserved in the absence of attention (Fahrenfort et al., 2017; Li et al., 2002; Peelen et al., 2009). Here, we aimed to push the limits of the brain's capacity to process unattended information and addressed whether cognitive control networks can be recruited when conflicting features of sensory input are task-irrelevant. Interestingly, similar cognitive control functions have been shown to operate when stimuli are masked and conscious awareness is hence strongly reduced (Atas et al., 2016; D'Ostilio and Garraux, 2012b; Huber-Huber and Ansorge, 2018; Jiang et al., 2015b; Jiang et al., 2018; van Gaal et al., 2008; van Gaal et al., 2011).

In two omnibus experiments comprising six different tasks, we presented participants with stimuli containing potentially conflicting auditory-spatial features (e.g., the word ‘left’ presented on the right side) while they performed several behavioral tasks. These tasks manipulated whether the features sound content and location of the auditory stimulus were task-relevant and whether these features were mapped to specific overlapping responses of the primary task. We observed clear signals of conflict processing in behavior (i.e., longer RTs, increased ERs, increased sensitivity) and brain activity (i.e., above-chance decoding accuracy in the theta-band) in three cases: when the conflicting features of the auditory stimulus were task-relevant (the content and location discrimination tasks), when another, non-conflicting feature of the auditory stimulus was task-relevant but the conflicting features content and location were not (the volume oddball task), and when the conflicting features were not task-relevant but overlapped with the response scheme of the task (the horizontal RDM task). When the features of the auditory stimulus were task-irrelevant and orthogonal to the response scheme of the primary task, that is, in the vertical RDM task, we did not observe any effects of conflict in behavioral or neural measures. The absence of conflict effects was supported by Bayesian analyses, which showed reliable evidence in favor of the null hypothesis. Strikingly, the individual stimulus features, that is, stimulus location/content, were always processed, regardless of their task relevance and response relevance. Note that this dissociation – hampered conflict processing yet preserved perceptual processing – cannot be explained by a lack of statistical power, because decoding accuracy of stimulus location/content was comparable between behavioral tasks.
These results highlight that relatively basic stimulus properties escape the attentional bottleneck, lending support to previous studies (e.g., Fahrenfort et al., 2017; Li et al., 2002; Peelen et al., 2009; Sand and Wiens, 2011; Treisman and Gelade, 1980), but furthermore showcase that an attentional bottleneck for detecting conflict (the integration of stimulus features) exists further downstream in the hierarchy of cognitive processing. Below we link the observed results to the existing literature.

Object-based attention as a prerequisite for feature integration (leading to conflict)

So why are auditory content and location not integrated to form conflict when all of the auditory features are task-irrelevant? It has been suggested that the MFC is crucial for monitoring the presence of conflict, through the detection of coinciding inputs (Cohen, 2014). In our paradigm, it thus seems crucial that information related to the auditory features content and location reaches the MFC to be able to detect conflict, although control networks can undergo reconfiguration under certain circumstances (Canales-Johnson et al., 2020). Previous studies have shown that task-irrelevant stimuli can still undergo elaborate processing (Li et al., 2002; Peelen et al., 2009). Our decoding results show that task-irrelevant features are indeed processed by the brain (Figures 2B, 4B, and 5). Interestingly, conflict between two task-irrelevant features was detected when another feature of the conflicting stimulus was task-relevant (volume oddball task) or when the conflicting features had overlap with the overall response scheme (horizontal RDM task), but remained undetected when none of the auditory features was task-relevant and there was no overlap with the response scheme (vertical RDM task). We argue that this difference is due to the fact that in the volume oddball and horizontal RDM tasks, the task-irrelevant conflicting features were selected through object-based attention. Theories of object-based attention have suggested that when one stimulus feature of an object is task-relevant and selected, attention ‘spreads’ to all other features of the attended stimulus, even when these features are task-irrelevant or part of a different stimulus or modality (Chen, 2012; Chen and Cave, 2006; O'Craven et al., 1999; Turatto et al., 2005; Wegener et al., 2014; Xu, 2010). 
In the volume oddball task, a non-conflicting feature of the auditory stimulus (volume) was task-relevant, but this allowed for the selection of the other task-irrelevant features through object-based attention. In the horizontal RDM task, on the other hand, the conflicting features of the task-irrelevant auditory stimulus overlapped with the overall response scheme or task-set of the participant, namely discriminating rightward- versus leftward-moving dots. This may have led to the automatic classification of all sensory input according to this task-set (as either coding for ‘left’ or ‘right’), even when that input was not relevant for the task at hand. Possibly, through this classification, attentional resources could be exploited for the processing of these task-irrelevant features. This is especially interesting because in all conflict analyses incongruency was defined as the mismatch between the two features of the auditory stimulus (location and sound) and not between a visual feature (leftward-moving dots) and one feature of the auditory stimulus (e.g., the word ‘right’). Note that we report additional behavioral results that show clear indications of conflict when the task-relevant feature of the visual stimulus interferes directly with a single task-irrelevant feature of the auditory task (e.g., auditory content-dot-motion conflict).

When inspecting the T-F maps for the vertical RDM task, the relatively fleeting temporal characteristics of the processing of the task-irrelevant stimulus features (sound content and location) might suggest that the integration of these features is not possible due to a lack of time, as proposed in the incremental grouping model of attention (Roelfsema, 2006; Roelfsema and Houtkamp, 2011). However, the time window in which conflict was decodable when the auditory conflicting features were task-relevant coincides with the time range in which these features could be decoded when the auditory conflicting features were task-irrelevant (Figure 5A). Therefore, it seems unlikely that the more temporally constrained processing of task-irrelevant stimulus features is the cause of hampered conflict detection. Besides time being a factor, the processing of task-irrelevant features in the vertical RDM task may also have been too constrained to (early) sensory cortices and therefore could not progress to integration networks, including the MFC, necessary for the detection of conflict. Speculatively, the processing of task-irrelevant auditory features was relatively superficial due to the relatively few remaining resources (Lavie et al., 2004; Sigman and Dehaene, 2006; Zylberberg et al., 2010; Zylberberg et al., 2011), and combined with a lack of object-based attention, this may have prevented the propagation of the information to the MFC. It has been hypothesized that unattended (sometimes referred to as ‘preconscious’; Dehaene et al., 2006; Dehaene and Changeux, 2011) stimuli are not propagated deeply into the brain, but still allow for shallow recurrent interactions in sensory cortices. The poor spatial resolution of EEG measurements and the specifics of our experimental setup, however, do not allow us to test these ideas regarding the involvement of spatially distinct cortices.
Yet, previous work from our group suggests that task-irrelevant nonconscious information does not propagate to frontal cortices, whereas task-relevant nonconscious information does. We demonstrated that masked task-irrelevant conflicting cues induced early processing in sensory cortices similar to that of masked task-relevant cues, but prohibited activation of frontal cortices (van Gaal et al., 2008). These findings are not conclusive, and we therefore believe that uncovering the role of task relevance in the processing of (nonconscious) information deserves more attention in future work (see also van Gaal et al., 2012 for a discussion of this issue).

Sensory processing is weakened, but conflict processing hampered in the absence of task relevance

We show that conflict processing is absent when conflicting features are fully task-irrelevant, while evidence of sensory processing is still present in the neural data (Figures 2B, 4B, and 5). Although sensory processing of auditory features seems relatively preserved across levels of task relevance, it may in fact also be affected when a feature is task-irrelevant (Figure 4—figure supplement 2), in line with previous studies (e.g., Alilović et al., 2019; Jehee et al., 2011; Kok et al., 2012; Kouider et al., 2016), although to a lesser extent than conflict processing (Figure 5B). For example, when sound location is the task-relevant feature (i.e., in the location discrimination task), decoding accuracies for that feature are more broadband in the frequency domain (Figure 4B) and higher in the time domain (Figure 4—figure supplement 2) than location decoding performance in the other tasks. This increased decoding accuracy is present even before a response has been made, suggesting weakened early-stage sensory processing in tasks where the decoded feature is not task-relevant. However, although sensory processing is weakened under decreasing levels of task relevance, it is not abolished, in line with previous findings of ongoing processing in the (near) absence of attention (Fahrenfort et al., 2017; Li et al., 2002; Peelen et al., 2009). Processing of conflict between the two interfering auditory features, on the contrary, is hampered when the features are fully task-irrelevant. This is further supported by the significant interaction between task and feature in decoding performance within the predefined ROI (Figure 5B). Summarizing, although processing of sensory features is degraded under decreasing levels of task relevance, it is present regardless of attention, whereas detection of conflict between these features is no longer possible when the features are fully task-irrelevant.

Besides object-based attention, the process through which attentional resources are allocated to the processing of task-irrelevant features of a task-relevant object, other mechanisms might also play a role in the extent to which sensory information is processed, such as the active suppression of task-irrelevant information. It has been shown that task-irrelevant information that is response-relevant, and can thus potentially interfere with performance on the primary task, can be suppressed to minimize interference (Appelbaum et al., 2011; Janssens et al., 2018; Polk et al., 2008; but see Egner and Hirsch, 2005). This would result in more reduced sensory processing, indexed by lower decoding performance, for task-irrelevant features that are response-relevant than for task-irrelevant features that are not. Disentangling the effects of such mechanisms, object-based attention and their possible interactions on the processing of sensory and cognitive information, however, falls outside the scope of this work.

Disentangling effects of conflict and task difficulty

For our main analysis, we trained a multivariate classifier on congruent versus incongruent trials and observed effects of task relevance on the performance of the classifier, that is, decoding performance was hampered when conflicting features were fully task-irrelevant (Figures 2B, 4B, and 5B). Moreover, we report behavioral effects of conflict in all auditory tasks as well (Figures 2A and 4A). Given that behavioral performance on the auditory tasks is worse for incongruent trials as compared to congruent trials, one may wonder whether our multivariate decoder is in fact picking up information related to conflict detection or to processes related to task difficulty. Whether medial frontal theta-band oscillations reflect conflict detection or task difficulty, and whether these factors can be dissociated in principle, has been the topic of debate in the literature (Grinband et al., 2011a; Grinband et al., 2011b; McKay et al., 2017; Ruggeri et al., 2019; Yeung et al., 2011). On the one hand, it has been shown that activity in the dorsal medial prefrontal cortex is related to RT, suggesting that neural markers of conflict may in fact reflect time on task (Grinband et al., 2011b; Ruggeri et al., 2019). On the other hand, research has shown that enhanced prefrontal theta-band oscillations are found on conflicting trials even when controlling for RT (Cohen and van Gaal, 2014) or task difficulty (McKay et al., 2017). The decoding results presented in this work likely reflect conflict processing, and not just task difficulty, for two reasons. First, the spatial distribution and time-frequency dynamics of the congruency decoding results are comparable to those commonly found in the literature on conflict processing, even in a study where conflicting signals were matched for RT (Cohen and van Gaal, 2014).
Specifically, using the content discrimination task of experiment 1 as example, we observe effects of conflict centered on the theta-band and ~230–610 ms post-conflict presentation, with a clear medial frontal spatial profile (Figure 2B, Figure 5—figure supplement 1A). Second, auditory stimulus conflict was decodable from neural data for two tasks in which there were either no effects of conflict – or task difficulty – on behavioral performance (i.e., horizontal RDM task), or even increased behavioral performance on conflicting trials (i.e., volume oddball task). Therefore, we believe that the observed congruency decoding results presented here are mainly driven by the detection of conflicting sensory inputs and are not, or much less so, driven by task difficulty.
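One way to control for time-on-task, in the spirit of the RT-matched analyses cited above, is to subsample trials so that congruent and incongruent RT distributions are matched before decoding. A minimal sketch with simulated RTs (not the study's data or code) using greedy nearest-neighbor matching:

```python
# Hedged sketch of an RT-matching control: pair each incongruent trial with the
# congruent trial closest in RT, without replacement, so the two conditions have
# comparable RT distributions before any decoding analysis.
import numpy as np

rng = np.random.default_rng(2)
rt_con = rng.normal(340, 40, 200)    # hypothetical congruent RTs (ms)
rt_inc = rng.normal(380, 40, 150)    # incongruent trials are slower on average

available = list(range(len(rt_con)))
pairs = []
for i in np.argsort(rt_inc):         # greedy nearest-RT matching
    j = min(available, key=lambda k: abs(rt_con[k] - rt_inc[i]))
    available.remove(j)
    pairs.append((j, i))

matched_con = rt_con[[j for j, _ in pairs]]
print(f"RT gap before: {rt_inc.mean() - rt_con.mean():.1f} ms, "
      f"after: {rt_inc.mean() - matched_con.mean():.1f} ms")
```

If a congruency decoder still performs above chance on the matched subsets, its signal is less easily attributed to RT differences alone; greedy matching is a simple stand-in here for more principled procedures.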

Conflict between features of a task-irrelevant stimulus versus conflict between stimuli

Contrary to the current study, previous studies using a variety of conflict-inducing paradigms and attentional manipulations reported conflict effects in behavior and electrophysiological recordings induced by unattended stimuli or stimulus features (Mao and Wang, 2008; Padrão et al., 2015; Zimmer et al., 2010). However, our study deviates from those studies in several crucial aspects. First, we explicitly separate task-relevant stimulus features that cause conflict and task-relevant features that do not, parsing the cognitive components that induce cognitive control in this context. Furthermore, in the RDM and volume oddball tasks we tested whether conflict between two task-irrelevant features could be detected by the brain. Specifically, we investigated if conflict between two task-irrelevant features would be detected in the presence or absence of object-based attention (e.g., volume oddball task vs. vertical RDM task), also manipulating whether task-irrelevant conflicting features mapped onto the response or not (horizontal RDM task vs. vertical RDM task). This approach is crucially different from previous studies that exclusively tested whether a task-irrelevant or unattended stimulus (feature) could interfere with processing of a task-relevant feature (Mao and Wang, 2008; Padrão et al., 2015; Zimmer et al., 2010). Under such conditions, at least one source contributing to the generation of conflict (i.e., the task-relevant stimulus) is fully attended, and therefore, one cannot claim that under those circumstances conflict detection occurs outside the scope of attention.

It can be argued that in our horizontal RDM task the task-irrelevant auditory features (location and content) that mapped onto the response of the primary task could interfere with the processing of horizontal dot-motion, that is, the task-relevant feature. This is in fact true, as we found effects of auditory content-dot-motion and auditory location-dot-motion conflict in behavior (both on RTs and ERs). This highlights that a single feature of a task-irrelevant stimulus can interfere with the response to a task-relevant stimulus when there are overlapping feature-response-mappings. This is different from two features of a task-irrelevant stimulus producing inherent conflict (e.g., between auditory content and location), which is what we specifically investigated by always testing for the presence of auditory content-location conflict only. A similar argument might be made for our vertical RDM and volume oddball tasks, because in those cases the auditorily presented stimuli could potentially conflict with responses that were exclusively made with the right hand; for example, the spoken word ‘left’ or a sound from the left location may generally conflict more with a right-hand response (independent of the up/down classification or oddball detection) than the spoken word ‘right’ or a sound from the right location. In the vertical RDM task, the auditorily presented stimuli were truly task-irrelevant, as both stimulus content and location in isolation did not affect behavior. In the volume oddball task, sound content and location were task-irrelevant features, but these features were part of the attended stimulus and hence selected through object-based attention. In this task, the content of the auditory stimuli (e.g., ‘left’) did interfere with right-hand responses to the volume oddball task, resulting in longer RTs (compared to ‘right’). Moreover, in this task we did find behavioral and neural effects of conflict between two auditory features (Figure 4).
The absence of conflict effects in the vertical RDM and presence of such effects in the volume oddball task and horizontal RDM indicates that at least one feature of the stimulus containing the conflicting features should be task-relevant or associated with a response in order for conflict to be detected. Summarizing, we show that the brain is not able to detect conflict that emerges between two features of a task-irrelevant stimulus in the absence of object-based attention.

Lastly, in other studies, conflicting stimuli were often task-irrelevant on one trial (e.g., because they were presented at an unattended location) but task-relevant on the next (e.g., because they were presented at the attended location) (e.g., Padrão et al., 2015; Zimmer et al., 2010). Such trial-by-trial fluctuations of task relevance allow across-trial modulations to confound any current-trial effects (e.g., the conflict-adaptation effect) and also induce a ‘stand-by attentional mode’ in which participants never truly disengage, because they must continuously determine whether a stimulus is task-relevant. We prevented such confounding effects in the present study, in which the (potentially) conflicting features of the auditory stimulus were task-irrelevant on every single trial in the vertical RDM, horizontal RDM, and volume oddball tasks.

Differences between response conflict and perceptual conflict cannot account for absence of conflict detection in task-irrelevant sensory input

One difference between the content and location discrimination tasks, on the one hand, and the volume oddball and RDM tasks, on the other, was the task relevance of the (conflicting) auditory features. Another major difference between these groups of tasks was, consequently, the origin of the conflict. When the auditory stimuli were task-relevant, the conflict originated from the interference of a task-irrelevant feature with behavioral performance, whereas for the other tasks this was not the case. We argued that in the volume oddball and RDM tasks salient auditory stimuli could be intrinsically conflicting. Intrinsic conflict is often referred to as perceptual conflict, as opposed to the aforementioned response conflict (Kornblum, 1994). Although perceptual conflict effects are usually weaker than response conflict effects, both in behavior and electrophysiology (Frühholz et al., 2011; van Veen et al., 2001; Wang et al., 2014), this difference in the origin of the conflict is unlikely to explain why we, in contrast to earlier studies, did not observe effects of conflict for task-irrelevant sensory input.

First, several neurophysiological studies have previously reported electrophysiological modulations by perceptual conflict centered on the MFC (Jiang et al., 2015a; Nigbur et al., 2012; van Veen et al., 2001; Wang et al., 2014; Zhao et al., 2015). Second, an earlier study using a task similar to ours (but including only task-relevant stimuli) showed effects of perceptual conflict, that is, unrelated to response-mapping, in both behavior and neural measures (Buzzell et al., 2013). Third, the prefrontal monitoring system has previously been observed to respond when participants view other people making errors (Jääskeläinen et al., 2016; van Schie et al., 2004), suggesting that cognitive control can be triggered without the need to make a response. Fourth, in our volume oddball task, where conflict was perceptual in nature as well, we did observe conflict effects, both in behavior and neural data.

Inattentional deafness or genuine processing of stimulus features?

The lack of conflict effects in the vertical RDM task might suggest a case of inattentional deafness, a phenomenon known to be induced by demanding visual tasks, which manifests itself in weakened early (~100 ms) auditory evoked responses (Molloy et al., 2015). Interestingly, human speech seems to escape such load modulations and is still processed when unattended and task-irrelevant (Olguin et al., 2018; Röer et al., 2017; Zäske et al., 2016), potentially because of its inherent relevance, similar to (other) evolutionarily relevant stimuli such as faces (Finkbeiner and Palermo, 2017; Lavie et al., 2003). Indeed, the results of our multivariate analyses demonstrate that spoken words are processed (at least to some extent) when they are task-irrelevant, as stimulus content (the words ‘left’ and ‘right’; middle row, Figures 2B and 4B, and Figure 5B) and stimulus location (whether the word was presented on the left or the right side; bottom row, Figures 2B and 4B, and Figure 5B) could be decoded from time-frequency data for all behavioral tasks. For all tasks, classification of stimulus content was present in the theta-band (4–8 Hz), which is in line with a previously proposed theoretical role for theta oscillations in speech processing, namely that they track the acoustic envelope of speech (Giraud and Poeppel, 2012). After this initial processing stage, further processing of stimulus content is reflected in more durable broadband activity for the content discrimination tasks, possibly related to higher-order (e.g., semantic) processes and response preparation (middle row and left column, Figures 2B and 4B). Similar to the processing of stimulus content, processing of stimulus location was most strongly reflected in the delta- to theta-range for all tasks (Figure 4B), which may relate to the auditory N1 ERP component, an ERP signal that is modulated by stimulus location (Fuentemilla et al., 2006; Lewald and Getzmann, 2011; Salminen et al., 2015).
We also observed above-chance location decoding in the alpha-band for task-relevant auditory stimuli, convergent with the previously reported role of alpha-band oscillations in orienting and allocating audio-spatial attention (Weisz et al., 2014).

Thus, the characteristics of the early sensory processing of (task-irrelevant) auditory stimulus features in our study are in line with recent findings on auditory and speech processing. Moreover, our observations are in line with recent empirical findings that suggest a dominant role for late control operations, as opposed to early selection processes, in resolving conflict (Itthipuripat et al., 2019). Specifically, this work showed that in a Stroop-like paradigm both target and distractor information is analyzed fully, after which the conflicting input is resolved. Extending this, we show preserved initial processing of task-irrelevant sensory input, but hampered late control operations necessary to detect conflict, at least in the current setup.

Automatization of conflict processing does not promote detection of task-irrelevant conflict

Previous studies investigating conflict through masking procedures concluded that conflict detection by the MFC may happen automatically and is still operational under strongly reduced levels of stimulus visibility (D'Ostilio and Garraux, 2012b; Jiang et al., 2015b; van Gaal et al., 2008). Such automaticity can often be enhanced through training on the task. For example, training in a stop-signal paradigm in which stop-signals were rendered unconscious through masking led to an increase in the strength of the behavioral modulations of these stimuli (van Gaal et al., 2009). To test whether enhancing such automaticity could increase the likelihood of conflict detection, we included extensive training sessions in the first experiment and administered the volume oddball task both before and after exposure to conflicting tasks in the second experiment. In experiment 1, we found no neural effects of conflict detection in the vertical RDM task, even after participants had been trained on the auditory task for 3600 trials (Figure 2B, Figure 2—figure supplement 1D). Training did result in a decrease of behavioral and neural conflict effects in content discrimination task I of experiment 1, indicating that our training procedure was successful (Figure 2—figure supplement 1A–C) and suggesting more efficient functioning of conflict resolution mechanisms. In experiment 2, participants performed the volume oddball task twice, once before and once after sound location and content had been mapped to responses. Again, we tested whether training on conflict tasks would enhance automaticity of conflict processing in a paradigm where the auditory conflicting features were task-irrelevant. We did not find any statistically reliable differences in behavioral conflict effects or accuracy of congruency decoding between the two runs (Figure 4—figure supplement 1).
Therefore, it seems that the automaticity of conflict detection by the MFC and associated networks does not hold when the auditory stimulus is task-irrelevant (at least after the extent of training and exposure as studied here).

Increased detection performance on conflicting trials

Remarkably, we report increased behavioral performance on the volume oddball task for incongruent trials as compared to congruent trials (Figure 4A, Figure 4—figure supplement 1B–D). Speculatively, this increased behavioral performance (d’) on incongruent trials may be due to attentional capture of conflicting stimuli. Attentional capture is the involuntary shift of attention towards salient stimuli (Awh et al., 2012; Theeuwes, 2010). Possibly, the detection of conflict between sound content and location is a salient event causing (re-)capture of attention towards the auditory stimulus, resulting in better processing of the stimulus information and ultimately better oddball detection performance. Thus, following the detection of conflict, frontal networks would have to exert control over attentional resources and direct them towards the source of the conflict. Interestingly, cases of frontal control over attentional processes have been demonstrated in the past, for example, showing that task-irrelevant distractors that have been related to reward induce stronger attentional capture (Anderson et al., 2011) and that high working memory load increases the strength of attentional capture by distractors (Lavie and Fockert, 2006). The present study was however not optimized to test directly which underlying mechanisms are associated with increased sensitivity of conflicting sensory input, and this issue merits further experimentation.

Conclusion

Summarizing, high-level cognitive processes that require the integration of conflict-inducing stimulus features are strongly hampered when none of the features of the conflict-inducing stimulus is task-relevant and object-based attention is hence absent. This work extends previous findings of perceptual processing outside the scope of attention (Peelen et al., 2009; Sand and Wiens, 2011; Schnuerch et al., 2016; Tusche et al., 2013), but suggests crucial limitations of the brain's capacity to process task-irrelevant ‘complex’ cognitive control-initiating stimuli, indicative of an attentional bottleneck for detecting conflict at high levels of information analysis. In contrast, the processing of more basic physical features of sensory input appears to be less deteriorated when input is task-irrelevant (Lachter et al., 2004).

Materials and methods

Participants

We performed two separate experiments, each containing multiple behavioral tasks. For each experiment, we recruited 24 healthy human participants from the University of Amsterdam; none took part in both experiments. In total, 48 participants (37 females) aged 18–30 took part for monetary compensation or participant credits. All participants had normal or corrected-to-normal vision and had no history of head injury or physical or mental illness. This study was approved by the local ethics committee of the University of Amsterdam, and written informed consent was obtained from all participants after explanation of the experimental protocol. We describe the experimental design and procedures for the two experiments separately. Data analyses and statistics were similar across experiments and are discussed in a single section.

Experiment 1: design and procedures

Participants performed two tasks in which conflicting auditory stimuli were either task-relevant or task-irrelevant. In both tasks, conflict was elicited through a paradigm adapted from Buzzell et al., 2013, in which the spatial origin and the content of auditory stimuli could interfere. In content discrimination task I, participants had to respond to the auditory stimuli, whereas in the vertical RDM task they performed a demanding random-dot-motion (RDM) task while the conflicting auditory stimuli were still presented (Figure 1A). Participants performed both tasks in two experimental sessions of approximately 2.5 hr each. In between these two experimental sessions, participants completed two training sessions of 1 hr during which they performed only the task-relevant task (Figure 1B). In each experimental session, participants first performed a shortened version of the RDM task to determine the appropriate coherence of the moving dots (73–77% correct), followed by the task-irrelevant auditory task, and finally the task-relevant auditory task. Participants were seated in a darkened, sound-isolated room, 50 cm from a 69 × 39 cm screen (frequency: 120 Hz, resolution: 1920 × 1080, RGB: 128, 128, 128). Both tasks were programmed in MATLAB (R2012b, The MathWorks, Inc), using functionalities from Psychtoolbox (Kleiner et al., 2007).

Experiment 1: behavioral tasks

Auditory content discrimination task I

In the task-relevant auditory conflict task, the spoken words ‘links’ (i.e., ‘left’ in Dutch) and ‘rechts’ (i.e., ‘right’ in Dutch) were presented through speakers located on both sides of the participant (Figure 1A). Auditory stimuli were matched in duration and sample rate (44 kHz) and were recorded by the same male voice. By presenting these stimuli through either the left or the right speaker, content-location conflict arose on specific trials (e.g., the word ‘left’ through the right speaker). Trials were classified accordingly as either congruent (i.e., location and content are the same) or incongruent (i.e., location and content are different). Participants were instructed to respond as quickly and accurately as possible by pressing left (‘a’) or right (‘l’) on a keyboard located in front of them, according to the stimulus content while ignoring stimulus location. Responses had to be made with the left or right index finger, respectively. The task was divided into 12 blocks of 100 trials each, allowing participants to rest in between blocks. After stimulus presentation, participants had a 2 s period in which they could respond. A variable inter-trial interval between 850 and 1250 ms was initiated directly after the response. If no response was made, the subsequent trial started after the 2 s response period. Congruent and incongruent trials occurred equally often (i.e., 50% of all trials each), as the expectancy of conflict has been shown to affect conflict processing (Soutschek et al., 2015). Due to an error in the script, there was an imbalance in the proportion of trials coming from the left (70%) versus right (30%) speaker location for the first 14 participants. However, congruent versus incongruent trials and ‘left’ versus ‘right’ trials remained equally distributed. For the upcoming analyses, all trial classes were balanced in trial count.

Vertical random-dot-motion task

In the task-irrelevant auditory task, participants performed an RDM task in which they had to discriminate the motion direction (up vs. down) of white dots (n = 603) presented on a black circle (RGB: 0, 0, 0; ~14° visual angle; Figure 1A). Onset of the visual stimulus was paired with the presentation of the conflicting auditory stimulus. Participants were instructed to respond according to the direction of the dots by pressing the ‘up’ or ‘down’ key on a keyboard with their right hand as quickly and accurately as possible. Again, participants could respond within a 2 s time interval, which was terminated upon response and followed by an inter-trial interval of 850–1250 ms. Task difficulty, in terms of dot-motion coherence (i.e., the proportion of dots moving in the same direction), was titrated between blocks to yield 73–77% correct within that block. Similar to content discrimination task I, the vertical RDM task was divided into 12 blocks of 100 trials each, separated by short breaks. Again, congruent and incongruent trials, with respect to the auditory stimuli, occurred equally often.

Experiment 2: design and procedures

In the second experiment, we investigated whether it is task irrelevance of the auditory stimulus itself, or task irrelevance of its features (i.e., content and location), that determines whether prefrontal control processes are hampered. Participants performed two tasks in which auditory stimuli were fully task-relevant (location discrimination and content discrimination), one task in which the auditory stimulus was relevant but its location and content features were not (volume oddball), and one task in which the auditory stimulus itself was task-irrelevant but its location and content could potentially interfere with behavior (horizontal RDM). Participants came to the lab for a single session lasting 3 hr. Each session started with the volume oddball task, followed by the location discrimination, content discrimination, and horizontal RDM tasks in counterbalanced order, and ended with another run of the volume oddball task.

We included the location discrimination and content discrimination tasks both to replicate the results of experiment 1 and to test whether the results would differ between the two tasks. Specifically, we investigated whether the processing of auditory stimulus features, as indexed by multivariate classification accuracies, would differ between the two tasks. Participants were seated in a darkened, sound-isolated room, 50 cm from a 69 × 39 cm screen (frequency: 120 Hz, resolution: 1920 × 1080, RGB: 128, 128, 128). All four tasks were programmed in Python 2.7 using functionalities from PsychoPy (Peirce et al., 2019).

Experiment 2: behavioral tasks

Auditory content discrimination task (II)

The auditory content discrimination task of experiment 2 is a near-identical replication of the auditory content discrimination task of experiment 1. Participants fixated on a fixation mark in the center of the screen. Again, the spoken words ‘links’ (i.e., ‘left’ in Dutch) and ‘rechts’ (i.e., ‘right’ in Dutch) were presented through speakers located on both sides of the participant. Participants were instructed to respond according to the stimulus content by pressing left (‘A’) or right (‘L’) on a keyboard located in front of them, with their left and right index fingers, respectively. Concurrently, on every trial, a black disk with randomly moving dots (coherence: 0) was presented to keep sensory input similar between tasks. After stimulus presentation, participants had an 800 ms period in which they could respond; the response window was terminated immediately upon response. A variable inter-trial interval (ITI) between 250 and 450 ms was initiated directly after the response. If no response was made, the subsequent trial started after the ITI. All stimulus features (i.e., sound content, location, and congruency) were presented in a balanced manner (e.g., 50% congruent, 50% incongruent). The task was divided into six blocks of 100 trials each, allowing participants to rest in between blocks.

Auditory location discrimination task

The auditory location discrimination task was identical to the auditory content discrimination task II, with the exception that participants were now instructed to respond according to the location of the auditory stimulus. Thus, participants had to press a left button (‘A’) for sounds coming from a left speaker and right button (‘L’) for sounds coming from a right speaker. Again, participants performed six blocks of 100 trials.

Volume oddball task

In the volume oddball task, the same auditory stimuli were presented. Again, on every trial, a black disk with randomly moving dots (coherence: 0) was presented to keep sensory input similar between tasks. Occasionally, an auditory stimulus was presented at a lower volume. The initial volume of the oddballs was set to 70% but was staircased between blocks to yield 83–87% correct answers: if participants' performance on the previous block was below or above this range, the volume was increased or decreased by 5%, respectively. The probability of a trial being an oddball trial was 1/8 (drawn from a uniform distribution). Participants were instructed to detect these oddballs by pressing the spacebar as fast as possible whenever they thought they heard a volume oddball; if they thought that the stimulus was presented at a normal volume, they were to refrain from responding. The response interval was 800 ms and was terminated at response. A variable inter-trial interval of 150–350 ms was initiated after this response interval. Participants performed two runs of this task, one at the beginning and one at the end of the session. Each run contained five blocks of 100 trials each.
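
The between-block staircase described above amounts to a simple update rule. A minimal sketch (the function name and the clamping of volume to the 0–100% range are our assumptions, not the authors' code):

```python
def update_volume(volume, accuracy, target=(0.83, 0.87), step=0.05):
    """Between-block staircase: if accuracy on the previous block fell
    below the 83-87% target range, the oddball volume is changed by 5%
    in one direction; if accuracy exceeded the range, by 5% in the other.
    Clamping to [0, 1] is our assumption."""
    lo, hi = target
    if accuracy < lo:
        volume += step
    elif accuracy > hi:
        volume -= step
    return min(max(volume, 0.0), 1.0)
```

With the paper's starting value of 0.70, a block at 80% correct (below range) moves the volume one step, and a block within the 83–87% range leaves it unchanged.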

Horizontal RDM task

In the horizontal RDM task, participants had to discriminate the motion direction (left vs. right) of white dots (n = 603) presented on a black circle (RGB: 0, 0, 0; ~14° visual angle). Onset of the visual stimulus was paired with the presentation of the auditory stimulus. Participants were instructed to respond according to the direction of the dots by pressing a left key (‘A’) or right key (‘L’) on a keyboard with the left and right index fingers, respectively, as quickly and accurately as possible. Participants could respond in an 800 ms time interval, which was terminated upon response and followed by an inter-trial interval of 250–450 ms. Task difficulty, in terms of dot-motion coherence (i.e., the proportion of dots moving in the same direction), was set to 0.3 in the first block. This value served as the intermediate coherence: every trial in a block could have the intermediate coherence, half of it (0.15), or twice it (0.6). The intermediate coherence was titrated between blocks to yield 73–77% correct; if behavioral performance fell outside that range, 0.01 was added to or subtracted from the intermediate coherence. The horizontal RDM task consisted of 10 blocks of 60 trials. All stimulus features (i.e., sound content, location, and congruency) were presented in a balanced manner (e.g., 50% congruent, 50% incongruent).

Data analysis

We were primarily interested in the effects of congruency of the auditory stimuli on both behavioral and neural data. Therefore, we defined trial congruency on the basis of these auditory stimuli in all behavioral tasks of the two experiments. All behavioral analyses were programmed in MATLAB (R2017b, The MathWorks, Inc).

Statistical analysis of behavioral data

In all tasks, trials with an RT <100 ms or >1500 ms were excluded from behavioral analyses. Missed trials were also excluded in all tasks except the volume oddball task. In order to investigate whether current-trial conflict effects were present under varying levels of task relevance, and to inspect whether training on/exposure to conflict-inducing tasks modulated such conflict effects, we performed rm-ANOVAs on different behavioral measures. For all tasks excluding the volume oddball task, the rm-ANOVAs were performed on the error rate (ER) over all trials and on RTs of correct trials. For the volume oddball task, perceptual sensitivity (d’; Green and Swets, 1966) and RTs of correct trials (i.e., ‘hit’ trials) were analyzed with rm-ANOVAs. If the assumption of sphericity was violated, we applied a Greenhouse–Geisser correction. For the tasks of experiment 1, we performed these ANOVAs with task relevance, training (before vs. after), and current-trial congruency as factors (2 × 2 × 2 factorial design). Additional post-hoc rm-ANOVAs for content discrimination task I and the vertical RDM task separately (2 × 2 factorial design) were used to inspect the origin of significant effects that were modulated by task relevance.
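
Perceptual sensitivity (d’) follows the standard signal-detection definition, z(hit rate) − z(false-alarm rate) (Green and Swets, 1966). A minimal sketch; the log-linear (+0.5) correction for extreme rates is our assumption, as the paper does not state how hit or false-alarm rates of exactly 0 or 1 were handled:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate). A log-linear (+0.5)
    correction keeps both rates strictly between 0 and 1 (our assumption)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

Equal hit and false-alarm rates give d’ = 0 (no sensitivity); a high hit rate paired with a low false-alarm rate gives a large positive d’.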

For content discrimination task II, the location discrimination task, and the horizontal RDM task, we performed an rm-ANOVA with task and congruency as factors (3 × 2 factorial design). For the volume oddball task, we performed an rm-ANOVA with congruency and run number as factors (2 × 2 factorial design) on RTs, d’ scores, hit rates, and false alarm rates. We also applied paired-sample t-tests comparing the difference in these variables between incongruent and congruent trials for all tasks, within each experimental session (vertical RDM, content discrimination task I) and run (volume oddball task).

To test for interference of the individual auditory features (sound content and sound location) on performance in the vertical RDM task and the volume oddball task, we performed rm-ANOVAs on RTs with sound location and sound content as factors (2 × 2 factorial design). Additionally, for the horizontal RDM task, we tested for audiovisual conflict effects (i.e., conflict between sound content/location and dot direction) on RT and ER with paired-sample t-tests comparing incongruent and congruent trials.

In the case of null findings, we performed a Bayesian analysis (rm-ANOVA or paired-sample t-test) with identical parameters and settings on the same data to test whether there was actual support for the null hypothesis (JASP Team, 2018).

Analysis of EEG data

EEG data were analyzed using custom-made software written in MATLAB, with support from the toolboxes EEGLAB (Delorme and Makeig, 2004) and ADAM (Fahrenfort et al., 2018).

Acquisition and preprocessing

EEG data were recorded with a 64-channel BioSemi apparatus (BioSemi B.V., Amsterdam, The Netherlands) at 512 Hz. Vertical eye movements were recorded with electrodes located above and below the left eye, and horizontal eye movements with electrodes located at the outer canthi of the left and right eyes. All EEG traces were re-referenced to the average of two electrodes located on the left and right earlobes (mastoidal reference for one participant in experiment 1). The data were band-pass filtered offline with cutoff frequencies of 0.01–50 Hz. Next, epochs were created by taking data from −1 s to 2 s around the onset of stimulus presentation. We then rejected epochs containing blink artifacts and high-voltage artifacts. Blinks were defined as VEOG data exceeding a threshold of ±100 µV in a time window of 0–800 ms post-stimulus. This procedure resulted in the removal of 5.03% (SD = 9.71%) of epochs in experiment 1 and 6.10% (SD = 7.54%) of epochs in experiment 2. Subsequently, high-voltage artifacts were defined as events where the voltage exceeded a threshold of ±300 µV in a time window of 0–800 ms post-stimulus on any EEG channel. With this second round of artifact rejection, 3.65% (SD = 7.96%) of all epochs were removed in experiment 1 and 4.16% (SD = 8.86%) in experiment 2. In total, 8.68% (SD = 12.09%) of all trials were removed in experiment 1 and 10.26% (SD = 13.92%) in experiment 2. Note that this procedure ensures the absence of blinks and high-amplitude artifacts within the predefined ROI time window of 100–700 ms. The data of one participant in experiment 2 contained many artifacts: across all five tasks performed in experiment 2, 64.46% (SD = 9.88%) of all epochs were rejected for this participant. This was more than three standard deviations above the average number of rejected trials across participants and files and left too few trials for the decoding analysis. Therefore, this participant was removed from all EEG analyses.
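
The two-stage threshold rejection described above can be sketched as a single epoch-masking step. This is an illustration, not the authors' MATLAB pipeline; the array shapes and the data-in-volts convention are our assumptions:

```python
import numpy as np

def clean_epoch_mask(epochs, veog, times,
                     veog_thresh=100e-6, eeg_thresh=300e-6,
                     window=(0.0, 0.8)):
    """Return a boolean mask of epochs to keep.
    epochs: (n_epochs, n_channels, n_times) EEG in volts
    veog:   (n_epochs, n_times) vertical EOG in volts
    times:  (n_times,) in seconds relative to stimulus onset.
    An epoch is rejected if VEOG exceeds +/-100 uV (blink) or any EEG
    channel exceeds +/-300 uV within the 0-800 ms post-stimulus window."""
    in_win = (times >= window[0]) & (times <= window[1])
    blink = np.abs(veog[:, in_win]).max(axis=1) > veog_thresh
    high_voltage = np.abs(epochs[:, :, in_win]).max(axis=(1, 2)) > eeg_thresh
    return ~(blink | high_voltage)
```

Because the rejection window (0–800 ms) covers the analysis ROI (100–700 ms), surviving epochs are guaranteed artifact-free within the ROI.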

Time-frequency-domain multivariate pattern analysis (decoding)

We applied a multivariate backward decoding model to EEG data that were transformed to the time-frequency domain. We used multivariate analyses both because of their higher sensitivity compared with univariate analyses and to inspect whether, and to what extent, different stimulus features (i.e., location and content) were processed in the different tasks, without having to preselect spatial or time-frequency ROIs. The ADAM toolbox was used for time-frequency decomposition and decoding (Fahrenfort et al., 2018). Single-trial power spectra were computed by convolving the EEG data with a complex wavelet (wavelet size of 0.5 s) after the application of a Hann taper (epochs: −100 ms to 1000 ms, 2–30 Hz in linear steps of 2 Hz). Raw time-frequency data contained both induced and evoked power. Trials were classified according to current trial stimulus features (i.e., location and content), resulting in four trial types. As decoding algorithms are computationally time-consuming, epochs were resampled to 64 Hz. Then, we applied a decoding algorithm to the data according to a 20-fold cross-validation scheme, using either stimulus location, stimulus content, or congruency as stimulus class. Specifically, a linear discriminant analysis (LDA) was trained to discriminate between stimulus classes (e.g., left vs. right speaker location). Classification accuracy was computed as the area under the curve (AUC), a measure derived from signal detection theory (Green and Swets, 1966).
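
At each time-frequency point, the decoding step reduces to cross-validated LDA classification scored with AUC. A minimal sketch using scikit-learn (the ADAM toolbox implements this internally; the function name and the use of scikit-learn are our assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def decode_timepoint(X, y, n_folds=20, seed=0):
    """Cross-validated LDA decoding at a single time-frequency point.
    X: (n_trials, n_channels) power values; y: binary stimulus class
    (e.g., left vs. right speaker). Returns the AUC averaged over folds."""
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    aucs = []
    for train, test in cv.split(X, y):
        clf = LinearDiscriminantAnalysis().fit(X[train], y[train])
        scores = clf.decision_function(X[test])  # signed distance to boundary
        aucs.append(roc_auc_score(y[test], scores))
    return float(np.mean(aucs))
```

Repeating this over all timepoints and frequencies yields the time-frequency decoding maps analyzed below; an AUC of 0.5 corresponds to chance.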

The multivariate classifiers were trained on different subsets of trials, depending on the behavioral task. For the auditory tasks (content discrimination I and II, location discrimination, and volume oddball detection), only correct trials were included in the analysis, as errors tend to elicit a similar, albeit not identical, neural response to cognitive conflict and are more likely on incongruent trials (Cohen and van Gaal, 2014). For the volume oddball detection task, we additionally excluded all oddball trials, thus testing only correct rejections, in order to prevent conflict arising between responses made exclusively with the right hand and sound content and location. For the two visual tasks (horizontal and vertical RDM), we trained the classifier on all trials.

Topographical maps of ROI decoding

Topographical maps were created in order to investigate the spatial sources of activity related to the processing of the auditory features (content, location, and congruency). We first extracted classifier weights for each task and feature from the predefined ROI (100–700 ms, 2–8 Hz), allowing us to directly compare the spatial distributions between features and tasks. However, raw classifier weights are not interpretable as neural sources of activity and therefore have to be transformed (Haufe et al., 2014). Thus, classifier weights were converted to activation patterns by multiplying them with the covariance of the EEG data. The topographical activity maps of tasks and features with low decoding performance should be interpreted with caution, as activation patterns reconstructed from classifier weights may be unreliable when decoding performance is low (Haufe et al., 2014).

Time-domain multivariate pattern analysis (decoding)

We applied a time-domain decoding analysis to EEG data to inspect the possible effect of task relevance of a stimulus feature on sensory processing. For this analysis, only EEG data from the tasks of experiment 2 were used, as task parameters were comparable within this experiment (e.g., a visual stimulus was always present; there was no extensive training). We trained linear classifiers (LDA) to discriminate sound content (‘left’ vs. ‘right’) or sound location (left speaker vs. right speaker). First, epochs (from −100 ms to 1000 ms around stimulus presentation) were resampled to 64 Hz, as in the time-frequency decoding analyses. Then, the models were trained and tested according to a 20-fold cross-validation scheme. The resulting AUC scores were tested per timepoint with one-sided t-tests (mean AUC > 0.5) across participants against chance level (50%). These t-tests were corrected for multiple comparisons over time using cluster-based permutation tests (p<0.05, 1000 iterations). For each decoded stimulus feature, we then compared the decoding accuracies of the behavioral task in which the feature was task-relevant to those of all other tasks in a pairwise fashion (e.g., location decoding in the location discrimination task vs. the horizontal RDM task), with cluster-corrected two-sided t-tests on the accuracy difference against 0.

Statistical analysis of EEG data

The AUC scores we obtained via multivariate analyses of our EEG data were tested per timepoint and frequency with one-sided t-tests (mean AUC > 0.5) across participants against chance level (50%). These t-tests were corrected for multiple comparisons over time and frequency using cluster-based permutation tests (p<0.05, 1000 iterations). This procedure yields time-frequency clusters of significant above-chance classifier accuracy, indicative of information processing. Note that this procedure yields results that should be interpreted as fixed effects (Allefeld et al., 2016), but it is nonetheless standard in the field.
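
The cluster-based permutation logic can be sketched in one dimension (time): threshold the subject-level t-values, sum t-values within contiguous supra-threshold clusters, and compare each observed cluster mass against a null distribution of maximum cluster masses obtained by randomly sign-flipping each subject's deviation from chance. This is a simplified sketch of Maris–Oostenveld-style testing, not the authors' exact implementation:

```python
import numpy as np
from scipy.stats import t as t_dist

def cluster_perm_test(acc, chance=0.5, alpha=0.05, n_perm=1000, seed=0):
    """One-sided cluster-based permutation test across participants.
    acc: (n_subjects, n_timepoints) decoding accuracies.
    Returns a list of (cluster_mass, p_value) for each observed cluster."""
    rng = np.random.default_rng(seed)
    d = acc - chance  # per-subject deviation from chance
    crit = t_dist.ppf(1 - alpha, df=len(d) - 1)  # one-sided t threshold

    def t_vals(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(len(x)))

    def cluster_masses(tv):
        masses, mass = [], 0.0
        for v in tv:  # sum t-values over contiguous supra-threshold runs
            if v > crit:
                mass += v
            elif mass:
                masses.append(mass)
                mass = 0.0
        if mass:
            masses.append(mass)
        return masses

    observed = cluster_masses(t_vals(d))
    null = np.zeros(n_perm)
    for i in range(n_perm):  # sign-flip subjects to build the null
        flips = rng.choice([-1.0, 1.0], size=len(d))[:, None]
        m = cluster_masses(t_vals(d * flips))
        null[i] = max(m) if m else 0.0
    return [(mass, (null >= mass).mean()) for mass in observed]
```

Comparing each observed cluster to the distribution of permutation maxima is what controls the family-wise error rate across timepoints.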

In addition to the cluster analysis, we performed hypothesis-driven analyses on classifier accuracies extracted from a predefined time-frequency ROI. All these analyses were performed in JASP (JASP Team, 2018). We applied an ANCOVA to these accuracies with task and stimulus feature as fixed effects. Next, one-sample t-tests (one-sided, mean AUC > 0.5) were performed for every task/feature combination to determine whether decoding accuracy of a specific feature within our preselected ROI was above chance during the various behavioral tasks. Additional Bayesian one-sample t-tests (one-sided, mean AUC > 0.5, Cauchy scale = 0.71) were performed to inspect evidence in favor of the null hypothesis that decoding accuracy was not above chance.

We performed the same analysis on different ROIs. The results of those analyses can be found in Figure 5—figure supplement 1B–D.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Stijn Adriaan Nuiten, Email: stijnnuiten@gmail.com.

Simon van Gaal, Email: simonvangaal@gmail.com.

Michael J Frank, Brown University, United States.

Nicole C Swann, University of Oregon, United States.

Funding Information

This paper was supported by the following grant:

  • H2020 European Research Council ERC-2016-STG_715605 to Simon van Gaal.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Formal analysis, Investigation, Visualization, Writing - original draft.

Conceptualization, Writing - review and editing.

Data curation, Writing - review and editing.

Investigation.

Conceptualization, Formal analysis, Writing - review and editing.

Conceptualization, Writing - review and editing.

Conceptualization, Resources, Supervision, Funding acquisition, Writing - review and editing.

Ethics

Human subjects: Written informed consent was obtained from all participants after explanation of the experimental protocol. This study was approved by the local ethics committee of the University of Amsterdam (projects: 2015-BC-4687, 2017-BC-8257, 2019-BC-10711).

Additional files

Transparent reporting form

Data availability

The data and analysis scripts used in this article are available on Figshare: https://uvaauas.figshare.com/projects/Preserved_sensory_processing_but_hampered_conflict_detection_when_stimulus_input_is_task-irrelevant/115020.

The following datasets were generated:

Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Analyses scripts for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Raw behavioral dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. DANS. 14709420.v1

Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Decoded EEG (time-frequency) dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Decoded EEG (time-domain) dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Raw EEG dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

References

  1. Alilović J, Timmermans B, Reteig LC, van Gaal S, Slagter HA. No evidence that predictions and attention modulate the first feedforward sweep of cortical information processing. Cerebral Cortex. 2019;29:2261–2278. doi: 10.1093/cercor/bhz038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Allefeld C, Görgen K, Haynes JD. Valid population inference for information-based imaging: from the second-level t-test to prevalence inference. NeuroImage. 2016;141:378–392. doi: 10.1016/j.neuroimage.2016.07.040. [DOI] [PubMed] [Google Scholar]
  3. Anderson BA, Laurent PA, Yantis S. Value-driven attentional capture. PNAS. 2011;108:10367–10371. doi: 10.1073/pnas.1104047108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Appelbaum LG, Smith DV, Boehler CN, Chen WD, Woldorff MG. Rapid modulation of sensory processing induced by stimulus conflict. Journal of Cognitive Neuroscience. 2011;23:2620–2628. doi: 10.1162/jocn.2010.21575. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Atas A, Desender K, Gevers W, Cleeremans A. Dissociating perception from action during conscious and unconscious conflict adaptation. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2016;42:866–881. doi: 10.1037/xlm0000206. [DOI] [PubMed] [Google Scholar]
  6. Awh E, Belopolsky AV, Theeuwes J. Top-down versus bottom-up attentional control: a failed theoretical dichotomy. Trends in Cognitive Sciences. 2012;16:437–443. doi: 10.1016/j.tics.2012.06.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Broadbent DE. Perception and Communication. Oxford University Press; 1958. [Google Scholar]
  8. Buzzell GA, Roberts DM, Baldwin CL, McDonald CG. An electrophysiological correlate of conflict processing in an auditory spatial stroop task: the effect of individual differences in navigational style. International Journal of Psychophysiology. 2013;90:265–271. doi: 10.1016/j.ijpsycho.2013.08.008. [DOI] [PubMed] [Google Scholar]
  9. Canales-Johnson A, Beerendonk L, Blain S, Kitaoka S, Ezquerro-Nassar A, Nuiten S, Fahrenfort J, van Gaal S, Bekinschtein TA. Decreased alertness reconfigures cognitive control networks. The Journal of Neuroscience. 2020;40:7142–7154. doi: 10.1523/JNEUROSCI.0343-20.2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cavanagh JF, Frank MJ. Frontal theta as a mechanism for cognitive control. Trends in Cognitive Sciences. 2014;18:414–421. doi: 10.1016/j.tics.2014.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Chen Z. Object-based attention: a tutorial review. Attention, Perception, & Psychophysics. 2012;74:784–802. doi: 10.3758/s13414-012-0322-z. [DOI] [PubMed] [Google Scholar]
  12. Chen A, Tang D, Chen X. Training reveals the sources of stroop and flanker interference effects. PLOS ONE. 2013;8:e76580. doi: 10.1371/journal.pone.0076580. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Chen Z, Cave KR. When does visual attention select all features of a distractor? Journal of Experimental Psychology: Human Perception and Performance. 2006;32:1452–1464. doi: 10.1037/0096-1523.32.6.1452. [DOI] [PubMed] [Google Scholar]
  14. Cherry EC. Some experiments on the recognition of speech, with one and with two ears. The Journal of the Acoustical Society of America. 1953;25:975–979. doi: 10.1121/1.1907229. [DOI] [Google Scholar]
  15. Cohen MX. A neural microcircuit for cognitive conflict detection and signaling. Trends in Neurosciences. 2014;37:480–490. doi: 10.1016/j.tins.2014.06.004. [DOI] [PubMed] [Google Scholar]
  16. Cohen MX, Cavanagh JF. Single-trial regression elucidates the role of prefrontal theta oscillations in response conflict. Frontiers in Psychology. 2011;2:30. doi: 10.3389/fpsyg.2011.00030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Cohen MX, van Gaal S. Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors. NeuroImage. 2014;86:503–513. doi: 10.1016/j.neuroimage.2013.10.033. [DOI] [PubMed] [Google Scholar]
  18. Cosman JD, Vecera SP. Object-based attention overrides perceptual load to modulate visual distraction. Journal of Experimental Psychology: Human Perception and Performance. 2012;38:576–579. doi: 10.1037/a0027406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. D'Ostilio K, Garraux G. Brain mechanisms underlying automatic and unconscious control of motor action. Frontiers in Human Neuroscience. 2012a;6:265. doi: 10.3389/fnhum.2012.00265. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. D'Ostilio K, Garraux G. Dissociation between unconscious motor response facilitation and conflict in medial frontal Areas. European Journal of Neuroscience. 2012b;35:332–340. doi: 10.1111/j.1460-9568.2011.07941.x. [DOI] [PubMed] [Google Scholar]
  21. Dehaene S, Changeux JP, Naccache L, Sackur J, Sergent C. Conscious, Preconscious, and subliminal processing: a testable taxonomy. Trends in Cognitive Sciences. 2006;10:204–211. doi: 10.1016/j.tics.2006.03.007. [DOI] [PubMed] [Google Scholar]
  22. Dehaene S, Changeux JP. Experimental and theoretical approaches to conscious processing. Neuron. 2011;70:200–227. doi: 10.1016/j.neuron.2011.03.018. [DOI] [PubMed] [Google Scholar]
  23. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods. 2004;134:9–21. doi: 10.1016/j.jneumeth.2003.10.009. [DOI] [PubMed] [Google Scholar]
  24. Deutsch JA, Deutsch D. Attention: some theoretical considerations. Psychological Review. 1963;70:80–90. doi: 10.1037/h0039515. [DOI] [PubMed] [Google Scholar]
  25. Egner T, Hirsch J. Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information. Nature Neuroscience. 2005;8:1784–1790. doi: 10.1038/nn1594. [DOI] [PubMed] [Google Scholar]
  26. Fahrenfort JJ, van Leeuwen J, Olivers CN, Hogendoorn H. Perceptual integration without conscious access. PNAS. 2017;114:3744–3749. doi: 10.1073/pnas.1617268114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Fahrenfort JJ, van Driel J, van Gaal S, Olivers CNL. From ERPs to MVPA using the Amsterdam decoding and modeling toolbox (ADAM) Frontiers in Neuroscience. 2018;12:368. doi: 10.3389/fnins.2018.00368. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Finkbeiner M, Palermo R. The role of spatial attention in nonconscious processing A comparison of face and nonface stimuli. Psychological Science. 2017;20:42–51. doi: 10.1111/j.1467-9280.2008.02256.x. [DOI] [PubMed] [Google Scholar]
  29. Frühholz S, Godde B, Finke M, Herrmann M. Spatio-temporal brain dynamics in a combined stimulus-stimulus and stimulus-response conflict task. NeuroImage. 2011;54:622–634. doi: 10.1016/j.neuroimage.2010.07.071. [DOI] [PubMed] [Google Scholar]
  30. Fuentemilla L, Marco-Pallarés J, Grau C. Modulation of spectral power and of phase resetting of EEG contributes differentially to the generation of auditory event-related potentials. NeuroImage. 2006;30:909–916. doi: 10.1016/j.neuroimage.2005.10.036. [DOI] [PubMed] [Google Scholar]
  31. Giraud AL, Poeppel D. Cortical oscillations and speech processing: emerging computational principles and operations. Nature Neuroscience. 2012;15:511–517. doi: 10.1038/nn.3063. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Goldman-Rakic PS. Architecture of the prefrontal cortex and the central executive. Annals of the New York Academy of Sciences. 1995;769:71–84. doi: 10.1111/j.1749-6632.1995.tb38132.x. [DOI] [PubMed] [Google Scholar]
  33. Goldman-Rakic PS. The prefrontal landscape: implications of functional architecture for understanding human mentation and the central executive. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences. 1996;351:1445–1453. doi: 10.1098/rstb.1996.0129. [DOI] [PubMed] [Google Scholar]
  34. Green DM, Swets JA. Signal Detection Theory and Psychophysics. John Wiley; 1966. [Google Scholar]
  35. Grinband J, Savitskaya J, Wager TD, Teichert T, Ferrera VP, Hirsch J. Conflict, error likelihood, and RT: response to Brown & Yeung et al. NeuroImage. 2011a;57:320–322. doi: 10.1016/j.neuroimage.2011.04.027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Grinband J, Savitskaya J, Wager TD, Teichert T, Ferrera VP, Hirsch J. The dorsal medial frontal cortex is sensitive to time on task, not response conflict or error likelihood. NeuroImage. 2011b;57:303–311. doi: 10.1016/j.neuroimage.2010.12.027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Grootswagers T, Wardle SG, Carlson TA. Decoding dynamic brain patterns from evoked responses: a tutorial on multivariate pattern analysis applied to time series neuroimaging data. Journal of Cognitive Neuroscience. 2017;29:677–697. doi: 10.1162/jocn_a_01068. [DOI] [PubMed] [Google Scholar]
  38. Haufe S, Meinecke F, Görgen K, Dähne S, Haynes JD, Blankertz B, Bießmann F. On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage. 2014;87:96–110. doi: 10.1016/j.neuroimage.2013.10.067. [DOI] [PubMed] [Google Scholar]
  39. Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science. 2001;293:2425–2430. doi: 10.1126/science.1063736. [DOI] [PubMed] [Google Scholar]
  40. Hebart MN, Baker CI. Deconstructing multivariate decoding for the study of brain function. NeuroImage. 2018;180:4–18. doi: 10.1016/j.neuroimage.2017.08.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Hommel B. The Simon effect as tool and heuristic. Acta Psychologica. 2011;136:189–202. doi: 10.1016/j.actpsy.2010.04.011. [DOI] [PubMed] [Google Scholar]
  42. Huber-Huber C, Ansorge U. Unconscious conflict adaptation without feature-repetitions and response time carry-over. Journal of Experimental Psychology: Human Perception and Performance. 2018;44:169–175. doi: 10.1037/xhp0000450. [DOI] [PubMed] [Google Scholar]
  43. Itthipuripat S, Deering S, Serences JT. When conflict cannot be avoided: relative contributions of early selection and frontal executive control in mitigating Stroop conflict. Cerebral Cortex. 2019;29:5037–5048. doi: 10.1093/cercor/bhz042. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Jääskeläinen IP, Halme H-L, Agam Y, Glerean E, Lahnakoski JM, Sams M, Tapani K, Ahveninen J, Manoach DS. Neural mechanisms supporting evaluation of others’ errors in real-life like conditions. Scientific Reports. 2016;6:1–10. doi: 10.1038/srep18714. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Janssens C, De Loof E, Boehler CN, Pourtois G, Verguts T. Occipital alpha power reveals fast attentional inhibition of incongruent distractors. Psychophysiology. 2018;55:e13011. doi: 10.1111/psyp.13011. [DOI] [PubMed] [Google Scholar]
  46. JASP Team. JASP (Version 0.8.6.0) [Computer software]. 2018. https://jasp-stats.org/
  47. Jehee JF, Brady DK, Tong F. Attention improves encoding of task-relevant features in the human visual cortex. Journal of Neuroscience. 2011;31:8210–8219. doi: 10.1523/JNEUROSCI.6153-09.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Jiang J, Zhang Q, van Gaal S. Conflict awareness dissociates theta-band neural dynamics of the medial frontal and lateral frontal cortex during trial-by-trial cognitive control. NeuroImage. 2015a;116:102–111. doi: 10.1016/j.neuroimage.2015.04.062. [DOI] [PubMed] [Google Scholar]
  49. Jiang J, Zhang Q, van Gaal S. EEG neural oscillatory dynamics reveal semantic and response conflict at difference levels of conflict awareness. Scientific Reports. 2015b;5:1–11. doi: 10.1038/srep12008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Jiang J, Correa CM, Geerts J, van Gaal S. The relationship between conflict awareness and behavioral and oscillatory signatures of immediate and delayed cognitive control. NeuroImage. 2018;177:11–19. doi: 10.1016/j.neuroimage.2018.05.007. [DOI] [PubMed] [Google Scholar]
  51. Kahneman D, Treisman A, Gibbs BJ. The reviewing of object files: object-specific integration of information. Cognitive Psychology. 1992;24:175–219. doi: 10.1016/0010-0285(92)90007-O. [DOI] [PubMed] [Google Scholar]
  52. Kleiner M, Brainard DH, Pelli DG. What’s new in Psychtoolbox-3? Perception. 2007;36:1–16. doi: 10.1068/v070821. [DOI] [Google Scholar]
  53. Koch C, Tsuchiya N. Attention and consciousness: two distinct brain processes. Trends in Cognitive Sciences. 2007;11:16–22. doi: 10.1016/j.tics.2006.10.012. [DOI] [PubMed] [Google Scholar]
  54. Koelewijn T, Bronkhorst A, Theeuwes J. Attention and the multiple stages of multisensory integration: a review of audiovisual studies. Acta Psychologica. 2010;134:372–384. doi: 10.1016/j.actpsy.2010.03.010. [DOI] [PubMed] [Google Scholar]
  55. Kok P, Rahnev D, Jehee JF, Lau HC, de Lange FP. Attention reverses the effect of prediction in silencing sensory signals. Cerebral Cortex. 2012;22:2197–2206. doi: 10.1093/cercor/bhr310. [DOI] [PubMed] [Google Scholar]
  56. Kornblum S. The way irrelevant dimensions are processed depends on what they overlap with: the case of stroop- and Simon-like stimuli. Psychological Research. 1994;56:130–135. doi: 10.1007/BF00419699. [DOI] [PubMed] [Google Scholar]
  57. Kouider S, Barbot A, Madsen KH, Lehericy S, Summerfield C. Task relevance differentially shapes ventral visual stream sensitivity to visible and invisible faces. Neuroscience of Consciousness. 2016;2016:niw021. doi: 10.1093/nc/niw021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Lachter J, Forster KI, Ruthruff E. Forty-five years after Broadbent (1958): still no identification without attention. Psychological Review. 2004;111:880–913. doi: 10.1037/0033-295X.111.4.880. [DOI] [PubMed] [Google Scholar]
  59. Lamme VA. Why visual attention and awareness are different. Trends in Cognitive Sciences. 2003;7:12–18. doi: 10.1016/S1364-6613(02)00013-X. [DOI] [PubMed] [Google Scholar]
  60. Lamme VAF, Roelfsema PR. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences. 2000;23:571–579. doi: 10.1016/S0166-2236(00)01657-X. [DOI] [PubMed] [Google Scholar]
  61. Lavie N, Ro T, Russell C. The role of perceptual load in processing distractor faces. Psychological Science. 2003;14:510–515. doi: 10.1111/1467-9280.03453. [DOI] [PubMed] [Google Scholar]
  62. Lavie N, Hirst A, de Fockert JW, Viding E. Load theory of selective attention and cognitive control. Journal of Experimental Psychology: General. 2004;133:339–354. doi: 10.1037/0096-3445.133.3.339. [DOI] [PubMed] [Google Scholar]
  63. Lavie N, de Fockert J. Frontal control of attentional capture in visual search. Visual Cognition. 2006;14:863–876. doi: 10.1080/13506280500195953. [DOI] [Google Scholar]
  64. Lewald J, Getzmann S. When and where of auditory spatial processing in cortex: a novel approach using electrotomography. PLOS ONE. 2011;6:e25146. doi: 10.1371/journal.pone.0025146. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Li FF, VanRullen R, Koch C, Perona P. Rapid natural scene categorization in the near absence of attention. PNAS. 2002;99:9596–9601. doi: 10.1073/pnas.092277599. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. MacLeod CM, Dunbar K. Training and Stroop-like interference: evidence for a continuum of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition. 1988;14:126–135. doi: 10.1037/0278-7393.14.1.126. [DOI] [PubMed] [Google Scholar]
  67. Mao W, Wang Y. The active inhibition for the processing of visual irrelevant conflict information. International Journal of Psychophysiology. 2008;67:47–53. doi: 10.1016/j.ijpsycho.2007.10.003. [DOI] [PubMed] [Google Scholar]
  68. McKay CC, van den Berg B, Woldorff MG. Neural cascade of conflict processing: not just time-on-task. Neuropsychologia. 2017;96:184–191. doi: 10.1016/j.neuropsychologia.2016.12.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Molloy K, Griffiths TD, Chait M, Lavie N. Inattentional deafness: visual load leads to Time-Specific suppression of auditory evoked responses. Journal of Neuroscience. 2015;35:16046–16054. doi: 10.1523/JNEUROSCI.2931-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Moray N. Attention in dichotic listening: affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology. 1959;11:56–60. doi: 10.1080/17470215908416289. [DOI] [Google Scholar]
  71. Nigbur R, Cohen MX, Ridderinkhof KR, Stürmer B. Theta dynamics reveal domain-specific control over stimulus and response conflict. Journal of Cognitive Neuroscience. 2012;24:1264–1274. doi: 10.1162/jocn_a_00128. [DOI] [PubMed] [Google Scholar]
  72. O'Craven KM, Downing PE, Kanwisher N. fMRI evidence for objects as the units of attentional selection. Nature. 1999;401:584–587. doi: 10.1038/44134. [DOI] [PubMed] [Google Scholar]
  73. Olguin A, Bekinschtein TA, Bozic M. Neural encoding of attended continuous speech under different types of interference. Journal of Cognitive Neuroscience. 2018;30:1606–1619. doi: 10.1162/jocn_a_01303. [DOI] [PubMed] [Google Scholar]
  74. Padrão G, Rodriguez-Herreros B, Pérez Zapata L, Rodriguez-Fornells A. Exogenous capture of medial-frontal oscillatory mechanisms by unattended conflicting information. Neuropsychologia. 2015;75:458–468. doi: 10.1016/j.neuropsychologia.2015.07.004. [DOI] [PubMed] [Google Scholar]
  75. Peelen MV, Fei-Fei L, Kastner S. Neural mechanisms of rapid natural scene categorization in human visual cortex. Nature. 2009;460:94–97. doi: 10.1038/nature08103. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Peirce J, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, Kastman E, Lindeløv JK. PsychoPy2: experiments in behavior made easy. Behavior Research Methods. 2019;51:195–203. doi: 10.3758/s13428-018-01193-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Polk TA, Drake RM, Jonides JJ, Smith MR, Smith EE. Attention enhances the neural processing of relevant features and suppresses the processing of irrelevant features in humans: a functional magnetic resonance imaging study of the stroop task. Journal of Neuroscience. 2008;28:13786–13792. doi: 10.1523/JNEUROSCI.1026-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Rahnev DA, Huang E, Lau H. Subliminal stimuli in the near absence of attention influence top-down cognitive control. Attention, Perception, & Psychophysics. 2012;74:521–532. doi: 10.3758/s13414-011-0246-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Ridderinkhof KR, Ullsperger M, Crone EA, Nieuwenhuis S. The role of the medial frontal cortex in cognitive control. Science. 2004;306:443–447. doi: 10.1126/science.1100301. [DOI] [PubMed] [Google Scholar]
  80. Roelfsema PR. Cortical algorithms for perceptual grouping. Annual Review of Neuroscience. 2006;29:203–227. doi: 10.1146/annurev.neuro.29.051605.112939. [DOI] [PubMed] [Google Scholar]
  81. Roelfsema PR, Houtkamp R. Incremental grouping of image elements in vision. Attention, Perception, & Psychophysics. 2011;73:2542–2572. doi: 10.3758/s13414-011-0200-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Röer JP, Körner U, Buchner A, Bell R. Semantic priming by irrelevant speech. Psychonomic Bulletin & Review. 2017;24:1205–1210. doi: 10.3758/s13423-016-1186-3. [DOI] [PubMed] [Google Scholar]
  83. Rousselet GA, Thorpe SJ, Fabre-Thorpe M. How parallel is visual processing in the ventral pathway? Trends in Cognitive Sciences. 2004;8:363–370. doi: 10.1016/j.tics.2004.06.003. [DOI] [PubMed] [Google Scholar]
  84. Ruggeri P, Meziane HB, Koenig T, Brandner C. A fine-grained time course investigation of brain dynamics during conflict monitoring. Scientific Reports. 2019;9:3667. doi: 10.1038/s41598-019-40277-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Salminen NH, Takanen M, Santala O, Lamminsalo J, Altoè A, Pulkki V. Integrated processing of spatial cues in human auditory cortex. Hearing Research. 2015;327:143–152. doi: 10.1016/j.heares.2015.06.006. [DOI] [PubMed] [Google Scholar]
  86. Sand A, Wiens S. Processing of unattended, simple negative pictures resists perceptual load. NeuroReport. 2011;22:348–352. doi: 10.1097/WNR.0b013e3283463cb1. [DOI] [PubMed] [Google Scholar]
  87. Schnuerch R, Kreitz C, Gibbons H, Memmert D. Not quite so blind: semantic processing despite inattentional blindness. Journal of Experimental Psychology: Human Perception and Performance. 2016;42:459–463. doi: 10.1037/xhp0000205. [DOI] [PubMed] [Google Scholar]
  88. Schoenfeld MA, Hopf JM, Merkel C, Heinze HJ, Hillyard SA. Object-based attention involves the sequential activation of feature-specific cortical modules. Nature Neuroscience. 2014;17:619–624. doi: 10.1038/nn.3656. [DOI] [PubMed] [Google Scholar]
  89. Sigman M, Dehaene S. Dynamics of the central bottleneck: dual-task and task uncertainty. PLOS Biology. 2006;4:e220. doi: 10.1371/journal.pbio.0040220. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Simons DJ, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception. 1999;28:1059–1074. doi: 10.1068/p281059. [DOI] [PubMed] [Google Scholar]
  91. Soutschek A, Stelzel C, Paschke L, Walter H, Schubert T. Dissociable effects of motivation and expectancy on conflict processing: an fMRI study. Journal of Cognitive Neuroscience. 2015;27:409–423. doi: 10.1162/jocn_a_00712. [DOI] [PubMed] [Google Scholar]
  92. Stefanics G, Csukly G, Komlósi S, Czobor P, Czigler I. Processing of unattended facial emotions: a visual mismatch negativity study. NeuroImage. 2012;59:3042–3049. doi: 10.1016/j.neuroimage.2011.10.041. [DOI] [PubMed] [Google Scholar]
  93. Stroop JR. Studies of interference in serial verbal reactions. Journal of Experimental Psychology. 1935;18:643–662. doi: 10.1037/h0054651. [DOI] [Google Scholar]
  94. Theeuwes J. Top-down and bottom-up control of visual selection. Acta Psychologica. 2010;135:77–99. doi: 10.1016/j.actpsy.2010.02.006. [DOI] [PubMed] [Google Scholar]
  95. Treisman AM. Strategies and models of selective attention. Psychological Review. 1969;76:282–299. doi: 10.1037/h0027242. [DOI] [PubMed] [Google Scholar]
  96. Treisman AM, Gelade G. A feature-integration theory of attention. Cognitive Psychology. 1980;12:97–136. doi: 10.1016/0010-0285(80)90005-5. [DOI] [PubMed] [Google Scholar]
  97. Turatto M, Mazza V, Umiltà C. Crossmodal object-based attention: auditory objects affect visual processing. Cognition. 2005;96:B55–B64. doi: 10.1016/j.cognition.2004.12.001. [DOI] [PubMed] [Google Scholar]
  98. Tusche A, Kahnt T, Wisniewski D, Haynes JD. Automatic processing of political preferences in the human brain. NeuroImage. 2013;72:174–182. doi: 10.1016/j.neuroimage.2013.01.020. [DOI] [PubMed] [Google Scholar]
  99. van Gaal S, Ridderinkhof KR, Fahrenfort JJ, Scholte HS, Lamme VA. Frontal cortex mediates unconsciously triggered inhibitory control. Journal of Neuroscience. 2008;28:8053–8062. doi: 10.1523/JNEUROSCI.1278-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. van Gaal S, Ridderinkhof KR, van den Wildenberg WP, Lamme VA. Dissociating consciousness from inhibitory control: evidence for unconsciously triggered response inhibition in the stop-signal task. Journal of Experimental Psychology: Human Perception and Performance. 2009;35:1129–1139. doi: 10.1037/a0013551. [DOI] [PubMed] [Google Scholar]
  101. van Gaal S, Lamme VAF, Fahrenfort JJ, Ridderinkhof KR. Dissociable brain mechanisms underlying the conscious and unconscious control of behavior. Journal of Cognitive Neuroscience. 2011;23:91–105. doi: 10.1162/jocn.2010.21431. [DOI] [PubMed] [Google Scholar]
  102. van Gaal S, de Lange FP, Cohen MX. The role of consciousness in cognitive control and decision making. Frontiers in Human Neuroscience. 2012;6:121. doi: 10.3389/fnhum.2012.00121. [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. van Gaal S, Naccache L, Meuwese JDI, van Loon AM, Leighton AH, Cohen L, Dehaene S. Can the meaning of multiple words be integrated unconsciously? Philosophical Transactions of the Royal Society B: Biological Sciences. 2014;369:20130212. doi: 10.1098/rstb.2013.0212. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. van Schie HT, Mars RB, Coles MG, Bekkering H. Modulation of activity in medial frontal and motor cortices during error observation. Nature Neuroscience. 2004;7:549–554. doi: 10.1038/nn1239. [DOI] [PubMed] [Google Scholar]
  105. van Veen V, Cohen JD, Botvinick MM, Stenger VA, Carter CS. Anterior cingulate cortex, conflict monitoring, and levels of processing. NeuroImage. 2001;14:1302–1308. doi: 10.1006/nimg.2001.0923. [DOI] [PubMed] [Google Scholar]
  106. VanRullen R. The power of the feed-forward sweep. Advances in Cognitive Psychology. 2007;3:167–176. doi: 10.2478/v10053-008-0022-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Wang K, Li Q, Zheng Y, Wang H, Liu X. Temporal and spectral profiles of stimulus-stimulus and stimulus-response conflict processing. NeuroImage. 2014;89:280–288. doi: 10.1016/j.neuroimage.2013.11.045. [DOI] [PubMed] [Google Scholar]
  108. Wegener D, Galashan FO, Aurich MK, Kreiter AK. Attentional spreading to task-irrelevant object features: experimental support and a 3-step model of attention for object-based selection and feature-based processing modulation. Frontiers in Human Neuroscience. 2014;8:414. doi: 10.3389/fnhum.2014.00414. [DOI] [PMC free article] [PubMed] [Google Scholar]
  109. Weisz N, Müller N, Jatzev S, Bertrand O. Oscillatory alpha modulations in right auditory regions reflect the validity of acoustic cues in an auditory spatial attention task. Cerebral Cortex. 2014;24:2579–2590. doi: 10.1093/cercor/bht113. [DOI] [PubMed] [Google Scholar]
  110. Wickens CD. Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science. 2002;3:159–177. doi: 10.1080/14639220210123806. [DOI] [Google Scholar]
  111. Woldorff MG, Gallen CC, Hampson SA, Hillyard SA, Pantev C, Sobel D, Bloom FE. Modulation of early sensory processing in human auditory cortex during auditory selective attention. PNAS. 1993;90:8722–8726. doi: 10.1073/pnas.90.18.8722. [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. Wolfe JM, Horowitz TS. What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience. 2004;5:495–501. doi: 10.1038/nrn1411. [DOI] [PubMed] [Google Scholar]
  113. Xu Y. The neural fate of task-irrelevant features in object-based processing. Journal of Neuroscience. 2010;30:14020–14028. doi: 10.1523/JNEUROSCI.3011-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  114. Yeung N, Cohen JD, Botvinick MM. Errors of interpretation and modeling: a reply to grinband et al. NeuroImage. 2011;57:316–319. doi: 10.1016/j.neuroimage.2011.04.029. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Zäske R, Perlich MC, Schweinberger SR. To hear or not to hear: voice processing under visual load. Attention, Perception, & Psychophysics. 2016;78:1488–1495. doi: 10.3758/s13414-016-1119-2. [DOI] [PubMed] [Google Scholar]
  116. Zhao J, Liang WK, Juan CH, Wang L, Wang S, Zhu Z. Dissociated stimulus and response conflict effect in the stroop task: evidence from evoked brain potentials and brain oscillations. Biological Psychology. 2015;104:130–138. doi: 10.1016/j.biopsycho.2014.12.001. [DOI] [PubMed] [Google Scholar]
  117. Zimmer U, Itthipanyanan S, Grent-’t-Jong T, Woldorff MG. The electrophysiological time course of the interaction of stimulus conflict and the multisensory spread of attention. European Journal of Neuroscience. 2010;31:1744–1754. doi: 10.1111/j.1460-9568.2010.07229.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  118. Zylberberg A, Fernández Slezak D, Roelfsema PR, Dehaene S, Sigman M. The brain's router: a cortical network model of serial processing in the primate brain. PLOS Computational Biology. 2010;6:e1000765. doi: 10.1371/journal.pcbi.1000765. [DOI] [PMC free article] [PubMed] [Google Scholar]
  119. Zylberberg A, Dehaene S, Roelfsema PR, Sigman M. The human Turing machine: a neural framework for mental programs. Trends in Cognitive Sciences. 2011;15:293–300. doi: 10.1016/j.tics.2011.05.007. [DOI] [PubMed] [Google Scholar]

Decision letter

Editor: Nicole C Swann1
Reviewed by: Ulrike M Krämer, Jason Samaha

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

Nuiten and colleagues have conducted a well-designed series of electroencephalographic experiments to investigate if conflict detection depends on conscious awareness. They used a combination of behavioral findings and sophisticated modeling of EEG data to conclude that conflict was only present when there was at least some degree of task relevance and that there was a dependence on object-based attention. In contrast, sensory processing was preserved regardless of attentional status.

Decision letter after peer review:

Thank you for submitting your article "Intact sensory processing but hampered conflict detection when stimulus input is task-irrelevant" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Michael Frank as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Ulrike M. Krämer, PhD (Reviewer #1); Jason Samaha (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

Nuiten and colleagues have conducted a well-designed series of EEG experiments to investigate if conflict detection depends on conscious awareness. They used a combination of behavioral findings and multivariate pattern analysis (MVPA) of EEG data to conclude that conflict was only present when there was at least some degree of task relevance and that there was a dependence on object-based attention. In contrast, sensory processing was intact regardless of attentional status.

Overall, the reviewers were enthusiastic about the manuscript. However, there were some concerns and questions which we discuss below.

Essential revisions:

1) The reviewers felt that describing the sensory processing as "intact" regardless of attention was too vague. While there does seem to be some degree of classification possible for sensory processing regardless of task relevance – it appears to be graded. For instance, there are clear differences between the decodability of auditory stimulus properties when the auditory stimulus is task-relevant versus not. Some of this can be attributed to decision/motor processes, as the authors point out, but some changes might reflect changes in sensory processing – for example, the earlier aspects of the decoding differences. In this case, the conclusion would be that sensory processes are 'relatively' intact, but potentially modulated by task-relevance. We recommend the authors show sensory ERPs when the auditory stimulus is task-relevant or not to help assess whether the task manipulation keeps sensory responses completely intact or the authors otherwise justify how their results support the idea that sensory processing is "intact".

2) The reviewers had some questions about what might be driving the congruency decoding results. Given that the congruent and non-congruent conditions produce large differences in behavior in almost all tasks (except the RDM tasks, where no conflict decoding was found), is it possible that the decoder is just picking up on the task difficulty, rather than conflict detection per se? For example, perhaps the decoder results reflect differences in decision making as a result of conflict, rather than the neural signature of the conflict detection process. Is there anything about the temporal/spatial dynamics of theta amplitude that would indicate a specific conflict detection process is underlying decodability?

3) The reviewers were unclear about the interpretation of the decoding results in terms of object-based attention. Specifically, on page 13 the authors write: "Second, the time-frequency (T-F) windows of significant sound content and location decoding in the volume oddball task was considerably more extended in both time and frequency space than observed in the two RDM tasks. This highlights that the presence of object-based attention in the volume oddball task, because one feature of the auditory stimulus was task-relevant, led to the rapid attentional selection and hence neural enhancement of the task-irrelevant features sound content and location in this task only". If this explanation were true, and object-based attention was the mechanism, wouldn't a similarly broad time/frequency decoding effect be observed in the content decoding during the location discrimination task, since the sound is the object of attention in that task as well? However, the data (Figure 4B) seem to only show a small T-F window of content decoding in the location task, suggesting that object-based decoding is either not operating in that task or that the broad T-F decoding does not actually reflect object-based selection. Can the authors comment on this apparent discrepancy?

4) The reviewers found it unfortunate that information about which electrodes contributed to decoding was lost in this analysis and felt that the manuscript would be improved if at least some topographical information were included.

5) The authors state that results in Figure 5 are based on a specific time-frequency ROI which was hypothesis-driven and pre-defined (p.24, lines 911). However, they go on in the next sentence to explain that the ROI was selected visually based on the most-significant clusters. Please clarify how the ROI was defined – was it hypothesis-driven or data-driven? If it was data-driven, how were multiple-comparison corrections handled?
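
The correction the reviewers ask about here is conventionally handled with a cluster-based permutation test, which licenses data-driven selection by computing p-values over the whole time(-frequency) space rather than a hand-picked ROI. Below is a minimal 1-D sketch on simulated data (not the authors' pipeline; simplified in that positive and negative clusters are pooled by absolute t-value):

```python
import numpy as np

def cluster_perm_test(data, n_perm=500, thresh=2.0, seed=0):
    """One-sample cluster-based permutation test over a 1-D series
    (subjects x points). Contiguous runs of |t| > thresh form clusters;
    each cluster's summed |t| mass is compared against a sign-flip null
    distribution of the maximum cluster mass."""
    rng = np.random.default_rng(seed)
    n_sub = data.shape[0]

    def tvals(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    def clusters(t):
        out, run, mass = [], [], 0.0
        for i, v in enumerate(t):
            if abs(v) > thresh:
                run.append(i)
                mass += abs(v)
            elif run:
                out.append((run, mass))
                run, mass = [], 0.0
        if run:
            out.append((run, mass))
        return out

    observed = clusters(tvals(data))
    null = np.zeros(n_perm)
    for p in range(n_perm):
        flips = rng.choice([-1, 1], size=(n_sub, 1))  # flip whole subjects
        null[p] = max((m for _, m in clusters(tvals(data * flips))), default=0.0)
    # p-value per observed cluster: fraction of null max-masses >= its mass
    return [(idx, (null >= m).mean()) for idx, m in observed]

# Simulated data: 20 subjects x 50 points, a real effect at points 20-29
rng = np.random.default_rng(1)
data = rng.normal(0, 1, (20, 50))
data[:, 20:30] += 1.0
result = cluster_perm_test(data, n_perm=200)
```

Because the null distribution is built from the maximum cluster mass across all points, any cluster surviving this test is corrected for the full search space, which is the safeguard a visually selected ROI lacks.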

6) The reviewers were surprised by the choice to not perform any artifact rejection on the data. We strongly suggest that appropriate artifact rejection be applied, or, at the very least, that the choice not to perform artifact rejection be better justified.

eLife. 2021 Jun 14;10:e64431. doi: 10.7554/eLife.64431.sa2

Author response


Essential revisions:

1) The reviewers felt that describing the sensory processing as “intact” regardless of attention was too vague. While there does seem to be some degree of classification possible for sensory processing regardless of task relevance – it appears to be graded. For instance, there are clear differences between the decodability of auditory stimulus properties when the auditory stimulus is task-relevant versus not. Some of this can be attributed to decision/motor processes, as the authors point out, but some changes might reflect changes in sensory processing – for example, the earlier aspects of the decoding differences. In this case, the conclusion would be that sensory processes are ‘relatively’ intact, but potentially modulated by task-relevance. We recommend the authors show sensory ERPs when the auditory stimulus is task-relevant or not to help assess whether the task manipulation keeps sensory responses completely intact or the authors otherwise justify how their results support the idea that sensory processing is “intact”.

We agree with the reviewers that, on the basis of the results presented in the manuscript, we cannot claim “intact” sensory processing regardless of task relevance. The word “intact” may indeed suggest that the processing of sensory features is unchanged across levels of task relevance, which is not what we meant to convey. Indeed, a substantial body of evidence has shown reduced evoked responses to stimuli that were unattended or task-irrelevant (Alilović et al., 2019; Jehee et al., 2011; Kok et al., 2012; Molloy et al., 2015b; Woldorff et al., 1993). Our time-frequency decoding results point in the same direction, e.g. location decoding is more broadband and more accurate when the auditory stimulus is task-relevant than when it is not (e.g. manuscript Figure 2B). Following the reviewers’ suggestion to inspect sensory ERPs, we have now performed an additional analysis of the early sensory evoked responses as a function of task relevance. We decided, however, to use a time-domain decoding approach instead of a more traditional ERP analysis, for two reasons. First, we wanted to avoid obscuring possible sensory effects through electrode selection, which is necessary for computing ERPs but not for decoding, because we did not have strong a priori expectations about the exact scalp topography of sound content and location processing as a function of task relevance. Second, a decoding analysis better fits the multivariate approach used throughout the manuscript. Below, we have added the paragraph of the Methods section describing the details of the time-domain decoding analysis, as well as excerpts of the Results where we present the outcome of this new analysis (Figure 4—figure supplement 2).

As you will see, these additional analyses suggest that sensory processing is indeed graded, although this remains difficult to establish with certainty because of the direct association between specific responses/decision processes and specific features in some tasks but not others. We thus agree with the reviewers that the effects we report in the manuscript (manuscript Figures 2, 4 and 5; Figure 4—figure supplement 2) need more subtle phrasing, and we have decided to refer to sensory processing as “preserved” instead of “intact” throughout the manuscript. Accordingly, we have changed the title of the revised manuscript to ‘Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant’. We hope this better captures the graded relationship between sensory processing and task relevance. We have also added a paragraph to the Discussion, reproduced below, in which we discuss graded sensory processing as a function of task relevance and its implications for our reported findings.

In Methods section, page 30

“We applied a time-domain decoding analysis on EEG data, to inspect the possible effect of task relevance of a stimulus feature on sensory processing. […] For each decoded stimulus feature, we then compared the decoding accuracies of the behavioral task in which the feature was task-relevant, to all other tasks in a pairwise fashion (e.g. location decoding under location discrimination task versus horizontal RDM task), with cluster-corrected two-sided t-tests against 0.”
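
The time-resolved decoding described in this excerpt can be illustrated with a minimal sketch (this is not the authors' ADAM pipeline, and the data are simulated): at each time point a classifier is trained and tested on the multichannel pattern under cross-validation, yielding a decoding-accuracy time course. A simple nearest-class-mean classifier stands in for the linear discriminant typically used:

```python
import numpy as np

def decode_over_time(X, y, n_folds=5, seed=0):
    """Time-resolved decoding: at each time point, classify trials from
    their channel pattern with a nearest-class-mean classifier under
    k-fold cross-validation. X: (trials, channels, times), y: labels."""
    rng = np.random.default_rng(seed)
    n_trials, _, n_times = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for test_idx in folds:
            train_idx = np.setdiff1d(order, test_idx)
            # class-mean "templates" estimated from training trials only
            means = {c: X[train_idx][y[train_idx] == c, :, t].mean(axis=0)
                     for c in np.unique(y)}
            for i in test_idx:
                pred = min(means, key=lambda c: np.linalg.norm(X[i, :, t] - means[c]))
                correct += pred == y[i]
        accuracy[t] = correct / n_trials
    return accuracy

# Simulated epochs: 64 trials x 8 channels x 60 samples; a class-specific
# spatial pattern is present only from sample 20 onward (hypothetical numbers)
rng = np.random.default_rng(42)
X = rng.normal(0, 1, (64, 8, 60))
y = np.repeat([0, 1], 32)
pattern = rng.normal(0, 1, 8)
X[y == 1, :, 20:] += 3 * pattern[:, None]
acc = decode_over_time(X, y)
```

Comparing such accuracy time courses between tasks (feature task-relevant vs. task-irrelevant), point by point with cluster correction, is the logic of the analysis quoted above.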

In Results section, page 17

“In order to test whether the earliest sensory responses were already modulated by task relevance, and to link this to previous ERP studies (Alilović et al., 2019; Molloy et al., 2015; Woldorff et al., 1993), we performed an additional time-domain multivariate analysis on these sensory features (T-F analyses are not well suited to address questions about the timing of processes). […] These results are elaborated upon in the Discussion.”

In Discussion, pages 21-22

“We show that conflict processing is absent when conflicting features are fully task-irrelevant, while evidence of sensory processing is still present in the neural data (Figures 2B, 4B and 5). […] Summarizing, although the processing of sensory features is degraded under decreasing levels of task relevance, it is present regardless of attention, whereas detection of conflict between these features is no longer possible when the features are fully task-irrelevant.”

2) The reviewers had some questions about what might be driving the congruency decoding results. Given that the congruent and non-congruent conditions produce large differences in behavior in almost all tasks (except the RDM tasks, where no conflict decoding was found), is it possible that the decoder is just picking up on the task difficulty, rather than conflict detection per se? For example, perhaps the decoder results reflect differences in decision making as a result of conflict, rather than the neural signature of the conflict detection process. Is there anything about the temporal/spatial dynamics of theta amplitude that would indicate a specific conflict detection process is underlying decodability?

We thank the reviewers for this interesting question. The reviewers point to an issue concerning the source of our above-chance decoding of congruency. Whether typical neural effects in research on conflict processing, i.e. enhanced medial frontal theta-oscillations, are related to the detection of conflict or are a marker of task-difficulty has been debated in the field (Grinband et al., 2011a, 2011b; Yeung et al., 2011). We agree that this distinction is important and deserves more elaboration in the manuscript. We have therefore added a paragraph to the Discussion in which we address this issue. We have added that paragraph below.

In Discussion, page 22

“For our main analysis we trained a multivariate classifier on congruent versus incongruent trials and observed effects of task relevance on the performance of the classifier, i.e. decoding performance was hampered when conflicting features were fully task-irrelevant (Figures 2B, 4B and 5B). […] Therefore, we believe that the congruency decoding results presented here are mainly driven by the detection of conflicting sensory inputs and are not, or much less so, driven by task difficulty.”

3) The reviewers were unclear about the interpretation of the decoding results in terms of object-based attention. Specifically, on page 13 the authors write: "Second, the time-frequency (T-F) windows of significant sound content and location decoding in the volume oddball task was considerably more extended in both time and frequency space than observed in the two RDM tasks. This highlights that the presence of object-based attention in the volume oddball task, because one feature of the auditory stimulus was task-relevant, led to the rapid attentional selection and hence neural enhancement of the task-irrelevant features sound content and location in this task only". If this explanation were true, and object-based attention was the mechanism, wouldn't a similarly broad time/frequency decoding effect be observed in the content decoding during the location discrimination task, since the sound is the object of attention in that task as well? However, the data (4b) seem to only show a small T/F window of content decoding on the location task, suggesting that object-based decoding is either not operating in that task or that the broad T/F decoding does not actually reflect object-based selection. Can the authors comment on this apparent discrepancy?

This is a good point, and we thank the reviewers for this comment. After reconsidering our results, we agree with the reviewers that object-based attention alone cannot explain why sensory feature decoding is more sustained and broadband when the auditory stimulus is task-relevant than when it is task-irrelevant, as it falls short of explaining the more fleeting and narrowband decoding results for task-irrelevant auditory features (e.g. content) during the auditory tasks (e.g. the location discrimination task). We have therefore removed this explanation of the decoding results (previous lines 448-453) and supplementary figure S5 from the manuscript. Moreover, we have added a paragraph to the Discussion in which we discuss a possible second mechanism, besides object-based attention, that may be at play and may affect the decoding results, depending on the specific task manipulation. That paragraph is included below.

In Discussion, page 22

“Besides object-based attention, the process through which attentional resources are allocated to the processing of task-irrelevant features of a task-relevant object, other mechanisms might also play a role in the extent to which sensory information is processed, such as the active suppression of task-irrelevant information. […] Disentangling the effects of such mechanisms, object-based attention and their possible interactions on the processing of sensory and cognitive information, however, falls outside the scope of this work.”

4) The reviewers found it unfortunate that information about which electrodes contributed to decoding was lost in this analysis and felt that the manuscript would be improved if at least some topographical information were included.

We agree with the reviewers that information pertaining to the spatial sources of our effects could provide additional insights. The decoding weights of EEG channels are, however, not directly interpretable as neural sources and therefore have to be transformed back into activity patterns (Haufe et al., 2014). In order to compare topographic maps across all tasks and features, we extracted the decoder weights from the preselected time-frequency ROI used for manuscript Figure 5 for all tasks and contrasts (sound content, sound location and conflict). These weights were then transformed into activity patterns by multiplying them with the covariance of the EEG data (Haufe et al., 2014). We have added the topographic maps, as well as the parts of the Results and Methods of the manuscript in which we discuss these maps (Figure 5 – Supplement 1A).
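The weight-to-pattern transformation of Haufe et al. (2014) can be illustrated with a small numpy sketch. This is a hypothetical toy example, not the pipeline used in the manuscript: a single latent source projects to one channel, a least-squares decoder is fit, and the recovered activation pattern correctly peaks at that channel even though the raw decoder weights need not.

```python
import numpy as np

def weights_to_pattern(X, w):
    """Transform linear decoder weights into an activation pattern
    (Haufe et al., 2014): A = cov(X) @ w / var(s), where s = X @ w is
    the decoder's latent source estimate. X: (n_trials, n_channels)."""
    Xc = X - X.mean(axis=0)             # center the data
    s = Xc @ w                          # decoder output per trial
    cov_x = (Xc.T @ Xc) / (len(X) - 1)  # channel covariance matrix
    return cov_x @ w / s.var(ddof=1)

# Toy forward model: one latent source projects only to channel 2.
rng = np.random.default_rng(0)
n_trials, n_ch = 500, 8
a = np.zeros(n_ch)
a[2] = 1.0                              # true activation pattern
s_true = rng.standard_normal(n_trials)
X = np.outer(s_true, a) + 0.5 * rng.standard_normal((n_trials, n_ch))
w = np.linalg.lstsq(X, s_true, rcond=None)[0]  # least-squares "decoder"
A = weights_to_pattern(X, w)            # pattern peaks at channel 2
```

As the authors note, such reconstructed patterns become unreliable when decoding performance is low, which is why the topomaps for weakly decodable contrasts must be interpreted with caution.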

In Methods, page 30

“Topographical maps were created in order to investigate the spatial sources of activity related to the processing of the auditory features (content, location and congruency). […] The topographical activity maps of tasks and features with low decoding performance should be interpreted with caution, as activation patterns reconstructed from classifier weights may be unreliable when decoding performance is low (Haufe et al., 2014).”

In Results, page 9

“Activation patterns that were calculated from classifier weights within the predefined time-frequency theta-band ROI (2Hz-8Hz, 100ms-700ms) revealed a clear midfrontal distribution of conflict related activity (Figure 5 – Supplement 1A).”

In Results, page 18

“Classifier weights were extracted from the ROI for all tasks and features, transformed to activation patterns and plotted in topomaps, to show the patterns of activity underlying the decoding results (Figure 5 – Supplement 1A).”

5) The authors state that results in Figure 5 are based on a specific time-frequency ROI which was hypothesis-driven and pre-defined (p.24, lines 911). However, they go on in the next sentence to explain that the ROI was selected visually based on the most-significant clusters. Please clarify how the ROI was defined – was it hypothesis-driven or data-driven? If it was data-driven how were multiple comparison corrections handled?

We apologize: the wording in the highlighted section was ambiguous, and we have revised it. The time-frequency ROI was selected on the basis of previous work, including our own, on the role of medial frontal theta oscillations in conflict processing (Cohen and Cavanagh, 2011; Cohen and van Gaal, 2014; Jiang et al., 2015; Nigbur et al., 2012). Furthermore, we verified whether ROI selection influenced our main findings by applying the same analyses to other time-frequency ROIs. These findings are depicted in the supplements and show a similar pattern to the ROI presented in the main text, i.e. a stronger decline in congruency decoding accuracy under manipulations of task relevance than in decoding accuracy of sensory features. We have added the revised section below.

In Results, page 7

“Then, we report results from the additional hypothesis-driven analysis, where we extracted classifier accuracies from a predefined time-frequency ROI (100ms-700ms, 2Hz-8Hz) on which we performed (Bayesian) tests (see Methods). […] Specifically, for every task and every stimulus feature (i.e. congruency, content, location), we extracted average decoding accuracies from the ROI per participant and performed analyses on these values.”
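The ROI extraction described in the excerpt reduces to averaging a time-frequency map of decoding accuracies over a fixed window. A minimal numpy sketch (illustrative only; the toy accuracy map and variable names are our own, with the ROI bounds taken from the manuscript):

```python
import numpy as np

def roi_mean_accuracy(acc_tf, times, freqs, t_win=(0.1, 0.7), f_win=(2.0, 8.0)):
    """Average a time-frequency map of decoding accuracies over a
    predefined ROI (defaults: 100ms-700ms, 2Hz-8Hz, as in the manuscript).
    acc_tf: (n_freqs, n_times); times in seconds; freqs in Hz."""
    t_mask = (times >= t_win[0]) & (times <= t_win[1])
    f_mask = (freqs >= f_win[0]) & (freqs <= f_win[1])
    return acc_tf[np.ix_(f_mask, t_mask)].mean()

# Toy map: chance level (0.5) everywhere except an above-chance ROI block.
times = np.linspace(-0.2, 1.0, 61)
freqs = np.arange(1.0, 31.0)
acc_tf = np.full((freqs.size, times.size), 0.5)
block = np.ix_((freqs >= 2) & (freqs <= 8), (times >= 0.1) & (times <= 0.7))
acc_tf[block] = 0.62
roi_acc = roi_mean_accuracy(acc_tf, times, freqs)
```

One such scalar per participant, task and feature is what the (Bayesian) tests described above would then be run on.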

6) The reviewers were surprised by the choice not to perform any artifact rejection on the data. We strongly suggest that appropriate artifact rejection be applied or, at the very least, that the choice not to perform artifact rejection be better justified.

We have followed the reviewers’ suggestion to perform further preprocessing on our data. Thus, the data discussed in our revised manuscript, and this letter, are fully updated.

We initially did not perform any artefact rejection because, in our experience, multivariate analyses on large EEG datasets are robust to artefacts. For smaller datasets, such as the data belonging to Experiment 2, we believed that preserving as much data as possible would improve classification performance by increasing the size of the model’s training set. However, we understand the reviewers’ concern: artefact rejection would indeed strengthen the results presented in the paper, because readers may otherwise wonder whether our (null) findings were a result of noisy data. We have now performed the same multivariate analyses as in the original version of the manuscript, but this time after additional preprocessing steps.

Performing the additional preprocessing proved valuable for several reasons. First, the majority of the decoding results remained qualitatively unchanged, strengthening our confidence in their robustness. Interestingly, however, we now do find significant congruency decoding in the predefined ROI for the horizontal RDM task (manuscript Figure 4B), whereas previously no effects of congruency were observed in this task. Above-chance congruency decoding in the horizontal RDM task was robust and independent of the specific definition of the ROI (manuscript Figure 5 – Supplement 1B-D). This observation of congruency decoding within the theta band shows that when a task-irrelevant stimulus has features that are response-relevant (i.e. overlap with the response scheme), these features are still integrated in the brain to form conflict. In contrast, when features of a task-irrelevant stimulus are not response-relevant, as in the vertical RDM task where the response is orthogonal to sound content and location, conflict between these auditory features is not detected. We have incorporated this new finding into the revised manuscript and updated the Abstract, Results and Discussion accordingly. Below, we include the section from the Methods in which we describe the preprocessing pipeline and an excerpt from the Discussion in which we discuss the new results.

In Methods, page 29

“EEG data were recorded with a 64-channel BioSemi apparatus (BioSemi B.V., Amsterdam, The Netherlands) at 512Hz. […] This was more than 3 standard deviations above the average number of rejected trials across participants and files, and left too few trials for the decoding analysis. Therefore, this participant was removed from all EEG analyses.”
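The participant-exclusion rule described at the end of the excerpt (more than 3 standard deviations above the group-average number of rejected trials) can be sketched in a few lines of numpy. The rejection counts below are hypothetical, chosen only to show one clear outlier:

```python
import numpy as np

def flag_outlier_participants(n_rejected, z_thresh=3.0):
    """Flag participants whose number of artifact-rejected trials lies
    more than z_thresh standard deviations above the group mean (the
    exclusion rule described in the Methods). Returns a boolean mask."""
    n_rejected = np.asarray(n_rejected, dtype=float)
    cutoff = n_rejected.mean() + z_thresh * n_rejected.std(ddof=1)
    return n_rejected > cutoff

# Hypothetical rejection counts for 18 participants; the last is extreme.
counts = [12, 9, 11, 10, 8, 13, 10, 11, 9, 12, 10, 11, 9, 10, 12, 8, 11, 200]
mask = flag_outlier_participants(counts)  # flags only the last participant
```

Note that with very small samples a single extreme participant inflates the group standard deviation, so the 3 SD criterion is conservative; the outlier must be far from the rest to be flagged, as in this example.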

In Discussion, pages 20-21

“In the horizontal RDM task, on the other hand, the conflicting features of the task-irrelevant auditory stimulus overlapped with the overall response scheme or task-set of the participant, namely discriminating rightwards versus leftwards moving dots. […] Note that we report additional behavioral results that show clear indications of conflict when the task-relevant feature of the visual stimulus interferes directly with a single task-irrelevant feature of the auditory stimulus (e.g., auditory content-dot motion conflict).”

References:

Alilović, J., Timmermans, B., Reteig, L. C., van Gaal, S., and Slagter, H. A. (2019). No Evidence that Predictions and Attention Modulate the First Feedforward Sweep of Cortical Information Processing. Cerebral Cortex, 29(5), 2261–2278. https://doi.org/10.1093/cercor/bhz038

Anderson, B. A., Laurent, P. A., and Yantis, S. (2011). Value-driven attentional capture. Proceedings of the National Academy of Sciences, 108(25), 10367–10371. https://doi.org/10.1073/pnas.1104047108

Appelbaum, L. G., Smith, D. V., Boehler, C. N., Chen, W. D., and Woldorff, M. G. (2011). Rapid Modulation of Sensory Processing Induced by Stimulus Conflict. Journal of Cognitive Neuroscience, 23(9), 2620–2628. https://doi.org/10.1162/jocn.2010.21575

Awh, E., Belopolsky, A. V., and Theeuwes, J. (2012). Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16(8), 437–443. https://doi.org/10.1016/j.tics.2012.06.010

Cohen, M. X., and Cavanagh, J. F. (2011). Single-Trial Regression Elucidates the Role of Prefrontal Theta Oscillations in Response Conflict. Frontiers in Psychology, 2. https://doi.org/10.3389/fpsyg.2011.00030

Cohen, M. X., and van Gaal, S. (2014). Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors. NeuroImage, 86, 503–513. https://doi.org/10.1016/j.neuroimage.2013.10.033

Egner, T., and Hirsch, J. (2005). Cognitive control mechanisms resolve conflict through cortical amplification of task-relevant information. Nature Neuroscience, 8(12), 1784–1790. https://doi.org/10.1038/nn1594

Fahrenfort, J. J., Van Driel, J., van Gaal, S., and Olivers, C. N. L. (2018). From ERPs to MVPA using the Amsterdam Decoding and Modeling toolbox (ADAM). Frontiers in Neuroscience, 12. https://doi.org/10.3389/fnins.2018.00368

Grinband, J., Savitskaya, J., Wager, T. D., Teichert, T., Ferrera, V. P., and Hirsch, J. (2011a). Conflict, error likelihood, and RT: Response to Brown and Yeung et al. NeuroImage, 57(2), 320–322. https://doi.org/10.1016/j.neuroimage.2011.04.027

Grinband, J., Savitskaya, J., Wager, T. D., Teichert, T., Ferrera, V. P., and Hirsch, J. (2011b). The Dorsal Medial Frontal Cortex is Sensitive to Time on Task, Not Response Conflict or Error Likelihood. NeuroImage, 57(2), 303–311. https://doi.org/10.1016/j.neuroimage.2010.12.027

Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., and Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96–110. https://doi.org/10.1016/j.neuroimage.2013.10.067

Janssens, C., De Loof, E., Boehler, C. N., Pourtois, G., and Verguts, T. (2018). Occipital α power reveals fast attentional inhibition of incongruent distractors. Psychophysiology, 55(3). https://doi.org/10.1111/psyp.13011

Jehee, J. F. M., Brady, D. K., and Tong, F. (2011). Attention Improves Encoding of Task-Relevant Features in the Human Visual Cortex. Journal of Neuroscience, 31(22), 8210–8219. https://doi.org/10.1523/JNEUROSCI.6153-09.2011

Jiang, J., Zhang, Q., and van Gaal, S. (2015). Conflict awareness dissociates theta-band neural dynamics of the medial frontal and lateral frontal cortex during trial-by-trial cognitive control. NeuroImage, 116, 102–111. https://doi.org/10.1016/j.neuroimage.2015.04.062

Jiang, J., Zhang, Q., and van Gaal, S. (2015). EEG neural oscillatory dynamics reveal semantic and response conflict at different levels of conflict awareness. Scientific Reports, 5(1), 12008. https://doi.org/10.1038/srep12008

Kok, P., Rahnev, D., Jehee, J. F. M., Lau, H. C., and de Lange, F. P. (2012). Attention Reverses the Effect of Prediction in Silencing Sensory Signals. Cerebral Cortex, 22(9), 2197–2206. https://doi.org/10.1093/cercor/bhr310

Lavie, N., and de Fockert, J. (2006). Frontal control of attentional capture in visual search. Visual Cognition, 14(4–8), 863–876. https://doi.org/10.1080/13506280500195953

McKay, C. C., van den Berg, B., and Woldorff, M. G. (2017). Neural cascade of conflict processing: Not just time-on-task. Neuropsychologia, 96, 184–191. https://doi.org/10.1016/j.neuropsychologia.2016.12.022

Molloy, K., Griffiths, T. D., Chait, M., and Lavie, N. (2015). Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. Journal of Neuroscience, 35(49), 16046–16054. https://doi.org/10.1523/JNEUROSCI.2931-15.2015

Nigbur, R., Cohen, M. X., Ridderinkhof, K. R., and Stürmer, B. (2012). Theta Dynamics Reveal Domain-specific Control over Stimulus and Response Conflict. Journal of Cognitive Neuroscience, 24(5), 1264–1274. https://doi.org/10.1162/jocn_a_00128

Polk, T. A., Drake, R. M., Jonides, J. J., Smith, M. R., and Smith, E. E. (2008). Attention Enhances the Neural Processing of Relevant Features and Suppresses the Processing of Irrelevant Features in Humans: A Functional Magnetic Resonance Imaging Study of the Stroop Task. The Journal of Neuroscience, 28(51), 13786–13792. https://doi.org/10.1523/JNEUROSCI.1026-08.2008

Ruggeri, P., Meziane, H. B., Koenig, T., and Brandner, C. (2019). A fine-grained time course investigation of brain dynamics during conflict monitoring. Scientific Reports, 9(1), 3667. https://doi.org/10.1038/s41598-019-40277-3

Theeuwes, J. (2010). Top-down and bottom-up control of visual selection. Acta Psychologica, 135(2), 77–99. https://doi.org/10.1016/j.actpsy.2010.02.006

Woldorff, M. G., Gallen, C. C., Hampson, S. A., Hillyard, S. A., Pantev, C., Sobel, D., and Bloom, F. E. (1993). Modulation of early sensory processing in human auditory cortex during auditory selective attention. Proceedings of the National Academy of Sciences, 90(18), 8722–8726. https://doi.org/10.1073/pnas.90.18.8722

Yeung, N., Cohen, J. D., and Botvinick, M. M. (2011). Errors of interpretation and modeling: A reply to Grinband et al. NeuroImage, 57(2), 316–319. https://doi.org/10.1016/j.neuroimage.2011.04.029

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

    1. Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Analyses scripts for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.
    2. Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Raw behavioral dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. DANS. 14709420.v1
    3. Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Decoded EEG (time-frequency) dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.
    4. Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Decoded EEG (time-domain) dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.
    5. Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Raw EEG dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

    Supplementary Materials

    Figure 2—source data 1. Behavioral results of experiment 1.
    Figure 2—figure supplement 1—source data 1. Behavioral results of experiment 1 - before and after training.
    Figure 4—source data 1. Behavioral results of experiment 2.
    Figure 4—figure supplement 1—source data 1. Behavioral results of the volume oddball task - first and second run.
    Figure 5—source data 1. Decoding results within ROI for all tasks.
    Transparent reporting form

    Data Availability Statement

    The data and analysis scripts used in this article are available on Figshare: https://uvaauas.figshare.com/projects/Preserved_sensory_processing_but_hampered_conflict_detection_when_stimulus_input_is_task-irrelevant/115020.

    The following datasets were generated:

    Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Analyses scripts for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

    Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Raw behavioral dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. DANS. 14709420.v1

    Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Decoded EEG (time-frequency) dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

    Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Decoded EEG (time-domain) dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

    Nuiten SA, Canales-Johnson A, Beerendonk L, Nanuashvili N, Fahrenfort JJ, Bekinschtein T, van Gaal S. 2021. Raw EEG dataset for manuscript: Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant. figshare.

