eLife. 2019 Feb 6;8:e37321. doi: 10.7554/eLife.37321

Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation

Niels A Kloosterman 1,2, Jan Willem de Gee 3,4, Markus Werkle-Bergner 2, Ulman Lindenberger 1,2, Douglas D Garrett 1,2, Johannes Jacobus Fahrenfort 4,5
Editor: Michael J Frank
PMCID: PMC6365056  PMID: 30724733

Abstract

Decision bias is traditionally conceptualized as an internal reference against which sensory evidence is compared. Instead, we show that individuals implement decision bias by shifting the rate of sensory evidence accumulation toward a decision bound. Participants performed a target detection task while we recorded EEG. We experimentally manipulated participants’ decision criterion for reporting targets using different stimulus-response reward contingencies, inducing either a liberal or a conservative bias. Drift diffusion modeling revealed that a liberal strategy biased sensory evidence accumulation toward target-present choices. Moreover, a liberal bias resulted in stronger midfrontal pre-stimulus 2–6 Hz (theta) power and suppression of pre-stimulus 8–12 Hz (alpha) power in posterior cortex. Alpha suppression in turn was linked to the output activity in visual cortex, as expressed through 59–100 Hz (gamma) power. These findings show that observers can intentionally control cortical excitability to strategically bias evidence accumulation toward the decision bound that maximizes reward.

Research organism: Human

eLife digest

How do you decide whether to buy a new car? One factor to consider is how well the economy is doing. During an economic boom, you might happily commit to buying a new vehicle that goes on sale, but prefer to sit on your savings during a financial crisis, despite how good the offer may be. Adjusting how you make decisions in situations like this can help you optimize choices in an ever-changing world.

It’s currently thought that when deciding, we accumulate evidence for each of the available options. When evidence for one of the options passes a threshold, we choose that option. External factors – such as a booming economy when considering buying a car – could bias this process in two different ways. The standard view is that they move the starting point of evidence accumulation towards one of the two choices, so that the threshold for choosing that option is more easily reached. Alternatively, they could bias the accumulation process itself, so that evidence builds up more quickly towards one of the choices.

To distinguish between these possibilities, Kloosterman et al. asked volunteers to press a button whenever they detected a target hidden among a stream of visual patterns. To bias their decisions, volunteers were penalized differently in two experimental conditions: either when they failed to report a target (a ‘miss’), or when they ‘detected’ a target when in fact nothing was there (a ‘false alarm’). As expected, punishing participants for missing a target made them more liberal towards reporting targets, whereas penalizing false alarms made them more conservative.

Computational modeling of behavior revealed that when participants used a liberal strategy, they did not move closer to the threshold for deciding target presence. Instead, they accumulated evidence for target presence at a faster rate, even when in fact no target was shown. Brain activity recorded during this task reveals how this bias in evidence accumulation might come about. When a volunteer adopted a liberal response strategy, visual brain areas showed a reduction in low-frequency ‘alpha’ waves, suggesting increased attention. This in turn triggered an increase in high-frequency ‘gamma’ waves, reflecting biased evidence accumulation for target presence (irrespective of whether a target actually appeared or not).

Overall, the findings reported by Kloosterman et al. suggest that we can strategically bias perceptual decision-making by varying how quickly we accumulate evidence in favor of different response options. This might explain how we are able to adapt our decisions to environments that differ in payoffs and punishments. The next challenge is to understand whether such biases also affect high-level decisions, for example, when purchasing a new car.

Introduction

Perceptual decisions arise not only from the evaluation of sensory evidence, but are often biased toward a given choice alternative by environmental factors, perhaps as a result of task instructions and/or stimulus-response reward contingencies (White and Poldrack, 2014). The ability to willfully control decision bias could potentially enable the behavioral flexibility required to survive in an ever-changing and uncertain environment. But despite its important role in decision making, the neural mechanisms underlying decision bias are not fully understood.

The traditional account of decision bias comes from signal detection theory (SDT) (Green and Swets, 1966). In SDT, decision bias is quantified by estimating the relative position of a decision point (or ‘criterion’) in between sensory evidence distributions for noise and signal (see Figure 1A). In this framework, a more liberal decision bias arises by moving the criterion closer toward the noise distribution (see green arrow in Figure 1A). Although SDT has been very successful at quantifying decision bias, how exactly bias affects decision making and how it is reflected in neural activity remains unknown. One reason for this lack of insight may be that SDT does not have a temporal component to track how decisions are reached over time (Fetsch et al., 2014). As an alternative to SDT, the drift diffusion model (DDM) conceptualizes perceptual decision making as the accumulation of noisy sensory evidence over time into an internal decision variable (Bogacz et al., 2006; Gold and Shadlen, 2007; Ratcliff and McKoon, 2008). A decision in this model is made when the decision variable crosses one of two decision bounds corresponding to the choice alternatives. After one of the bounds is reached, the corresponding decision can subsequently either be actively reported, e.g. by means of a button press indicating a detected signal, or it can remain without behavioral report when no signal is detected (Ratcliff et al., 2018). Within this framework, a strategic decision bias imposed by the environment can be modelled in two different ways: either by moving the starting point of evidence accumulation closer to one of the boundaries (see green arrow in Figure 1B), or by biasing the rate of the evidence accumulation process itself toward one of the boundaries (see green arrow in Figure 1C). In both the SDT and DDM frameworks, decision bias shifts have little effect on the sensitivity of the observer when distinguishing signal from noise; they predominantly affect the relative response ratios (and, in the case of the DDM, the speed with which one or the other decision bound is reached). There is some evidence to suggest that decision bias induced by shifting the criterion is best characterized by a drift bias in the DDM (Urai et al., 2018; White and Poldrack, 2014). However, the drift bias parameter has not yet been related to a well-described neural mechanism.

Figure 1. Theoretical accounts of decision bias.


(A) Signal-detection-theoretic account of decision bias. Signal and noise + signal distributions are plotted as a function of the strength of internal sensory evidence. The decision point (or criterion) that determines whether to decide signal presence or absence is plotted as a vertical criterion line c, reflecting the degree of decision bias. c can be shifted left- or rightwards to denote a more liberal or conservative bias, respectively (green arrow indicates a shift toward more liberal). (B, C) Drift diffusion model (DDM) account of decision bias, in which decisions are modelled in terms of a set of parameters that describe a dynamic process of sensory evidence accumulation toward one of two decision bounds. When sensory input is presented, evidence starts to accumulate (drift) over time after initialization at the starting point z. A decision is made when the accumulated evidence either crosses decision boundary a (signal presence) or decision boundary 0 (no signal). After a boundary is reached, the corresponding decision can be either actively reported by a button press (e.g. for signal-present decisions), or remain implicit, without a response (for signal-absent decisions). The DDM can capture decision bias through a shift of the starting point of the evidence accumulation process (panel B) or through a shift in bias in the rate of evidence accumulation toward the different choices (panel C). These mechanisms are dissociable through their differential effect on the shape of the reaction time (RT) distributions, as indicated by the curves above and below the graphs for target-present and target-absent decisions, respectively. Panels B. and C. are modified and reproduced with permission from Urai et al., 2018 (Figure 1, published under a CC BY 4.0 license).

Regarding the neural underpinnings of decision bias, there have been a number of reports about a correlational relationship between cortical population activity measured with EEG and decision bias. For example, spontaneous trial-to-trial variations in pre-stimulus oscillatory activity in the 8–12 Hz (alpha) band have been shown to correlate with decision bias and confidence (Iemi and Busch, 2018; Limbach and Corballis, 2016). Alpha oscillations, in turn, have been proposed to be involved in the gating of task-relevant sensory information (Jensen and Mazaheri, 2010), possibly encoded in high-frequency (gamma) oscillations in visual cortex (Ni et al., 2016; Popov et al., 2017). Although these reports suggest links between pre-stimulus alpha suppression, sensory information gating, and decision bias, they do not establish whether pre-stimulus alpha plays an instrumental role in decision bias and how exactly this might be achieved. Specifically, it is unknown whether an experimentally induced shift in decision bias is implemented in the brain by willfully adjusting pre-stimulus alpha in sensory areas.

Here, we explicitly investigate these potential mechanisms by employing a task paradigm in which shifts in decision bias were experimentally induced within participants through (a) instruction and (b) asymmetries in stimulus-response reward contingencies during a visual target detection task. By applying drift diffusion modeling to the participants’ choice behavior, we show that the effect of strategically adjusting decision bias is best captured by the drift bias parameter, which is thought to reflect a bias in the rate of sensory evidence accumulation toward one of the two decision bounds. To substantiate a neural mechanism for this effect, we demonstrate that this bias shift is accompanied by changes in pre-stimulus midfrontal 2–6 Hz (theta) power, as well as changes in sensory alpha suppression. Pre-stimulus alpha suppression in turn is linked to the post-stimulus output of visual cortex, as reflected in gamma power modulation. Critically, we show that gamma activity accurately predicted the strength of evidence accumulation bias within participants, providing a direct link between the proposed mechanism and decision bias. Together, these findings identify a neural mechanism by which intentional control of cortical excitability is applied to strategically bias perceptual decisions in order to maximize reward in a given ecological context.

Results

Manipulation of decision bias affects sensory evidence accumulation

In three EEG recording sessions, human participants (N = 16) viewed a continuous stream of horizontal, vertical and diagonal line textures alternating at a rate of 25 textures/second. The participants’ task was to detect an orientation-defined square presented in the center of the screen and report it via a button press (Figure 2A). Trials consisted of a fixed-order sequence of textures embedded in the continuous stream (total sequence duration 1 s). A square appeared in the fifth texture of a trial in 75% of the presentations (target trials), while in 25% a homogenous diagonal texture appeared in the fifth position (nontarget trials). Although the onset of a trial within the continuous stream of textures was not explicitly cued, the similar distribution of reaction times in target and nontarget trials suggests that participants used the temporal structure of the task even when no target appeared (Figure 2—figure supplement 1A). Consistent and significant EEG power modulations after trial onset (even for nontarget trials) further confirm that subjects registered trial onsets in the absence of an explicit cue, plausibly using the onset of a fixed order texture sequence as an implicit cue (Figure 2—figure supplement 1B).

Figure 2. Strategic decision bias shift toward liberal biases evidence accumulation.

(A) Schematic of the visual stimulus and task design. Participants viewed a continuous stream of full-screen diagonally, horizontally and vertically oriented textures at a presentation rate of 25 Hz (one texture per 40 ms). After random inter-trial intervals, a fixed-order sequence was presented embedded in the stream. The fifth texture in each sequence either consisted of a single diagonal orientation (target absent), or contained an orthogonal orientation-defined square (either 45° or 135° orientation). Participants decided whether they had just seen a target, reporting detected targets by button press. Liberal and conservative conditions were administered in alternating nine-min blocks by penalizing either misses or false alarms, respectively, using aversive tones and monetary deductions. Depicted square and fixation dot sizes are not to scale. (B) Average detection rates (hits and false alarms) during both conditions. Miss rate is equal to 1 – hit rate since both are computed on stimulus-present trials, and correct-rejection rate is equal to 1 – false alarm rate since both are computed on stimulus-absent trials, together yielding the four SDT stimulus-response categories. (C) SDT parameters for sensitivity and criterion. (D) Schematic and simplified equation of the drift diffusion model accounting for reaction time distributions for actively reported target-present and implicit target-absent decisions. Decision bias in this model can be implemented either by shifting the starting point of the evidence accumulation process (z), or by adding an evidence-independent constant (‘drift bias’, db) to the drift rate. See text and Figure 1 for details. Notation: dy, change in decision variable y per unit time dt; v·dt, mean drift (multiplied by 1 for signal + noise (target) trials, and −1 for noise-only (nontarget) trials); db·dt, drift bias; and cdW, Gaussian white noise (mean = 0, variance = c²·dt). (E) Difference in Bayesian Information Criterion (BIC) goodness-of-fit estimates for the drift bias and the starting point models. A lower delta BIC value indicates a better fit, showing superiority of the drift bias model in accounting for the observed results. (F) Estimated model parameters for drift rate and drift bias in the drift bias model. Error bars, SEM across 16 participants. ***p<0.001; n.s., not significant. Panel D is modified and reproduced with permission from de Gee et al. (2017) (Figure 4A, published under a CC BY 4.0 license).

Figure 2—source data 1. This csv table contains the data for Figure 2 panels B, C, E and F.
DOI: 10.7554/eLife.37321.008


Figure 2—figure supplement 1. Behavioral and neurophysiological evidence that participants were sensitive to the implicit task structure.


(A) Participant-average RT distributions for hits and false alarms in both conditions. The presence of similar RT distributions for false alarms and hits indicates that participants were sensitive to trial onset despite the fact that trial onsets were only implicitly signaled. Error bars, SEM. (B) Time-frequency representations of low-frequency EEG power modulations with respect to the pre-stimulus period (–0.4 to 0 s), pooled across the two conditions. Significant low-frequency modulation occurred even for nontarget trials without overt response (correct rejections), indicating that participants detected the onset of a trial even when neither a target was presented nor a response was given. Saturated colors indicate clusters of significant modulation (cluster threshold p<0.05, two-sided permutation test across participants, cluster-corrected; N = 15). Solid and dotted vertical lines respectively indicate the onset of the trial and the target stimulus. M, power modulation.
Figure 2—figure supplement 2. Single-participant drift diffusion model fits for the drift bias and starting point models for both conditions.


Rows, single participant RT distributions and drift diffusion model fits for the two models for both conditions.
Figure 2—figure supplement 3. Signal-detection-theoretic (SDT) behavioral measures during both conditions correspond closely to drift diffusion modeling (DDM) parameters.


(A) Across-participant Pearson correlation between d’ and drift rate for the two conditions. Each dot represents a participant. (B) As in (A), but for the correlation between criterion and DDM drift bias. The correlation is negative because a lower criterion reflects a stronger liberal bias. (C) Left panel, mean reaction times (RT) for hits and false alarms for the two conditions. Middle and right panels, as in (A), but for the correlation between RT for hits and drift bias. (D) Parameter estimates in the drift bias DDM not related to evidence accumulation (drift rate). ***p<0.001; n.s., not significant.

In alternating nine-minute blocks of trials, we actively biased participants’ perceptual decisions by instructing them either to report as many targets as possible (‘Detect as many targets as possible!’; liberal condition), or to only report high-certainty targets (‘Press only if you are really certain!’; conservative condition). Participants were free to respond at any time during a block whenever they detected a target. A trial was considered a target-present response when a button press occurred before the fixed-order sequence ended (i.e. within 0.84 s after onset of the fifth texture containing the (non)target, see Figure 2A). We provided auditory feedback and applied monetary penalties following missed targets in the liberal condition and following false alarms in the conservative condition (Figure 2A; see Materials and methods for details). The median number of trials for each SDT category across participants was 1206 hits, 65 false alarms, 186 misses and 355 correct rejections in the liberal condition, and 980 hits, 12 false alarms, 419 misses and 492 correct rejections in the conservative condition.

Participants reliably adopted the intended decision bias shift across the two conditions, as shown by both the hit rate and the false alarm rate going down in tandem as a consequence of a more conservative bias (Figure 2B). The difference between hit rate and false alarm rate was not significantly modulated by the experimental bias manipulations (p=0.81, two-sided permutation test, 10,000 permutations, see Figure 2B). However, target detection performance computed using standard SDT d’ (perceptual sensitivity, reflecting the distance between the noise and signal distributions in Figure 1A) (Green and Swets, 1966) was slightly higher during conservative (liberal: d’=2.0 (s.d. 0.90) versus conservative: d’=2.31 (s.d. 0.82), p=0.0002, see Figure 2C, left bars). We quantified decision bias using the standard SDT criterion measure c, in which positive and negative values reflect conservative and liberal biases, respectively (see the blue and red vertical lines in Figure 1A). This uncovered a strong experimentally induced bias shift from the conservative to the liberal condition (liberal: c = – 0.13 (s.d. 0.4), versus conservative: c = 0.73 (s.d. 0.36), p=0.0001, see Figure 2C), as well as a conservative average bias across the two conditions (c = 0.3 (s.d. 0.31), p=0.0013).
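
For readers who want to reproduce these descriptive measures, a minimal sketch of the standard SDT computations is given below. It assumes per-condition trial counts as input; the log-linear correction for extreme hit and false alarm rates is our assumption, since the text does not state how rates of 0 or 1 were handled, and the example values will therefore not exactly match the reported group averages.

```python
# Minimal sketch (not the authors' code): SDT sensitivity (d') and criterion (c)
# from hit/false-alarm counts, as summarized in Figure 2B,C.
import numpy as np
from scipy.stats import norm

def sdt_measures(n_hits, n_misses, n_fas, n_crs):
    """Return (d_prime, criterion); positive criterion = conservative bias."""
    # Log-linear correction guards against hit/FA rates of exactly 0 or 1
    # (an assumption; the paper does not specify a correction).
    hit_rate = (n_hits + 0.5) / (n_hits + n_misses + 1.0)
    fa_rate = (n_fas + 0.5) / (n_fas + n_crs + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Example with the median liberal-condition trial counts reported in the text
print(sdt_measures(n_hits=1206, n_misses=186, n_fas=65, n_crs=355))
```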

Because the SDT framework is static over time, we further investigated how bias affected various components of the dynamic decision process by fitting different variants of the drift diffusion model (DDM) to the behavioral data (Figure 1B,C) (Ratcliff and McKoon, 2008). The DDM postulates that perceptual decisions are reached by accumulating noisy sensory evidence toward one of two decision boundaries representing the choice alternatives. Crossing one of these boundaries can either trigger an explicit behavioral report to indicate the decision (for target-present responses in our experiment), or remain implicit (i.e. without active response, for target-absent decisions in our experiment). The DDM captures the dynamic decision process by estimating parameters reflecting the rate of evidence accumulation (drift rate), the separation between the boundaries, as well as the time needed for stimulus encoding and response execution (non-decision time) (Ratcliff and McKoon, 2008). The DDM is able to estimate these parameters based on the shape of the RT distributions for actively reported (target-present) decisions along with the total number of trials in which no response occurred (i.e. implicit target-absent decisions) (Ratcliff et al., 2018).

We fitted two variants of the DDM to distinguish between two possible mechanisms that can bring about a change in choice bias: one in which the starting point of evidence accumulation moves closer to one of the decision boundaries (‘starting point model’, Figure 1B) (Mulder et al., 2012), and one in which the drift rate itself is biased toward one of the boundaries (de Gee et al., 2017) (‘drift bias model’, see Figure 1C, referred to as drift criterion by Ratcliff and McKoon (2008)). The drift bias parameter is determined by estimating the contribution of an evidence-independent constant added to the drift (Figure 2D). In the two respective models, we freed either the drift bias parameter (db, see Figure 2D) for the two conditions while keeping starting point (z) fixed across conditions (for the drift bias model), or vice versa (for the starting point model). Permitting only one parameter at a time to vary freely between conditions allowed us to directly compare the models without having to penalize either model for the number of free parameters. These alternative models make different predictions about the shape of the RT distributions in combination with the response ratios: a shift in starting point results in more target-present choices particularly for short RTs, whereas a shift in drift bias grows over time, resulting in more target-present choices also for longer RTs (de Gee et al., 2017; Ratcliff and McKoon, 2008; Urai et al., 2018). The RT distributions above and below the evidence accumulation graphs in Figure 1B and C illustrate these different effects. In both models, all of the non-bias related parameters (drift rate v, boundary separation a and non-decision time u + w, see Figure 2D) were also allowed to vary by condition.
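
To make the dissociation between the two bias mechanisms concrete, the sketch below simulates the accumulation process described in Figure 2D, dy = s·v·dt + db·dt + c·dW (with s = +1 for target and −1 for nontarget trials), under a starting-point shift versus a drift-bias shift. All parameter values are arbitrary illustrations, not the fitted estimates reported below.

```python
# Illustrative simulation (not the authors' fitting code) of the two bias
# mechanisms. z is the starting point as a fraction of boundary separation a,
# db is the evidence-independent drift bias.
import numpy as np

def simulate_ddm(n_trials=2000, v=1.0, a=2.0, z=0.5, db=0.0, ndt=0.3,
                 c=1.0, dt=0.002, t_max=3.0, signal=+1, seed=0):
    """Accumulate dy = signal*v*dt + db*dt + c*dW until a bound (0 or a) is hit.
    Returns RTs of upper-bound (target-present) choices and their proportion."""
    rng = np.random.default_rng(seed)
    rts, n_present = [], 0
    for _ in range(n_trials):
        y, t = z * a, 0.0
        while 0.0 < y < a and t < t_max:
            y += signal * v * dt + db * dt + c * np.sqrt(dt) * rng.standard_normal()
            t += dt
        if y >= a:                      # upper bound crossed: target-present decision
            n_present += 1
            rts.append(t + ndt)         # add non-decision time
    return np.array(rts), n_present / n_trials

# Starting-point shift: extra target-present choices arise mainly at short RTs.
rt_sp, p_sp = simulate_ddm(z=0.7)
# Drift-bias shift: the bias grows over time, also adding longer-RT responses.
rt_db, p_db = simulate_ddm(db=1.0)
print(p_sp, np.median(rt_sp), p_db, np.median(rt_db))
```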

We found that the starting point model provided a worse fit to the data than the drift bias model (starting point model, Bayesian Information Criterion (BIC) = 7938; drift bias model, BIC = 7926, Figure 2E, see Materials and methods for details). Specifically, for 15/16 participants, the drift bias model provided a better fit than the starting point model, for 12 of which delta BIC >6, indicating strong evidence in favor of the drift bias model (Kass and Raftery, 1995). Despite the lower BIC for the drift bias model, however, we note that to the naked eye both models provide similarly reasonable fits to the single participant RT distributions (Figure 2—figure supplement 2). Finally, we compared these two models to a model in which both drift bias and starting point were fixed across the conditions, while still allowing the non-bias-related parameters to vary per condition. This model provided the lowest goodness of fit (delta BIC >6 for both models for all participants).
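
For reference, the model comparison metric itself is simple; the sketch below shows the generic BIC formula and the delta-BIC decision rule used here. The log-likelihoods come from the DDM fits themselves and are not computed in this snippet.

```python
# Generic Bayesian Information Criterion; lower values indicate a better fit.
import numpy as np

def bic(log_likelihood, n_free_params, n_observations):
    return n_free_params * np.log(n_observations) - 2.0 * log_likelihood

# Per participant: delta_bic = bic_starting_point_model - bic_drift_bias_model.
# delta_bic > 6 is interpreted as strong evidence for the drift bias model
# (Kass and Raftery, 1995).
```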

Given the superior performance of the drift bias model (in terms of BIC), we further characterized decision making under the bias manipulation using parameter estimates from this model (see below where we revisit the implausibility of the starting point model when inspecting the lack of pre-stimulus baseline effects in sensory or motor cortex). Drift rate, reflecting the participants’ ability to discriminate targets and nontargets, was somewhat higher in the conservative compared to the liberal condition (liberal: v = 2.39 (s.d. 1.07), versus conservative: v = 3.06 (s.d. 1.16), p=0.0001, permutation test, Figure 2F, left bars). Almost perfect correlations across participants in both conditions between DDM drift rate and SDT d’ provided strong evidence that the drift rate parameter captures perceptual sensitivity (liberal, r = 0.98, p=1e–10; conservative, r = 0.96, p=5e–9, see Figure 2—figure supplement 3A). Regarding the DDM bias parameters, the condition-fixed starting point parameter in the drift bias model was smaller than half the boundary separation (i.e. closer to the target-absent boundary (z = 0.24 (s.d. 0.06), p<0.0001, tested against 0.5)), indicating an overall conservative starting point across conditions (Figure 2—figure supplement 3D), in line with the overall positive SDT criterion (see Figure 2C, right panel). Strikingly, however, whereas the drift bias parameter was on average not different from zero in the conservative condition (db = –0.04 (s.d. 1.17), p=0.90), drift bias was strongly positive in the liberal condition (db = 2.08 (s.d. 1.0), p=0.0001; liberal vs conservative: p=0.0005; Figure 2F, right bars). The overall conservative starting point combined with a condition-specific neutral drift bias explained the conservative decision bias (as quantified by SDT criterion) in the conservative condition (Figure 2C). Likewise, in the liberal condition, the overall conservative starting point combined with a condition-specific positive drift bias (pushing the drift toward the target-present boundary) explained the neutral bias observed with SDT criterion (c around zero for liberal, see Figure 2C).

Convergent with these modeling results, drift bias was strongly anti-correlated across participants with both SDT criterion (r = –0.89 for both conditions, p=4e–6) and average reaction time (liberal, r = –0.57, p=0.02; conservative, r = –0.82, p=1e–4, see Figure 2—figure supplement 3B,C). The strong correlations between drift rate and d’ on the one hand, and drift bias and c on the other, provide converging evidence that the SDT and DDM frameworks capture similar underlying mechanisms, while the DDM additionally captures the dynamic nature of perceptual decision making by linking the decision bias manipulation to the evidence accumulation process itself. As a control, we also correlated starting point with criterion, and found that the correlations were somewhat weaker in both conditions (liberal, r = –0.75; conservative, r = –0.77), suggesting that the drift bias parameter better captured decision bias as instantiated by SDT.

Finally, the bias manipulation also affected two other parameters in the drift bias model that were not directly related to sensory evidence accumulation: boundary separation was slightly but reliably higher during the liberal compared to the conservative condition (p<0.0001), and non-decision time (comprising time needed for sensory encoding and motor response execution) was shorter during liberal (p<0.0001) (Figure 2—figure supplement 3D). In conclusion, the drift bias variant of the drift diffusion model best explained how participants adjusted to the decision bias manipulations. In the next sections, we used spectral analysis of the concurrent EEG recordings to identify a plausible neural mechanism that reflects biased sensory evidence accumulation.

Task-relevant textures induce stimulus-related responses in visual cortex

Sensory evidence accumulation in a visual target detection task presumably relies on stimulus-related signals processed in visual cortex. Such stimulus-related signals are typically reflected in cortical population activity exhibiting a rhythmic temporal structure (Buzsáki and Draguhn, 2004). Specifically, bottom-up processing of visual information has previously been linked to increased high-frequency (>40 Hz, i.e. gamma) electrophysiological activity over visual cortex (Bastos et al., 2015; Michalareas et al., 2016; Popov et al., 2017; van Kerkoerle et al., 2014). Figure 3 shows significant electrode-by-time-by-frequency clusters of stimulus-locked EEG power, normalized with respect to the condition-specific pre-trial baseline period (–0.4 to 0 s). We observed a total of four distinct stimulus-related modulations, which emerged after target onset and waned around the time of response: two in the high-frequency range (>36 Hz, Figure 3A (top) and Figure 3B) and two in the low-frequency range (<36 Hz, Figure 3A (bottom) and Figure 3C). First, we found a spatially focal modulation in a narrow frequency range around 25 Hz reflecting the steady state visual evoked potential (SSVEP) arising from entrainment by the visual stimulation frequency of our experimental paradigm (Figure 3A, bottom panel), as well as a second modulation from 42 to 58 Hz comprising the SSVEP’s harmonic (Figure 3A, top panel). Both SSVEP frequency modulations have a similar topographic distribution (see left panels of Figure 3A).

Figure 3. EEG spectral power modulations related to stimulus processing and motor response.


Each panel row depicts a three-dimensional (electrodes-by-time-by-frequency) cluster of power modulation, time-locked both to trial onset (left two panels) and button press (right two panels). Power modulation outside of the significant clusters is masked out. Modulation was computed as the percent signal change from the condition-specific pre-stimulus period (–0.4 to 0 s) and averaged across conditions. Topographical scalp maps show the spatial extent of clusters by integrating modulation over time-frequency bins. Time-frequency representations (TFRs) show modulation integrated over electrodes indicated by black circles in the scalp maps. Circle sizes indicate electrode weight in terms of proportion of time-frequency bins contributing to the TFR. P-values above scalp maps indicate multiple comparison-corrected cluster significance using a permutation test across participants (two-sided, N = 14). Solid vertical lines indicate the time of trial onset (left) or button press (right), dotted vertical lines indicate time of (non)target onset. Integr. M., integrated power modulation. SSVEP, steady state visual evoked potential. (A) (Top) 42–58 Hz (SSVEP harmonic) cluster. (A) (Bottom). Posterior 23–27 Hz (SSVEP) cluster. (B) Posterior 59–100 Hz (gamma) cluster. The clusters in A (Top) and B were part of one large cluster (hence the same p-value), and were split based on the sharp modulation increase precisely in the 42–58 Hz range. (C) 12–35 Hz (beta) suppression cluster located more posteriorly aligned to trial onset, and more left-centrally when aligned to button press.

Third, we observed a 59–100 Hz (gamma) power modulation (Figure 3B), after carefully controlling for high-frequency EEG artifacts due to small fixational eye movements (microsaccades) by removing microsaccade-related activity from the data (Hassler et al., 2011; Hipp and Siegel, 2013; Yuval-Greenberg et al., 2008), and by suppressing non-neural EEG activity through scalp current density (SCD) transformation (Melloni et al., 2009; Perrin et al., 1989) (see Materials and methods for details). Importantly, the topography of the observed gamma modulation was confined to posterior electrodes, in line with a role of gamma in bottom-up processing in visual cortex (Ni et al., 2016). Finally, we observed suppression of low-frequency beta (11–22 Hz) activity in posterior cortex, which typically occurs in parallel with enhanced stimulus-induced gamma activity (Donner and Siegel, 2011; Kloosterman et al., 2015a; Meindertsma et al., 2017; Werkle-Bergner et al., 2014) (Figure 3C). Response-locked, this cluster was most pronounced over left motor cortex (electrode C4), plausibly due to the right-hand button press that participants used to indicate target detection (Donner et al., 2009). In the next sections, we characterize these signals separately for the two conditions, investigating stimulus-related signals within a pooling of 11 occipito-parietal electrodes based on the gamma enhancement in Figure 3B (Oz, POz, Pz, PO3, PO4, and P1 to P6), and motor-related signals in left-hemispheric beta (LHB) suppression in electrode C4 (Figure 3C) (O'Connell et al., 2012).
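
As a pointer to how such modulation values are computed, a minimal percent-signal-change baselining sketch is shown below. The array layout, the trial-average baseline, and the variable names are assumptions rather than the authors' exact pipeline; the paper additionally baselines each condition to its own pre-stimulus period.

```python
# Sketch: time-frequency power expressed as percent signal change from the
# pre-stimulus baseline (-0.4 to 0 s), as in the clusters of Figure 3.
import numpy as np

def percent_signal_change(power, times, baseline=(-0.4, 0.0)):
    """power: (n_trials, n_channels, n_freqs, n_times); times in seconds."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    # baseline per channel and frequency, averaged over trials and baseline samples
    base = power[..., mask].mean(axis=(0, -1), keepdims=True)
    return 100.0 * (power.mean(axis=0, keepdims=True) - base) / base

# Example with random data standing in for single-trial spectral estimates
times = np.linspace(-0.8, 1.2, 101)
power = np.random.rand(200, 64, 30, times.size) + 1.0
modulation = percent_signal_change(power, times)   # (1, 64, 30, 101), in percent
```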

EEG power modulation time courses consistent with the drift bias model

Our behavioral results suggest that participants biased sensory evidence accumulation in the liberal condition, rather than changing their starting point. We next sought to provide converging evidence for this conclusion by examining pre-stimulus activity, post-stimulus activity, and motor-related EEG activity. Following previous studies, we hypothesized that a starting point bias would be reflected in a difference in pre-motor baseline activity between conditions before onset of the decision process (Afacan-Seref et al., 2018; de Lange et al., 2013), and/or in a difference in pre-stimulus activity such as seen in bottom-up stimulus-related SSVEP and gamma power signals (Figure 4A shows the relevant clusters as derived from Figure 3). Thus, we first investigated the time course of raw power in the SSVEP, gamma and LHB range between conditions (see Figure 4B). None of these markers showed a meaningful difference in pre-stimulus baseline activity. Statistically comparing the raw pre-stimulus activity between liberal and conservative in a baseline interval between –0.4 and 0 s prior to trial onset yielded p=0.52, p=0.51 and p=0.91 (permutation tests) for the respective signals. This confirms a highly similar starting point of evidence accumulation in all these signals. Next, we predicted that a shift in drift bias would be reflected in a steeper slope of post-stimulus ramping activity (leading up to the decision). We reasoned that the best way of ascertaining such an effect would be to baseline the activity to the interval prior to stimulus onset (using the interval between –0.4 and 0 s), such that any post-stimulus effect we find cannot be explained by pre-stimulus differences (if any). The time courses of post-stimulus and response-locked activity after baselining can be found in Figure 4C. All three signals diverged between the liberal and conservative condition after trial onset, consistent with adjustments in the process of evidence accumulation. Specifically, we observed higher peak modulation levels for the liberal condition in all three stimulus-locked signals (p=0.08, p=0.002 and p=0.023, permutation tests for SSVEP, gamma and LHB, respectively), and found a steeper slope toward the button press for LHB (p=0.04). Finally, the event-related potential in motor cortex also showed a steeper slope toward report for liberal (p=0.07, Figure 4, bottom row; the baseline plot is not meaningful for time-domain signals due to mean removal during preprocessing). Taken together, these findings provide converging evidence that participants implemented a liberal decision bias by adjusting the rate of evidence accumulation toward the target-present choice boundary, but not its starting point. In the next sections, we sought to identify a neural mechanism that could underlie these biases in the rate of evidence accumulation.
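
The between-condition baseline comparisons reported here are two-sided permutation tests across participants; a minimal sign-flipping sketch of such a test is given below. The authors' exact implementation may differ.

```python
# Sketch of a two-sided paired permutation test across participants
# (sign-flipping of within-participant condition differences).
import numpy as np

def paired_permutation_test(liberal, conservative, n_perm=10000, seed=1):
    """liberal, conservative: (n_participants,) summary values per condition
    (e.g. mean raw pre-stimulus power). Returns a two-sided p-value."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(liberal) - np.asarray(conservative)
    observed = diff.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (signs * diff).mean(axis=1)
    return (np.abs(null) >= np.abs(observed)).mean()

# e.g. p = paired_permutation_test(prestim_power_liberal, prestim_power_conservative)
```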

Figure 4. Experimental task manipulations affect the time course of stimulus- and motor-related EEG signals, but not its starting point.


Raw power throughout the baseline period and time courses of power modulation time-locked to trial start and button press. (A) Relevant electrode clusters and frequency ranges (from Figure 3): posterior SSVEP, posterior gamma and left-hemispheric beta (LHB). (B) Time course of raw power in a wide interval around the stimulus (–0.8 to 0.8 s) for these clusters. (C) Stimulus-locked and response-locked percent signal change from baseline (baseline period: –0.4 to 0 s). Error bars, SEM. Black horizontal bar indicates a significant difference between conditions, cluster-corrected for multiple comparisons (p<0.05, two-sided). SSVEP, steady state visual evoked potential; LHB, left-hemispheric beta; ERP, event-related potential; SCD, scalp current density.

Liberal bias is reflected in pre-stimulus midfrontal theta enhancement and posterior alpha suppression

Given a lack of pre-stimulus (starting-point) differences in specific frequency ranges involved in stimulus processing or motor responses (Figure 4B), we next focused on other pre-stimulus differences that might be the root cause of the post-stimulus differences we observed in Figure 4C. To identify such signals at high frequency resolution, we computed spectral power in a wide time window from –1 s until trial start. We then ran a cluster-based permutation test across all electrodes and frequencies in the low-frequency domain (1–35 Hz), looking for power modulations due to our experimental manipulations. Pre-stimulus spectral power indeed uncovered two distinct modulations in the liberal compared to the conservative condition: (1) theta modulation in midfrontal electrodes and (2) alpha modulation in posterior electrodes. Figure 5A depicts the difference between the liberal and conservative condition, confirming significant clusters (p<0.05, cluster-corrected for multiple comparisons) of enhanced theta (2–6 Hz) in frontal electrodes (Fz, Cz, FC1, and FC2), as well as suppressed alpha (8–12 Hz) in a group of posterior electrodes, including all 11 electrodes selected previously based on post-stimulus gamma modulation (Figure 3). The two modulations were uncorrelated across participants (r = 0.06, p=0.82), suggesting they reflect different neural processes related to our experimental task manipulations. These findings are consistent with literature pointing to a role of midfrontal theta as a source of cognitive control signals originating from pre-frontal cortex (Cohen and Frank, 2009; van Driel et al., 2012) and alpha in posterior cortex reflecting spontaneous trial-to-trial fluctuations in decision bias (Iemi et al., 2017). The fact that these pre-stimulus effects occur as a function of our experimental manipulation suggests that they are a hallmark of strategic bias adjustment, rather than a mere correlate of spontaneous shifts in decision bias. Importantly, this finding implies that humans are able to actively control pre-stimulus alpha power in visual cortex (possibly through top-down signals from frontal cortex), plausibly acting to bias sensory evidence accumulation toward the response alternative that maximizes reward.
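
A hedged sketch of the kind of cluster-based permutation test used to find such pre-stimulus clusters is given below, run on the liberal-minus-conservative difference with MNE-Python. For brevity it relies on the default lattice adjacency; a full analysis would supply an electrode adjacency (e.g. from mne.channels.find_ch_adjacency). Input shapes and names are assumptions, not the authors' code.

```python
# Sketch: cluster-based permutation test on pre-stimulus power differences
# (participants x electrodes x frequencies).
import numpy as np
from mne.stats import permutation_cluster_1samp_test

def prestim_cluster_test(power_liberal, power_conservative, n_permutations=10000):
    """power_*: arrays of shape (n_participants, n_electrodes, n_freqs)."""
    diff = power_liberal - power_conservative        # within-participant difference
    t_obs, clusters, cluster_pvals, _ = permutation_cluster_1samp_test(
        diff, n_permutations=n_permutations, tail=0, seed=1)
    return t_obs, clusters, cluster_pvals            # clusters with p < 0.05 are reported
```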

Figure 5. Adopting a liberal decision bias is reflected in increased midfrontal theta and suppressed pre-stimulus alpha power.


(A) Significant clusters of power modulation between liberal and conservative in a pre-stimulus window between −1 and 0 s before trial onset. When performing a cluster-based permutation test over all frequencies (1–35 Hz) and electrodes, two significant clusters emerged: theta (2–6 Hz, top), and alpha (8–12 Hz, bottom). Left panels: raw power spectra of pre-stimulus neural activity for conservative and liberal separately in the significant clusters (for illustration purposes). Middle panels: liberal – conservative raw power spectrum. Black horizontal bar indicates statistically significant frequency range (p<0.05, cluster-corrected for multiple comparisons, two-sided). Right panels: corresponding liberal – conservative scalp topographic maps of the pre-stimulus raw power difference between conditions for EEG theta power (2–6 Hz) and alpha power (8–12 Hz). Plotting conventions as in Figure 3. Error bars, SEM across participants (N = 15). (B) Probability density distributions of single-trial alpha power values for both conditions, averaged across participants.

Pre-stimulus alpha power is linked to cortical gamma responses

Next, we asked how suppression of pre-stimulus alpha activity might bias the process of sensory evidence accumulation. One possibility is that alpha suppression influences evidence accumulation by modulating the susceptibility of visual cortex to sensory stimulation, a phenomenon termed ‘neural excitability’ (Iemi et al., 2017; Jensen and Mazaheri, 2010). We explored this possibility using a theoretical response gain model formulated by Rajagovindan and Ding (2011). This model postulates that the relationship between the total synaptic input that a neuronal ensemble receives and the total output activity it produces is characterized by a sigmoidal function (Figure 6A) – a notion that is biologically plausible (Destexhe et al., 2001; Freeman, 1979). In this model, the total synaptic input into visual cortex consists of two components: (1) sensory input (i.e. due to sensory stimulation) and (2) ongoing fluctuations in endogenously generated (i.e. not sensory-related) neural activity. In our experiment, the sensory input into visual cortex can be assumed to be identical across trials, because the same sensory stimulus was presented in each trial (see Figure 2A). The endogenous input, in contrast, is thought to vary from trial to trial, reflecting fluctuations in top-down cognitive processes such as attention. These fluctuations are assumed to be reflected in the strength of alpha power suppression, such that weaker alpha is associated with increased attention (Figure 6B). Given the combined constant sensory and variable endogenous input in each trial (see horizontal axis in Figure 6A), the strength of the output responses of visual cortex is largely determined by the trial-to-trial variations in alpha power (see vertical axis in Figure 6A). Furthermore, the sigmoidal shape of the input-output function results in an effective range in the center of the function’s input side, which yields the strongest stimulus-induced output responses because the sigmoid curve is steepest there. Mathematically, the effect of endogenous input on stimulus-induced output responses (see marked interval in Figure 6A) can be expressed as the first-order derivative or slope of the sigmoid in Figure 6A, which is referred to as the response gain by Rajagovindan and Ding (2011). This derivative is plotted in Figure 6B (blue and red solid lines) across levels of pre-stimulus alpha power, predicting an inverted-U shaped relationship between alpha and response gain in visual cortex.
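
The model described in this paragraph can be written compactly: a sigmoid maps total (sensory plus endogenous) input to output, and its first derivative gives the inverted-U gain profile across alpha levels (Figure 6A,B). The sketch below is an illustrative parameterization under our own assumptions, not a fit to the data.

```python
# Illustrative sketch of a Rajagovindan & Ding (2011)-style gain model:
# sigmoidal input-output function and its first derivative (response gain).
import numpy as np

def cortical_output(total_input, o_max=1.0, slope=1.0, midpoint=0.0):
    """Sigmoidal transformation of total synaptic input into output activity."""
    return o_max / (1.0 + np.exp(-slope * (total_input - midpoint)))

def response_gain(total_input, o_max=1.0, slope=1.0, midpoint=0.0):
    """First derivative of the sigmoid: largest in the center of the input range."""
    o = cortical_output(total_input, o_max, slope, midpoint)
    return slope * o * (1.0 - o / o_max)

endogenous_input = np.linspace(-4, 4, 101)          # weaker alpha ~ stronger input
gain_conservative = response_gain(endogenous_input, o_max=1.0)
gain_liberal = response_gain(endogenous_input, o_max=1.5)   # raised asymptote, as
# invoked for the liberal condition later in the text, steepens the gain curve
```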

Figure 6. Pre-stimulus alpha power is linked to cortical gamma responses.

(A) Theoretical response gain model describing the transformation of stimulus-induced and endogenous input activity (denoted by Sx and SN respectively) to the total output activity (denoted by O(Sx +SN)) in visual cortex by a sigmoidal function. Different operational alpha ranges are associated with input-output functions with different slopes due to corresponding changes in the total output. (B) Alpha-linked output responses (solid lines) are formalized as the first derivative (slope) of the sigmoidal functions (dotted lines), resulting in inverted-U (Gaussian) shaped relationships between alpha and gamma, involving stronger response gain in the liberal than in the conservative condition. (C) Corresponding empirical data showing gamma modulation (same percent signal change units as in Figure 3) as a function of alpha bin. The location on the x-axis of each alpha bin was taken as the median alpha of the trials assigned to each bin and averaged across subjects. (D-F) Model prediction tests. (D) Raw pre-stimulus alpha power for both conditions, averaged across subjects. (E) Post-stimulus gamma power modulation for both conditions averaged across the two middle alpha bins (5 and 6) in panel C. (F) Liberal – conservative difference between the response gain curves shown in panel C, centered on alpha bin. Error bars, within-subject SEM across participants (N = 14).

Figure 6—source data 1. SPSS .sav file containing the data used in panels C, E, and F.
DOI: 10.7554/eLife.37321.014


Figure 6—figure supplement 1. Gain model predictions and corresponding empirical data plotted as a function of pre-stimulus alpha bin number.


(A) Model predictions for both conditions. The gain curve for the liberal condition is steeper than that for the conservative condition. Binning trials based on alpha within each condition directly maps the peaks of the gain curves onto each other. (B) Model prediction for liberal – conservative as a function of alpha bin number. The difference between the gain curves of the two conditions is again an inverted-U shaped function. (C) Corresponding empirical data. The plot is identical to Figure 6C, except that the bin number is plotted instead of the actual alpha power for each condition.

Regarding our experimental conditions, the model not only predicts that the suppression of pre-stimulus alpha observed in the liberal condition reflects a shift in the operational range of alpha (see Figure 5B), but also that it increases the maximum output of visual cortex (a shift from the red to the blue line in Figure 6A). Therefore, the difference between stimulus conditions is not modeled using a single input-output function, but necessitates an additional mechanism that changes the input-output relationship itself. The exact nature of this mechanism is not known (also see Discussion). Rajagovindan and Ding suggest that top-down mechanisms modulate ongoing prestimulus neural activity to increase the slope of the sigmoidal function, but despite the midfrontal theta activity we observed, evidence for this hypothesis is somewhat elusive. We have no means to establish directly whether this relationship exists, and can merely reflect on the fact that this change in the input-output function is necessary to capture condition-specific effects of the input-output relationship, both in the data of Rajagovindan and Ding (2011) and in our own data. Thus, as the operational range of alpha shifts leftwards from conservative to liberal, the upper asymptote in Figure 6A moves upwards such that the total maximum output activity increases. This in turn affects the inverted-U-shaped relationship between alpha and gain in visual cortex (blue line in Figure 6B), leading to a steeper response curve in the liberal condition resembling a Gaussian (bell-shaped) function.

To investigate sensory response gain across different alpha levels in our data, we used the post-stimulus gamma activity (see Figure 3B) as a proxy for alpha-linked output gain in visual cortex (Bastos et al., 2015; Michalareas et al., 2016; Ni et al., 2016; Popov et al., 2017; van Kerkoerle et al., 2014). We exploited the large number of trials per participant per condition (range 543 to 1391 trials) by sorting each participant’s trials into ten equal-sized bins ranging from weak to strong alpha, separately for the two conditions. We then calculated the average gamma power modulation within each alpha bin and finally plotted the participant-averaged gamma across alpha bins for each condition in Figure 6C (see Materials and methods for details). This indeed revealed an inverted-U shaped relationship between alpha and gamma in both conditions, with a steeper curve for the liberal condition.
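
A minimal sketch of this binning procedure, applied per participant and condition, is given below; the variable names are assumptions.

```python
# Sketch: sort single trials by pre-stimulus alpha power into ten equal-sized
# bins and average post-stimulus gamma modulation within each bin.
import numpy as np

def gamma_by_alpha_bin(alpha, gamma, n_bins=10):
    """alpha, gamma: per-trial scalars for one participant and condition.
    Returns (median alpha per bin, mean gamma modulation per bin)."""
    alpha, gamma = np.asarray(alpha), np.asarray(gamma)
    order = np.argsort(alpha)
    bins = np.array_split(order, n_bins)          # equal-sized (within one trial) bins
    alpha_median = np.array([np.median(alpha[b]) for b in bins])
    gamma_mean = np.array([gamma[b].mean() for b in bins])
    return alpha_median, gamma_mean

# e.g. per participant: alpha_med_lib, gamma_lib = gamma_by_alpha_bin(alpha_lib, gamma_lib)
```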

To assess the model’s ability to explain the data, we statistically tested three predictions derived from the model. First, the model predicts overall lower average pre-stimulus alpha power for liberal than for conservative due to the shift in the operational range of alpha. This was confirmed in Figure 6D (p=0.01, permutation test, see also Figure 5). Second, the model predicts a stronger gamma response for liberal than for conservative around the peak of the gain curve (the center of the effective alpha range, see Figure 6B), which we indeed observed (p=0.024, permutation test on the average of the middle two alpha bins) (Figure 6E). Finally, the model predicts that the difference between the gain curves (when they are aligned over their effective ranges on the x-axis using alpha bin number, as shown in Figure 6—figure supplement 1A) also resembles a Gaussian curve (Figure 6—figure supplement 1B). Consistent with this prediction, we observed an interaction effect between condition (liberal, conservative) and bin number (1–10) using a standard Gaussian contrast in a two-way repeated measures ANOVA (F(1,13) = 4.6, p=0.051, partial η² = 0.26). Figure 6F illustrates this finding by showing the difference between the two curves in Figure 6C as a function of alpha bin number (see Figure 6—figure supplement 1C for the curves of both conditions as a function of alpha bin number). Subsequent separate tests for each condition indeed confirmed a significant U-shaped relationship between alpha and gamma in the liberal condition with a large effect size (F(1,13) = 7.7, p=0.016, partial η² = 0.37), but no significant effect in the conservative condition with only a small effect size (F(1,13) = 1.7, p=0.22, partial η² = 0.12), using one-way repeated measures ANOVAs with alpha bin (Gaussian contrast) as the factor of interest.
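
As an approximation of the Gaussian-contrast test reported above, the sketch below scores each participant's alpha-binned gamma curve against zero-mean Gaussian contrast weights and compares conditions with a paired t-test. The contrast's exact center and width, and the use of a t-test rather than the repeated measures ANOVA interaction term, are simplifying assumptions.

```python
# Sketch: Gaussian contrast scores over alpha bins, compared across conditions.
import numpy as np
from scipy.stats import ttest_rel

def gaussian_contrast_scores(gamma_bins, center=5.5, width=2.0):
    """gamma_bins: (n_participants, 10) gamma modulation per alpha bin."""
    bins = np.arange(1, 11)
    weights = np.exp(-0.5 * ((bins - center) / width) ** 2)
    weights -= weights.mean()            # contrast weights sum to zero
    return gamma_bins @ weights          # one contrast score per participant

def condition_by_bin_interaction(gamma_liberal, gamma_conservative):
    """Paired test on the Gaussian contrast scores of the two conditions."""
    return ttest_rel(gaussian_contrast_scores(gamma_liberal),
                     gaussian_contrast_scores(gamma_conservative))
```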

Taken together, these findings suggest that the alpha suppression observed in the liberal compared to the conservative condition boosted stimulus-induced activity, which in turn might have indiscriminately biased sensory evidence accumulation toward the target-present decision boundary. In the next section, we investigate a direct link between drift bias and stimulus-induced activity as measured through gamma.

Visual cortical gamma activity predicts strength of evidence accumulation bias

The findings presented so far suggest that, behaviorally, a liberal decision bias shifts evidence accumulation toward target-present responses (drift bias in the DDM), while neurally it suppresses pre-stimulus alpha and enhances post-stimulus gamma responses. In a final analysis, we asked whether alpha-binned gamma modulation is directly related to a stronger drift bias. To this end, we again applied the drift bias DDM to the behavioral data of each participant, while freeing the drift bias parameter not only for the two conditions, but also for the 10 alpha bins for which we calculated gamma modulation (see Figure 6C). We directly tested the correspondence between DDM drift bias and gamma modulation using repeated measures correlation (Bakdash and Marusich, 2017), which takes all repeated observations across participants into account while controlling for non-independence of observations collected within each participant (see Materials and methods for details). Gamma modulation was indeed correlated with drift bias in both conditions (liberal, r(125) = 0.49, p=5e-09; conservative, r(125) = 0.38, p=9e-06) (Figure 7). We tested the robustness of these correlations by excluding the data points that contributed most to the correlations (as determined with Cook’s distance) and obtained qualitatively similar results, indicating that these correlations were not driven by outliers (Figure 7, see Materials and methods for details). To rule out that starting point could explain this correlation, we repeated this analysis while controlling for the starting point of evidence accumulation estimated per alpha bin within the starting point model. To this end, we regressed both bias parameters on gamma. Crucially, we found that in both conditions starting point bias did not uniquely predict gamma when controlling for drift bias (liberal: F(1,124) = 5.8, p=0.017 for drift bias, F(1,124) = 0.3, p=0.61 for starting point; conservative: F(1,124) = 8.7, p=0.004 for drift bias, F(1,124) = 0.4, p=0.53 for starting point). This finding suggests that the drift bias model outperforms the starting point model in accounting for gamma power. As a final control, we also performed this analysis for the SSVEP (23–27 Hz) power modulation (see Figure 3A, bottom) and found a similar inverted-U shaped relationship between alpha and the SSVEP for both conditions (Figure 7—figure supplement 1A), but no correlation with drift bias (liberal, r(125) = 0.11, p=0.72; conservative, r(125) = 0.22, p=0.47) (Figure 7—figure supplement 1B) or with starting point (liberal, r(125) = 0.08, p=0.02; conservative, r(125) = 0.22, p=0.95). This suggests that the SSVEP is coupled to alpha in a similar way as the stimulus-induced gamma, but is less affected by the experimental conditions and not predictive of decision bias shifts. Taken together, these results suggest that alpha-binned gamma modulation underlies biased sensory evidence accumulation.
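
The repeated measures correlation can be computed with the pingouin implementation of the Bakdash and Marusich (2017) method; the long-format table and column names below are assumptions about how the per-bin estimates would be organized, not the authors' data structure.

```python
# Sketch: repeated measures correlation between alpha-binned gamma modulation
# and drift bias, per condition. One row per participant x alpha bin is assumed.
import pandas as pd
import pingouin as pg   # provides rm_corr (Bakdash & Marusich, 2017)

# df columns (assumed): 'participant', 'alpha_bin', 'gamma_mod', 'drift_bias', 'condition'
def rm_corr_per_condition(df: pd.DataFrame) -> dict:
    results = {}
    for condition, subset in df.groupby('condition'):
        results[condition] = pg.rm_corr(data=subset, x='gamma_mod',
                                        y='drift_bias', subject='participant')
    return results

# e.g. results = rm_corr_per_condition(df); results['liberal'][['r', 'dof', 'pval']]
```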

Figure 7. Alpha-binned gamma modulation correlates with evidence accumulation bias.

Repeated measures correlation between gamma modulation and drift bias for the two conditions. Each circle represents a participant’s gamma modulation within one alpha bin. Drift bias and gamma modulation scalars were residualized by removing the average within each participant and condition, thereby removing the specific range in which each participant’s values operated. Crosses indicate data points that were most influential for the correlation, identified using Cook’s distance. Correlations remained qualitatively unchanged when these data points were excluded (liberal, r(120) = 0.46, p=8e-07; conservative, r(121) = 0.27, p=0.0009). Error bars, 95% confidence intervals after averaging across participants.

Figure 7—source data 1. MATLAB .mat file containing the data used.
DOI: 10.7554/eLife.37321.017


Figure 7—figure supplement 1. Alpha-binned post-stimulus SSVEP modulation.


(A) Inverted-U shaped relationship between alpha and SSVEP modulation, computed as the percent signal change of the 23–27 Hz power modulation with respect to the pre-stimulus baseline. (B) Correlations between SSVEP modulation and drift bias for both conditions. These non-significant correlations are overall weaker than those for gamma (see Figure 7).

Finally, we asked to what extent the enhanced tonic midfrontal theta may have mediated the relationship between alpha-binned gamma and drift bias. To answer this question, we entered drift bias in a two-way repeated measures ANOVA with factors theta and gamma power (all variables alpha-binned), but found no evidence for mediation of the gamma-drift bias relationship by midfrontal theta (liberal, F(1,13) = 1.3, p=0.25; conservative, F(1,13) = 0.003, p=0.95). At the same time, the gamma-drift bias relationship was qualitatively unchanged when controlling for theta (liberal, F(1,13) = 48.4, p<0.001; conservative, F(1,13) = 19.3, p<0.001). Thus, the enhanced midfrontal theta in the liberal condition plausibly reflects a top-down, attention-related signal indicating the need for cognitive control to avoid missing targets, but its amplitude seemed not directly linked to enhanced sensory evidence accumulation, as found for gamma. This latter finding suggests that the enhanced theta in the liberal condition served as an alarm signal indicating the need for a shift in response strategy, without specifying exactly how this shift was to be implemented (Cavanagh and Frank, 2014).

Discussion

Traditionally, decision bias has been conceptualized in SDT as a criterion threshold that is positioned at an arbitrary location between noise and signal-embedded-in-noise distributions of sensory evidence strengths. The ability to strategically shift decision bias in order to flexibly adapt to stimulus-response reward contingencies in the environment presumably increases chances of survival, but to date such strategic bias shifts as well as their neural underpinnings have not been demonstrated. Here, we compared two versions of the drift diffusion model to show that an experimentally induced bias shift affects the process of sensory evidence accumulation itself, rather than shifting a threshold entity as SDT implies. Moreover, we reveal the neural signature of drift bias by showing that an experimentally induced liberal decision bias is accompanied by changes in midfrontal theta and posterior alpha suppression, resulting in enhanced gamma activity through increased response gain.

Although previous studies have shown correlations between suppression of pre-stimulus alpha (8—12 Hz) power and a liberal decision bias during spontaneous fluctuations in alpha activity (Iemi et al., 2017; Limbach and Corballis, 2016), these studies have not established the effect of experimentally induced (within-subject) bias shifts. In the current study, by experimentally manipulating stimulus-response reward contingencies we show for the first time that pre-stimulus alpha can be actively modulated by a participant to achieve changes in decision bias. Further, we show that alpha suppression in turn modulates gamma activity, in part by increasing the gain of cortical responses. Critically, gamma activity accurately predicts the strength of the drift bias parameter in the DDM drift bias model, thereby providing a direct link between our behavioral and neural findings. Together, these findings show that humans are able to actively implement decision biases by flexibly adapting neural excitability to strategically shift sensory evidence accumulation toward one of two decision bounds.

Based on our results, we propose that decision biases are implemented by flexibly adjusting neural excitability in visual cortex. Figure 8 summarizes this proposed mechanism graphically by visualizing a hypothetical transition in neural excitability following a strategic liberal bias shift, as reflected in visual cortical alpha suppression (left panel). This increased excitability translates into stronger gamma-band responses following stimulus onset (right panel, top). These increased gamma responses in turn bias evidence accumulation toward the target-present decision boundary during a liberal state, resulting in more target-present responses, whereas target-absent responses become less frequent (blue RT distributions; right panel, bottom). Our experimental manipulation of decision bias in different blocks of trials suggests that decision makers are able to control this biased evidence accumulation mechanism at will by adjusting alpha in visual cortex.

Figure 8. Illustrative graphical depiction of the excitability state transition from conservative to liberal, and subsequent biased evidence accumulation under a liberal bias.

The left panel shows the transition from a conservative to a liberal condition block. The experimental induction of a liberal decision bias causes alpha suppression in visual cortex, which increases neural excitability. The right top panel shows increased gamma gain for incoming sensory evidence under conditions of high excitability. The right bottom panel shows how increased gamma-gain causes a bias in the drift rate, resulting in more ‘target present’ responses than in the conservative state.

A neural mechanism that could underlie bias-related alpha suppression may be under the control of the catecholaminergic neuromodulatory systems, consisting of the noradrenaline-releasing locus coeruleus (LC) and dopamine systems (Aston-Jones and Cohen, 2005). These systems are able to modulate the level of arousal and neural gain, and show tight links with pupil responses (de Gee et al., 2017; de Gee et al., 2014; Joshi et al., 2016; Kloosterman et al., 2015b; McGinley et al., 2015). Accordingly, pre-stimulus alpha power suppression has also recently been linked to pupil dilation (Meindertsma et al., 2017). From this perspective, our results may help to reconcile previous studies showing relationships between a liberal bias, suppression of spontaneous alpha power, and increased pupil size. Consistent with this, a recent monkey study observed increased neural activity during a liberal bias in the superior colliculus (Crapse et al., 2018), a mid-brain structure tightly interconnected with the LC (Joshi et al., 2016). Taken together, a more liberal within-subject bias shift (following experimental instruction and/or reward) might activate neuromodulatory systems that subsequently increase cortical excitability and enhance sensory responses to both stimulus and ‘noise’ signals in visual cortex, thereby increasing a person’s propensity for target-present responses (Iemi et al., 2017).

We note that although the gain model is consistent with our data as well as with the data on which the model was originally conceived (see Rajagovindan and Ding, 2011), we do not provide a plausible mechanism that could bring about the steepening of the inverted-U-shaped function observed in Figure 6C–F. Although Rajagovindan and Ding report a simulation suggesting that increased excitability could indeed cause increased gain, this steepening could in principle be caused by the alpha suppression itself, by the same signal that causes alpha suppression, or by an additional top-down signal from frontal brain regions. Our analysis of pre-stimulus signals indeed provides preliminary evidence for such a top-down signal, but how exactly the gain enhancement arises remains an open question that could be addressed in future research.

Whereas we report a unique link between alpha-linked gamma modulation and decision bias through the gain model, several previous studies have instead reported a link between alpha and objective performance, particularly for the phase of alpha oscillations (Busch et al., 2009; Mathewson et al., 2009). Our findings can be reconciled with those reports by considering that detection sensitivity in many previous studies was quantified in terms of raw stimulus detection rates, which do not dissociate objective sensitivity from response bias (see Figure 2B) (Green and Swets, 1966). Indeed, our findings are in line with recently reported links between decision bias and spontaneous fluctuations in excitability (Iemi et al., 2017; Iemi and Busch, 2018; Limbach and Corballis, 2016), suggesting an active role of neural excitability in decision bias. Relatedly, one could ask whether the observed change in cortical excitability reflects a change in target detection sensitivity (drift rate) rather than an intentional bias shift. This is unlikely because that account would predict effects opposite to those we observed. We found increased excitability in the liberal compared to the conservative condition; if this were related to improved detection performance, one would predict higher sensitivity in the liberal condition, whereas we found higher sensitivity in the conservative condition (compare drift rate to drift bias in both conditions in Figure 2C). This finding ties cortical excitability in our paradigm to decision bias, as opposed to detection sensitivity. Convergently, other studies also report a link between pre-stimulus low-frequency EEG activity and subjective perception, but not objective task performance (Benwell et al., 2017; Iemi and Busch, 2018).

In summary, our results suggest that stimulus-induced responses are boosted during a liberal decision bias due to increased cortical excitability, in line with recent work linking alpha power suppression to response gain (Peterson and Voytek, 2017). Future studies can now establish whether this same mechanism is at play in other subjective aspects of decision-making, such as confidence and metacognition (Fleming et al., 2018; Samaha et al., 2017), as well as in dynamically changing environments (Norton et al., 2017). Explicitly manipulating cortical response gain during a bias manipulation, either pharmacologically via the noradrenergic LC-NE system (Servan-Schreiber et al., 1990) or by enhancing occipital alpha power using transcranial brain stimulation (Zaehle et al., 2010), could further establish the neural mechanisms underlying decision bias.

In the end, every decision we make, whether we are aware of it or not, is influenced by biases that operate on our noisy evidence accumulation process. Understanding how these biases affect our decisions is crucial if we want to control or invoke them adaptively (Pleskac et al., 2017). Pinpointing the neural mechanisms underlying bias in the current elementary perceptual task may foster future understanding of how more abstract and high-level decisions are modulated by decision bias (Tversky and Kahneman, 1974).

Data and code sharing

The data analyzed in this study are publicly available on Figshare (Kloosterman et al., 2018). Analysis scripts are publicly available on Github (Kloosterman, 2018; copy archived at https://github.com/elifesciences-publications/critEEG).

Materials and methods

Key resources table.

Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information
Biological sample (Humans) | Participants | This paper | — | See Participants section in Materials and methods
Software, algorithm | MATLAB | Mathworks | MATLAB_R2016b; RRID:SCR_001622 | —
Software, algorithm | Presentation | NeuroBS | Presentation_v9.9; RRID:SCR_002521 | —
Software, algorithm | Custom analysis code | Kloosterman, 2018 | https://github.com/nkloost1/critEEG | —
Other | EEG data and experimental task | Kloosterman et al., 2018 | https://doi.org/10.6084/m9.figshare.6142940 | —

Participants

Sixteen participants (eight females, mean age 24.1 ± 1.64 years) took part in the experiment, either for financial compensation (EUR 10 per hour) or in partial fulfillment of first-year psychology course requirements. Each participant completed three experimental sessions on different days, each session lasting ca. 2 hr including preparation and breaks. One participant completed only two sessions, yielding a total of 47 sessions across subjects. Due to technical issues, for one session only data for the liberal condition were available. One participant was an author. All participants had normal or corrected-to-normal vision and were right-handed. Participants provided written informed consent before the start of the experiment. All procedures were approved by the ethics committee of the University of Amsterdam.

Regarding sample size, our experiment consisted of 16 biological replications (participants) and either three (fifteen participants) or two (one participant) technical replications (i.e. experimental sessions). The sample size was determined based on two criteria: 1) obtaining large amounts of data per participant (thousands of trials), which is necessary to perform robust drift diffusion modelling of choice behavior and obtain reliable EEG spectral power estimates for each of the ten bins of trials that were created within participants, and 2) obtaining data from a sufficient number of participants to leverage across-subject variability in correlational analyses. Thus, we emphasized obtaining many data points per participant relative to obtaining many participants, while still preserving the ability to perform correlations across participants.

All participants were included in the signal-detection-theoretical and drift diffusion modeling analyses. One participant was excluded from the EEG analysis due to excessive noise (EEG power spectrum opposite of 1/frequency). One further participant was excluded from the analyses that included condition-specific gamma because the liberal–conservative difference in gamma in this participant was >3 standard deviations away from the other participants.

Stimuli

Stimuli consisted of a continuous semi-random rapid serial visual presentation (RSVP) of full-screen texture patterns. The texture patterns consisted of line elements approx. 0.07° thick and 0.4° long in visual angle. Each texture in the RSVP was presented for 40 ms (i.e. stimulation frequency 25 Hz), and was oriented in one of four possible directions: 0°, 45°, 90° or 135°. Participants were instructed to fixate on a red dot in the center of the screen. At random inter-trial intervals (ITIs) sampled from a uniform distribution (ITI range 0.3–2.2 s), the RSVP contained a fixed sequence of 25 texture patterns, which in total lasted one second. This fixed sequence consisted of four stimuli preceding a (non-)target stimulus (orientations of 45°, 90°, 0°, 90°, respectively) and twenty stimuli following the (non-)target (orientations of 0°, 90°, 0°, 90°, 0°, 45°, 0°, 135°, 90°, 45°, 0°, 135°, 0°, 45°, 90°, 45°, 90°, 135°, 0°, 135°, respectively) (see Figure 2A). The fifth texture pattern within the sequence (occurring from 0.16 s after sequence onset) was either a target or a nontarget stimulus. Nontargets consisted of either a 45° or a 135° homogenous texture, whereas targets contained a central orientation-defined square of 2.42° visual angle, thereby consisting of both a 45° and a 135° texture. Half of all targets consisted of a 45° square and half of a 135° square. Of all trials, 75% contained a target and 25% a nontarget. Target and nontarget trials were presented in random order. To avoid specific influences on target stimulus visibility due to presentation of similarly or orthogonally oriented texture patterns temporally close in the cascade, no 45° or 135° oriented stimuli were presented directly before or after presentation of the target stimulus. All stimuli were isoluminant at 72.2 cd/m². Stimuli were created using MATLAB (The Mathworks, Inc, Natick, MA, USA; RRID:SCR_001622) and presented using Presentation version 9.9 (Neurobehavioral Systems, Inc, Albany, CA, USA; RRID:SCR_002521).

Experimental design

The participants’ task was to detect and actively report targets by pressing a button with their right hand. Targets occasionally went unreported, presumably due to constant forward and backward masking by the continuous cascade of stimuli and the unpredictability of target timing (Fahrenfort et al., 2007). The onset of the fixed order of texture patterns preceding and following (non-)target stimuli was neither signaled nor apparent.

At the beginning of the experiment, participants were informed that they could earn a total bonus of EUR 30, on top of their regular pay of EUR 10 per hour or course credit. In two separate conditions within each session of testing, we encouraged participants to use either a conservative or a liberal bias for reporting targets, using aversive sounds as well as bonus reductions after errors. In the conservative condition, participants were instructed to only press the button when they were relatively sure they had seen the target. The instruction on screen before block onset read as follows: ‘Try to detect as many targets as possible. Only press when you are relatively sure you just saw a target.’ To maximize the effectiveness of this instruction, participants were told the bonus would be diminished by 10 cents after a false alarm. During the experiment, a loud aversive sound was played after a false alarm to inform the participant about the error. During the liberal condition, participants were instructed to miss as few targets as possible. The instruction on screen before block onset read as follows: ‘Try to detect as many targets as possible. If you sometimes press when there was nothing this is not so bad’. In this condition, the loud aversive sound was played twice in close succession whenever they failed to report a target, and three cents were subsequently deducted from their bonus. The difference in auditory feedback between conditions was included to inform the participant about the type of error (miss or false alarm), in order to facilitate the desired bias in both conditions. After every block, the participant’s score (number of missed targets in the liberal condition and number of false alarms in the conservative condition) was displayed on the screen, as well as the remainder of the bonus. After completing the last session of the experiment, every participant was paid the full bonus, as required by the ethics committee.

Participants performed six blocks per session, each lasting ca. nine minutes. During a block, participants continuously monitored the screen and were free to respond by button press whenever they thought they saw a target. Each block contained 240 trials, of which 180 were target trials and 60 nontarget trials. The task instruction was presented on the screen before the block started. The condition of the first block of a session was counterbalanced across participants. Prior to EEG recording in the first session, participants performed a 10-min practice run of both conditions, in which visual feedback directly after a miss (liberal condition) or false alarm (conservative condition) informed participants about their mistake, allowing them to adjust their decision bias accordingly. There were short breaks between blocks, in which participants indicated when they were ready to begin the next block.

Behavioral analysis

We calculated each participant’s criterion c (Green and Swets, 1966) across the trials in each condition as follows:

$c = -\frac{1}{2}\left[Z(\text{hit rate}) + Z(\text{FA rate})\right]$

where hit rate is the proportion of target-present responses across all target-present trials, false alarm (FA) rate is the proportion of target-present responses across all target-absent trials, and Z(·) is the inverse of the standard normal cumulative distribution function. Furthermore, we calculated the objective sensitivity measure d' using:

$d' = Z(\text{hit rate}) - Z(\text{FA rate})$

as well as a simpler sensitivity measure obtained by subtracting the false alarm rate from the hit rate. Reaction times (RTs) were measured as the duration between target onset and button press.
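
For concreteness, both measures can be computed directly from hit and false-alarm counts. The following is a minimal Python sketch (not the original MATLAB analysis code); the function name and the example counts are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def sdt_measures(n_hits, n_targets, n_fas, n_nontargets):
    """Compute SDT criterion c and sensitivity d' from trial counts.
    Rates of exactly 0 or 1 would need a standard correction (e.g. 1/(2N))
    before the Z-transform; omitted here for brevity."""
    hit_rate = n_hits / n_targets
    fa_rate = n_fas / n_nontargets
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)  # inverse standard normal
    c = -0.5 * (z_hit + z_fa)        # criterion: more negative = more liberal
    d_prime = z_hit - z_fa           # sensitivity
    return c, d_prime

# Illustrative counts (not actual study data)
print(sdt_measures(n_hits=150, n_targets=180, n_fas=20, n_nontargets=60))
```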

Drift diffusion modeling of choice behavior

In order to be detected, the 40 ms-duration figure-ground targets used in our study must undergo a process in visual cortex called figure-ground segregation. This process has been well characterized in humans and monkeys (Fahrenfort et al., 2008; Lamme, 1995; Lamme et al., 2002; Supèr et al., 2003), and results from recurrent processing that extracts the figure surface region in visual cortex. Figure-ground segregation is known to extend far beyond the mere presentation time of the stimulus, thus providing a plausible neural basis for the evidence accumulation process. Further, a central assumption of the drift diffusion model is that evidence accumulation is gradual, even when the sensory input itself is only momentary. Indeed, the DDM was initially developed to explain reaction time distributions during memory retrieval, in which evidence accumulation must occur through retrieval of a memory trace within the brain, in the complete absence of an external stimulus at the time of the decision (Ratcliff, 1978). Our observed RT distributions show the typical features that occur across many different types of decision and memory tasks, and which the DDM is well able to capture, including a sharp leading edge and a long tail (see Figure 2—figure supplement 2). The success of the DDM in fitting these data is consistent with previous work (e.g. Ratcliff, 2006) and might reflect the fact that observers modulate the underlying components of the decision process even when they do not control the stimulus duration (Kiani et al., 2008).

We fitted the drift diffusion model to the behavioral data of each participant individually, and separately for the liberal and conservative conditions. We fitted the model using a G-square method based on quantile RTs (RT cutoff, 200 ms; for details, see Ratcliff et al., 2018), using custom code (de Gee et al., 2018) that was contributed to the HDDM 0.6.1 package (Wiecki et al., 2013). The RT distributions for target-present responses were represented by the 0.1, 0.3, 0.5, 0.7 and 0.9 quantiles, and, along with the associated response proportions, contributed to G-square; in addition, a single bin containing the number of target-absent responses contributed to G-square. Each model fit was run six times, after which the best-fitting run was kept. Fitting the model to RT distributions for target-present and target-absent choices (termed ‘stimulus coding’ in Wiecki et al., 2013), as opposed to the more common fits of correct and incorrect choice RTs (termed ‘accuracy coding’ in Wiecki et al., 2013), allowed us to estimate parameters that could have induced biases in participants’ behavior.
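
The actual fits used the custom code contributed to HDDM cited above; the standalone sketch below only illustrates how a quantile-based G-square statistic is assembled from observed counts and model-predicted probabilities. The function names and the handling of empty bins are our own assumptions, not the published implementation.

```python
import numpy as np

def g_square(observed_counts, predicted_props, n_trials):
    """Likelihood-ratio (G-square) statistic: 2 * sum(obs * ln(obs / expected)).
    observed_counts: counts per bin (RT-quantile bins for 'yes' responses
    plus one bin for the implicit 'no' responses).
    predicted_props: model-predicted probability mass per bin (sums to 1)."""
    obs = np.asarray(observed_counts, dtype=float)
    exp = np.asarray(predicted_props, dtype=float) * n_trials
    mask = obs > 0                        # empty bins contribute zero
    return 2.0 * np.sum(obs[mask] * np.log(obs[mask] / exp[mask]))

def quantile_bin_counts(rts, quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Counts of 'yes'-response RTs falling between successive RT quantiles;
    the model supplies the corresponding predicted probability mass per bin."""
    edges = np.quantile(rts, quantiles)
    counts, _ = np.histogram(rts, bins=np.concatenate(([0.0], edges, [np.inf])))
    return counts
```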

Parameter recovery simulations showed that letting both the starting point of the accumulation process and the drift bias (an evidence-independent constant added to the drift toward one or the other bound) vary freely with experimental condition is problematic for data with no explicit target-absent responses (data not shown). Thus, to test whether shifts in drift bias or in starting point underlie decision bias, we fitted three separate models. In the first model (‘fixed model’), we allowed only the following parameters to vary between the liberal and conservative conditions: (i) the mean drift rate across trials; (ii) the separation between both decision bounds (i.e. response caution); and (iii) the non-decision time (the sum of the latencies for sensory encoding and motor execution of the choice). The bias parameters, starting point and drift bias, were fixed across the experimental conditions. The second model (‘starting point model’) was the same as the fixed model, except that we let the starting point of the accumulation process vary with experimental condition, whereas the drift bias was kept fixed across conditions. The third model (‘drift bias model’) was the same as the fixed model, except that we let the drift bias vary with experimental condition, while the starting point was kept fixed across conditions. We used the Bayesian Information Criterion (BIC) to select the model that provided the best fit to the data (Neath and Cavanaugh, 2012). The BIC compares models based on their maximized log-likelihood value, while penalizing the number of free parameters.
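
As an illustration of this selection step, BIC can be computed from each model's maximized log-likelihood and number of free parameters. The values below are made up for illustration and are not the actual fits or parameter counts.

```python
import numpy as np

def bic(log_likelihood, n_params, n_observations):
    """Bayesian Information Criterion: lower values indicate a better model
    after penalizing the number of free parameters."""
    return n_params * np.log(n_observations) - 2.0 * log_likelihood

# Hypothetical per-participant values (illustration only)
models = {
    "fixed":          {"ll": -1205.3, "k": 6},
    "starting point": {"ll": -1180.1, "k": 7},
    "drift bias":     {"ll": -1168.7, "k": 7},
}
n_obs = 4320  # e.g. number of trials entering the fit (hypothetical)
scores = {name: bic(m["ll"], m["k"], n_obs) for name, m in models.items()}
print(scores, "-> best model:", min(scores, key=scores.get))
```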

Distinguishing DDM drift bias and drift rate

In our task, only target-present responses were coupled to a behavioral response (button press), so we could measure reaction times only for these responses, whereas reaction times for target-absent responses remained implicit. Thus, in our fitting procedure, the RT distributions for target-present responses were represented by the 0.1, 0.3, 0.5, 0.7 and 0.9 quantiles, and, along with the associated response proportions, contributed to G-square; in addition, a single bin containing the number of target-absent responses contributed to G-square. It has been shown that such a diffusion model with an implicit (no-response) boundary can be fit to data with almost the same accuracy as fitting the two-choice model to two-choice data (Ratcliff et al., 2018). In a diffusion model with an implicit (no-response) boundary, both an increase in drift rate and an increase in drift bias would predict faster target-present responses. However, the key distinction is that an increase in drift rate additionally predicts more correct responses (for both target-present and target-absent trials), whereas an increase in drift bias shifts the relative fraction of target-present and target-absent responses (decision bias). Because a single bin containing the number of target-absent responses contributed to G-square, our fitting procedure can distinguish between decision bias and drift rate.
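
To make this distinction concrete, the toy simulation below uses a simple two-boundary diffusion process (not the implicit-boundary model used for the actual fits) to show that an evidence-independent drift bias raises the proportion of ‘yes’ choices on both target-present and target-absent trials, i.e. a bias shift rather than a sensitivity change. All parameter values are arbitrary illustrations.

```python
import numpy as np

def simulate_ddm(drift, drift_bias=0.0, bound=1.0, start_frac=0.5, dt=0.002,
                 noise=1.0, n_trials=2000, t_max=2.0, seed=1):
    """Toy Euler simulation of a two-boundary diffusion process.
    Returns the proportion of upper-boundary ('target present') choices."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, start_frac * bound)
    choice = np.zeros(n_trials)          # 0 = undecided, +1 upper, -1 lower
    for _ in range(int(t_max / dt)):
        active = choice == 0
        if not active.any():
            break
        x[active] += (drift + drift_bias) * dt + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        choice[active & (x >= bound)] = 1
        choice[active & (x <= 0)] = -1
    return np.mean(choice == 1)

# Drift bias increases 'yes' choices both when evidence is present (drift > 0)
# and when it is absent (drift = 0): a shift in bias, not in sensitivity.
for label, bias in [("no drift bias", 0.0), ("positive drift bias", 0.5)]:
    print(label,
          "p(yes | target):", round(simulate_ddm(drift=1.0, drift_bias=bias), 2),
          "p(yes | no target):", round(simulate_ddm(drift=0.0, drift_bias=bias), 2))
```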

EEG recording

Continuous EEG data were recorded at 256 Hz using a 48-channel BioSemi Active-Two system (BioSemi, Amsterdam, the Netherlands), connected to a standard EEG cap according to the international 10–20 system. Electrooculography (EOG) was recorded using two electrodes at the outer canthi of the left and right eyes (horizontal eye movements) and two electrodes placed above and below the right eye (vertical eye movements and blinks); the two horizontal and the two vertical EOG electrodes were each referenced against each other. We used the FieldTrip toolbox (Oostenveld et al., 2011) and custom software (Kloosterman et al., 2018) in MATLAB R2016b (The Mathworks Inc, Natick, MA, USA; RRID:SCR_001622) to process the data (see below). Data were re-referenced to the average voltage of two electrodes attached to the earlobes.

Trial extraction and preprocessing

We extracted trials of variable duration, from 1 s before target sequence onset until 1.25 s after the button press for trials that included a button press (hits and false alarms), and until 1.25 s after stimulus onset for trials without a button press (misses and correct rejections). The following constraints were used to classify (non-)targets as detected (hits and false alarms), while avoiding button presses in close succession to target reports and button presses occurring outside of trials: 1) a trial was marked as detected if a response occurred within 0.84 s after target onset; 2) when the onset of the next target stimulus sequence started before trial end, the trial was terminated at the next trial’s onset; 3) when a button press occurred in the 1.5 s before trial onset, the trial was extracted starting from 1.5 s after this button press; 4) when a button press occurred between 0.5 s before and 0.2 s after sequence onset, the trial was discarded. See Kloosterman et al., 2015a and Meindertsma et al. (2017) for similar trial extraction procedures. After trial extraction, channel time courses were linearly detrended and the mean of every channel was removed per trial.
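
A simplified sketch of the detection rule (constraint 1) is given below; the handling of overlapping events (constraints 2–4) is omitted, and the function and variable names are hypothetical rather than taken from the analysis code.

```python
import numpy as np

def classify_trials(target_onsets, is_target, button_presses, max_rt=0.84):
    """Label each trial as hit / miss / false alarm / correct rejection based on
    whether a button press falls within max_rt seconds of (non-)target onset."""
    presses = np.asarray(button_presses, dtype=float)
    labels = []
    for onset, target in zip(target_onsets, is_target):
        responded = np.any((presses > onset) & (presses <= onset + max_rt))
        if target:
            labels.append("hit" if responded else "miss")
        else:
            labels.append("false alarm" if responded else "correct rejection")
    return labels
```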

Artifact rejection 

Trials containing muscle artifacts were rejected from further analysis using a standard semi-automatic preprocessing method in FieldTrip. This procedure consists of bandpass-filtering the trials of a condition block in the 110–125 Hz frequency range, which typically contains most of the muscle artifact activity, followed by a Z-transformation. Trials exceeding a threshold Z-score were removed completely from analysis. As the threshold we used the absolute value of the minimum Z-score within the block, plus one. To remove eye blink artifacts from the time courses, the EEG data from a complete session were transformed using independent component analysis (ICA), and components due to blinks (typically one or two) were removed from the data. In addition, to remove microsaccade-related artifacts we included two virtual channels in the ICA based on channels Fp1 and Fp2, which contained transient spike potentials as identified using the saccadic artifact detection algorithm from Hassler et al. (2011). This yielded a total of 48 + 2 = 50 channels submitted to the ICA. The two components loading highly on these virtual electrodes (typically with a frontal topography) were also removed. Blinks and eye movements were then semi-automatically detected from the horizontal and vertical EOG (frequency range 1–15 Hz; Z-value cutoff: 4 for vertical, 6 for horizontal) and trials containing eye artifacts within 0.1 s around target onset were discarded. This step was done to remove trials in which the target was not seen because the eyes were closed. Finally, trials exceeding a threshold voltage range of 200 μV were discarded. To attenuate volume conduction effects and suppress any remaining microsaccade-related activity, the scalp current density (SCD) was computed using the second-order spatial derivative (the surface Laplacian) of the EEG potential distribution (Perrin et al., 1989).
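
The FieldTrip routine referred to above is semi-automatic; purely as an illustration of the thresholding logic, the sketch below flags high-frequency artifact trials using a simplified per-trial, channel-averaged amplitude rather than FieldTrip's exact Z-transformation. Function and variable names are our own assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def muscle_artifact_trials(trials, fs=256.0, band=(110.0, 125.0)):
    """Flag trials with high-frequency (muscle) artifacts.
    trials: array of shape (n_trials, n_channels, n_samples).
    Returns a boolean mask of trials to reject."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, trials, axis=-1)
    # simplified envelope: mean absolute high-frequency amplitude per trial
    amp = np.abs(filtered).mean(axis=(1, 2))
    z = (amp - amp.mean()) / amp.std()
    threshold = np.abs(z.min()) + 1.0    # threshold used in the text: |min Z| + 1
    return z > threshold
```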

ERP analysis 

We computed event-related potentials in electrode C4 by low-pass filtering the time-domain data up to 8 Hz followed by averaging all trials within participant per condition.

Spectral analysis

We used a sliding window Fourier transform (Mitra and Pesaran, 1999; step size, 50 ms; window size, 400 ms; frequency resolution, 2.5 Hz) to calculate time-frequency representations (spectrograms) of the EEG power for each electrode and each trial. We used a single Hann taper for the frequency range of 3–35 Hz (spectral smoothing, 4.5 Hz; bin size, 1 Hz) and the multitaper technique for the 36–100 Hz frequency range (spectral smoothing, 8 Hz; bin size, 2 Hz; five tapers). See Kloosterman et al., 2015a and Meindertsma et al. (2017) for similar settings. Finally, to also investigate spectral power below 3 Hz, we ran an additional time-frequency analysis with a window size of 1 s (i.e. frequency resolution 1 Hz) centered on the time point 0.5 s before trial onset (frequency range 1–35 Hz; no spectral smoothing; bin size, 0.5 Hz).
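
As a rough illustration of the single-taper part of this analysis (not the FieldTrip multitaper implementation actually used), a Hann-tapered sliding-window power estimate could look as follows; the window and step sizes follow the values above, everything else is an assumption.

```python
import numpy as np
from scipy.signal import get_window

def sliding_window_power(x, fs=256.0, win_len=0.4, step=0.05):
    """Single-taper (Hann) sliding-window power estimate for one channel.
    Returns (window-center times, frequencies, power) with power in units^2.
    A multitaper version (e.g. scipy.signal.windows.dpss) would be used >35 Hz."""
    n_win = int(round(win_len * fs))
    n_step = int(round(step * fs))
    taper = get_window("hann", n_win)
    starts = np.arange(0, len(x) - n_win + 1, n_step)
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    power = np.empty((len(starts), len(freqs)))
    for i, s in enumerate(starts):
        seg = (x[s:s + n_win] - x[s:s + n_win].mean()) * taper  # demean and taper
        power[i] = np.abs(np.fft.rfft(seg)) ** 2
    times = (starts + n_win / 2) / fs
    return times, freqs, power
```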

Spectrograms were aligned to the onset of the stimulus sequence containing the (non-)target and, in a separate analysis, to the button press. Power modulations during the trials were quantified as the percentage of power change at a given time point and frequency bin, relative to a baseline power value for each frequency bin (Figure 3). We used as baseline the mean EEG power in the interval 0.4 to 0 s before trial onset, computed separately for each condition. If this interval was not completely present in the trial due to preceding events (see Trial extraction), this period was shortened accordingly. We normalized the data by subtracting the baseline from each time-frequency bin and dividing this difference by the baseline (× 100%). For the analysis of raw pre-stimulus power, no baseline correction was applied to the raw scalp current density values. We focused our analysis of EEG power modulations around target onset on those electrodes that processed the visual stimulus. To this end, we averaged the power modulations or raw power across the eleven occipito-parietal electrodes that showed stimulus-induced responses in the gamma-band range (59–100 Hz). See Kloosterman et al., 2015a and Meindertsma et al. (2017) for a similar procedure.
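
The baseline normalization itself is straightforward; a minimal sketch, assuming a frequency-by-time power array and a matching vector of time points, is given below (illustrative only).

```python
import numpy as np

def percent_change(power, times, baseline=(-0.4, 0.0)):
    """Express a (n_freqs x n_times) power array as percent signal change
    relative to the mean power in a pre-stimulus baseline window."""
    mask = (times >= baseline[0]) & (times < baseline[1])
    base = power[:, mask].mean(axis=1, keepdims=True)  # one baseline per frequency
    return (power - base) / base * 100.0
```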

Statistical significance testing of EEG power modulations across space, time and frequency

To determine clusters of significant modulation with respect to the pre-stimulus baseline without any a priori selection, we ran statistics across space-time-frequency bins using paired t-tests across subjects performed at each bin. Single bins were subsequently thresholded at p<0.05 and clusters of contiguous space-time-frequency bins were determined. Cluster significance was assessed using a cluster-based permutation procedure (1000 permutations). For visualization purposes, we integrated (using the MATLAB trapz function) power modulation in the time-frequency representations (TFRs; Figure 3, left panels) across the highlighted electrodes in the topographies (Figure 3, right panels). For the topographical scalp maps, modulation was integrated across the saturated time-frequency bins in the TFRs. To test at which frequencies raw pre-stimulus EEG power differed between the liberal and conservative conditions, we performed this analysis across electrodes and frequencies after taking the liberal–conservative difference at each frequency bin (Figure 5A) (see Statistical comparisons).
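
For readers unfamiliar with the approach, the sketch below implements a deliberately simplified one-dimensional version of a paired cluster-based permutation test (clusters over contiguous bins only, condition labels flipped within subjects); the actual analysis clustered over electrodes, time and frequency in FieldTrip.

```python
import numpy as np
from scipy.stats import t as t_dist

def cluster_permutation_1d(data_a, data_b, n_perm=1000, alpha=0.05, seed=0):
    """Simplified paired cluster-based permutation test (subjects x bins).
    Clusters are runs of contiguous bins whose paired t-value exceeds the
    threshold; cluster mass is the summed |t| within a run."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(data_a, float) - np.asarray(data_b, float)
    n_sub = diff.shape[0]
    t_crit = t_dist.ppf(1 - alpha / 2, n_sub - 1)

    def max_cluster_mass(d):
        t_vals = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_sub))
        sig = np.abs(t_vals) > t_crit
        best = cur = 0.0
        for is_sig, t_abs in zip(sig, np.abs(t_vals)):
            cur = cur + t_abs if is_sig else 0.0   # reset at non-significant bins
            best = max(best, cur)
        return best

    observed = max_cluster_mass(diff)
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n_sub, 1))  # swap conditions per subject
        null[i] = max_cluster_mass(diff * flips)
    return observed, np.mean(null >= observed)
```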

Response gain model test

To test the predictions of the gain model, we first averaged activity in the 8–12 Hz range from 0.8 to 0.2 s before trial onset (staying half a window size away from trial onset to avoid mixing pre- and post-stimulus activity; see also Iemi et al., 2017), yielding a single scalar alpha power value per trial. If this interval was not completely present in the trial due to preceding events (see Trial extraction), this period was shortened accordingly. Trials in which the scalar was >3 standard deviations away from the participant’s mean were excluded. We then sorted all single-trial alpha values for each participant and condition in ascending order and assigned them to ten bins of equal size, ranging from weakest to strongest alpha. Adjacent bin ranges overlapped by 50% to stabilize estimates. Then we averaged the corresponding gamma modulation of the trials belonging to each bin (consisting of the average power modulation within 59–100 Hz from 0.2 to 0.6 s after trial onset; see Figure 3). Finally, we plotted the median alpha value per bin, averaged across participants, against the corresponding mean gamma modulation. See Rajagovindan and Ding (2011) for a similar procedure. To statistically test for the existence of an inverted-U-shaped relationship between alpha and gamma, we performed a one-way repeated measures ANOVA on gamma modulation with factor alpha bin (10 bins) for each condition separately, and a two-way repeated measures ANOVA with factors bin and condition for testing the liberal–conservative difference (Figure 6F). Given the model prediction of a Gaussian-shaped relationship between alpha and gamma, we constructed a Gaussian contrast using a Gaussian shape with unit standard deviation (contrast values: −1000, −991, −825, 295, 2521, 2521, 295, −825, −991, −1000; values were chosen to sum to zero). For plotting purposes (Figure 6C–F), we computed within-subject error bars by removing within each participant the mean across conditions from the estimates.
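
A sketch of the overlapping-bin procedure and the Gaussian contrast vector is given below; the bin-width arithmetic is our own reconstruction of the 50% overlap and may differ in detail from the original MATLAB code.

```python
import numpy as np

def overlapping_alpha_bins(alpha, gamma, n_bins=10):
    """Sort trials by single-trial pre-stimulus alpha power and average the
    corresponding gamma modulation in n_bins bins whose ranges overlap by 50%."""
    order = np.argsort(alpha)
    alpha_s, gamma_s = alpha[order], gamma[order]
    n = len(alpha)
    width = int(np.ceil(2 * n / (n_bins + 1)))   # bins of this width, stepped by half
    step = width // 2
    alpha_bins, gamma_bins = [], []
    for b in range(n_bins):
        idx = slice(b * step, min(b * step + width, n))
        alpha_bins.append(np.median(alpha_s[idx]))
        gamma_bins.append(np.mean(gamma_s[idx]))
    return np.array(alpha_bins), np.array(gamma_bins)

# Gaussian-shaped contrast across the 10 bins (tests for an inverted U);
# a contrast score per participant would be gauss_contrast @ gamma_bins,
# tested against zero across participants.
gauss_contrast = np.array([-1000, -991, -825, 295, 2521, 2521, 295, -825, -991, -1000])
assert gauss_contrast.sum() == 0
```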

Correlation between gamma modulation and drift bias

To link DDM drift bias and gamma power modulation, we re-fitted the DDM drift bias model, now allowing the drift bias parameter to vary both across conditions and across the ten alpha bins, while the other parameters (drift rate, boundary separation, non-decision time) varied only across conditions and the starting point was fixed across conditions. We then used repeated measures correlation to test whether stronger gamma was associated with stronger drift bias. Repeated measures correlation determines the common within-individual association for paired measures assessed on two or more occasions for multiple individuals, by controlling for the specific range in which each individual’s measurements operate and by correcting the correlation degrees of freedom for the non-independence of repeated measurements obtained from the same individual. Specifically, the correlation degrees of freedom were 14 participants × 10 observations − number of participants − 1 = 140 − 14 − 1 = 125. Repeated measures correlation tends to have greater statistical power than a conventional correlation across individuals because neither averaging nor aggregation is necessary for an intra-individual research question; see Bakdash and Marusich (2017) for more information. We assessed the impact of single observations on the correlations by excluding observations exceeding five times the average Cook’s distance of all values within each condition (five observations for liberal and four for conservative) and recomputing the correlations.
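
Repeated measures correlation can be obtained from an ANCOVA with participant as a categorical factor and a common slope for the predictor. The sketch below is a bare-bones reconstruction following Bakdash and Marusich (2017); it is not the software actually used for the analysis, and the function and variable names are our own.

```python
import numpy as np

def rm_corr(subjects, x, y):
    """Repeated measures correlation: the common within-participant association
    between x and y, estimated via an ANCOVA with participant intercepts."""
    subjects = np.asarray(subjects)
    x, y = np.asarray(x, float), np.asarray(y, float)
    labels = np.unique(subjects)
    # design matrix: one intercept per participant plus a common slope for x
    dummies = (subjects[:, None] == labels[None, :]).astype(float)
    X_full, X_reduced = np.column_stack([dummies, x]), dummies

    def ss_resid(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2), beta

    ss_err, beta_full = ss_resid(X_full)
    ss_red, _ = ss_resid(X_reduced)
    ss_x = ss_red - ss_err                     # variance uniquely explained by x
    r = np.sign(beta_full[-1]) * np.sqrt(ss_x / ss_red)
    dof = len(y) - len(labels) - 1             # e.g. 140 - 14 - 1 = 125
    return r, dof
```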

Statistical comparisons

We used two-sided permutation tests (10,000 permutations; Efron and Tibshirani, 1998) to test the significance of behavioral effects and model fits. Permutation tests yield p=0 if the observed value falls outside the range of the null distribution; in these cases, p<0.0001 is reported in the manuscript. The standard deviation (s.d.) is reported as a measure of spread along with all participant-averaged results reported in the text. To quantify power modulations after (non-)target onset, we tested the overall power modulation for significant deviations from zero. For these tests, we used a cluster-based permutation procedure to correct for multiple comparisons (Maris and Oostenveld, 2007). For time-frequency representations along with spatial topographies of power modulation, this procedure was performed across all time-frequency bins and electrodes; for frequency spectra, across all electrodes and frequencies; for power and ERP time courses, across all time bins. To test for the existence of inverted-U-shaped relationships between gamma and alpha bins, we conducted repeated measures ANOVAs and Gaussian-shaped contrasts (see Response gain model test for details) using SPSS 23 (IBM, Inc). We used multiple regression to assess whether starting point could account for the correlation between gamma and drift bias. We used Pearson correlation to test the link between parameter estimates of the DDM and SDT frameworks, and repeated measures correlation to test the link between gamma power and drift bias (see previous section).
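
For paired comparisons between conditions, a two-sided permutation test can be implemented by randomly flipping the sign of each participant's condition difference; the sketch below is a generic illustration of that scheme, not the exact procedure described by Efron and Tibshirani (1998).

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a paired difference (e.g. liberal vs.
    conservative), permuting condition labels within participants."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(a, float) - np.asarray(b, float)
    observed = diff.mean()
    flips = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (flips * diff).mean(axis=1)
    p = np.mean(np.abs(null) >= np.abs(observed))
    return observed, p   # p == 0 corresponds to "p < 0.0001" with 10,000 permutations
```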

Acknowledgements

The authors thank Timothy J Pleskac for discussion.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Niels A Kloosterman, Email: kloosterman@mpib-berlin.mpg.de.

Michael J Frank, Brown University, United States.


Funding Information

This paper was supported by the following grants:

  • Max-Planck-Gesellschaft Open-access funding to Niels A Kloosterman, Markus Werkle-Bergner, Ulman Lindenberger, Douglas D Garrett.

  • Deutsche Forschungsgemeinschaft Emmy Noether Programme grant to Niels A Kloosterman, Douglas D Garrett.

  • Max Planck UCL Centre for Computational Psychiatry and Ageing Research to Niels A Kloosterman, Ulman Lindenberger, Douglas D Garrett.

  • Jacobs Foundation Early Career Research Fellowship to Markus Werkle-Bergner.

  • Deutsche Forschungsgemeinschaft WE4296/5-1 to Markus Werkle-Bergner.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Data curation, Software, Formal analysis, Investigation, Visualization, Methodology, Writing—original draft, Project administration, Writing—review and editing.

Resources, Software, Formal analysis, Methodology, Writing—review and editing.

Conceptualization, Methodology, Writing—review and editing.

Resources, Funding acquisition, Writing—review and editing.

Resources, Formal analysis, Supervision, Funding acquisition, Investigation, Methodology, Writing—review and editing.

Conceptualization, Data curation, Software, Formal analysis, Supervision, Visualization, Methodology, Writing—original draft, Project administration, Writing—review and editing.

Ethics

Human subjects: Participants provided written informed consent before the start of the experiment. All procedures were approved by the ethics committee of the Psychology Department of the University of Amsterdam (approval identifier: 2007-PN-69).

Additional files

Transparent reporting form
DOI: 10.7554/eLife.37321.019

Data availability

All data analysed during this study are publicly available, see https://doi.org/10.6084/m9.figshare.6142940.v1. Analysis scripts are publicly available on Github (https://github.com/nkloost1/critEEG; copy archived at https://github.com/elifesciences-publications/critEEG).

The following dataset was generated:

Niels A. Kloosterman, Jan Willem de Gee, Markus Werkle-Bergner, Ulman Lindenberger, Douglas D Garrett, Johannes Jacobus Fahrenfort. 2018. Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex. figshare.

References

  1. Afacan-Seref K, Steinemann NA, Blangero A, Kelly SP. Dynamic interplay of value and sensory information in High-Speed decision making. Current Biology. 2018;28:795–802. doi: 10.1016/j.cub.2018.01.071. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Aston-Jones G, Cohen JD. An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annual Review of Neuroscience. 2005;28:403–450. doi: 10.1146/annurev.neuro.28.061604.135709. [DOI] [PubMed] [Google Scholar]
  3. Bakdash JZ, Marusich LR. Repeated measures correlation. Frontiers in Psychology. 2017;8:491. doi: 10.3389/fpsyg.2017.00456. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bastos AM, Vezoli J, Bosman CA, Schoffelen JM, Oostenveld R, Dowdall JR, De Weerd P, Kennedy H, Fries P. Visual areas exert feedforward and feedback influences through distinct frequency channels. Neuron. 2015;85:390–401. doi: 10.1016/j.neuron.2014.12.018. [DOI] [PubMed] [Google Scholar]
  5. Benwell CSY, Tagliabue CF, Veniero D, Cecere R, Savazzi S, Thut G. Pre-stimulus EEG power predicts conscious awareness but not objective visual performance. eNeuro. 2017;2017 doi: 10.1523/ENEURO.0182-17.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bogacz R, Brown E, Moehlis J, Holmes P, Cohen JD. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review. 2006;113:700–765. doi: 10.1037/0033-295X.113.4.700. [DOI] [PubMed] [Google Scholar]
  7. Busch NA, Dubois J, VanRullen R. The phase of ongoing EEG oscillations predicts visual perception. Journal of Neuroscience. 2009;29:7869–7876. doi: 10.1523/JNEUROSCI.0113-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Buzsáki G, Draguhn A. Neuronal oscillations in cortical networks. Science. 2004;304:1926–1929. doi: 10.1126/science.1099745. [DOI] [PubMed] [Google Scholar]
  9. Cavanagh JF, Frank MJ. Frontal theta as a mechanism for cognitive control. Trends in Cognitive Sciences. 2014;18:414–421. doi: 10.1016/j.tics.2014.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cohen MX, Frank MJ. Neurocomputational models of basal ganglia function in learning, memory and choice. Behavioural Brain Research. 2009;199:141–156. doi: 10.1016/j.bbr.2008.09.029. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Crapse TB, Lau H, Basso MA. A role for the superior colliculus in decision criteria. Neuron. 2018;97:181–194. doi: 10.1016/j.neuron.2017.12.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. de Gee JW, Knapen T, Donner TH. Decision-related pupil dilation reflects upcoming choice and individual bias. PNAS. 2014;111:E618–E625. doi: 10.1073/pnas.1317557111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. de Gee JW, Colizoli O, Kloosterman NA, Knapen T, Nieuwenhuis S, Donner TH. Dynamic modulation of decision biases by brainstem arousal systems. eLife. 2017;6:e23232. doi: 10.7554/eLife.23232. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. de Gee JW, Tsetsos K, McCormick DA, McGinley MJ, Donner TH. Phasic arousal optimizes decision computations in mice and humans. bioRxiv. 2018 doi: 10.1101/447656. [DOI]
  15. de Lange FP, Rahnev DA, Donner TH, Lau H. Prestimulus oscillatory activity over motor cortex reflects perceptual expectations. Journal of Neuroscience. 2013;33:1400–1410. doi: 10.1523/JNEUROSCI.1094-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Destexhe A, Rudolph M, Fellous JM, Sejnowski TJ. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience. 2001;107:13–24. doi: 10.1016/S0306-4522(01)00344-X. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Donner TH, Siegel M, Fries P, Engel AK. Buildup of choice-predictive activity in human motor cortex during perceptual decision making. Current Biology. 2009;19:1581–1585. doi: 10.1016/j.cub.2009.07.066. [DOI] [PubMed] [Google Scholar]
  18. Donner TH, Siegel M. A framework for local cortical oscillation patterns. Trends in Cognitive Sciences. 2011;15:191–199. doi: 10.1016/j.tics.2011.03.007. [DOI] [PubMed] [Google Scholar]
  19. Efron B, Tibshirani R. The problem of regions. The Annals of Statistics. 1998;26:1687–1718. doi: 10.1214/aos/1024691353. [DOI] [Google Scholar]
  20. Fahrenfort JJ, Scholte HS, Lamme VA. Masking disrupts reentrant processing in human visual cortex. Journal of Cognitive Neuroscience. 2007;19:1488–1497. doi: 10.1162/jocn.2007.19.9.1488. [DOI] [PubMed] [Google Scholar]
  21. Fahrenfort JJ, Scholte HS, Lamme VA. The spatiotemporal profile of cortical processing leading up to visual perception. Journal of Vision. 2008;8:12. doi: 10.1167/8.1.12. [DOI] [PubMed] [Google Scholar]
  22. Fetsch CR, Kiani R, Shadlen MN. Predicting the accuracy of a decision: a neural mechanism of confidence. Cold Spring Harbor Symposia on Quantitative Biology. 2014;79:185–197. doi: 10.1101/sqb.2014.79.024893. [DOI] [PubMed] [Google Scholar]
  23. Fleming SM, van der Putten EJ, Daw ND. Neural mediators of changes of mind about perceptual decisions. Nature Neuroscience. 2018;21:617–624. doi: 10.1038/s41593-018-0104-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Freeman WJ. Nonlinear gain mediating cortical stimulus-response relations. Biological Cybernetics. 1979;33:237–247. doi: 10.1007/BF00337412. [DOI] [PubMed] [Google Scholar]
  25. Gold JI, Shadlen MN. The neural basis of decision making. Annual Review of Neuroscience. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038. [DOI] [PubMed] [Google Scholar]
  26. Green DM, Swets JA. In: Signal Detection Theory and Psychophysics. John W, editor. Oxford, England: American Psychological Association; 1966. [Google Scholar]
  27. Hassler U, Barreto NT, Gruber T. Induced gamma band responses in human EEG after the control of miniature saccadic artifacts. NeuroImage. 2011;57:1411–1421. doi: 10.1016/j.neuroimage.2011.05.062. [DOI] [PubMed] [Google Scholar]
  28. Hipp JF, Siegel M. Dissociating neuronal gamma-band activity from cranial and ocular muscle activity in EEG. Frontiers in Human Neuroscience. 2013;7:338. doi: 10.3389/fnhum.2013.00338. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Iemi L, Chaumon M, Crouzet SM, Busch NA. Spontaneous neural oscillations bias perception by modulating baseline excitability. The Journal of Neuroscience. 2017;37:807–819. doi: 10.1523/JNEUROSCI.1432-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Iemi L, Busch NA. Moment-to-Moment fluctuations in neuronal excitability bias subjective perception rather than strategic decision-making. eNeuro. 2018;5:ENEURO.0430-17.2018. doi: 10.1523/ENEURO.0430-17.2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Jensen O, Mazaheri A. Shaping functional architecture by oscillatory alpha activity: gating by inhibition. Frontiers in Human Neuroscience. 2010;4:186. doi: 10.3389/fnhum.2010.00186. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Joshi S, Li Y, Kalwani RM, Gold JI. Relationships between pupil diameter and neuronal activity in the locus coeruleus, Colliculi, and cingulate cortex. Neuron. 2016;89:221–234. doi: 10.1016/j.neuron.2015.11.028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Kass RE, Raftery AE. Bayes factors. Journal of the American Statistical Association. 1995;90:773–795. doi: 10.1080/01621459.1995.10476572. [DOI] [Google Scholar]
  34. Kiani R, Hanks TD, Shadlen MN. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. Journal of Neuroscience. 2008;28:3017–3029. doi: 10.1523/JNEUROSCI.4761-07.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Kloosterman NA, Meindertsma T, Hillebrand A, van Dijk BW, Lamme VA, Donner TH. Top-down modulation in human visual cortex predicts the stability of a perceptual illusion. Journal of Neurophysiology. 2015a;113:1063–1076. doi: 10.1152/jn.00338.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Kloosterman NA, Meindertsma T, van Loon AM, Lamme VAF, Bonneh YS, Donner TH. Pupil size tracks perceptual content and surprise. European Journal of Neuroscience. 2015b;41:1068–1078. doi: 10.1111/ejn.12859. [DOI] [PubMed] [Google Scholar]
  37. Kloosterman NA, de Gee JW, Werkle-Bergner M, Lindenberger U, Garrett DD, Fahrenfort JJ. 2018. Data from: humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex. Figshare. [DOI] [PMC free article] [PubMed]
  38. Kloosterman NA. critEEG, commit 98f97c7. GitHub. 2018. https://github.com/nkloost1/critEEG
  39. Lamme VA. The neurophysiology of figure-ground segregation in primary visual cortex. The Journal of Neuroscience. 1995;15:1605–1615. doi: 10.1523/JNEUROSCI.15-02-01605.1995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Lamme VA, Zipser K, Spekreijse H. Masking interrupts figure-ground signals in V1. Journal of Cognitive Neuroscience. 2002;14:1044–1053. doi: 10.1162/089892902320474490. [DOI] [PubMed] [Google Scholar]
  41. Limbach K, Corballis PM. Prestimulus alpha power influences response criterion in a detection task. Psychophysiology. 2016;53:1154–1164. doi: 10.1111/psyp.12666. [DOI] [PubMed] [Google Scholar]
  42. Maris E, Oostenveld R. Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods. 2007;164:177–190. doi: 10.1016/j.jneumeth.2007.03.024. [DOI] [PubMed] [Google Scholar]
  43. Mathewson KE, Gratton G, Fabiani M, Beck DM, Ro T. To see or not to see: prestimulus alpha phase predicts visual awareness. Journal of Neuroscience. 2009;29:2725–2732. doi: 10.1523/JNEUROSCI.3963-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. McGinley MJ, David SV, McCormick DA. Cortical membrane potential signature of optimal states for sensory signal detection. Neuron. 2015;87:179–192. doi: 10.1016/j.neuron.2015.05.038. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Meindertsma T, Kloosterman NA, Nolte G, Engel AK, Donner TH. Multiple transient signals in human visual cortex associated with an elementary decision. The Journal of Neuroscience. 2017;37:5744–5757. doi: 10.1523/JNEUROSCI.3835-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Melloni L, Schwiedrzik CM, Wibral M, Rodriguez E, Singer W. Response to: yuval-greenberg et al., "transient induced gamma-band response in EEG as a manifestation of miniature saccades." Neuron 58, 429-441. Neuron. 2009;62:8–10. doi: 10.1016/j.neuron.2009.04.002. [DOI] [PubMed] [Google Scholar]
  47. Michalareas G, Vezoli J, van Pelt S, Schoffelen JM, Kennedy H, Fries P. Alpha-Beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas. Neuron. 2016;89:384–397. doi: 10.1016/j.neuron.2015.12.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Mitra PP, Pesaran B. Analysis of dynamic brain imaging data. Biophysical Journal. 1999;76:691–708. doi: 10.1016/S0006-3495(99)77236-X. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Mulder MJ, Wagenmakers E-J, Ratcliff R, Boekel W, Forstmann BU. Bias in the brain: a diffusion model analysis of prior probability and potential payoff. Journal of Neuroscience. 2012;32:2335–2343. doi: 10.1523/JNEUROSCI.4156-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Neath AA, Cavanaugh JE. The bayesian information criterion: background, derivation, and applications. Wiley Interdisciplinary Reviews: Computational Statistics. 2012;4:199–203. doi: 10.1002/wics.199. [DOI] [Google Scholar]
  51. Ni J, Wunderle T, Lewis CM, Desimone R, Diester I, Fries P. Gamma-Rhythmic gain modulation. Neuron. 2016;92:240–251. doi: 10.1016/j.neuron.2016.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Norton EH, Fleming SM, Daw ND, Landy MS. Suboptimal criterion learning in static and dynamic environments. PLOS Computational Biology. 2017;13:e1005304. doi: 10.1371/journal.pcbi.1005304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. O'Connell RG, Dockree PM, Kelly SP. A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nature Neuroscience. 2012;15:1729–1735. doi: 10.1038/nn.3248. [DOI] [PubMed] [Google Scholar]
  54. Oostenveld R, Fries P, Maris E, Schoffelen JM. FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Computational Intelligence and Neuroscience. 2011;2011:1–9. doi: 10.1155/2011/156869. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Perrin F, Pernier J, Bertrand O, Echallier JF. Spherical splines for scalp potential and current density mapping. Electroencephalography and Clinical Neurophysiology. 1989;72:184–187. doi: 10.1016/0013-4694(89)90180-6. [DOI] [PubMed] [Google Scholar]
  56. Peterson EJ, Voytek B. Alpha oscillations control cortical gain by modulating excitatory-inhibitory background activity. Biorxiv. 2017 doi: 10.1101/185074. [DOI] [Google Scholar]
  57. Pleskac TJ, Cesario J, Johnson DJ. How race affects evidence accumulation during the decision to shoot. Psychonomic Bulletin & Review. 2017;18:1–30. doi: 10.3758/s13423-017-1369-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Popov T, Kastner S, Jensen O. FEF-Controlled alpha delay activity precedes Stimulus-Induced Gamma-Band activity in visual cortex. The Journal of Neuroscience. 2017;37:4117–4127. doi: 10.1523/JNEUROSCI.3015-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Rajagovindan R, Ding M. From Prestimulus alpha oscillation to visual-evoked response: an inverted-U function and its attentional modulation. Journal of Cognitive Neuroscience. 2011;23:1379–1394. doi: 10.1162/jocn.2010.21478. [DOI] [PubMed] [Google Scholar]
  60. Ratcliff R. A theory of memory retrieval. Psychological Review. 1978;85:59–108. doi: 10.1037/0033-295X.85.2.59. [DOI] [Google Scholar]
  61. Ratcliff R. Modeling response signal and response time data☆. Cognitive Psychology. 2006;53:195–237. doi: 10.1016/j.cogpsych.2005.10.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Ratcliff R, Huang-Pollock C, McKoon G. Modeling individual differences in the go/No-go task with a diffusion model. Decision. 2018;5:42–62. doi: 10.1037/dec0000065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Ratcliff R, McKoon G. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation. 2008;20:873–922. doi: 10.1162/neco.2008.12-06-420. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Samaha J, Iemi L, Postle BR. Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy. Consciousness and Cognition. 2017;54:47–55. doi: 10.1016/j.concog.2017.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Servan-Schreiber D, Printz H, Cohen JD. A network model of catecholamine effects: gain, signal-to-noise ratio, and behavior. Science. 1990;249:892–895. doi: 10.1126/science.2392679. [DOI] [PubMed] [Google Scholar]
  66. Supèr H, Spekreijse H, Lamme VA. Figure-ground activity in primary visual cortex (V1) of the monkey matches the speed of behavioral response. Neuroscience Letters. 2003;344:75–78. doi: 10.1016/S0304-3940(03)00360-4. [DOI] [PubMed] [Google Scholar]
  67. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185:1124–1131. doi: 10.1126/science.185.4157.1124. [DOI] [PubMed] [Google Scholar]
  68. Urai AE, de Gee JW, Donner TH. Choice history biases subsequent evidence accumulation. bioRxiv. 2018 doi: 10.1101/251595. [DOI] [PMC free article] [PubMed]
  69. van Driel J, Ridderinkhof KR, Cohen MX. Not all errors are alike: theta and alpha EEG dynamics relate to differences in error-processing dynamics. Journal of Neuroscience. 2012;32:16795–16806. doi: 10.1523/JNEUROSCI.0802-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. van Kerkoerle T, Self MW, Dagnino B, Gariel-Mathis MA, Poort J, van der Togt C, Roelfsema PR. Alpha and gamma oscillations characterize feedback and feedforward processing in monkey visual cortex. PNAS. 2014;111:14332–14341. doi: 10.1073/pnas.1402773111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Werkle-Bergner M, Grandy TH, Chicherio C, Schmiedek F, Lövdén M, Lindenberger U. Coordinated within-trial dynamics of low-frequency neural rhythms controls evidence accumulation. Journal of Neuroscience. 2014;34:8519–8528. doi: 10.1523/JNEUROSCI.3801-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. White CN, Poldrack RA. Decomposing bias in different types of simple decisions. Journal of Experimental psychology Learning, Memory, and Cognition. 2014;40:385–398. doi: 10.1037/a0034851. [DOI] [PubMed] [Google Scholar]
  73. Wiecki TV, Sofer I, Frank MJ. HDDM: hierarchical bayesian estimation of the Drift-Diffusion model in python. Frontiers in Neuroinformatics. 2013;7 doi: 10.3389/fninf.2013.00014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Yuval-Greenberg S, Tomer O, Keren AS, Nelken I, Deouell LY. Transient induced gamma-band response in EEG as a manifestation of miniature saccades. Neuron. 2008;58:429–441. doi: 10.1016/j.neuron.2008.03.027. [DOI] [PubMed] [Google Scholar]
  75. Zaehle T, Rach S, Herrmann CS. Transcranial alternating current stimulation enhances individual alpha activity in human EEG. PLOS ONE. 2010;5:e13766. doi: 10.1371/journal.pone.0013766. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision letter

Editor: Michael J Frank1

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Ole Jensen as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by David Van Essen as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Eelke Spaak (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

The paper by Kloosterman et al. has been using a visual paradigm and EEG to investigate evidence accumulation in the visual cortex in humans. The data were fitted using a diffusion drift model. Key to the experiment was that the participants' decision criteria were manipulated using different reward contingencies inducing liberal or conservative biases. A liberal bias resulted in a suppression of the pre-stimulus alpha power and a subsequent stimulus induced gamma increase. It is stated that the alpha decrease boosted the gamma increase. These are in principle interesting findings speaking to how decision-making relates to oscillatory brain activity in the context of bias. All reviewers judge the paper of potential interest; however, several serious concerns pertaining to the analysis were raised. In particular the result summary/conclusion in the Abstract and Discussion seems too simplistic in the light of the actual findings (in particular the inverted-U of alpha in relation to gamma as explained below). Furthermore additional work is needed on the analysis. Specifically, the excitability model seems at odds with the interpretation that alpha-power regulates the drift bias as the correlation between alpha-power and gamma is different for conservative vs liberal (which should not be the case if conservative vs liberal modulates excitability through alpha). Please see below for details.

Essential revisions:

1) The authors show that pre-stimulus alpha power depends on the condition and hypothesize that this is involved in regulating the gain – somewhat plausibly supported by showing the correlation of alpha-power with gamma-power in a non-linear fashion (inverted U-shape). The issue is that the predicted difference between conservative and liberal trials in the (nonlinear) correlation between gamma- and alpha is not logically explained if alpha-power is equated to neuronal excitation. If alpha is taken as an index of neural excitation, then this is not the predicted result that emerges from Figure 5A (which shows stimulus effect against membrane excitability). Following this logic, the conservative vs. liberal condition should result in different histograms of high vs. low alpha power states, but these should not change the profile of the dependency of the stimulation effect on excitability (as indexed by alpha).

In this respect, it would be relevant to see whether alpha vs. drift bias shows a similar (nonlinear) correlation. Wouldn't one expect to observe 1) a nonlinear correlation between gamma and decision bias? 2) the correlation to be present both in the liberal and conservative condition?

This central issue needs to be addressed: if the model is correct, one would expect to see the very same correlation between alpha and gamma for conservative and liberal trials, just with the distribution of the liberal trials being shifted toward lower alpha power and thereby greater excitability. The current result in Figure 5C (and their prediction from Figure 5B), if valid, suggests an additional mechanism, for instance a dynamic (differential) top-down signal from areas like SMA or DLPFC into visual cortex that might explain the differential correlation. We suggest showing the results of the liberal vs. conservative contrast across all electrodes (ideally with an average reference), with a specific question on whether this shows differences in power in more frontal areas. The cluster permutation method is usually appropriate for correcting for multiple comparisons across time, frequency, and sensors.

2) We would think that the SSVEPs directly reflect excitability. Is there a reason for not performing the analysis for the SSVEPs? Such an analysis might help to clarify the point above.

3) Most studies on evidence accumulation use continuous stimuli (e.g. random dot kinematograms) in which information is gradually accumulated. In this study the informative target is shown for 40 ms. I take it that it takes longer to accumulate information in order to make a decision? Please clarify.

4) In Figure 5 (and elsewhere) 'excitability' denotes alpha suppression. Why not just label it 'alpha suppression' or the like? While 'excitability' and 'alpha suppression' are related, one cannot equate them.

5) The participants only made 'yes' responses. How can one then distinguish between decision bias and drift rate? (Only the 'upper arrows' in Figure 2D are present in the data.)

6) Figure 5C is essential for making the claim on the relationship between alpha and gamma power. However, it is not clear from the caption or the Materials and methods section how this plot is produced. We take it that alpha suppression is sorted into 10 bins per subject. The description of the 'neural gain analysis' (subsection “Alpha suppression enhances the gain of cortical gamma responses”, Figure 5, and associated Materials and methods section) is unclear, which leaves us unable to fully judge its correctness. We understand that the output of a region is considered a (sigmoidal) function of total input, where total input is the sum of stimulus-related and endogenous input. Why is it such that "the isolated effect of sensory input […] can then be expressed as the first-order derivative of the sigmoid"? It seems to us that this derivative would be the effect of any input. This mistake is a symptom of the authors sometimes conflating gain (which we would equate with the slope of the output/input curve) and actual input or output. Relatedly, the authors write "stimulus-related output gain" where they actually mean "output"; i.e. it is (if we understand correctly) precisely the output that is not stimulus-related which is relevant here, namely the endogenous fluctuations. There is confusion between gain, input, and output in some other places in this description as well; also how these terms map onto experimental measures is a bit ambiguous. (Gain = liberal vs. conservative; input = alpha; output = gamma? This is what we understand, but at times it appears as though alpha is equated to gain instead of input.)

7) The 3-way ANOVA reported in the last paragraph of the subsection “Alpha suppression enhances the gain of cortical gamma responses” is, we believe, not the correct way to analyze these data. The dependent variable here is gamma power, with independent variables condition (liberal/conservative) and alpha power bin (10 levels). Thus, a 2-way RM-ANOVA would be appropriate. If the authors believe the 3-way approach is indeed correct, then they should explain why this is so.

8) The approach taken for the "within-subject group regression" is unclear to us (also not explained in Materials and methods). The primary evidence that links gamma activity to DDM drift bias is, it appears, based on regressing drift bias onto gamma power across different alpha bins, where both variables are averaged within bin, across participants (Figure 6). The correct approach here would be to perform this regression per participant, and then test whether the regression coefficients are different from zero at the population level. (Or better yet, show the individual regression lines.)

9) The description of report proportions (Figure 2B) is not clear. Shouldn't these sum to 1 within a condition? Additionally, it would be good to have some sense of the absolute number of responses, including those responses counting as a miss/correct rejection.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex" for further consideration at eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by a Senior/Reviewing Editor.

The two reviewers and I have reviewed your response and the new manuscript. While all agreed that the manuscript has improved and the overall question is interesting, there was some disagreement as to the value of the work given the limitations. As such there remain some essential revisions that will be paramount to address properly in another revision, at which point we may need to solicit input from another reviewer in case there remains disagreement.

The two major strengths of the paper are the following:

1) The fact that you demonstrate fairly clearly that there is an effect of liberal vs. conservative incentive on some form of "response gain" parameter, i.e. a change in the response time distribution that, according to the model fits, apparently cannot be explained by a change in a constant alone but requires a gain parameter.

2) The fact that this (behaviourally effective) conservative vs. liberal task-set has a pronounced effect on pre-stimulus occipital alpha oscillations and also on stimulus-induced gamma oscillations.

However, the following weaknesses were noted and should be addressed directly. Please note that there is no guarantee of acceptance.

1) The fact that the occipital alpha power is not clearly the decisive element here, but rather seems to be the consequence of a different signal of currently unknown (and still clearly under-investigated) origin that actually mediates the "drift bias". The idea that these dynamics are predominantly taking place within the visual system – as suggested in title and Abstract – is not sufficiently justified by evidence. This mostly amounts to a matter of emphasis – you do acknowledge that this "remains an open question" and that alpha modulation is not the whole story. Having said that, it is clear that the text (title Abstract and elsewhere) can be interpreted as trying to make such a point. So, I would suggest that a further rewrite on this point is in order, perhaps adding some of the details in the response-to-reviewers document to the manuscript (Discussion/supplement) itself.

One of the reviewers noted that you stated in your response that you updated the description of the Rajagovindan and Ding model and now account for the fact that an additional signal is required to explain these results, but the detailed explanation of this is lacking in the central description of this model in the subsection “Pre-stimulus alpha power mediates cortical gamma responses” (the second paragraph is not easy to understand and hides the fact that another external factor is needed here).

2) Given the debate about whether you have identified the critical neural mechanism/correlate of the behavioral effect, it is even more important, for the potential novelty and significance of the results, to demonstrate that your results are specific to drift bias as opposed to starting point. This distinction is theoretically important, but the main conclusion in support of drift bias account is based on model fit metrics (BIC) alone. To be more confident that this difference is meaningful it is important to provide evidence that the drift bias model empirically captures the RT distributions better than the starting point model. In theory the better BIC fit indicates the drift bias does capture RT distributions better, but sometimes models can fit better using BIC (or other such metrics) for nuisance reasons unrelated to the core theoretical distinction. You do show individual subject RT distribution fits in Figure 2—figure supplement 3 for the drift bias model which look quite reasonable, but you do not show the corresponding fits to the starting point model, and the main claim of the paper rests on the ability to distinguish these with your task design/models.

3) Relatedly, you do provide evidence that stimulus activity in gamma is amplified in the liberal condition, which is consistent with a drift bias, and also that this correlates with the extent of drift bias across subjects. What would be nice here is if you showed that this correlation was specific to the drift bias model and that it was statistically more evident than the corresponding correlations with starting point bias (i.e., you could test if the stimulus gamma activity is similarly correlated with estimated starting point bias in the alternative model). This would provide another more specific test of the evidence for the link between EEG and behavior even without identifying the top-down mechanism.

eLife. 2019 Feb 6;8:e37321. doi: 10.7554/eLife.37321.024

Author response


Essential revisions:

1) The authors show that pre-stimulus alpha power depends on the condition and hypothesize that this is involved in regulating the gain – somewhat plausibly supported by showing the correlation of alpha-power with gamma-power in a non-linear fashion (inverted U-shape). The issue is that the predicted difference between conservative and liberal trials in the (nonlinear) correlation between gamma- and alpha is not logically explained if alpha-power is equated to neuronal excitation. If alpha is taken as an index of neural excitation, then this is not the predicted result that emerges from Figure 5A (which shows stimulus effect against membrane excitability). Following this logic, the conservative vs. liberal condition should result in different histograms of high vs. low alpha power states, but these should not change the profile of the dependency of the stimulation effect on excitability (as indexed by alpha).

Thank you for pointing out this important issue. In our initial manuscript submission, we presented a simplified version of the model originally described by Rajagovindan and Ding (2011; from here on referred to as R&D). We agree with the reviewers, however, that this simplified R&D model merely predicted that alpha-band activity (our proxy of neural excitability) can boost gamma-band activity (our proxy of the output activity of visual cortex) by regulating where the stimulus-induced activity passes through the transfer function; the simplified R&D model did not predict a change in the slope of the hypothesized transfer function itself. Thus, we agree that our simplified R&D model did not explain our important finding that the U-shaped relationship between alpha- and gamma-band activity was steeper in the liberal than in the conservative condition.

We now choose to present the full R&D model as described in Rajagovindan and Ding (2011). This full R&D model indeed posits that attention not only changes the neural excitability, but also the maximum total output of visual cortex. This latter effect culminates in a change in slope of the transfer function (see Figure 7A in Rajagovindan and Ding (2011)). We now present the full R&D model in Figures 5A and 5B, which predicts the increased steepening of the U-curve as a function of (attentional) condition (Figure 5B). As such, we believe that the full R&D model is in accordance with our empirical result that the U-shaped relationship between alpha- and gamma-band activity is steeper in the liberal compared to the conservative condition (Figure 5C).

Please note that although we follow R&D’s model in our manuscript, we agree that it is not necessarily the most parsimonious model, because it requires an extra mechanism on top of the inverted U-shaped effect of alpha on gamma. As the reviewers suggest, a simpler model would have been one with only a single input-output relationship (such as depicted in our original Figure 5A) that does not change as a function of condition. However, such a simple model cannot explain our data. If it were correct, the shift of the range in which alpha occurs (as can be seen in Figure 4C, but also in 5C) would have to be large enough to make the distribution fall outside the range in which alpha suppression maximally drives gamma. As can be seen in Figure 5C, this is not what we see in our data. Conversely, the R&D model depicted in Figure 5B makes three predictions that are in line with our data, which we now make explicit in Figure 5D-5F: 1) overall lower alpha power for liberal than for conservative due to the shift in the effective range of alpha (Figure 5D); 2) a stronger gamma response at the peak (the center of the effective alpha range) for liberal than for conservative (Figure 5E); and 3) the difference between the alpha-gamma functions in the two conditions (when mapping the alpha ranges in which they operate onto each other) again results in an inverted U-shape (Figure 5F and Figure 5—figure supplement 1). Please see the rationale for the related ANOVA and its outcome under point 7. Below, we further discuss the plausibility of an extra mechanism that changes the steepness of the input-output function.
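To make these predictions concrete, the following is a minimal numerical sketch of the logic of the full R&D model (our own illustration for this response, not analysis code from the manuscript; the logistic parameterization, the parameter values, the mapping of condition onto maximum output, and the alpha-to-input proxy are all assumptions chosen purely for illustration):

```python
import numpy as np

def transfer(S, O_max, k=1.0, S0=0.0):
    """Sigmoidal input-output function of visual cortex, O(S)."""
    return O_max / (1.0 + np.exp(-k * (S - S0)))

def stimulus_response(S_n, S_x, O_max, k=1.0, S0=0.0):
    """Stimulus-evoked response for endogenous input S_n and a fixed
    stimulus-related input S_x: [O(S_n + S_x) - O(S_n)] / S_x."""
    return (transfer(S_n + S_x, O_max, k, S0) - transfer(S_n, O_max, k, S0)) / S_x

# Endogenous (pre-stimulus) synaptic input; alpha power is assumed here to be
# inversely related to S_n, so low alpha corresponds to high excitability.
S_n = np.linspace(-6.0, 6.0, 200)
S_x = 0.5  # constant stimulus-related input (illustrative assumption)

# The liberal condition is modeled with a larger maximum total output, which
# steepens the transfer function and hence the inverted-U gain curve.
response_conservative = stimulus_response(S_n, S_x, O_max=1.0)
response_liberal = stimulus_response(S_n, S_x, O_max=1.5)

# Both curves are inverted-U shaped in S_n (and thus in alpha), with a higher,
# steeper peak in the liberal condition, mirroring predictions 1-3 above.
print(response_liberal.max(), response_conservative.max())
```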

In this respect, it would be relevant to see whether alpha vs. drift bias shows a similar (nonlinear) correlation.

Indeed, we observe an inverted-U-shaped relationship between alpha and drift bias in both conditions, as shown in Author response image 1. This follows logically, given that alpha and gamma correlate non-linearly (see Figure 5C) and gamma and drift bias correlate linearly (see Figure 6).

Author response image 1. Inverted-U-shaped relationship between pre-stimulus alpha power and drift bias.


DDM drift bias parameter estimates were Z-scored within each condition to remove the large inter-individual differences in average drift bias within each condition and focus on the shape of the relationship between alpha and drift bias. Therefore, the difference in average drift bias between conditions (see Figure 2F) is not reflected in these figures.

Wouldn't one expect to observe:

1) a nonlinear correlation between gamma and decision bias?

On theoretical grounds, we hypothesized that gamma (output of visual cortex) is a neural reflection of drift bias (speed of accumulation toward a decision boundary). This predicts that a linear increase in gamma is expressed in a linear increase in drift bias, hence a linear correlation. This is indeed what we observed (see Figure 6). The only non-linear relation we hypothesized to exist is between alpha and gamma, where alpha only drives gamma in an effective range (hence the inverted U-shape). This is also what we observe: a non-linear relation between alpha and gamma (Figure 5C), but a linear relationship between gamma and drift bias (see Figure 6).

2) The correlation to be present both in the liberal and conservative condition?

This central issue needs to be addressed: if the model is correct, one would expect to see the very same correlation between alpha and gamma for conservative and liberal trials, just with the distribution of the liberal trials being shifted toward lower alpha-power and thereby more excitable.

Thank you for pointing this out. Indeed, we agree that the model predicts a correlation between gamma and drift bias in both conditions, albeit with lower gamma and drift bias values in the conservative than in the liberal condition. While investigating this, we identified a number of shortcomings in both the initial alpha-gamma coupling analysis and the initial correlation analysis, which we improved as follows:

- Estimating the DDM drift bias parameter separately for each of the ten alpha bins requires a sufficient number of trials per bin to build a distribution of reaction times to be fitted by the DDM. However, we realized that our initial fixed-bin-width alpha binning procedure did not guarantee this for all bins, because fewer trials were assigned to the extreme bins due to the tails of the single-trial alpha distributions (see Figure 4C). Instead, we now bin the trials into equally populated bins, such that by design each bin contains the same number of trials (within each participant and condition). Although this change in binning procedure does not qualitatively change the outcome of the alpha-gamma analysis as reported in our original submission (Figure 5C), we found that the DDM drift bias estimates are less noisy with equally populated bins, enhancing our ability to detect an effect.

- Further, we now use the same percent signal change normalization of post-stimulus gamma power in the alpha-versus-gamma analysis (Figure 5C) as in the analysis of stimulus-related responses (Figure 3). Specifically, we take the condition-specific pre-stimulus baseline spectrum (-0.4 to 0 s), and express modulation in percent signal change (psc) with respect to this baseline (see Figures 3 and 5). These psc values now go directly into the final behavioral correlation analysis, without z-scoring the data (see Figure 6). This unified approach has both sharpened the responses observed in the different analyses, and has made the analysis trajectory throughout the paper more transparent.

- Finally, instead of the within-subject group regression, we now use a more sensitive repeated measures correlation approach. This approach and its rationale are explained under point 8 below.

Due to these improvements in the analysis, we now observe a significant effect of condition on the slope of the inverted U-curve (Figure 5), and detect robust positive correlations between gamma and drift bias in both conditions, in line with the predictions of the gain model. See Figure 6 for the results of this improved correlation analysis. Figure 5E confirms that gamma operates in a higher range in the liberal condition, in line with the observed stronger drift bias in the liberal condition.
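To illustrate the first two of these changes, here is a minimal sketch of equally populated alpha binning and percent-signal-change normalization (our own illustrative code with simulated placeholder data and hypothetical variable names, not the pipeline used for the manuscript):

```python
import numpy as np

def bin_by_alpha(alpha, gamma, n_bins=10):
    """Assign trials to equally populated alpha bins and average gamma per bin."""
    # Quantile-based edges guarantee (near-)equal trial counts per bin.
    edges = np.quantile(alpha, np.linspace(0.0, 1.0, n_bins + 1))
    bin_idx = np.clip(np.digitize(alpha, edges[1:-1]), 0, n_bins - 1)
    return np.array([gamma[bin_idx == b].mean() for b in range(n_bins)])

def percent_signal_change(post_power, baseline_power):
    """Express post-stimulus power as % change from the condition-specific
    pre-stimulus baseline spectrum (-0.4 to 0 s)."""
    return 100.0 * (post_power - baseline_power) / baseline_power

# Simulated single-trial data for one participant and condition.
rng = np.random.default_rng(0)
alpha = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # pre-stimulus alpha power
gamma_raw = rng.normal(loc=1.2, scale=0.1, size=500)   # post-stimulus gamma power
gamma_psc = percent_signal_change(gamma_raw, baseline_power=1.0)

binned_gamma = bin_by_alpha(alpha, gamma_psc, n_bins=10)  # one value per alpha bin
print(binned_gamma)
```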

The current result in Figure 5C (and their prediction from Figure 5B), if valid, suggests an additional mechanism, for instance a dynamic (differential) top-down signal from areas like SMA or DLPFC into visual cortex that might explain the differential correlation. We suggest showing the results of the liberal vs. conservative contrast across all electrodes (ideally with an average reference), with a specific question on whether this shows differences in power in more frontal areas. The cluster permutation method is usually appropriate for correcting for multiple comparisons across time, frequency, and sensors.

We agree with the reviewers that we do not uncover a plausible mechanism that could bring about the steepening in the U-curved function observed in Figure 5C. This shift could either be caused by the same signal that causes alpha suppression, by the alpha suppression itself, or it could originate from an additional top-down signal from frontal brain regions. To test whether any frontal brain region shows differences between conditions, we performed the suggested liberal–conservative contrast across space, time and frequency, using a condition-average baseline correction. In this exploratory analysis, we did not pre-select any electrodes, time or frequency bins, but instead identified clusters of space-time-frequency bins that showed significant differences between conditions. These clusters were corrected for multiple comparisons using the cluster permutation procedure in FieldTrip. See Meindertsma et al., 2018, for a similar approach. We did not find any frontal clusters in this analysis, even when using a less stringent test by omitting the required correction for multiple comparisons. We report this result in the fifth paragraph of the Discussion and have added this as a supplementary figure to the manuscript (Figure 3—figure supplement 1).
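For readers who wish to reproduce the spirit of this exploratory contrast outside FieldTrip, the sketch below runs an analogous paired cluster-based permutation test in Python with MNE (the actual analysis used FieldTrip's cluster permutation routines in MATLAB; the montage, subject count, and simulated data below are placeholders rather than the study's recording setup):

```python
import numpy as np
import mne
from mne.channels import find_ch_adjacency, make_standard_montage
from mne.stats import spatio_temporal_cluster_1samp_test

# Placeholder pre-stimulus power maps per subject and condition, baseline-
# corrected with a condition-average baseline: (n_subjects, n_freqs, n_channels).
montage = make_standard_montage('biosemi64')
info = mne.create_info(montage.ch_names, sfreq=256.0, ch_types='eeg')
info.set_montage(montage)

rng = np.random.default_rng(1)
n_subjects, n_freqs, n_channels = 16, 35, len(montage.ch_names)
liberal = rng.normal(size=(n_subjects, n_freqs, n_channels))
conservative = rng.normal(size=(n_subjects, n_freqs, n_channels))

# Channel neighbourhood structure derived from the montage.
adjacency, _ = find_ch_adjacency(info, ch_type='eeg')

# Paired contrast: one-sample cluster test on the within-subject difference,
# clustering over frequencies and channels and correcting for multiple
# comparisons across both dimensions via the permutation distribution.
X = liberal - conservative
t_obs, clusters, cluster_pv, _ = spatio_temporal_cluster_1samp_test(
    X, adjacency=adjacency, n_permutations=1000, seed=42)
significant = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]
print(len(significant), 'significant cluster(s)')
```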

We note, however, that R&D report a simulation in their paper exploring the relationship between cortical excitability and gain. This simulation indicates that an intermediate excitability state results in a steeper sigmoid transfer function between background synaptic activity and output firing rate, whereas states of lower and higher excitability yield shallower transfer functions (see Figure 10 from Rajagovindan and Ding, 2011). This result suggests that the stronger alpha suppression in the liberal compared to the conservative condition might indeed reflect a steepened transfer function slope (enhanced gain) due to increased excitability. Although this model fits our and R&D’s empirical findings, it does not by itself explain the mechanism by which the gain is enhanced. Although we deem this a very important issue, we did not see opportunities to characterize this mechanism further beyond the findings we already present. We have now made this issue explicit, added a description of this model limitation to the Discussion section (fifth paragraph), and propose this issue as a topic for future research.

2) We would think that the SSVEPs directly reflect excitability. Is there a reason for not performing the analysis for the SSVEPs? Such an analysis might help to clarify the point above.

We agree with the reviewers that the strength of the pre-stimulus SSVEP could in principle reflect excitability, because it can be seen as a readout of the responsiveness of visual cortex to external input. Figures 4A and 4B, however, show that besides the robust alpha band suppression during the liberal condition, there is no significant pre-stimulus difference between conditions in the SSVEP frequency range, indicating that the SSVEP does not differentiate the two conditions in terms of excitability.

We also investigated the effect of pre-stimulus alpha level on the strength of the post-stimulus SSVEP modulation, and observed a similar U-shaped relationship as for gamma, suggesting that SSVEP power more closely reflects the output of visual cortex than excitability itself. However, the Gaussian (inverted-U shaped) contrast on SSVEP across alpha bins in the two-way repeated measures ANOVA was not significant for any condition, nor was it significantly different between conditions. Finally, the alpha-binned SSVEP modulation was not significantly correlated with drift bias, in contrast to what we observed for gamma, although the correlations were in the same (positive) direction. Taken together, these results suggest that the stimulus-related SSVEP shows a similar coupling to alpha as the stimulus-induced gamma, but, unlike gamma, is less affected by the experimental conditions and not predictive of criterion shifts. We now report these findings in the Results subsection “Visual cortical gamma activity predicts strength of evidence accumulation bias” and have added them to the manuscript as a supplementary figure (Figure 6—figure supplement 1).

3) Most studies on evidence accumulation use continuous stimuli (e.g. random dot kinematograms) in which information is gradually accumulated. In this study the informative target is shown for 40 ms. I take it that it takes longer to accumulate information in order to make a decision? Please clarify.

Thank you for raising this important issue. We have now added the following text to the subsection “Drift diffusion modeling of choice behavior”:

“In order to be detected, the 40 ms-duration figure-ground targets used in our study undergo a process in visual cortex called figure-ground segregation. […] The success of the DDM in fitting these data is consistent with previous work (e.g. Ratcliff, 2006) and might reflect the fact that observers modulate the underlying components of the decision process also when they do not control the stimulus duration (Kiani, Hanks, and Shadlen, 2008).”

4) In Figure 5 (and elsewhere) 'excitability' denotes alpha suppression. Why not just label it 'alpha suppression' or the like? While 'excitability' and 'alpha suppression' are related, one cannot equate them.

Thank you for pointing this out. We fully agree and have carefully revised the manuscript to refer consistently to ‘alpha suppression’, using terms such as ‘excitability’ only when interpreting the results.

5) The participants only made 'yes' responses. How can one then distinguish between decision bias and drift rate? (Only the 'upper arrows' in Figure 2D are present in the data.)

Thank you for raising this important question, we realize we had not sufficiently explained this in the previous version of the manuscript. We have now added the following text to the subsection “Distinguishing DDM drift bias and drift rate”:

“In our task, only target-present responses were coupled to a behavioral response (button-press), so we could measure reaction times only for these responses, whereas reaction times for target-absent responses remained implicit. […] Because a single bin containing the number of target-absent responses contributed to G-square, our fitting procedure can distinguish between decision bias and drift rate.”
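To make the fitting target concrete, here is a minimal sketch of a G-square objective that combines RT-quantile bins for explicit 'yes' responses with a single count bin for the implicit 'no' (target-absent) responses (our own illustration with made-up numbers; in practice the predicted proportions come from the DDM for a candidate parameter set, and the quantile choice below is an assumption):

```python
import numpy as np

def g_square(observed_counts, predicted_props, n_trials):
    """G-square statistic: 2 * sum(O * ln(O / E)), with E = predicted_props * n_trials."""
    expected = np.maximum(predicted_props * n_trials, 1e-10)
    observed = np.asarray(observed_counts, dtype=float)
    mask = observed > 0  # empty bins contribute zero
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

def rt_quantile_counts(rts, quantiles=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Count observed 'yes' RTs falling between the RT quantiles (6 bins)."""
    edges = np.quantile(rts, quantiles)
    return np.histogram(rts, bins=np.concatenate(([0.0], edges, [np.inf])))[0]

# Hypothetical data for one condition: RTs of explicit 'yes' responses plus the
# number of trials without a button press (implicit 'no' decisions).
rng = np.random.default_rng(2)
yes_rts = rng.gamma(shape=3.0, scale=0.2, size=300) + 0.2
n_no = 150
n_trials = len(yes_rts) + n_no

# Model-predicted probabilities for the same bins: 6 RT bins plus 1 'no' bin.
# These numbers are placeholders standing in for DDM predictions.
predicted = np.array([0.06, 0.12, 0.12, 0.12, 0.12, 0.06, 0.40])

observed = np.concatenate((rt_quantile_counts(yes_rts), [n_no]))
print(g_square(observed, predicted, n_trials))
```

Because the 'no' bin enters the objective alongside the RT-quantile bins, parameter sets that trade bias against drift rate yield different predicted proportions and can therefore be told apart during fitting.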

6) Figure 5C is essential for making the claim on the relationship between alpha and gamma power. However, it is not clear from the caption or the Materials and methods section how this plot is produced. We take it that alpha suppression is sorted into 10 bins per subject. The description of the 'neural gain analysis' (subsection “Alpha suppression enhances the gain of cortical gamma responses”, Figure 5, and associated Materials and methods section) is unclear, which leaves us unable to fully judge its correctness. We understand that the output of a region is considered a (sigmoidal) function of total input, where total input is the sum of stimulus-related and endogenous input. Why is it such that "the isolated effect of sensory input […] can then be expressed as the first-order derivative of the sigmoid"? It seems to us that this derivative would be the effect of any input.

Thank you for pointing this out. We agree that the description of the model was unclear. The total input S on the x-axis of Figure 5A can come in two forms: pre-stimulus activity (endogenous, Sn) and sensory stimulation (exogenous, Sx). The total output is expressed by the sigmoidal function O(Sn + Sx). However, because the amount of activity induced by the sensory stimulus itself (Sx) is assumed to be more or less constant from trial to trial over the physiological range of S, the isolated effect of the stimulus on the output, given a certain level of Sn, is [O(Sn + Sx) − O(Sn)]/Sx. For small, constant Sx, this quantity approximates the first-order derivative of the sigmoid at Sn. As a result, the stimulus-evoked response as measured through gamma is proportional to this derivative of the total output. This is referred to as the gain, and it is a function of pre-stimulus synaptic activity Sn (alpha power). We have completely rewritten the section explaining this reasoning in the subsection “Pre-stimulus alpha power mediates cortical gamma responses”.
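In display form (our transcription of the reasoning above; the specific logistic parameterization is an assumption used only to make the sigmoid explicit):

```latex
O(S) = \frac{O_{\max}}{1 + e^{-k(S - S_0)}}, \qquad S = S_n + S_x, \qquad
\mathrm{gain}(S_n) = \frac{O(S_n + S_x) - O(S_n)}{S_x} \;\approx\; O'(S_n),
```

where O'(Sn) is maximal at intermediate Sn, yielding the inverted-U relationship between pre-stimulus alpha power (a proxy for Sn) and the stimulus-induced gamma response.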

This mistake is a symptom of the authors sometimes conflating gain (which we would equate with the slope of the output/input curve) and actual input or output. Relatedly, the authors write "stimulus-related output gain" where they actually mean "output"; i.e. it is (if we understand correctly) precisely the output that is not stimulus-related which is relevant here, namely the endogenous fluctuations. There is confusion between gain, input, and output in some other places in this description as well; also how these terms map onto experimental measures is a bit ambiguous. (Gain = liberal vs. conservative; input = alpha; output = gamma? This is what we understand, but at times it appears as though alpha is equated to gain instead of input.)

Thanks for pointing this out; we apologize for the fact that the term definitions in the manuscript were not always consistent and at times confusing. We have now revised the entire manuscript, including the model description, with clear definitions in mind. Specifically, we now refer to gain only as the steepness of the sigmoid input-output curve (which is reflected directly in its derivative), describing the alpha-mediated effect of input (the sensory stimulus) on the output (gamma). Input to visual cortex is defined as the sum of endogenous activity (measured via alpha) and stimulus-related input (not measured, but assumed to be constant across trials). Finally, the alpha-mediated output of visual cortex is thought to be reflected in gamma activity.

7) The 3-way ANOVA reported in the last paragraph of the subsection “Alpha suppression enhances the gain of cortical gamma responses” is, we believe, not the correct way to analyze these data. The dependent variable here is gamma power, with independent variables condition (liberal/conservative) and alpha power bin (10 levels). Thus, a 2-way RM-ANOVA would be appropriate. If the authors believe the 3-way approach is indeed correct, then they should explain why this is so.

We fully agree with this comment and performed the suggested 2-way repeated measures ANOVA. Moreover, while revising the input-output model we realized that the U-shaped relationship between alpha and gamma predicted by the model is more appropriately described by a Gaussian shape (this is explicit in the original R&D model, and in our depiction in Figure 5B) than by a quadratic function (as used in the original submission). Therefore, we took the standard Gaussian with unit standard deviation and used this shape as the contrast of interest in the ANOVA instead of the standard quadratic contrast. Indeed, this uncovers a significant alpha bin-by-condition interaction effect, suggesting that the input-output curves for the liberal and conservative conditions show differential fits to a Gaussian shape. We describe the results of this analysis in the fourth paragraph of the subsection “Pre-stimulus alpha power mediates cortical gamma responses”.
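As an illustration of how such a planned Gaussian contrast can be tested, here is a minimal sketch with simulated data (our own simplification, not the exact ANOVA implementation used for the manuscript: for a single planned contrast, the bin-by-condition interaction reduces to a paired t-test on per-subject contrast scores, and the placement of the 10 bins over ±2 SD of the unit Gaussian is an assumption):

```python
import numpy as np
from scipy import stats

def gaussian_contrast_weights(n_bins=10):
    """Gaussian-shaped contrast over bins: unit-SD Gaussian evaluated at the
    bin positions, mean-centered so the weights sum to zero."""
    x = np.linspace(-2.0, 2.0, n_bins)   # assumed bin placement over +/- 2 SD
    w = stats.norm.pdf(x, loc=0.0, scale=1.0)
    return w - w.mean()

# Alpha-binned gamma modulation per subject: (n_subjects, n_bins) per condition.
rng = np.random.default_rng(3)
n_subjects, n_bins = 16, 10
gamma_con = rng.normal(size=(n_subjects, n_bins))
gamma_lib = gamma_con + 0.3 * gaussian_contrast_weights(n_bins)  # toy effect

w = gaussian_contrast_weights(n_bins)
score_lib = gamma_lib @ w   # per-subject Gaussian contrast scores
score_con = gamma_con @ w

# Gaussian contrast within each condition, and the bin-by-condition interaction
# (equivalent here to a paired t-test on the contrast scores).
print(stats.ttest_1samp(score_lib, 0.0))
print(stats.ttest_1samp(score_con, 0.0))
print(stats.ttest_rel(score_lib, score_con))
```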

8) The approach taken for the "within-subject group regression" is unclear to us (also not explained in Materials and methods). The primary evidence that links gamma activity to DDM drift bias is, it appears, based on regressing drift bias onto gamma power across different alpha bins, where both variables are averaged within bin, across participants (Figure 6). The correct approach here would be to perform this regression per participant, and then test whether the regression coefficients are different from zero at the population level. (Or better yet, show the individual regression lines.)

Indeed, we previously performed the regression analysis of drift bias on gamma after averaging the data within bins across participants. Although averaging across participants before regression suppresses noise (i.e. interindividual variability) by focusing on the within-subject group effect (see e.g. Linkenkaer-Hansen et al., 2004, and Kloosterman et al., 2015, for applications of this analysis), it required the extra step of normalizing the data of each of the individual participants (z-scoring) to prevent the possibility that the presumed within-subject correlation was actually driven by interindividual differences (i.e. by subjects with weak or strong responses at the group level). On the other hand, fitting regression lines per participant across the ten bins and testing the regression coefficients against zero is suboptimal because just a single outlier in a given participant’s ten bins can greatly affect the slope of the regression line for that participant, which reduces sensitivity of a subsequent statistical group-level test on the individual slopes. To avoid these issues, we instead performed a so-called repeated measures correlation with our data, a term coined by Bakdash and Marusich, 2017, for an approach originally introduced by Bland and Altman, 1995. This mixed-effects approach entails a correlation across all repeated observations from all participants, while controlling for differences in individuals’ average responses, and correcting the degrees of freedom of the correlation for the number of subjects. Because such a mixed-effects repeated measures correlation takes the non-independence of the ten observations per participant into account, it tends to yield greater statistical power than data that are averaged in order to meet the assumption of data point independence for simple regression/correlation (see Bakdash and Marusich, 2017, for more details). Finally, this approach allows us to directly enter the drift bias and alpha-binned gamma values into the correlation analysis without additional normalization steps, thereby aiding transparency of our analyses. Indeed, this more sensitive approach reveals highly significant positive correlations between gamma and drift bias in both conditions, which we report in our new Figure 6 and the accompanying text in the subsection “Visual cortical gamma activity predicts strength of evidence accumulation bias”.
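For concreteness, a repeated measures correlation of this kind can be computed as in the sketch below (our own example with simulated placeholder data; the rm_corr implementation from the Python pingouin package is used for illustration and is not necessarily the software used for the manuscript):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Long-format table: one row per participant x alpha bin for one condition,
# containing the alpha-binned gamma modulation (% signal change) and the DDM
# drift bias estimated for that bin. Values below are random placeholders.
rng = np.random.default_rng(4)
n_subjects, n_bins = 16, 10
df = pd.DataFrame({
    'subject': np.repeat(np.arange(n_subjects), n_bins),
    'gamma': rng.normal(size=n_subjects * n_bins),
})
df['drift_bias'] = 0.5 * df['gamma'] + rng.normal(scale=0.5, size=len(df))

# Repeated measures correlation (Bland and Altman, 1995; Bakdash and Marusich,
# 2017): common within-subject association, controlling for between-subject
# offsets, with degrees of freedom corrected for the number of subjects.
print(pg.rm_corr(data=df, x='gamma', y='drift_bias', subject='subject'))
```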

9) The description of report proportions (Figure 2B) is not clear. Shouldn't these sum to 1 within a condition?

Thank you for pointing this out. The reason they do not sum to one within a condition is that the hit and false alarm rates are computed using different subsets of trials: target-present and target-absent trials, respectively. Thus, the hit rate in Figure 2B is computed as N hits / N target-present trials, whereas the false alarm rate is computed as N false alarms / N target-absent trials. Hit and miss rates do indeed sum to 1 since they are each other’s complement, and the same applies to false alarm and correct rejection rates, but hit rate and false alarm rate need not sum to 1. We now point out the complements of the hit and false alarm rates in the legend of Figure 2B to make this explicit.

Additionally, it would be good to have some sense of the absolute number of responses, including those responses counting as a miss/correct rejection.

Thank you: we now report median trial counts across participants for all four signal-detection-theoretic trial categories in both conditions in the second paragraph of the subsection “Manipulation of decision bias affects sensory evidence accumulation”.

[Editors' note: further revisions were requested prior to acceptance, as described below.]

[…] The following weaknesses were noted and should be addressed directly. Please note that there is no guarantee of acceptance.

1) The fact that the occipital alpha power is not clearly the decisive element here, but rather seems to be the consequence of a different signal of currently unknown (and still clearly under-investigated) origin that actually mediates the "drift bias". The idea that these dynamics are predominantly taking place within the visual system – as suggested in title and Abstract – is not sufficiently justified by evidence. This mostly amounts to a matter of emphasis – you do acknowledge that this "remains an open question" and that alpha modulation is not the whole story. Having said that, it is clear that the text (title Abstract and elsewhere) can be interpreted as trying to make such a point. So, I would suggest that a further rewrite on this point is in order, perhaps adding some of the details in the response-to-reviewers document to the manuscript (Discussion/supplement) itself.

Thank you for this suggestion. To investigate the existence of a top-down signal once more, we further inspected the literature to determine what could be the source of the alpha effect. This prompted us to investigate the potential role of theta oscillations, which mediate cognitive control mechanisms, as a signature of top-down processes reflecting our experimental task manipulations. We were pleasantly surprised to find that when we took a wider pre-stimulus time window than before (-1 to 0 s, needed to resolve theta) and performed a cluster-based permutation test over electrodes and frequencies (1-35 Hz), we not only recovered the alpha signal that we initially obtained by looking at the occipital electrode pooling only, but also observed a clear modulation of pre-stimulus theta power (2-6 Hz) in midfrontal electrodes, with stronger theta in the liberal than in the conservative condition. We now highlight both findings in Figure 5. Further, we have made substantial changes throughout the manuscript to reflect the fact that the effect of our experimental manipulation is not only visual in nature.

One of the reviewers noted that you stated in your response that you updated the description of the Rajagovindan and Ding model and now account for the fact that an additional signal is required to explain these results, but the detailed explanation of this is lacking in the central description of this model in the subsection “Pre-stimulus alpha power mediates cortical gamma responses” (the second paragraph is not easy to understand and hides the fact that another external factor is needed here).

Thank you. Although we point to the necessity of an additional mechanism in other parts of the manuscript, we now also make the need for an additional mechanism clear in the description of the model in the Results subsection “Pre-stimulus alpha power mediates cortical gamma responses”.

2) Given the debate about whether you have identified the critical neural mechanism/correlate of the behavioral effect, it is even more important, for the potential novelty and significance of the results, to demonstrate that your results are specific to drift bias as opposed to starting point. This distinction is theoretically important, but the main conclusion in support of drift bias account is based on model fit metrics (BIC) alone. To be more confident that this difference is meaningful it is important to provide evidence that the drift bias model empirically captures the RT distributions better than the starting point model. In theory the better BIC fit indicates the drift bias does capture RT distributions better, but sometimes models can fit better using BIC (or other such metrics) for nuisance reasons unrelated to the core theoretical distinction. You do show individual subject RT distribution fits in Figure 2—figure supplement 3 for the drift bias model which look quite reasonable, but you do not show the corresponding fits to the starting point model, and the main claim of the paper rests on the ability to distinguish these with your task design/models.

Indeed, our main conclusion that the drift bias model best explains our behavioral data was based on the BIC results, and supported by the observed link between stimulus-induced gamma and drift bias. Bayesian model comparison does consistently identify the drift bias model as superior across participants (for 15/16 participants), but the actual BIC differences between the two models are relatively small. We now show the individual-subject RT distributions in Figure 2—figure supplement 3 for the starting point model as well, but to the naked eye both models seem to provide similarly reasonable fits to the RT distributions. We therefore turned to the EEG data to test this main conclusion more rigorously. Specifically, we investigated the time courses of established EEG signatures of decision formation at the levels of sensory encoding and motor responses. Following previous studies, we hypothesized that a starting point bias would be reflected in a difference in baseline activity between the conditions before onset of the decision process, as has been shown previously at the motor output level during perceptual expectation (de Lange et al., 2013) and speeded decision making (Afacan-Seref et al., 2018). Conversely, we predicted that a drift bias occurring during the process of sensory evidence accumulation would be reflected in a steeper slope of post-stimulus and pre-response ramping activity, as well as a higher peak amplitude following stimulus onset. We now provide these results in a new Figure 4. A number of observations in this figure are in line with the drift bias model. First, we compared the two conditions in the pre-trial baseline period by looking at raw power. Specifically, we inspected two types of signals: i) stimulus-related activity (gamma modulation and the SSVEP), and ii) motor-related EEG signatures (left-hemispheric beta (LHB) power and the event-related potential (ERP)) in left motor cortex around the time of the behavioral response (as also suggested by the reviewer below). For none of these signals did we find a statistically meaningful difference between conditions in the pre-trial baseline activity, suggesting a similar starting point of evidence accumulation in both conditions. After trial onset, in contrast, both the sensory signals and the motor-related signals evolved differently in the liberal compared to the conservative condition, as expressed in a higher peak level and a steeper slope in the liberal condition. Together, these findings provide converging evidence that participants responded to the bias manipulations by adjusting the rate of evidence accumulation toward signal presence, but not its starting point. We have added these new findings to a new Figure 4 in the manuscript and report them in the subsection “EEG power modulation time courses consistent with the drift bias model”. Please also see our descriptions of the specific effects in response to the reviewer’s points below.

3) Relatedly, you do provide evidence that stimulus activity in gamma is amplified in the liberal condition, which is consistent with a drift bias, and also that this correlates with the extent of drift bias across subjects. What would be nice here is if you showed that this correlation was specific to the drift bias model and that it was statistically more evident than the corresponding correlations with starting point bias (i.e., you could test if the stimulus gamma activity is similarly correlated with estimated starting point bias in the alternative model). This would provide another more specific test of the evidence for the link between EEG and behavior even without identifying the top-down mechanism.

Thank you for this suggestion. To test whether drift bias was more strongly linked to gamma than starting point, we regressed gamma on both bias parameters estimated per alpha bin (within the two respective models). Crucially, we found that in both conditions starting point bias did not uniquely predict gamma when controlling for drift bias (liberal: F(1,124) = 5.8, p = 0.017 for drift bias, F(1,124) = 0.3, p = 0.61 for starting point; conservative: F(1,124) = 8.7, p = 0.004 for drift bias, F(1,124) = 0.4, p = 0.53 for starting point). This finding again suggests that the drift bias model outperforms the starting point model when linked to gamma power. We report this finding in the first paragraph of the subsection “Visual cortical gamma activity predicts strength of evidence accumulation bias”.
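A minimal sketch of this kind of control analysis is given below (our own simplified, fixed-effects illustration with simulated placeholder data; the reported F(1,124) values come from the actual per-bin estimates and may reflect structure not modeled here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data for one condition: alpha-binned gamma modulation together
# with the drift-bias and starting-point estimates for the same bins.
# Random placeholders stand in for the real per-bin values.
rng = np.random.default_rng(5)
n = 160  # e.g. 16 participants x 10 alpha bins
df = pd.DataFrame({
    'drift_bias': rng.normal(size=n),
    'starting_point': rng.normal(size=n),
})
df['gamma'] = 0.4 * df['drift_bias'] + rng.normal(scale=1.0, size=n)

# Regress gamma on both bias parameters; the Type II F-tests quantify the
# unique contribution of each predictor while controlling for the other.
model = smf.ols('gamma ~ drift_bias + starting_point', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```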

To sum up, we now present three independent and converging data points indicating that participants implement decision bias by adjusting the process of evidence accumulation, but not its starting point:

1) BIC values are lower for the drift bias model than for the starting point drift diffusion model.

2) Established sensory and motor-related EEG signals show no significant differences in pre-stimulus baseline activity in the liberal compared to the conservative condition, but do show stronger post-stimulus modulation.

3) The correlation between alpha-binned gamma and drift bias is robust to controlling for alpha-binned starting point.

We feel that these converging pieces of evidence together make a compelling case for our conclusion that decision bias is linked to adjustments in sensory evidence accumulation.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Citations

    1. Niels A. Kloosterman, Jan Willem de Gee, Markus Werkle-Bergner, Ulman Lindenberger, Douglas D Garrett, Johannes Jacobus Fahrenfort. 2018. Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex. figshare.
    2. Kloosterman NA, de Gee JW, Werkle-Bergner M, Lindenberger U, Garrett DD, Fahrenfort JJ. 2018. Data from: Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex. figshare.

    Supplementary Materials

    Figure 2—source data 1. This csv table contains the data for Figure 2 panels B, C, E and F.
    DOI: 10.7554/eLife.37321.008
    Figure 6—source data 1. SPSS .sav file containing the data used in panels C, E, and F.
    DOI: 10.7554/eLife.37321.014
    Figure 7—source data 1. MATLAB .mat file containing the data used.
    DOI: 10.7554/eLife.37321.017
    Transparent reporting form
    DOI: 10.7554/eLife.37321.019

    Data Availability Statement

    All data analysed during this study are publicly available, see https://doi.org/10.6084/m9.figshare.6142940.v1. Analysis scripts are publicly available on Github (https://github.com/nkloost1/critEEG; copy archived at https://github.com/elifesciences-publications/critEEG).

    The following dataset was generated:

    Niels A. Kloosterman, Jan Willem de Gee, Markus Werkle-Bergner, Ulman Lindenberger, Douglas D Garrett, Johannes Jacobus Fahrenfort. 2018. Humans strategically shift decision bias by flexibly adjusting sensory evidence accumulation in visual cortex. figshare.

