Author manuscript; available in PMC: 2022 Oct 1.
Published in final edited form as: J Exp Psychol Hum Percept Perform. 2021 Oct;47(10):1329–1347. doi: 10.1037/xhp0000942

Appealing to the cognitive miser: Using demand avoidance to modulate cognitive flexibility in cued and voluntary task switching

Nicholaus P Brosowsky 1, Tobias Egner 1
PMCID: PMC8597921  NIHMSID: NIHMS1738649  PMID: 34766818

Abstract

Current cognitive control accounts view goal-directed behavior as striking a balance between two antagonistic control demands: Stability, on the one hand, reflects a rigid, focused state of control; flexibility, on the other, reflects a relaxed, distractible state in which goals can be rapidly updated to meet unexpected changes in demands. In the current study, we sought to test whether the avoidance of cognitive demand could motivate people to dynamically regulate control along the stability-flexibility continuum. In both cued (Experiment 1) and voluntary (Experiment 2) task-switching paradigms, we selectively associated either task switches or task repetitions with high cognitive demand (independent of task identity), and measured changes in performance in a subsequent transfer phase after the demand manipulation was removed. Contrasting performance with a control group, across both experiments, we found that selectively associating cognitive demand with task repetitions increased flexibility, but selectively associating cognitive demand with task switches failed to increase stability. The results of the current study provide novel evidence for avoidance-driven modulations of control regulation along the stability-flexibility continuum, while also highlighting some limitations in using task-switching paradigms to examine motivational influences on control adaptation. Data, analysis code, experiment code, and pre-print available at osf.io/7rct9/.

Keywords: cognitive control, task switching, cognitive flexibility, mental effort


Current cognitive control accounts view goal-directed behavior as striking a balance between two antagonistic control demands (Braem & Egner, 2018; Braver, 2012; Brosowsky & Crump, 2018; Diamond, 2013; Dreisbach, 2012; Dreisbach & Fröber, 2019; Egner, 2014; Goschke, 2003, 2013; Hommel & Elliot, 2015). Cognitive stability, on the one hand, is a rigid, focused state, enabling firm goal-maintenance and the suppression of distraction. Cognitive flexibility, on the other hand, is a more relaxed, yet distractible, state in which goals can be rapidly updated to meet unexpected changes in demands. Importantly, the desirability of biasing control towards stability or flexibility is context dependent. In some contexts, like studying, you need to focus on a single task over sustained periods and ignore many potential distractions. In other contexts, like cooking, however, you need to monitor and quickly switch between multiple tasks (e.g., cutting vegetables, boiling water, heating oil in a pan). Even being ‘distractible’ in this context could be beneficial if a distraction alerts you to an important change in the task environment (e.g., being distracted by the smell of burning oil).

Accordingly, adopting contextually inappropriate control strategies can have negative consequences. You might fail to notice the burning oil because you were overly focused on cutting vegetables, for instance, or you might fail to remember critical exam material because you were insufficiently focused when studying. Failures to adaptively regulate control can thus be disruptive for everyday functioning and–if persistent–characterize various clinical disorders (Chamberlain et al., 2006; Geurts et al., 2009; Meiran et al., 2011; see Goschke & Bolte, 2014 for a review). For example, extreme flexibility may result in overly distractible behavior as observed in attention deficit hyperactivity disorder (ADHD). Extreme stability, by contrast, may result in overly rigid or perseverative behavior as observed in obsessive compulsive disorder (OCD) and autism. Thus, identifying the factors that enable adaptive control regulation in a context-sensitive manner is of both theoretical and clinical relevance. In the current study, we sought to test whether the avoidance of cognitive demand could motivate people to dynamically regulate control along the hypothetical stability-flexibility continuum.

In the laboratory, cognitive flexibility is often measured using task-switching paradigms, where participants shift between two simple cognitive tasks (Dreisbach & Fröber, 2019; Monsell, 2003). Here, switching tasks incurs a switch cost (slower and more error-prone performance on task-switch than on task-repetition trials), which is thought to be due primarily to time-consuming active task-set reconfiguration processes and to the influence of “task set inertia”, a passive carry-over of the previous trial’s task set (for a recent review, see Koch et al., 2018). Whereas task repetitions benefit from stability, task switches require flexibility. Such flexibility could be accomplished through advance preparation (improved task-set reconfiguration) or by influencing the carry-over effects (increasing the inhibition of previous task sets, for instance). In forced-choice variants (externally cued task switching), switching efficiency, as measured by switch costs (here, also referred to as switch efficiency scores), is used to index control, with large switch costs indicating a stable mode of control and small switch costs indicating a more flexible mode of control. In voluntary choice variants, where participants are free to choose which task they perform on a given trial, the switch rate is taken as an index of voluntarily engaged control, that is, a willingness to switch rather than the efficiency of switching. Here, low switch rates indicate adoption of a more stable mode of control and high switch rates a more flexible mode of control (Dreisbach & Fröber, 2019).
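For concreteness, both indices reduce to simple computations over trial-level data. The following R sketch is our own illustration (with hypothetical column names), not code from the present study: it computes a response-time switch cost and a switch rate from a data frame `trials` with columns `rt` and `transition`.

```r
# Switch cost: mean RT on task-switch trials minus mean RT on task-repetition
# trials (larger values = less efficient switching = a more stable mode).
switch_cost <- function(trials) {
  mean(trials$rt[trials$transition == "switch"]) -
    mean(trials$rt[trials$transition == "repeat"])
}

# Voluntary switch rate: the proportion of trials on which the participant
# switched tasks (higher values = a more flexible mode).
voluntary_switch_rate <- function(trials) {
  mean(trials$transition == "switch")
}
```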

Much research has focused on how changes in task context can influence adaptive control. For instance, varying the proportion of forced task-switches (e.g., Crump & Logan, 2010; Dreisbach & Haider, 2006; Mayr, 2006; Monsell & Mizon, 2006; Siqi-Liu & Egner, 2020), providing predictive contextual cues (e.g., Crump & Logan, 2010), and varying stimulus availability (Mittelstädt, Dignath, et al., 2018; Mittelstädt et al., 2019; Mittelstädt, Miller, et al., 2018) have all been shown to modulate control regulation. Learning to adapt control to meet changes in task demands is thought to be accomplished via associative learning processes. In line with recent cognitive control literature, control settings themselves are thought to become associated with specific stimuli or event types (including event transitions, see e.g., Chiu & Egner, 2017) via the same kind of associative learning processes that link stimuli and motor responses (Abrahamse et al., 2016; Egner, 2014). Accordingly, the observation of smaller switch costs in contexts where switching is required more frequently would reflect participants learning to associate the context (or specific task sets) with a high likelihood of having to switch tasks and adjusting their cognitive strategies in line with this expectation, for instance, by increasing their readiness to reconfigure task sets.

However, more recently, research has also begun to examine motivational influences on adaptive control regulation, such as rewards (Chiew & Braver, 2011; Dreisbach & Fischer, 2012; Dreisbach & Fröber, 2019; Goschke & Bolte, 2014; Hommel, 2015). Numerous studies have now demonstrated that the prospect of a performance-contingent reward biases control regulation towards stability (Chiew & Braver, 2013, 2014; Fröber & Dreisbach, 2016; Jimura et al., 2010; Locke & Braver, 2008; Padmala & Pessoa, 2011) at the cost of flexibility (Hefer & Dreisbach, 2017; Müller et al., 2007). Braem et al. (2012), for instance, used a cued task switching paradigm and rewarded 25% of trials. They found increased switch costs on trials immediately following a reward, suggesting that the reward reinforced the previously used task set, biasing control towards stability and, consequently, making it more difficult to switch (Kleinsorge & Rinkenauer, 2012; Shen & Chun, 2011). However, there have also been some cases where rewards have been used to promote cognitive flexibility. For instance, when rewards are not contingent on performance, control tends to be biased towards flexibility (Fröber & Dreisbach, 2014, 2016; Notebaert & Braem, 2015). Similarly, increasing reward prospect from one trial to the next tends to bias control towards flexibility on the following trial (Fröber et al., 2019, 2020; Fröber & Dreisbach, 2016; Kleinsorge & Rinkenauer, 2012; Shen & Chun, 2011).

Finally, and most relevant for the current study, selectively rewarding task switches has also been shown to bias control regulation towards flexibility. Braem (2017) used a mixed cued and voluntary task-switching paradigm. Within each block of trials, the first half contained only forced-choice trials and the second half only voluntary task-selection trials. During the cued trials, one group of participants was selectively rewarded on task-switch trials, whereas another group was selectively rewarded on task-repetition trials. They found that participants who were selectively rewarded on task-switch trials had a higher voluntary task switch rate than participants who were selectively rewarded on task-repetition trials (though there were no differences in reaction-time or error-rate switch costs). This result suggests that control regulation can be conditioned (e.g., Abrahamse et al., 2016; Braem & Egner, 2018; Egner, 2014; Verbruggen et al., 2014), in this case towards flexibility, and that the conditioned bias persists even after rewards have been removed.

One key assumption is that rewards motivate adaptive control regulation because they offset the intrinsic costs of engaging control. It is a longstanding and pervasive idea in psychology that people are “cognitive misers”, adapting their behavior to minimize cognitive demands (e.g., Allport et al., 1954; Hull, 1943; Rosch, 1999; Solomon, 1948; Zipf, 1949). In cognitive psychology, cognitive demand or ‘effort’ has long been associated with controlled information processing (Posner & DiGirolamo, 1998; Shiffrin & Schneider, 1977), whereby people weigh the intrinsic cost of engaging in controlled, effortful processing against the potential gains in performance (Monsell, 2003). Indeed, evidence suggests that people tend to avoid engaging in controlled processing when they can offload such processing to learning and memory processes that exploit environmental regularities (Brosowsky & Crump, 2016, 2018, 2021; Bugg, 2014; Crump et al., 2017). Recent work also suggests that people explicitly avoid engaging in control-demanding tasks. In so-called ‘demand selection tasks’ (Botvinick & Rosen, 2009; Dunn & Risko, 2016; Gold et al., 2015; Kool et al., 2010; McGuire & Botvinick, 2010), participants can select between two alternative courses of action that vary in terms of cognitive demand—here, operationalized as the likelihood of a task switch (see Monsell, 2003). This work has demonstrated that people tend to avoid selecting courses of action associated with increased executive control demand (i.e., tasks with a high likelihood of switches; Dunn & Risko, 2016; Gold et al., 2015; Kool et al., 2010).

Taken together, the aforementioned research demonstrates that people tend to avoid demanding tasks—an observation typically attributed to the aversive nature of mental effort (Shenhav et al., 2017). However, there is little work examining how or whether the avoidance of ‘costs’ influences the adaptive regulation of control (e.g., Mittelstädt et al., 2019; Mittelstädt, Miller, et al., 2018). Such work is necessary for developing a comprehensive framework of the cost-benefit analysis thought to underlie control regulation. Furthermore, finding novel evidence that demand avoidance is (or is not) a determinant of control regulation would be important for understanding maladaptive control regulation—which may result from an inability to appropriately evaluate cognitive demand. To address this gap, we examined whether cognitive demand avoidance can motivate adaptive control regulation in the context of cued and voluntary task switching.

Specifically, in the current study we examined whether selectively associating high cognitive demand with task switches versus repetitions would bias control regulation towards flexible versus stable control strategies, respectively. Inspired by Braem (2017), we reasoned that if demand avoidance were a determinant of control regulation (similar to reward), then selectively associating high cognitive demand with task repetitions should cause participants to bias their control regulation towards flexibility (i.e., adopting a control regulation strategy that reduces demand) as evidenced by improved switching efficiency. Similarly, selectively associating high cognitive demand with task switches should bias control towards stability, as evidenced by reduced switching efficiency.

Although our experimental design follows a similar logic as Braem (2017), we extended it in two important ways. First, we examined the influence of cognitive demand (rather than reward) on cued and voluntary task selection independently to determine whether our manipulation would influence task-switching efficiency and overt task selection behavior in a similar manner. Second, we included a third, control group, which received an equal number of high- and low-demand trials, unassociated with task switches and repetitions. Braem (2017) did not include such a control group, but without a baseline comparison it is unknown whether their manipulation influenced both groups equally (i.e., increased flexibility in one group and decreased flexibility in the other) or only influenced one group (e.g., influenced flexibility in one group, but had no influence on the other group), as both possibilities could result in a difference between groups. Therefore, including the control group allows us to assess whether the avoidance of demand produces symmetrical effects along the stability-flexibility dimension.

Experiment 1

In Experiment 1, we examined control regulation in the context of a forced-choice, cued task-switching paradigm. On every trial an array of colored shapes was presented, and participants completed either a color or shape discrimination task, indicating which one of two possible colors/shapes was presented in the higher quantity (see Figure 1). Critically, cognitive demand was manipulated by varying the relative proportions of each shape or color, thereby increasing or decreasing the relative discriminability of the target stimulus (see Method). Participants were randomly assigned to one of three groups (High Demand Switch, High Demand Repetition, and Control), each completing a learning and a transfer phase. In the learning phase, high cognitive demand was either associated with task repetitions (High Demand Repetition group), task switches (High Demand Switch group), or randomly assigned on every trial (Control group). In the transfer phase, all three groups received the same ‘medium’ demand trials (as determined by the titration phase; see Method). We expected that the learning phase would bias control regulation, and that this bias would persist through the transfer phase (e.g., Braem, 2017), as evidenced by smaller switch efficiency scores (i.e., switch costs) for the High Demand Repetition (flexible) group and larger switch efficiency scores for the High Demand Switch (stable) group, as compared to the Control group.

Figure 1.

An illustration of the color and shape tasks used in Experiments 1 and 2. On every trial, an array of colored shapes was presented and participants indicated which color or shape was presented in the higher quantity. Demand was manipulated by varying the relative proportion of shapes. Whereas low demand trials contained a relatively high target-shape proportion (e.g., 80:20), high demand trials contained a relatively low target-shape proportion (e.g., 60:40). For illustrative purposes, the example stimuli contain arrays of 10 shapes. The stimuli used in the experiments contained arrays of 100 shapes (see Method).

Method

Sample Size Justification

Sample sizes were determined by estimating the power to detect a range of effects using a Monte-Carlo simulation approach (e.g., Brosowsky et al., 2021; Brosowsky & Crump, 2021; Crump et al., 2017). Using pilot data collected with the current paradigm, we estimated the distributions of ex-Gaussian parameters representative of participant response times (mu = 850 ms [sd = 243 ms], sigma = 224 ms [sd = 114 ms], tau = 265 ms [sd = 79 ms]). For each simulated participant, we sampled ex-Gaussian parameters (truncated to +/− 1.5 standard deviations) and sampled 240 response times (120 switch/120 repeat trials) from the resulting ex-Gaussian distribution. To create a “switch cost”, we subtracted half the switch-cost effect from the sampled response times on ‘repeat’ trials and added half the switch-cost effect to the sampled response times on ‘switch’ trials. In particular, we were interested in estimating our ability to detect changes in the switch cost across two groups. The effect size, therefore, was the size of the difference in response-time switch costs between groups (e.g., a 30-ms difference in switch costs). For each effect size and sample size, we ran 1000 simulations, analyzing the simulated data using a linear mixed-effects model with Group (High Switch Cost and Low Switch Cost) and Task (Repeat and Switch) as fixed effects and Subject as a random effect. From these simulations, we estimated that a minimum of 38 participants per group was needed to detect a 30-ms difference in switch costs between groups with 80% power. We collected data from 50 participants per group to ensure we would meet this minimum threshold. With 50 participants per group, we estimated we could detect a 30-ms difference in switch costs with 90% power (code and results available at osf.io/7rct9).
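To make the simulation logic concrete, the sketch below re-implements the procedure in R under the assumptions stated above. It is an illustration, not the authors' code (which is available at osf.io/7rct9): the helper names and the 100-ms baseline switch cost are our own assumptions, truncation is approximated by clamping, and significance is approximated with |t| > 1.96 rather than exact p-values.

```r
library(lme4)

# Simulate one participant: 240 RTs (120 switch, 120 repeat) drawn from an
# ex-Gaussian (Normal + Exponential) with subject-level parameters clamped
# at +/- 1.5 SD, and a switch cost built in by shifting the trial types apart.
simulate_subject <- function(id, group, switch_cost_ms, n_trials = 240) {
  clamp <- function(m, s) min(max(rnorm(1, m, s), m - 1.5 * s), m + 1.5 * s)
  mu    <- clamp(850, 243)
  sigma <- clamp(224, 114)
  tau   <- clamp(265, 79)
  task  <- rep(c("repeat", "switch"), each = n_trials / 2)
  rt    <- rnorm(n_trials, mu, sigma) + rexp(n_trials, rate = 1 / tau)
  rt    <- rt + ifelse(task == "switch", switch_cost_ms / 2, -switch_cost_ms / 2)
  data.frame(subject = id, group = group, task = task, rt = rt)
}

# Power to detect a between-group difference in switch costs (the Group x
# Task interaction). base_cost (100 ms) is an assumed baseline switch cost.
estimate_power <- function(n_per_group, diff_ms = 30, base_cost = 100,
                           n_sims = 1000) {
  hits <- replicate(n_sims, {
    dat <- rbind(
      do.call(rbind, lapply(seq_len(n_per_group), function(i)
        simulate_subject(i, "low", base_cost))),
      do.call(rbind, lapply(seq_len(n_per_group), function(i)
        simulate_subject(i + n_per_group, "high", base_cost + diff_ms)))
    )
    fit <- lmer(rt ~ group * task + (1 | subject), data = dat)
    abs(coef(summary(fit))["grouplow:taskswitch", "t value"]) > 1.96
  })
  mean(hits)
}

# Under these assumptions, estimate_power(38) should land near .80 and
# estimate_power(50) near .90, matching the values reported above.
```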

Participants

Participants were 150 individuals (demographics are presented in Table 1) who completed a Human Intelligence Task (HIT) posted on Amazon Mechanical Turk. Participants were paid $3.50 (U.S. dollars) for completing the HIT, which lasted approximately 25 minutes. Restricting participation to Amazon workers with a high reputation and proven work history has been shown to increase the data quality of online studies (e.g., Peer et al., 2014). Therefore, only Amazon workers who had completed more than 5000 HITs with 98% approval were able to complete the experiment.

Table 1.

Demographics from Experiments 1 and 2

Experiment 1
                HD-R            Control         HD-S
Age             43.08 (12.58)   39.35 (10.21)   38.57 (10.14)
Handedness
  Right         48              39              41
  Left          1               4               6
  Both          1               1               1
  No Response   0               6               2
Gender
  Men           27              23              33
  Women         23              26              16
  Self-Defined  0               0               1
  No Response   0               1               0

Experiment 2
                HD-R            Control         HD-S
Age             38.98 (10.52)   39.34 (10.28)   43.28 (12.31)
Handedness
  Right         41              38              43
  Left          4               5               6
  Both          4               3               1
  No Response   2               1               2
Gender
  Men           25              22              28
  Women         24              25              23
  Self-Defined  0               0               1
  No Response   2               0               0

Note: HD-R = High Demand Repetition; HD-S = High Demand Switch; Age = mean age (standard deviation)

Stimuli and procedure

The experiment was programmed in JavaScript and HTML/CSS. The experiment was presented in full-screen and stimuli were presented at the center of an off-white screen in near-black colored font. The response-key mappings were displayed throughout the experiment below the stimulus display.

Throughout the experiment, participants completed color and shape discrimination tasks. For each task, 100 non-overlapping shapes were presented in random positions on a 400 x 400 px display. Instructions indicating which response corresponded with which key were always presented below the stimulus. In the color task, participants had to indicate whether there were more light-blue (hexadecimal color code: #92c5de) or dark-blue (hexadecimal color code: #4393c3) circles. In the shape task, participants had to indicate whether there were more black squares or black triangles. Participants responded using the “Z” and “M” keys on the keyboard for both tasks. Response-key mappings were randomly assigned for each participant (e.g., “Z” for dark-blue or squares; “M” for light-blue or triangles). If participants responded correctly, the word “Correct” was displayed in green font for 500 ms before the next trial automatically began. If they responded incorrectly, the words “Incorrect. Press the space bar to continue.” were displayed in red font. Pressing the space bar triggered the next trial.

Participants were first given instructions and a general overview of the experiment. They were informed that they would have to complete one of two tasks on every trial and to respond as quickly and as accurately as possible. However, they were not informed about the difficulty manipulations. Verbatim instructions can be found in Appendix A.

Each participant was randomly assigned to one of three groups (hereafter referred to as the High Demand Switch, High Demand Repetition, and Control groups) and completed three phases: a Titration phase, a Learning phase, and a Transfer phase. The first, Titration, phase contained two blocks of 96 trials, each consisting of only one of the two tasks, randomly assigned to the first or second block. These two blocks served to titrate the difficulty of each task independently in a participant-specific fashion, using a 3-up-1-down adaptive staircase procedure. The procedure started at 70/100 and, on each step, the relative proportion of shapes shifted by 2 out of 100 items. After three consecutive correct responses, the difficulty increased by one step; after an incorrect response, the difficulty decreased by one step. At the end of the 96 trials, the titrated proportion was taken as the “Medium” demand level for the remainder of the experiment. The Low and High demand levels were determined by selecting the proportion halfway between the Medium demand and 100/0 (Low) and halfway between Medium and 50/50 (High). For example, if a participant titrated to 66/100 as the medium demand, low demand was set at 83/100 and high demand was set at 58/100.
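The following R sketch illustrates the staircase and the derivation of the three demand levels. It is a schematic re-implementation, not the experiment code (which was written in JavaScript; see osf.io/7rct9): `run_trial` is a hypothetical stand-in for presenting one trial, and the clamping bounds (52, 98) are our own assumption.

```r
# 3-up-1-down staircase over the target-item proportion (out of 100).
# Lower proportions are harder (closer to 50/50).
titrate <- function(run_trial, n_trials = 96, start = 70, step = 2) {
  prop   <- start   # current target proportion
  streak <- 0       # consecutive correct responses
  for (i in seq_len(n_trials)) {
    if (run_trial(prop)) {            # TRUE if the response was correct
      streak <- streak + 1
      if (streak == 3) {              # three correct in a row -> one step harder
        prop   <- max(prop - step, 52)
        streak <- 0
      }
    } else {                          # any error -> one step easier
      prop   <- min(prop + step, 98)
      streak <- 0
    }
  }
  prop                                # titrated "Medium" proportion
}

# Demand levels derived from the titrated Medium proportion, as in the text:
medium <- 66
low    <- (medium + 100) / 2          # halfway toward 100/0 -> 83
high   <- (medium + 50) / 2           # halfway toward 50/50 -> 58
```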

The second, Learning, phase consisted of 241 trials, with an equal number of color and shape trials, an equal number of low- and high-demand trials, and an equal number of task switches and task repetitions. We used custom functions to create trial lists ensuring these criteria were met. Because task transitions are defined relative to the preceding trial, this required an extra trial at the beginning of the phase; this trial was randomly assigned a difficulty but removed prior to all analyses. The association between difficulty and task transition differed across groups. For the High Demand Repetition group, every task-switch trial contained a low-demand target stimulus and every task-repeat trial contained a high-demand target stimulus. For the High Demand Switch group, every task-switch trial contained a high-demand target stimulus and every task-repeat trial contained a low-demand target stimulus. For the Control group, difficulty was randomly assigned on every trial with the constraint that there were an equal number of low- and high-demand trials throughout the phase.
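A minimal sketch of how demand could be yoked to task transitions across the three groups (the group labels and helper function are ours; the actual trial-list functions are part of the experiment code at osf.io/7rct9):

```r
# `transitions` is a vector of "switch"/"repeat" labels with equal counts of
# each; returns a "high"/"low" demand label for every trial.
assign_demand <- function(transitions, group) {
  switch(group,
    "HD-Rep"    = ifelse(transitions == "repeat", "high", "low"),
    "HD-Switch" = ifelse(transitions == "switch", "high", "low"),
    "Control"   = sample(rep(c("high", "low"),          # random assignment,
                             length.out = length(transitions)))  # but balanced
  )
}
```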

The third, and final, Transfer phase consisted of 240 trials, with an equal number of color and shape trials and an equal number of task repetition and task switch trials. Critically, however, for all groups, all trials were medium demand during this phase. The learning and transfer phases were separated by an additional instruction screen, informing participants they were halfway through the experimental trials and reiterating the instructions.

Results

Data analysis and manuscript preparation

This manuscript was prepared using R (R Core Team, 2019). A variety of R packages were used for data analysis (Bates et al., 2015; Fox & Weisberg, 2019; Kuznetsova et al., 2017; Singmann et al., 2019; Wickham et al., 2019; Wickham & Henry, 2019), data visualization (Fox & Weisberg, 2018; Kassambara, 2019; Wickham, 2016; Wilke, 2019), and general manuscript preparation (Aust & Barth, 2018). All data, analysis, and manuscript preparation code can be found at osf.io/7rct9/.

Participants with less than 60% accuracy in the transfer phase were excluded from all analyses. This removed 3 participants from the High Demand Repetition group, 2 participants from the High Demand Switch group, and 1 participant from the Control group. Prior to all analyses, the first trial of each block was removed, as was any trial with a response time greater than 3 seconds or less than 300 milliseconds (removing 3.3% of observations). In addition, for response time analyses, all error trials were removed, all trials where the previous trial was an error were removed, and then, finally, the Van Selst and Jolicoeur non-recursive outlier removal procedure was applied (Van Selst & Jolicoeur, 1994). This procedure uses an adaptive standard deviation cut-off, adjusted for the number of observations, thus reducing potential bias in mean estimates. This outlier procedure removed an additional 3.4% of observations, for a total of 6%.
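The exclusion pipeline can be summarized schematically in R as follows. Column names are hypothetical, and `vsj_keep()` stands in for the published Van Selst and Jolicoeur (1994) procedure (an adaptive SD cutoff that depends on the number of observations per cell), which is not reproduced here.

```r
library(dplyr)

rt_data <- trials %>%
  group_by(subject) %>%
  filter(mean(correct[phase == "transfer"]) >= 0.60) %>%  # drop low-accuracy Ss
  ungroup() %>%
  filter(trial_in_block > 1,               # drop the first trial of each block
         rt >= 300, rt <= 3000) %>%        # fixed RT bounds (ms)
  group_by(subject) %>%
  filter(correct,                          # RT analyses: drop error trials...
         lag(correct, default = TRUE)) %>% # ...and trials following an error
  group_by(subject, transition) %>%
  filter(vsj_keep(rt)) %>%                 # adaptive SD cutoff per cell
  ungroup()
```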

For the response time analyses, we used linear mixed-effects models (LMM), which provide numerous advantages over an ANOVA (Baayen et al., 2008). Most importantly, unlike repeated-measures ANOVA, when using a mixed-effects model we do not aggregate over participant data, preventing the loss of important information about the variability within participants and increasing our statistical power (e.g., Barr, 2008). Given the similarity in LMM and ANOVA outputs and interpretations, we decided that the precision we gained from adopting the LMM outweighed the added overhead in complexity. For the error-rate analyses, we opted for the more conventional ANOVA over other more complex methods (e.g., generalized linear mixed-effects models with Bernoulli or binomial distributions). Importantly, however, the results and conclusions do not differ depending on the type of error-rate analysis.
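For reference, the two analysis approaches amount to model specifications along these lines (a sketch with hypothetical data frames and column names; lmerTest and afex are among the R packages cited above):

```r
library(lmerTest)   # lmer() with Satterthwaite df and p-values
library(afex)       # convenience wrapper for mixed ANOVA

# RT: linear mixed model, crossed fixed effects, by-subject random intercept
rt_fit <- lmer(rt ~ group * transition * phase + (1 | subject), data = rt_data)
summary(rt_fit)

# Error rates: mixed ANOVA (Group between subjects; Transition, Phase within)
er_fit <- aov_ez(id = "subject", dv = "error", data = er_data,
                 between = "group", within = c("transition", "phase"))
anova(er_fit)
```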

Titration and Difficulty Analyses

First, we sought to determine the success of our titration procedure (see Figure 2). Collapsing across groups, we compared the resulting titration levels across the shape and color tasks and found that the shape task (M = 0.68), on average, titrated to a higher proportion than the color task (M = 0.62), Md = −0.05, 95% CI [−0.06, −0.04], t(143) = −9.46, p < .001. That is, participants required a higher signal-to-noise ratio in the shape task than the color task to reach similar accuracy levels.

Figure 2.

Results from the control group in Experiment 1. Response times and error rates are plotted as a function of trial demand (low, medium, and high). Whereas the low and high demand trials are from the learning phase, the medium demand trials are from the transfer phase (see Method for more details).

Next, to validate whether the difficulty manipulations were successful, we examined performance across the three levels of difficulty (Low, Medium, and High) for the Control group (see Figure 2; see Appendix B for full model results). As a reminder, the Control group completed Low, Medium, and High demand trials on both task-switches and task-repetitions. In the learning phase, they completed the low and high demand trials and in the transfer phase, they completed the medium demand trials. Therefore, the Control group performance provides the clearest estimate of our demand manipulation, independent of task-switching. To analyze the response times, we used a linear mixed-effects model with Difficulty as a fixed effect and Subject as a random effect. We found that participants in the Control group were significantly quicker responding on the Low demand trials (M = 921 ms), as compared to both the Medium demand (M = 1021 ms), β = 99.87, 95% CI [87.59, 112.14], t(17718.39) = 15.94, p < .001, and the High demand trials (M = 1173 ms), β = 251.37, 95% CI [236.17, 266.56], t(17718.33) = 32.42, p < .001. Participants were also quicker to respond on Medium demand trials as compared to High demand trials, β = 151.5, 95% CI [137.63, 165.38], t(17718.42) = 21.4, p < .001.

We also compared error rates across each of the demand conditions and found that Control group participants produced significantly fewer errors on Low demand trials (M = 5.82%), as compared to both Medium demand (M = 18.95%), t(94.49) = −9.92, p < .001, and High demand trials (M = 32.74%), t(93.63) = −23.44, p < .001. Similarly, participants produced significantly more errors on High versus Medium demand trials, t(89.02) = −11.11, p < .001.

In sum, we found our titration method successfully produced 15–20% error rates, on average, and our demand manipulations successfully produced low (as evidenced by quicker response times and lower error rates) and high demand trials (as evidenced by slower response times and higher error rates).

Task Performance

First, we analyzed task performance using a linear mixed-effects model for response times and a mixed ANOVA for error rates (see Table 2). For the response time analyses, we included Group (High Demand Repetition, High Demand Switch, and Control), Task Transition (Switch and Repeat), and Phase (Learning and Transfer) as fixed effects, and Subject as a random effect. For the error rate analysis, we included Task Transition and Phase as within-subjects factors and Group as the between-subjects factor. In both the response time and the error rate analyses, we found significant three-way interactions (ps < .001), indicating that performance differed across the phases. To follow up on these interactions, we analyzed the learning and transfer phases separately. For the complete model results, see Appendix B.

Table 2.

Task performance in Experiment 1

Learning Phase
                    RT          ER             RT Eff.    ER Eff.
HD-Rep     Switch   1037 (37)   6.34 (0.81)    −102 (8)   −24.69 (1.27)
           Repeat   1139 (37)   31.03 (0.93)
Control    Switch   1111 (36)   20.09 (0.73)   187 (8)    2.22 (0.60)
           Repeat   924 (36)    17.86 (0.61)
HD-Switch  Switch   1225 (36)   31.65 (0.90)   431 (8)    27.45 (1.02)
           Repeat   795 (36)    4.2 (0.81)

Transfer Phase
                    RT          ER             RT Eff.    ER Eff.
HD-Rep     Switch   1136 (40)   18.21 (0.78)   104 (8)    1.38 (0.65)
           Repeat   1032 (40)   16.83 (0.96)
Control    Switch   1101 (39)   20.34 (1.16)   138 (8)    2.98 (0.78)
           Repeat   962 (39)    17.36 (0.91)
HD-Switch  Switch   1090 (40)   20.52 (1.09)   145 (8)    3.03 (0.83)
           Repeat   945 (40)    17.5 (1.00)

Note: HD-Rep = High Demand Repetition; HD-Switch = High Demand Switch; RT = reaction time (ms); ER = error rate (%); Eff. = switch efficiency (Switch minus Repeat)

Learning phase.

To analyze response times (see Figure 3), we used a linear mixed-effects model with Group (High Demand Repetition, High Demand Switch, and Control) and Task (Switch and Repeat) as fixed effects and Subject as a random effect. We were particularly interested in how switch efficiency (response times on switch trials minus repeat trials) varied across groups. To that end, we found that participant switch efficiency scores were significantly smaller for the High Demand Repetition group (M = −102.33 ms) as compared to the Control group (M = 186.94 ms), β = −289.28, 95% CI [−312.34, −266.22], t(21352.77) = −24.59, p < .001, as well as the High Demand Switch group (M = 430.54 ms), β = 532.87, 95% CI [509.75, 556], t(21352.85) = 45.16, p < .001. Additionally, the switch cost for the High Demand Switch group was significantly larger than that of the Control group, β = 243.6, 95% CI [220.85, 266.34], t(21352.22) = 20.99, p < .001.

Figure 3.

Results from the learning phase in Experiment 1. Participant mean response times and error rates are plotted as a function of group and trial type (S = task switch and R = task repeat). RT and error switch efficiency (as estimated by the linear mixed models; performance on switch trials minus performance on repetition trials) are plotted as a function of group (HD-R = High Demand Repetition, C = Control, and HD-S = High Demand Switch). Error bars represent standard error of the mean.

Next, we analyzed error rates using a mixed ANOVA with Group as the between-subjects factor and Task as the within-subjects factor. Here, we found a significant main effect of Task, F(1,141) = 8.33, MSE = 23.82, p = .005, η^p2 = .056, but no main effect of Group, F(2,141) = 0.74, MSE = 37.86, p = .477, η^p2 = .010, both qualified by an interaction between Group and Task, F(2,141) = 677.81, MSE = 23.82, p < .001, η^p2 = .906. Comparing switch efficiency scores across groups, we found significantly lower error switch efficiency scores for the High Demand Repetition group (M = 127.49%) as compared to the Control (M = 60.19%), t(65.66) = −19.09, p < .001, and High Demand Switch groups (M = 102.09%), t(88.35) = −31.93, p < .001. Switch efficiency scores were also significantly larger for the High Demand Switch group as compared to the Control group, t(76.32) = −21.29, p < .001.

Transfer phase.

Turning to the response times in the Transfer phase (see Figure 4), we found significantly smaller switch efficiency scores in the High Demand Repetition group (M = 104.49 ms) as compared to the Control group (M = 138.32 ms), β = −33.83, 95% CI [−56.69, −10.98], t(21396.03) = −2.9, p = .004, as well as the High Demand Switch group (M = 145.47 ms), β = 40.98, 95% CI [18.07, 63.9], t(21395.75) = 3.51, p < .001. However, the switch efficiency scores did not significantly differ between the High Demand Switch and Control groups, β = 7.15, 95% CI [−15.74, 30.04], t(21396.16) = 0.61, p = .54. Finally, we examined error rates using a mixed ANOVA and found a significant main effect of Task, F(1,141) = 31.30, MSE = 13.95, p < .001, η^p2 = .182, but no main effect of Group, F(2,141) = 0.79, MSE = 80.50, p = .457, η^p2 = .011, and no interaction between Group and Task, F(2,141) = 1.49, MSE = 13.95, p = .228, η^p2 = .021.

Figure 4.

Results from the transfer phase in Experiment 1. Participant mean response times and error rates are plotted as a function of group and trial type (S = task switch and R = task repeat). RT and error switch efficiency (as estimated by the linear mixed models; performance on switch trials minus performance on repetition trials) are plotted as a function of group (HD-R = High Demand Repetition, C = Control, and HD-S = High Demand Switch). Error bars represent standard error of the mean.

Exploratory analyses.

As an exploratory analysis, we also examined whether the learning effects observed in the Transfer phase persisted across the entire phase. One possibility is that the observed effects are strongest at the beginning of the Transfer phase but rapidly dissipate once the demand manipulation is removed, potentially explaining the absence of an effect in the High Demand Switch group. To test the longevity of the effects, we divided the transfer phase into two blocks and re-analyzed the response times in the Transfer phase with group and block as factors, using the same procedures as the previous analyses. Here, we found no significant effect of block on switch efficiency (p > .05) and no interaction between group and block (p > .05), indicating that the observed effects did not differ across blocks.

Discussion

In Experiment 1, we manipulated the cognitive demand associated with task-switch versus repetition trials across three groups of participants to determine whether the avoidance of cognitive demand would motivate participants to engage in more flexible or stable control strategies. First, results from the learning phase demonstrate that our demand manipulations were successful: Participants in the High Demand Repetition group showed reversed switch efficiency scores and participants in the High Demand Switch group showed inflated switch efficiency scores compared to our control group. We then compared performance in the transfer phase, where all three groups received medium-demand trials. Here, we found smaller switch efficiency scores for the High Demand Repetition group as compared to the Control and High Demand Switch groups. Thus, selectively associating high cognitive demand with task-repetitions appears to have biased control regulation towards flexibility, improving switching efficiency for this group in the transfer phase. This suggests that the avoidance of cognitive demand can indeed motivate participants to shift their control strategy towards flexibility when stable control strategies are associated with higher demand. However, we did not find any evidence for differences in switch efficiency scores between the High Demand Switch and Control groups in the transfer phase. That is, despite the increase in demand associated with task switching during the learning phase, these participants did not show any persistent changes in control regulation.

Experiment 2

In Experiment 2, we examined whether cognitive demand avoidance would motivate control regulation in the context of a voluntary task-switching paradigm. In a forced-choice context, the experimenter determines whether any given trial, or set of trials, requires flexibility. Thus, some have argued that voluntary task-switching paradigms, where participants are free to choose the task on each trial, provide a more direct measure of control because flexibility (or stability) is truly optional (Arrington & Logan, 2004). Research using the demand selection task has already shown that participants tend to select lists of trials with low switch rates over lists with high switch rates, suggesting that participants were sensitive to the demand-context associations and adjusted behavior to avoid high demand (e.g., Gold et al., 2015; Kool et al., 2010). Thus, we might expect that when repeat trials are associated with high demand, participants will increase their voluntary switch rates, and when switch trials are associated with high demand, participants will decrease their voluntary switch rates.

Our task, however, differs in some important ways from these prior studies. Specifically, in the demand selection task, participants choose between two task contexts, each associated with high or low demand, which is incidentally manipulated using the frequency of task switches (e.g., Crump & Logan, 2010; Dreisbach & Haider, 2006; Mayr, 2006; Monsell & Mizon, 2006; Siqi-Liu & Egner, 2020). In the current study, there are no such contexts for participants to choose between on any given trial. Instead, we are interested in whether participants choose to switch more or less frequently between the two tasks, indicative of a more flexible or stable control strategy. This is a subtle, but important, distinction. Selecting a low-demand over a high-demand context does not necessarily require a shift in control strategy, but switching more frequently does. Therefore, although the prior work suggests people will avoid demand, it cannot speak to whether demand avoidance can motivate changes in adaptive control regulation along the stability-flexibility continuum.

One final point worth discussing is the natural tendency for participants to choose to repeat tasks more often than switch (e.g., Arrington et al., 2010; Arrington & Logan, 2005; Mittelstädt, Dignath, et al., 2018). That is, despite participants being instructed to choose each task equally often in a random order (Arrington & Logan, 2005), participants typically show a repetition bias, producing repetitions more often than expected by chance. The repetition bias in task-switching stands in stark contrast to the well-established finding that when asked to generate random sequences, people tend to alternate more often than repeat (Nickerson, 2002; Rapoport & Budescu, 1997). To account for the repetition bias, Arrington and Logan (2005) suggested that task selection is driven by two competing heuristics: The availability heuristic, where tasks are selected on the basis of the most active task set, and the representativeness heuristic, where tasks are selected on the basis of a mental representation of a random sequence (Rapoport & Budescu, 1997). Critically, the assumption is that participants are biased towards using the availability heuristic because it is less effortful than the representativeness heuristic (e.g., Mittelstädt, Dignath, et al., 2018; Vandierendonck et al., 2012; Yeung, 2010).

Returning to the current study, we can make some additional predictions regarding the learning phase. Specifically, if the repetition bias is driven solely by effort-avoidance, we should expect to see the bias abolished when task-repetitions are made more effortful; in fact, we might even expect to observe a switch-bias. Similarly, we might expect to inflate the repetition bias by making task-switches even more demanding relative to task-repetitions. As in Experiment 1, however, the primary question of interest is not whether participants adapt during the learning phase, but whether we observe differences in performance when the demand-manipulation is removed during the transfer phase. Such effects would be indicative of adaptive control regulation driven by demand-avoidance.

Method

Participants

Participants were 150 individuals (demographics are presented in Table 1) who completed a Human Intelligence Task (HIT) posted on Amazon Mechanical Turk. Participants were paid $3.50 (U.S. dollars) for completing the HIT, which lasted approximately 25 minutes. Only Amazon workers who had completed more than 5000 HITs with 98% approval were able to complete the experiment.

Stimuli and procedure

The apparatus and stimuli were nearly identical to those used in Experiment 1. However, on every trial, participants were first presented with the task-selection display (“Shape or Color?”) and responded by pressing “Z” for shape or “X” for color. After task selection, a blank screen was displayed for 750 ms, followed by the target stimulus. In the shape task, participants had to indicate whether there were more black squares or black triangles. In the color task, participants indicated whether there were more light-blue or dark-blue circles. Participants responded using the “N” and “M” keys on the keyboard for both tasks. Response-key mappings were randomly assigned for each participant (e.g., “N” for dark-blue or squares, “M” for light-blue or triangles). If participants responded correctly, the word “Correct” was displayed in green font for 500 ms before the next trial automatically began. If they responded incorrectly, the words “Incorrect. Press the space bar to continue.” were displayed in red font. Pressing the space bar triggered the next trial.

Participants were first given instructions and a general overview of the experiment. As in Experiment 1, they were informed that they would have to complete one of two tasks on every trial and to respond as quickly and as accurately as possible, and they were not informed about the task demand manipulations. Unlike Experiment 1, participants were given additional instructions about selecting the task on every trial. Specifically, they were instructed to try to select each of the tasks about equally as often, in a random order (Arrington & Logan, 2004; verbatim instructions can be found in Appendix A). Participants then completed a titration block for each of the color and shape tasks in a random order, followed by the learning phase, then finally, the transfer phase. The learning and transfer phases were separated by an additional instruction screen, informing participants they were halfway through the experimental trials and reiterating the general instructions.

Results

Data Analysis

All data analysis procedures were identical to those outlined in Experiment 1. However, in addition to the response time and error rate analyses, we also analyzed the voluntary switch rate (VSR; the proportion of trials on which participants chose to switch from one task to the other) using the same procedures.
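Concretely, the VSR can be computed per participant and phase as follows (an illustrative R sketch with hypothetical column names, not the study's analysis code):

```r
library(dplyr)

# A trial counts as a switch when the chosen task differs from the task
# chosen on the previous trial; the first trial of each cell is dropped (NA).
vsr <- trials %>%
  arrange(subject, trial) %>%
  group_by(subject, phase) %>%
  mutate(switched = chosen_task != lag(chosen_task)) %>%
  summarise(vsr = mean(switched, na.rm = TRUE), .groups = "drop")
```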

Difficulty and titration analyses

Collapsing across groups, we compared the resulting titration levels and found that the shape task (M = 0.65), on average, titrated to a higher proportion than the color task (M = 0.61), Md = −0.05, 95% CI [−0.06, −0.04], t(144) = −9.89, p < .001. As in Experiment 1, this suggests participants required a higher signal-to-noise ratio in the shape task than the color task to reach a similar level of accuracy.

As in Experiment 1, to validate whether the demand manipulations were successful, we examined the Control group performance across the three levels of demand (Low, Medium, and High). The Low and High demand trials were completed during the Learning phase and the Medium demand trials were completed during the Transfer phase. To analyze the response times, we used a linear mixed-effects model with Difficulty as a fixed effect and Subject as a random effect. We found that participants in the Control group were significantly quicker responding on the Low demand trials (M = 958 ms), as compared to both the Medium demand (M = 1077 ms), β = 119.46, 95% CI [106.33, 132.59], t(17186.6) = 17.83, p < .001, and the High demand trials (M = 1252 ms), β = 293.92, 95% CI [277.47, 310.38], t(17186.73) = 35.01, p < .001. Participants were also quicker to respond on Medium demand trials as compared to High demand trials, β = 174.46, 95% CI [159.37, 189.56], t(17186.29) = 22.65, p < .001.

We also compared error rates across each of the demand conditions and found that participants produced significantly fewer errors on Low demand trials (M = 3.43%), as compared to both Medium demand (M = 16.54%), t(56.58) = −10.34, p < .001; and High demand trials (M = 30.36%), t(60.96) = −24.58, p < .001. Similarly, participants produced significantly more errors on High versus Medium demand trials, t(87.56) = −8.86, p < .001.

To summarize, the results of the titration and demand manipulations in Experiment 2 replicated the results of Experiment 1: we found our titration method successfully produced 15–20% error rates, on average, and our demand manipulations successfully produced low (as evidenced by quicker response times and lower error rates) and high demand trials (as evidenced by slower response times and higher error rates).

Task Selection

We submitted the task selection responses, as a proportion of task switches, to a mixed ANOVA with Group (High Demand Repetition, High Demand Switch, and Control) as the between-subjects factor and Phase (Learning and Transfer) as the within-subjects factor (see Figure 5; Table 3). First, we found no significant interaction between Group and Phase, F(2,143) = 0.14, MSE = 0.02, p = .873, η^p2 = .002. There was a main effect of Group, F(2,143) = 5.07, MSE = 0.09, p = .007, η^p2 = .066; participants in the High Demand Repetition group (M = 0.44) produced significantly more task switches than participants in the Control group (M = 0.34), t(92.48) = 2.19, p = .031, and the High Demand Switch group (M = 0.31), t(97.98) = 3.09, p = .003. However, there was no significant difference in switch rates between the High Demand Switch and Control groups, t(94.65) = 0.87, p = .386. There was also a significant main effect of Phase, F(1,143) = 8.90, MSE = 0.02, p = .003, η^p2 = .059. Participants across all groups showed an increase in switch rates from the Learning phase (M = 0.34) to the Transfer phase (M = 0.39).

Figure 5.

Results from Experiment 2. Density distributions of switch proportions are plotted by Group (High Demand Switch [HD-S], Control, High Demand Repetition [HD-R]) and Phase (Learning and Transfer). Vertical marks along the x-axis represent individual participant response proportions. Vertical lines represent the group’s mean response proportion.

Table 3.

Task performance in Experiment 2

Learning Phase
                    RT          ER             RT Eff.     ER Eff.         VSR
HD-Rep     Switch   1058 (39)   6.81 (1.90)    −211 (10)   −23.48 (1.98)   0.42 (0.04)
           Repeat   1270 (39)   30.33 (1.10)
Control    Switch   1183 (40)   17.64 (1.40)   198 (11)    1.5 (1.32)      0.33 (0.03)
           Repeat   986 (40)    15.81 (0.68)
HD-Switch  Switch   1381 (39)   29.82 (1.60)   539 (11)    24.88 (1.86)    0.28 (0.03)
           Repeat   842 (38)    4.87 (1.02)

Transfer Phase
                    RT          ER             RT Eff.     ER Eff.         VSR
HD-Rep     Switch   1195 (44)   16.31 (1.18)   85 (9)      1.63 (1.55)     0.46 (0.03)
           Repeat   1110 (44)   14.68 (1.39)
Control    Switch   1152 (45)   17.40 (1.61)   129 (10)    0.5 (1.12)      0.36 (0.03)
           Repeat   1023 (45)   16.66 (1.28)
HD-Switch  Switch   1212 (43)   18.48 (1.08)   191 (11)    1.46 (0.92)     0.33 (0.03)
           Repeat   1022 (43)   17.05 (1.01)

Note: HD-Rep = High Demand Repetition; HD-Switch = High Demand Switch; RT = reaction time (ms); ER = error rate (%); Eff. = switch efficiency (Switch minus Repeat); VSR = voluntary switch rate

Task Performance

Learning phase.

To analyze response times, we used a linear mixed-effect model with Task (Switch and Repeat) and Group (High Demand Repetition, High Demand Switch, and Control) as fixed effects, and subject as a random effect (see Figure 6; for full model results, see Appendix B). We found RT switch efficiency scores to be significantly smaller for the High Demand Repetition group (M = −211.07 ms), as compared to the Control (M = 197.68 ms), β = −408.75, 95% CI [−437.44, −380.06], t(23268.24) = −27.92, p < .001, and High Demand Switch groups (M = 538.95 ms), β = 750.02, 95% CI [720.91, 779.13], t(23271.32) = 50.5, p < .001. Similarly, switch efficiency scores were significantly larger for the High Demand Switch group compared to the Control group, β = 341.27, 95% CI [311.03, 371.5], t(23270.05) = 22.12, p < .001.

Figure 6.

Results from the learning phase in Experiment 2. Participant mean response times and error rates are plotted as a function of group and trial type (S = task switch and R = task repeat). RT and error switch efficiency scores (as estimated by the linear mixed models; performance on switch trials minus performance on repetition trials) are plotted as a function of group (HD-R = High Demand Repetition, C = Control, and HD-S = High Demand Switch). Error bars represent standard error of the mean.

Turning to error rates, using a mixed ANOVA with Group and Task as factors (removing 10 participants with missing cells), we found a significant interaction between Group and Task, F(2,133) = 192.21, MSE = 71.52, p < .001, η^p2 = .743, but no significant main effect of Task, F(1,133) = 0.89, MSE = 71.52, p = .348, η^p2 = .007, or Group, F(2,133) = 0.65, MSE = 103.57, p = .524, η^p2 = .010. Comparing error switch efficiency scores across groups, we found significantly lower switch efficiency scores for the High Demand Repetition group (M = −23.48%) as compared to the Control (M = 1.5%), t(78.59) = −10.47, p < .001, and High Demand Switch groups (M = 24.88%), t(91.63) = −17.77, p < .001. Similarly, switch efficiency scores were significantly larger for the High Demand Switch group as compared to the Control group, t(81.04) = −10.24, p < .001.

Transfer phase.

Analyzing response times using a linear mixed-effects model with Group and Task as fixed effects and Subject as a random effect, we found significantly smaller RT switch efficiency scores in the High Demand Repetition group (M = 84.91 ms) as compared to the Control group (M = 129.18 ms), β = −44.27, 95% CI [−71.15, −17.39], t(22873.36) = −3.23, p = .001, as well as the High Demand Switch group (M = 190.53 ms), β = 105.62, 95% CI [77.82, 133.43], t(22896.98) = 7.45, p < .001. Moreover, we found significantly larger switch efficiency scores in the High Demand Switch group compared to the Control group, β = 61.35, 95% CI [32.64, 90.07], t(22897.88) = 4.19, p < .001.

Analyzing error rates using a mixed ANOVA with Group and Task as factors (removing 4 participants with missing cells), we found no significant main effects of Task, F(1,139) = 2.81, MSE = 36.13, p = .096, η^p2 = .020, or Group, F(2,139) = 1.15, MSE = 114.62, p = .321, η^p2 = .016, and no significant interaction between Group and Task, F(2,139) = 0.24, MSE = 36.13, p = .790, η^p2 = .003.

Exploratory Analyses

As in Experiment 1, we also tested the longevity of the effects across the Transfer phase. We divided the transfer phase into two blocks and re-analyzed the voluntary switch rates with group and block as factors, using the same procedures as the previous analyses. Here, we found no significant effect of block on voluntary switch rates (p > .05) and no interaction between group and block (p > .05), indicating that the observed effects did not differ across blocks.

Discussion

In Experiment 2, we again manipulated the cognitive demand associated with task-switch versus repetition trials to determine whether the avoidance of cognitive demand would motivate participants to engage in more flexible versus stable control strategies. Unlike Experiment 1, however, we used a voluntary task-switching paradigm to determine whether the demand manipulation would influence voluntary switch rates. Critically, we found that whereas the High Demand Repetition group produced higher task switch rates than both the High Demand Switch and Control groups across learning and transfer phases, there was no evidence that the High Demand Switch group produced fewer task switches than the Control group. Turning to task-switching efficiency, we observed differences across all three groups in the transfer phase, with the High Demand Switch group producing the largest switch cost, followed by the Control group, and then the High Demand Repetition group. This result is consistent with prior work showing switch efficiency scores are sensitive to the frequency of switch rates (e.g., Crump & Logan, 2010; Dreisbach & Haider, 2006; Mayr, 2006; Monsell & Mizon, 2006; Siqi-Liu & Egner, 2020).

Taken together, these results mirror the effects from Experiment 1: Whereas we found evidence that selectively associating high demand with task repetitions could bias control regulation towards flexibility (here, in the form of voluntarily engaging in task switching), we found only weak evidence that selectively associating high demand with task switches could bias control regulation towards stability. In this case, we found no difference in task selection switch rates (our primary measure of interest), but we did find differences in switching efficiency that are consistent with a more stable mode of control.

General Discussion

To date, most research examining motivational influences on control regulation has focused on rewards, manipulating the receipt or prospect of a reward and measuring the transient aftereffects (Chiew & Braver, 2013, 2014; Fröber & Dreisbach, 2014, 2016; Fröber et al., 2019, 2020; Shen & Chun, 2011). Braem (2017) demonstrated that selectively rewarding task-switches, as compared to task-repetitions, could motivate participants to adopt more flexible control strategies, even after the rewards were no longer present. Typically, the influence of reward on cognitive control is explained as a cost-benefit tradeoff, whereby the intrinsic costs of engaging in control are offset by increasing the potential reward (Shenhav et al., 2013). This view suggests that the costs of control regulation may be as important as the potential rewards for determining control states. However, no prior study has examined the extent to which the avoidance of cognitive effort can motivate adaptive control regulation. In the current study, we tested whether the selective association of high cognitive demand with task switches versus repetitions can influence control regulation along the stability-flexibility continuum. In both experiments, we found clear evidence that selectively associating task repetitions with high demand can bias people towards adopting a more flexible control state (as evidenced by lower switch efficiency scores and higher voluntary switch rates). However, we found little evidence that selectively associating task switches with high demand biased participants to adopt a more stable control state; in Experiment 1, we found no differences in switch efficiency scores, and in Experiment 2, we found no differences in voluntary switch rates, but did find larger switch efficiency scores. Taken together, these results provide the first evidence that the avoidance of cognitive demand can motivate people to bias their control regulation along the stability-flexibility continuum and document that demand-avoidance is an important determinant of adaptive control regulation.

One plausible explanation for the asymmetry in our results (successfully biasing control towards flexibility, but not stability) is that task-switching paradigms, like the ones used here, already bias control towards stability. Assuming task-switching is inherently more demanding than repeating (cf. Gold et al., 2015; Kool et al., 2010), participants may naturally be biased towards a more stable control state. That is, stability (or a task repetition set) is the default control setting in a typical task-switching paradigm and, therefore, making switching more demanding cannot motivate participants to engage more strongly in a strategy they are already engaging in. Interestingly, this limitation might be specific to motivational influences, since, for example, it is possible to increase switch efficiency scores by manipulating the proportion of task-switches (e.g., Crump & Logan, 2010; Dreisbach & Haider, 2006; Mayr, 2006; Monsell & Mizon, 2006; Siqi-Liu & Egner, 2020). Braem (2017) did not include a control group against which to compare the reward-modulated switch rates, and it is hence unclear whether the same asymmetry would appear when selectively rewarding task switches versus repetitions. One should therefore be cautious when interpreting modulations of control regulation in task-switching paradigms. Our results suggest that standard task switching protocols (with 50% switches) may inadvertently induce a stability-biased control strategy (for a similar argument, see Monsell & Mizon, 2006).

Also noteworthy is the size of the effect that our demand manipulation had on task selection behavior in Experiment 2. Recall that in the learning phase, task switches or task repetitions were selectively associated with high-demand target stimuli 100% of the time. Yet, participants in all groups still produced a repetition bias (producing <50% task switches), and the groups only differed from one another by 10 to 15%. While this would normally be considered a very large effect of a cognitive manipulation, it is surprising that the effect was not even larger: had participants in the High Demand Switch group chosen to switch tasks on every trial, they would have reduced their error rate by upwards of 20%. Similarly, had participants in the High Demand Repetition group chosen to repeat the same task on every trial, they too would have reduced their error rate by roughly 20%. That is, participants incurred a substantial performance cost by not fully capitalizing on the demand associations. This might be explained, in part, by our instruction to try to perform each task equally often in a random order, the standard instruction used in voluntary task-switching paradigms (e.g., Arrington & Logan, 2004). Participants might have attempted to adhere to this instruction despite being aware that adopting a different strategy would have produced fewer errors. Alternatively, they may not have been explicitly aware of our demand manipulation and were therefore unable to fully capitalize on it. Still, the relatively small effect of error-driven learning on task selection behavior is somewhat puzzling. Although prior work has shown the importance of instructional manipulations for voluntary switch rates (e.g., Liefooghe et al., 2010), future research is needed to understand how such manipulations interact with avoidance-driven control regulation.

The influence of rewards on control regulation has also been shown to vary depending on how rewards are presented. For instance, prior work has demonstrated the theoretical importance of distinguishing between performance-contingent and non-contingent rewards (Fröber & Dreisbach, 2014, 2016), as well as between the prospect and the receipt of rewards (Notebaert & Braem, 2015). Here, we examined the receipt of performance-non-contingent demands. However, prior research on cognitive effort has demonstrated the importance of effort anticipation in driving behavioral change (Dunn et al., 2019), and in daily life, task demands often vary with our performance (i.e., difficulty increases when a performance criterion is met). We might therefore expect such distinctions to also be important for understanding the motivational influence of demand-avoidance, which provides a potentially fruitful avenue for future research.

Although we have interpreted our results in terms of demand-avoidance, it is important to consider how other explanations, such as a motivation to improve performance, might accommodate them. One possibility, for instance, is that preparation differentially influences performance on low- versus high-demand trials: perhaps there is more performance gain when one prepares for a low-demand task than when one prepares for a high-demand task. If this were the case, participants might be biased to prepare whichever task they believe will be low demand on the next trial, exploiting the structure of the task to improve performance rather than to avoid demand as such. Although plausible, this explanation relies on an untested assumption, and whether preparation differentially affects performance in this context remains an open empirical question.

More generally, the effect of demand-avoidance on forced and voluntary task-switching makes an important contribution to the question of how cognitive control is itself managed (Botvinick et al., 2001; Dreisbach & Fröber, 2019; Hommel & Elliot, 2015). Adaptive goal-directed behavior requires identifying the current situational demands and adjusting control to meet those demands in a context-appropriate manner. From an applied, clinical perspective, identifying and understanding the determinants of control regulation is important because psychiatric disorders are often characterized by dysregulation along the stability-flexibility continuum (e.g., Goschke & Bolte, 2014). Failing to adapt appropriately to cognitive demand (through an inability to monitor or anticipate demand, a lack of avoidance, or extreme avoidance) may result in the persistent control regulation failures observed in various clinical disorders.

Along the same lines, it is interesting to consider our results from a ‘cognitive training’ perspective (e.g., Karbach & Kray, 2009; Sabah et al., 2019). A recent study (Sabah et al., 2020), for instance, found that increased content variability improved task-switching training, and that including interference (i.e., bivalent stimuli) promoted higher transfer gains in terms of switching efficiency, as compared to a previous study (Sabah et al., 2019). From this perspective, one might have expected that training participants with high-demand task repetitions would have improved performance on repetitions, thereby increasing subsequent switch costs. However, consistent with the demand-avoidance hypothesis, we observed the opposite: reduced switch costs when participants were trained with high-demand repetitions. Introducing different training demands might then, unintentionally, cause participants to adjust their control strategies to avoid those demands rather than exploiting them to improve their long-term performance. Importantly, we have no measures of long-term learning or far transfer in the present study, and it is unclear whether the observed effects would persist beyond the length of our transfer phase or generalize across different tasks. Thus, whether demand-avoidance helps or hinders long-term learning remains an open question.

From a theoretical perspective, control regulation is often characterized as a cost-benefit tradeoff, whereby the potential rewards of engaging in control are weighed against its costs (Shenhav et al., 2013). The results of the current study are consistent with this view, providing novel evidence that changes to the inherent costs of control result in adaptations of control regulation. However, the mechanisms underlying the observed changes in control (or switching efficiency) remain unclear. Flexible task switching is thought to involve both the active reconfiguration of task sets and the overcoming of passive carry-over from prior task-set processing (e.g., Koch et al., 2018). Increased flexibility could thus have been accomplished by motivating participants to actively prepare for the low-demand task or by increasing the inhibition of the prior, high-demand task set. More work is needed to adjudicate between these possibilities. In our view, however, such adjustments are likely accomplished through associative learning, whereby cognitive control settings themselves become associated with event transitions (e.g., Chiu & Egner, 2017) via the same kinds of associative learning processes that link stimuli and motor responses (e.g., Abrahamse et al., 2016; Egner, 2014).

Finally, the motivational influence of rewards is thought to offset the inherent cost of control, as demonstrated by various experimental manipulations (Braem, 2017; Braem et al., 2012; Chiew & Braver, 2013, 2014; Fröber & Dreisbach, 2014, 2016; Fröber et al., 2019, 2020; Shen & Chun, 2011). However, the asymmetry of our results suggests that forced-choice task-switching paradigms (with 50% switch probabilities) and voluntary task-switching paradigms (with ‘randomness’ instructions) may naturally elicit stable control states and thereby have unintended consequences for the size and direction of experimental effects. For instance, performance-contingent rewards may appear to induce only stable control states because that is the state the task already elicited, which is then reinforced. This point has been made elsewhere: Monsell and Mizon (2006), for example, recommended keeping the probability of task switches low (e.g., 25%) to prevent differential preparation on task-switch and task-repeat trials. The results of the current study, however, suggest that differential preparation might occur because of the motivation to avoid the demands of switching. Some care therefore needs to be taken when interpreting experimental manipulations of control within task-switching paradigms such as those we have used. Alternatively, other task-switching paradigms might be more suitable for examining shifts in flexibility, such as the hybrid forced and voluntary task-switching paradigm with low switch-rate probabilities (e.g., Fröber & Dreisbach, 2016, 2017). The hybrid paradigm mixes forced-choice and voluntary trial types, circumventing the aforementioned issues because it does not require any additional randomness instructions.

Conclusions

The results of the current study provide novel evidence for avoidance-driven modulations of control regulation along the stability-flexibility continuum (Dreisbach & Fröber, 2019). In both forced-choice and voluntary task-switching, we found that selectively associating high cognitive demand with task repetitions increased flexibility, but selectively associating cognitive demand with task switches failed to increase stability. These findings are consistent with cost-benefit frameworks of control regulation (Shenhav et al., 2013), demonstrating that changes in the inherent cost of control can motivate control adaptation.

Figure 7.

Results from the transfer phase in Experiment 2. Participant mean response times and error rates are plotted as a function of group and trial type (S = task switch and R = task repeat). RT and error switch efficiency scores (as estimated by the linear mixed models; performance on switch trials minus performance on repetition trials) are plotted as a function of group (HD-R = High Demand Repetition, C = Control, and HD-S = High Demand Switch). Error bars represent standard error of the mean.

Public significance statement:

This study suggests that the avoidance of cognitive demand can motivate control regulation in much the same way as rewards. However, the results also highlight limitations in using task-switching paradigms to examine motivational influences on cognitive control.

Appendix A

Experiment 1 Instructions

In this part of the experiment, you will be presented a display full of colored shapes. On every trial, you will have to complete one of two tasks. Either you will have to indicate whether there are more light-blue or more dark-blue shapes, or you will have to indicate if there are more squares or triangles.

If completing the first, color-version task, you will press ‘z’ if there are more light-blue shapes and press ‘m’ if there are more dark-blue shapes. If completing the second, shape-version task, you will press ‘z’ if there are more square shapes and press ‘m’ if there are more triangle shapes.

When you are responding to color, the display will only contain circles. For example: [example image]. In this example, there are more light-blue than dark-blue shapes, so you would respond ‘z’.

When you are responding to shape, the display will only contain grey shapes. For example: [example image]. In this example, there are more triangles than squares, so you would respond ‘m’.

There will be three blocks of trials. In the first block, you will be responding to colors (96 trials), and in the second, you will only be responding to shapes (96 trials). Finally, the third block will be a mix of both kinds of trials (576 trials).

Try to respond as quickly and as accurately as possible. When you are ready to begin the first block, press ‘start’.

Experiment 2 Instructions

[In addition to the above instructions, participants received the following instructions prior to the third block of trials.]

In these blocks you will get to choose which task you do on every trial. You will be asked “Color or Shape?” on every trial and you choose by pressing ‘z’ (color) or ‘x’ (shape) on the keyboard. The trial will begin immediately after choosing your task and will be identical to the trials you performed earlier.

You should perform each task on about half of the trials and should perform the tasks in a random order. For example, imagine that you had a coin that said Color on one side and Shape on the other. Try to perform the tasks as if flipping the coin decided which task to perform.

So sometimes you should be repeating the same task and sometimes you should be switching tasks. We do not want you to count the number of times you have done each task or alternate strictly between tasks to be sure you do each one half the time. Just try to do them randomly.
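[To make the instructed baseline concrete: under this coin-flip strategy, task choices are independent random draws, so the expected voluntary switch rate is 50%; the repetition bias discussed above corresponds to switch rates below this baseline. A minimal R sketch, illustrative only and not the experiment code, with the trial count taken from the instructions above:]

# Illustrative simulation of the coin-flip instruction (not the experiment code).
# Under independent random task choices, the expected voluntary switch rate is 50%.
set.seed(1)
n_trials <- 576  # voluntary-block length given in the instructions above
tasks <- sample(c("color", "shape"), n_trials, replace = TRUE)  # fair coin flips
switch_rate <- mean(tasks[-1] != tasks[-n_trials])  # proportion of trials where the task changed
switch_rate  # approximately 0.50; observed rates in Experiment 2 fell below this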

Appendix B

Table 1.

Linear mixed model results from Experiment 1

Learning Phase
 Predictors Estimates SE t-Value df p-Value
 (Intercept) 924.12 35.96 25.70 147.40 <.0001
 Group [HD-Rep] 215.19 51.46 4.18 148.21 <.0001
 Group [HD-Switch] −129.23 51.07 −2.53 146.87 0.012
 Task [Switch] 186.94 8.18 22.85 21352.13 <.0001
 Group [HD-Rep] x Task [Switch] −289.28 11.77 −24.59 21352.77 <.0001
 Group [HD-Switch] x Task [Switch] 243.60 11.61 20.99 21352.22 <.0001
 Random Effects
 σ2 120588.76
 τ00 Subject 61754.00
 ICC 0.34
 N Subject 144
 Observations 21495
 Marginal R2 / Conditional R2 0.103 / 0.407
Transfer Phase
 Predictors Estimates SE t-Value df p-Value
 (Intercept) 962.40 39.33 24.47 146.92 <.0001
 Group [HD-Rep] 69.55 56.20 1.24 146.84 0.218
 Group [HD-Switch] −17.52 55.91 −0.31 146.92 0.754
 Task [Switch] 138.32 8.24 16.80 21396.44 <.0001
 Group [HD-Rep] x Task [Switch] −33.83 11.66 −2.90 21396.03 0.003
 Group [HD-Switch] x Task [Switch] 7.15 11.68 0.61 21396.16 0.540
 Random Effects
 σ2 122055.94
 τ00 Subject 74148.27
 ICC 0.38
 N Subject 144
 Observations 21539
 Marginal R2 / Conditional R2 0.025 / 0.394

Note: The reference levels were the Control group and the Repeat task. Square brackets indicate the level of the factor contrasted against the reference level of each factor.
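For readers mapping these tables onto a model specification: the fixed effects (Group, Task, and their interaction) together with a single subject-level variance component (τ00 Subject) are consistent with a random-intercept mixed model, and the large denominator df suggest Satterthwaite approximations. A minimal R sketch using lme4/lmerTest (both cited in the References); the data frame ‘dat’ and its column names are hypothetical:

# Minimal sketch of a random-intercept model of the form summarized in these tables.
# lme4/lmerTest are cited in the References; 'dat' and its columns are hypothetical.
library(lmerTest)  # wraps lme4::lmer and adds Satterthwaite df and p-values
m <- lmer(rt ~ group * task + (1 | subject), data = dat)
summary(m)  # fixed-effect estimates, SEs, t-values, df, and p-values, as tabled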

Table 2.

Linear mixed model results from Experiment 2

Learning Phase
 Predictors Estimates SE t-Value df p-Value
 (Intercept) 985.81 39.83 24.75 147.88 <.0001
 Group [HD-Rep] 283.70 55.56 5.11 149.00 <.0001
 Group [HD-Switch] −144.20 54.88 −2.63 147.40 0.01
 Task [Switch] 197.68 10.76 18.36 23266.97 <.0001
 Group [HD-Rep] x Task [Switch] −408.75 14.64 −27.92 23268.24 <.0001
 Group [HD-Switch] x Task [Switch] 341.27 15.43 22.12 23270.05 <.0001
 Random Effects
 σ2 130465.51
 τ00 Subject 71549.70
 ICC 0.35
 N Subject 146
 Observations 23279
 Marginal R2 / Conditional R2 0.137 / 0.442
Transfer Phase
 Predictors Estimates SE t-Value df p-Value
 (Intercept) 1022.56 44.91 22.77 146.91 <.0001
 Group [HD-Rep] 87.60 62.59 1.40 147.46 0.164
 Group [HD-Switch] −0.59 61.94 −0.01 146.93 0.992
 Task [Switch] 129.18 10.04 12.87 22876.33 <.0001
 Group [HD-Rep] x Task [Switch] −44.27 13.72 −3.23 22873.36 0.001
 Group [HD-Switch] x Task [Switch] 61.35 14.65 4.19 22897.88 <.0001
 Random Effects
 σ2 139251.10
 τ00 Subject 91222.39
 ICC 0.40
 N Subject 146
 Observations 22933
 Marginal R2 / Conditional R2 0.024 / 0.411

Note: The reference levels were the Control group and the Repeat task. Square brackets indicate the level of the factor contrasted against the reference level of each factor.

References

1. Abrahamse E, Braem S, Notebaert W, & Verguts T (2016). Grounding cognitive control in associative learning. Psychological Bulletin, 142, 693–728.
2. Allport GW, Clark K, & Pettigrew T (1954). The nature of prejudice. Addison-Wesley.
3. Arrington CM, & Logan GD (2004). The cost of a voluntary task switch. Psychological Science, 15(9), 610–615.
4. Arrington CM, & Logan GD (2005). Voluntary task switching: Chasing the elusive homunculus. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(4), 683.
5. Arrington CM, Weaver SM, & Pauker RL (2010). Stimulus-based priming of task choice during voluntary task switching. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(4), 1060.
6. Aust F, & Barth M (2018). papaja: Create APA manuscripts with R Markdown. https://github.com/crsh/papaja
7. Baayen RH, Davidson DJ, & Bates DM (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390–412.
8. Barr DJ (2008). Analyzing ‘visual world’ eyetracking data using multilevel logistic regression. Journal of Memory and Language, 59(4), 457–474.
9. Bates D, Mächler M, Bolker B, & Walker S (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1–48.
10. Botvinick MM, Braver TS, Barch DM, Carter CS, & Cohen JD (2001). Conflict monitoring and cognitive control. Psychological Review, 108, 624.
11. Botvinick MM, & Rosen ZB (2009). Anticipation of cognitive demand during decision-making. Psychological Research PRPF, 73(6), 835–842.
12. Braem S (2017). Conditioning task switching behavior. Cognition, 166, 272–276.
13. Braem S, & Egner T (2018). Getting a grip on cognitive flexibility. Current Directions in Psychological Science, 27(6), 470–476.
14. Braem S, Verguts T, Roggeman C, & Notebaert W (2012). Reward modulates adaptations to conflict. Cognition, 125(2), 324–332.
15. Braver TS (2012). The variable nature of cognitive control: A dual mechanisms framework. Trends in Cognitive Sciences, 16, 106–113.
16. Brosowsky NP, & Crump MJC (2016). Context-specific attentional sampling: Intentional control as a pre-requisite for contextual control. Consciousness and Cognition, 44, 146–160.
17. Brosowsky NP, & Crump MJC (2018). Memory-guided selective attention: Single experiences with conflict have long-lasting effects on cognitive control. Journal of Experimental Psychology: General, 147, 1134–1153.
18. Brosowsky NP, & Crump MJC (2021). Contextual recruitment of selective attention can be updated via changes in task relevance. Canadian Journal of Experimental Psychology/Revue Canadienne de Psychologie Expérimentale, 75(1), 19–34.
19. Brosowsky NP, Murray S, Schooler JW, & Seli P (2021). Attention need not always apply: Mind wandering impedes explicit but not implicit sequence learning. Cognition, 209, 104530.
20. Bugg JM (2014). Conflict-triggered top-down control: Default mode, last resort, or no such thing? Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 567.
21. Chamberlain SR, Fineberg NA, Blackwell AD, Robbins TW, & Sahakian BJ (2006). Motor inhibition and cognitive flexibility in obsessive-compulsive disorder and trichotillomania. American Journal of Psychiatry, 163(7), 1282–1284.
22. Chiew KS, & Braver TS (2011). Monetary incentives improve performance, sometimes: Speed and accuracy matter, and so might preparation. Frontiers in Psychology, 2, 325.
23. Chiew KS, & Braver TS (2013). Temporal dynamics of motivation-cognitive control interactions revealed by high-resolution pupillometry. Frontiers in Psychology, 4, 15.
24. Chiew KS, & Braver TS (2014). Dissociable influences of reward motivation and positive emotion on cognitive control. Cognitive, Affective, & Behavioral Neuroscience, 14(2), 509–529.
25. Chiu Y-C, & Egner T (2017). Cueing cognitive flexibility: Item-specific learning of switch readiness. Journal of Experimental Psychology: Human Perception and Performance, 43(12), 1950.
26. Crump MJC, Brosowsky NP, & Milliken B (2017). Reproducing the location-based context-specific proportion congruent effect for frequency unbiased items: A reply to Hutcheon and Spieler (2016). The Quarterly Journal of Experimental Psychology, 70, 1792–1807.
27. Crump MJC, & Logan GD (2010). Contextual control over task-set retrieval. Attention, Perception, & Psychophysics, 72(8), 2047–2053.
28. Diamond A (2013). Executive functions. Annual Review of Psychology, 64(1), 135–168.
29. Dreisbach G (2012). Mechanisms of cognitive control: The functional role of task rules. Current Directions in Psychological Science, 21(4), 227–231.
30. Dreisbach G, & Fischer R (2012). The role of affect and reward in the conflict-triggered adjustment of cognitive control. Frontiers in Human Neuroscience, 6, 342.
31. Dreisbach G, & Fröber K (2019). On how to be flexible (or not): Modulation of the stability-flexibility balance. Current Directions in Psychological Science, 28(1), 3–9.
32. Dreisbach G, & Haider H (2006). Preparatory adjustment of cognitive control in the task switching paradigm. Psychonomic Bulletin & Review, 13(2), 334–338.
33. Dunn TL, Inzlicht M, & Risko EF (2019). Anticipating cognitive effort: Roles of perceived error-likelihood and time demands. Psychological Research, 83(5), 1033–1056.
34. Dunn TL, & Risko EF (2016). Toward a metacognitive account of cognitive offloading. Cognitive Science, 40(5), 1080–1127.
35. Egner T (2014). Creatures of habit (and control): A multi-level learning perspective on the modulation of congruency effects. Frontiers in Psychology, 5, 1247.
36. Fox J, & Weisberg S (2018). Visualizing fit and lack of fit in complex regression models with predictor effect plots and partial residuals. Journal of Statistical Software, 87(9), 1–27.
37. Fox J, & Weisberg S (2019). An R companion to applied regression (3rd ed.). Sage. https://socialsciences.mcmaster.ca/jfox/Books/Companion/
38. Fröber K, & Dreisbach G (2014). The differential influences of positive affect, random reward, and performance-contingent reward on cognitive control. Cognitive, Affective, & Behavioral Neuroscience, 14(2), 530–547.
39. Fröber K, & Dreisbach G (2016). How sequential changes in reward magnitude modulate cognitive flexibility: Evidence from voluntary task switching. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(2), 285.
40. Fröber K, & Dreisbach G (2017). Keep flexible–keep switching! The influence of forced task switching on voluntary task switching. Cognition, 162, 48–53.
41. Fröber K, Pfister R, & Dreisbach G (2019). Increasing reward prospect promotes cognitive flexibility: Direct evidence from voluntary task switching with double registration. Quarterly Journal of Experimental Psychology, 72(8), 1926–1944.
42. Fröber K, Pittino F, & Dreisbach G (2020). How sequential changes in reward expectation modulate cognitive control: Pupillometry as a tool to monitor dynamic changes in reward expectation. International Journal of Psychophysiology, 148, 35–49.
43. Geurts HM, Corbett B, & Solomon M (2009). The paradox of cognitive flexibility in autism. Trends in Cognitive Sciences, 13(2), 74–82.
44. Gold JM, Kool W, Botvinick MM, Hubzin L, August S, & Waltz JA (2015). Cognitive effort avoidance and detection in people with schizophrenia. Cognitive, Affective, & Behavioral Neuroscience, 15(1), 145–154.
45. Goschke T (2003). Voluntary action and cognitive control from a cognitive neuroscience perspective. In Massen S, Prinz W, & Roth G (Eds.), Voluntary action: Brains, minds, and sociality (pp. 49–85). Oxford University Press.
46. Goschke T (2013). Volition in action: Intentions, control dilemmas and the dynamic regulation of intentional control. In Prinz W, Beisert A, & Herwig A (Eds.), Action science: Foundations of an emerging discipline (pp. 409–434). MIT Press.
47. Goschke T, & Bolte A (2014). Emotional modulation of control dilemmas: The role of positive affect, reward, and dopamine in cognitive stability and flexibility. Neuropsychologia, 62, 403–423.
48. Hefer C, & Dreisbach G (2017). How performance-contingent reward prospect modulates cognitive control: Increased cue maintenance at the cost of decreased flexibility. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(10), 1643.
49. Hommel B, & Elliot AJ (2015). Between persistence and flexibility: The Yin and Yang of action control. In Advances in motivation science (Vol. 2, pp. 33–67). Elsevier.
50. Hull CL (1943). Principles of behavior (Vol. 422). Appleton-Century-Crofts.
51. Jimura K, Locke HS, & Braver TS (2010). Prefrontal cortex mediation of cognitive enhancement in rewarding motivational contexts. Proceedings of the National Academy of Sciences, 107(19), 8871–8876.
52. Karbach J, & Kray J (2009). How useful is executive control training? Age differences in near and far transfer of task-switching training. Developmental Science, 12(6), 978–990.
53. Kassambara A (2019). ggpubr: “ggplot2” Based Publication Ready Plots. https://CRAN.R-project.org/package=ggpubr
54. Kleinsorge T, & Rinkenauer G (2012). Effects of monetary incentives on task switching. Experimental Psychology.
55. Koch I, Poljac E, Müller H, & Kiesel A (2018). Cognitive structure, flexibility, and plasticity in human multitasking—An integrative review of dual-task and task-switching research. Psychological Bulletin, 144(6), 557.
56. Kool W, McGuire JT, Rosen ZB, & Botvinick MM (2010). Decision making and the avoidance of cognitive demand. Journal of Experimental Psychology: General, 139(4), 665–682.
57. Kuznetsova A, Brockhoff PB, & Christensen RHB (2017). lmerTest Package: Tests in Linear Mixed Effects Models. Journal of Statistical Software, 82(13), 1–26.
58. Liefooghe B, Demanet J, & Vandierendonck A (2010). Persisting activation in voluntary task switching: It all depends on the instructions. Psychonomic Bulletin & Review, 17(3), 381–386.
59. Locke HS, & Braver TS (2008). Motivational influences on cognitive control: Behavior, brain activation, and individual differences. Cognitive, Affective, & Behavioral Neuroscience, 8(1), 99–112.
60. Mayr U (2006). What matters in the cued task-switching paradigm: Tasks or cues? Psychonomic Bulletin & Review, 13(5), 794–799.
61. McGuire JT, & Botvinick MM (2010). The impact of anticipated cognitive demand on attention and behavioral choice. In Effortless attention: A new perspective in the cognitive science of attention and action (pp. 103–120). MIT Press.
62. Meiran N, Diamond GM, Toder D, & Nemets B (2011). Cognitive rigidity in unipolar depression and obsessive compulsive disorder: Examination of task switching, Stroop, working memory updating and post-conflict adaptation. Psychiatry Research, 185(1–2), 149–156.
63. Mittelstädt V, Dignath D, Schmidt-Ott M, & Kiesel A (2018). Exploring the repetition bias in voluntary task switching. Psychological Research, 82(1), 78–91.
64. Mittelstädt V, Miller J, & Kiesel A (2018). Trading off switch costs and stimulus availability benefits: An investigation of voluntary task-switching behavior in a predictable dynamic multitasking environment. Memory & Cognition, 46(5), 699–715.
65. Mittelstädt V, Miller J, & Kiesel A (2019). Linking task selection to task performance: Internal and predictable external processing constraints jointly influence voluntary task switching behavior. Journal of Experimental Psychology: Human Perception and Performance.
66. Monsell S (2003). Task switching. Trends in Cognitive Sciences, 7(3), 134–140.
67. Monsell S, & Mizon GA (2006). Can the task-cuing paradigm measure an endogenous task-set reconfiguration process? Journal of Experimental Psychology: Human Perception and Performance, 32(3), 493.
68. Müller J, Dreisbach G, Goschke T, Hensch T, Lesch K-P, & Brocke B (2007). Dopamine and cognitive control: The prospect of monetary gains influences the balance between flexibility and stability in a set-shifting paradigm. European Journal of Neuroscience, 26(12), 3661–3668.
69. Nickerson RS (2002). The production and perception of randomness. Psychological Review, 109(2), 330.
70. Notebaert W, & Braem S (2015). Parsing the effects of reward on cognitive control. In Braver TS (Ed.), Motivation and cognitive control (pp. 117–134). Routledge.
71. Padmala S, & Pessoa L (2011). Reward reduces conflict by enhancing attentional control and biasing visual cortical processing. Journal of Cognitive Neuroscience, 23(11), 3419–3432.
72. Peer E, Vosgerau J, & Acquisti A (2014). Reputation as a sufficient condition for data quality on Amazon Mechanical Turk. Behavior Research Methods, 46(4), 1023–1031.
73. Posner MI, & DiGirolamo GJ (1998). Executive attention: Conflict, target detection, and cognitive control. In Parasuraman R (Ed.), The attentive brain (pp. 401–423). MIT Press.
74. R Core Team. (2019). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. https://www.R-project.org/
75. Rapoport A, & Budescu DV (1997). Randomization in individual choice behavior. Psychological Review, 104(3), 603–617.
76. Rosch E (1999). Principles of categorization. In Margolis E & Laurence S (Eds.), Concepts: Core readings (pp. 189–206). MIT Press.
77. Sabah K, Dolk T, Meiran N, & Dreisbach G (2019). When less is more: Costs and benefits of varied vs. fixed content and structure in short-term task switching training. Psychological Research, 83(7), 1531–1542.
78. Sabah K, Dolk T, Meiran N, & Dreisbach G (2020). Enhancing task-demands disrupts learning but enhances transfer gains in short-term task-switching training. Psychological Research, 1–15.
79. Shen YJ, & Chun MM (2011). Increases in rewards promote flexible behavior. Attention, Perception, & Psychophysics, 73(3), 938–952.
80. Shenhav A, Botvinick MM, & Cohen JD (2013). The expected value of control: An integrative theory of anterior cingulate cortex function. Neuron, 79(2), 217–240.
81. Shenhav A, Musslick S, Lieder F, Kool W, Griffiths TL, Cohen JD, & Botvinick MM (2017). Toward a rational and mechanistic account of mental effort. Annual Review of Neuroscience, 40, 99–124.
82. Shiffrin RM, & Schneider W (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84, 127–190.
83. Singmann H, Bolker B, Westfall J, & Aust F (2019). afex: Analysis of Factorial Experiments. https://CRAN.R-project.org/package=afex
84. Siqi-Liu A, & Egner T (2020). Contextual adaptation of cognitive flexibility is driven by task- and item-level learning. Cognitive, Affective, & Behavioral Neuroscience, 20(4), 757–782.
85. Solomon RL (1948). The influence of work on behavior. Psychological Bulletin, 45(1), 1.
86. Van Selst M, & Jolicoeur P (1994). A solution to the effect of sample size on outlier elimination. The Quarterly Journal of Experimental Psychology Section A, 47, 631–650.
87. Vandierendonck A, Demanet J, Liefooghe B, & Verbruggen F (2012). A chain-retrieval model for voluntary task switching. Cognitive Psychology, 65(2), 241–283.
88. Verbruggen F, McLaren IPL, & Chambers CD (2014). Banishing the control homunculi in studies of action control and behavior change. Perspectives on Psychological Science, 9, 497–524.
89. Wickham H (2016). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. https://ggplot2.tidyverse.org
90. Wickham H, François R, Henry L, & Müller K (2019). dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr
91. Wickham H, & Henry L (2019). tidyr: Tidy Messy Data. https://CRAN.R-project.org/package=tidyr
92. Wilke CO (2019). cowplot: Streamlined Plot Theme and Plot Annotations for “ggplot2.” https://CRAN.R-project.org/package=cowplot
93. Yeung N (2010). Bottom-up influences on voluntary task switching: The elusive homunculus escapes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(2), 348.
94. Zipf GK (1949). Human behavior and the principle of least effort: An introduction to human ecology. Addison-Wesley Press.
