Author manuscript; available in PMC: 2017 Jun 1.
Published in final edited form as: J Exp Psychol Gen. 2016 Apr 7;145(6):796–805. doi: 10.1037/xge0000169

Mechanisms of Habitual Approach: Failure to Suppress Irrelevant Responses Evoked by Previously Reward-Associated Stimuli

Brian A. Anderson, Charles L. Folk, Rebecca Garrison, Leeland Rogers
PMCID: PMC4873395  NIHMSID: NIHMS770434  PMID: 27054684

Abstract

Reward learning has a powerful influence on the attention system, causing previously reward-associated stimuli to automatically capture attention. Difficulty ignoring stimuli associated with drug reward has been linked to addiction relapse, and the attention system of drug-dependent patients seems especially influenced by reward history. This and other evidence suggests that value-driven attention has consequences for behavior and decision-making, facilitating a bias to approach and consume the previously reward-associated stimulus even when doing so runs counter to current goals and priorities. Yet, a mechanism linking value-driven attention to behavioral responding and a general approach bias is lacking. Here we show that previously reward-associated stimuli escape inhibitory processing in a go/no-go task. Control experiments confirmed that this value-dependent failure of goal-directed inhibition could not be explained by search history or residual motivation, but depended specifically on the learned association between particular stimuli and reward outcome. When a previously high-value stimulus is encountered, the response codes generated by that stimulus are automatically afforded high priority, bypassing goal-directed cognitive processes involved in suppressing task-irrelevant behavior.

Keywords: selective attention, inhibition, reward learning, habit learning, addiction


How we experience the world is influenced by selective attention. Attention determines the information contained within an environment that is represented in the brain (Desimone & Duncan, 1995). What we pay attention to is not always a matter over which we have control. Physically salient stimuli can at times automatically capture attention (e.g., Theeuwes, 1992; Yantis & Jonides, 1984), as can stimuli previously associated with reward (e.g., Anderson, Laurent, & Yantis, 2011a, 2011b; Hickey, Chelazzi, & Theeuwes, 2010a; see Anderson, in press, for a review). Although the goal-state of the observer can modulate such automatic attentional capture (e.g., Folk, Remington, & Johnston, 1992; Yantis & Johnston, 1990), previously reward-associated stimuli can capture attention even when task-irrelevant and physically non-salient (Anderson et al., 2011b), suggesting a powerful and direct role for reward history in the guidance of attention (referred to as value-driven attention; Anderson, 2013). For example, Anderson et al. (2011b) had participants complete a training phase followed by a test phase. During the training phase, participants searched for targets appearing in colors that were probabilistically associated with either high (5 cents) or low (1 cent) reward. During the test phase, participants completed a different visual search task in which irrelevant distractors appearing in the previously rewarded color produced significant costs in response time even though participants searched for targets defined by shape.

More recent evidence points to the specific role of associative reward learning in generating attentional biases. For example, value-driven attention cannot be explained by the motivation provided by reward incentives during learning (Sali, Anderson, & Yantis, 2014), and value-driven attentional biases occur for stimuli that were never task-relevant but nonetheless predicted reward outcomes (Bucker, Belopolsky, & Theeuwes, 2015; Le Pelley, Pearson, Griffiths, & Beesley, 2015; Mine & Saiki, 2015; Pearson, Donkin, Tran, Most, & Le Pelley, 2015). This suggests that associative reward learning is accompanied by a corresponding shift in attentional priority such that the previously reward-associated stimulus more effectively competes for attention (Della Libera & Chelazzi, 2009). Such value-driven attentional orienting can be likened to a habit (Anderson, in press), reflecting a subcortical biasing signal (Anderson et al., 2016; Anderson, Laurent, & Yantis, 2014; Hickey & Peelen, 2015) that is robust to the modulatory impact of current goal-state (e.g., Anderson et al., 2011b).

In the addiction literature, attentional biases for stimuli associated with drug reward have been well characterized in drug-dependent patients (Field, Mogg, Zetteler, & Bradley, 2004; Lubman, Peters, Mogg, Bradley, & Deakin, 2000; Mogg, Bradley, Field, & De Houwer, 2003; Stromark, Field, Hugdahl, & Horowitz, 1997; see Field & Cox, 2008, for a review). The magnitude of drug-related attentional biases predicts the treatment outcome of drug-dependent patients (Carpenter, Schreiber, Church, & McDowell, 2006; Marissen et al., 2006). More recent evidence suggests that such attentional biases might reflect a more general sensitivity to the influence of reward on attention, extending beyond drug reward per se (Anderson, Faulkner, Rilee, Yantis, & Marvel, 2013). On the other end of the spectrum, abnormally blunted attentional biases for previously reward-associated stimuli are associated with depressive symptoms, which include reduced engagement in enjoyable activities (Anderson, Leal, Hall, Yassa, & Yantis, 2014).

Such evidence suggests a possible relationship between value-driven attention and overt behavior, including drug use and enjoyable activities. Further support for this link can be found in a study correlating value-driven attentional biases with impulsive non-planning behaviors as well as substance abuse history (Anderson, Kronemer, Rilee, Sacktor, & Marvel, in press). Value-driven attentional biases are also more pronounced in adolescence, a period of life marked by increases in risky reward-motivated behavior (Roper, Vecera, & Vaidya, 2014). In non-clinical samples, attentional processing of reward-associated stimuli predicts related economic risk-taking (San Martin, Appelbaum, Huettel, & Woldorff, 2016), and value-driven attentional capture can interfere with the process of value-based decision-making (Itthipuripat, Cha, Rangsipat, & Serences, 2015).

A mechanism linking value-driven attention to overt behavior, and more specifically to a general approach bias, is lacking. Such a mechanism, if it were to play a role in the sort of problematic reward-motivated behavior described above, would need to involve more than changes in the strength of perceptual input. Such perceptual modulation could explain biases in deciding between multiple potentially rewarding options (e.g., Itthipuripat et al., 2015; San Martin et al., 2016), but has difficulty explaining how people might come to make decisions that are in opposition to their current goals. Were associative reward learning capable of exerting such a powerful and direct influence on approach behavior, it would imply a change in the nature of the processing of the response code itself, such that the response code generated by a previously reward-associated stimulus is afforded a competitive advantage.

There is some evidence that learned reward associations could have a more direct influence on the selection of a response code. Learned action-effect bindings more strongly affect performance when the action has been rewarded (Muhle-Karbe & Krebs, 2012). The representation of expected reward is modulated by whether that reward depends on executing versus withholding an action, reflecting a natural mapping between reward and approach behavior (Guitart-Masip et al., 2011).

In a pair of studies, Krebs and colleagues (2010, 2011) examined response conflict generated by reward-associated information in a Stroop task. In their task, participants were rewarded for naming certain colors (but not others) correctly. Performance was better when naming the high-value colors. Importantly, naming the color of the word was more strongly impaired by incongruent text when that text spelled a currently high-value color. In another study, Anderson and colleagues (2012) had participants first learn color–reward associations in a training phase, and then used these colors in a subsequent unrewarded flankers task in which participants made left/right responses to the identity of a central letter flanked by irrelevant letters with compatible or incompatible response associations. Typically, incompatible flankers produce longer target response times than compatible flankers. Anderson et al. (2012) found that flanker conflict was greater for flankers presented in a color previously associated with high value. Such findings suggest that learned reward associations can magnify the response conflict generated by reward-associated stimuli.

These findings, however, can still be explained by a value-based modulation of the strength of sensory input. This is because the stimulus–reward associations were still in play when the conflict was observed (Krebs et al., 2010, 2011), and most importantly, a response-conflict paradigm was used in which all irrelevant stimuli generate some degree of interference (Anderson et al., 2012; Krebs et al., 2010, 2011). Participants have the goal of responding to certain stimulus features that the (previously) reward-associated stimuli possess, the activation of which either competes with or facilitates the correct (target) response. Were the mere strength of the representation of these stimuli and their associated response code enhanced, increased response conflict would be expected.

If prior stimulus–reward associations are capable of exerting a direct influence on the process of response selection, facilitating automatic approach, it would imply that these associations should be accompanied by difficulty suppressing responses. In essence, stimulus–reward associations should imbue stimuli with the ability to generate response conflict that would otherwise not be present, or should at least reduce the effectiveness with which their associated responses can be suppressed. The role of inhibition in supporting goal-directed behavior and information processing is becoming increasingly recognized (e.g., Anderson & Folk, 2012b; Moher, Anderson, & Song, 2015; Moher, Lakshmanan, Egeth, & Ewen, 2014). In the present study, we examine the role that learned stimulus–reward associations play in the ability to inhibit a response in accordance with current task-specific goals.

Recently, we developed a paradigm for measuring inhibition and the influence of current task-specific goals on such inhibition (Anderson & Folk, 2012a, 2014). Specifically, a go/no-go manipulation was combined with the standard flankers task, such that participants only respond to the central target if it appears in a color specified by a cue at the beginning of each trial. Critically, the flankers could appear in the task-relevant (go) color or the task-irrelevant (no-go) color. When flankers appeared in the go color, the typical flanker compatibility effect was observed. However, when the flankers appeared in the no-go color, a significant reverse compatibility effect was obtained, revealing inhibition of the response codes associated with the flanker. We refer to this effect as contingent response inhibition, because the inhibition was only observed when the flankers appeared in a no-go color. Here, we examine the impact of previously learned stimulus–reward associations on contingent response inhibition.

To this end, we employed the contingent response inhibition paradigm in combination with the value-driven attention paradigm developed by Anderson and colleagues (Anderson et al., 2011a, 2011b). Participants learned associations between particular colors and the receipt of reward during a training phase, and in a subsequent unrewarded test phase we examined flanker compatibility effects as a function of the prior value of the flanker color and whether that color was designated as the go or no-go color on that trial. Of particular interest were trials on which the flanker was presented in the no-go color, which, without any prior reward training, typically produce a reverse-compatibility effect indicative of the inhibition of the flanker-associated response (Anderson & Folk, 2012a, 2014). If responses to stimuli previously associated with reward are difficult to inhibit, reflecting a habitual approach bias, participants should fail to show contingent response inhibition for flankers appearing in a no-go color, resulting in the absence of the reverse compatibility effect typically observed in this paradigm. Such a result would be consistent with a more direct influence of associative reward learning on overt behavior.

Experiment 1

In Experiment 1, we combined the reward learning procedure of Anderson et al. (2011a, 2011b) previously used to examine subsequent attentional biases, and the contingent response inhibition paradigm of Anderson and Folk (2014). The former served as a training phase in which color–reward associations were learned, and the latter as a test phase in which the consequence of these learned associations on behavioral performance was examined. In the training phase, participants performed visual search for red and green targets, one of which was associated with a larger monetary reward than the other when correctly reported. In the test phase, participants completed a flankers task in which the flankers and central target were colored either red or green. A color cue at the beginning of each trial indicated the response-relevant (go) color on that trial. If the target was presented in the cued color, participants were to report its identity, but if it was presented in the uncued (no-go) color, participants were to withhold a response and wait for the next trial. Of interest were trials on which the flankers were presented in the no-go color while the target was presented in the go color, thus providing a behavioral measure of response interference by response-irrelevant stimuli. Without prior reward training, response-irrelevant flankers typically produce a reverse-compatibility effect indicative of the inhibition of their associated response. We predicted that prior reward associations would reduce such inhibition.

Methods

Participants

Nineteen participants were recruited from the Villanova University community. All reported normal or corrected-to-normal visual acuity and normal color vision. Sample size was informed by our prior study measuring contingent response inhibition (Anderson & Folk, 2014). Using the effect size for the critical negative compatibility effect indicative of response inhibition in the main experiment (Experiment 1) of Anderson and Folk (2014), the current sample size yields power > 0.90 with d = .91 and α = .05 (G*Power; http://www.gpower.hhu.de/).
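As an illustration, the reported power check (d = .91, n = 19, α = .05) can be approximated in Python with statsmodels. This is a sketch of our own, not the authors' G*Power session; only the effect size, sample size, and alpha level are taken from the text.

```python
# Sketch: approximate the reported power analysis (d = .91, n = 19, alpha = .05)
# with a paired-samples t-test power calculation. Illustrative only; the authors
# report using G*Power for this computation.
from statsmodels.stats.power import TTestPower

power = TTestPower().power(effect_size=0.91, nobs=19, alpha=0.05,
                           alternative='two-sided')
print(f"Achieved power: {power:.3f}")  # should exceed .90, as reported
```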

Apparatus

A Dell Optiplex 780 equipped with Matlab software and Psychophysics Toolbox extensions (Brainard, 1997) was used to present the stimuli on a 21” Sony Trinitron Multiscan 500 monitor. The participants viewed the monitor from a distance of approximately 75 cm in a dimly lit room. Responses were entered using a standard 101-key U.S. layout keyboard.

Training Phase

Stimuli

Each trial consisted of a fixation display, a search array, and a feedback display (Figure 1A). The fixation display contained a white fixation cross (.5° × .5° visual angle) presented in the center of the screen against a black background, and the search array consisted of the fixation cross surrounded by six colored circles (each 2.3° × 2.3°) placed at equal intervals on an imaginary circle with a radius of 5°. The target was defined as the red (RGB: 255 0 0) or green (RGB: 0 255 0) circle, exactly one of which was presented on each trial; the color of each non-target circle was drawn from the set {blue (RGB: 0 128 255), cyan (RGB: 0 255 255), pink (RGB: 255 128 255), orange (RGB: 255 128 0), yellow (RGB: 255 255 0), white (RGB: 255 255 255)} without replacement. Inside the target circle, a white bar was oriented either vertically or horizontally, and inside each of the non-targets, a white bar was tilted at 45° to the left or to the right (randomly determined for each non-target). The feedback display indicated the amount of monetary reward earned on the current trial, as well as the total accumulated reward.
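For concreteness, a minimal sketch of how the six search-array positions could be computed (equal angular spacing on an imaginary circle with a 5° radius, as described above). The pixels-per-degree conversion and function name are assumptions for illustration; the authors' display code is not provided in the paper.

```python
# Sketch: place six stimuli at equal intervals on an imaginary circle of
# radius 5 degrees of visual angle, as described for the search array.
# PIX_PER_DEG is a placeholder conversion factor, not a value from the paper.
import math

PIX_PER_DEG = 35          # hypothetical monitor/viewing-distance conversion
RADIUS_DEG = 5.0
N_ITEMS = 6

def item_positions(center_x, center_y):
    """Return (x, y) pixel coordinates for the six circle placeholders."""
    positions = []
    for i in range(N_ITEMS):
        angle = 2 * math.pi * i / N_ITEMS          # equal angular spacing
        x = center_x + RADIUS_DEG * PIX_PER_DEG * math.cos(angle)
        y = center_y + RADIUS_DEG * PIX_PER_DEG * math.sin(angle)
        positions.append((round(x), round(y)))
    return positions

print(item_positions(center_x=960, center_y=540))
```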

Figure 1. Sequence and time course of trial events. (A) Training phase. Participants searched for a color-defined target (red or green) and reported the orientation of the bar within the target as vertical or horizontal. Correct responses resulted in a small amount of money added to the participant's bank total. (B) Test phase. Participants reported the identity of the central letter only if its color matched that of a cue at the beginning of the trial. Task-irrelevant flankers appeared shortly before the onset of the target, and could be either compatible or incompatible with the target response. The colors that were used for flankers and targets were the same colors that were rewarded during the training phase.

Design

One of the two color targets (counterbalanced across participants) was followed by a high reward of 10¢ on 80% of correct trials and a low reward of 2¢ on the remaining 20% (high-reward target); for the other color target, these percentages were reversed (low-reward target). Each color target appeared in each location equally often, and trials were presented in a random order.
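A minimal sketch of the training-phase reward schedule just described (for the high-reward color, 10¢ on 80% of correct trials and 2¢ on the remaining 20%, with the percentages reversed for the other color). The function and variable names are ours, for illustration only.

```python
# Sketch: assign a reward (in cents) to a correct training-phase trial,
# following the 80/20 schedule described in the Design section.
import random

def reward_for_correct(target_color, high_value_color):
    """Return 10 or 2 cents, with probabilities set by the target's color."""
    p_high = 0.8 if target_color == high_value_color else 0.2
    return 10 if random.random() < p_high else 2

# Example: red is the high-value color for this (hypothetical) participant.
print([reward_for_correct("red", high_value_color="red") for _ in range(10)])
```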

Procedure

The training phase consisted of 240 trials, which were preceded by 50 practice trials. Each trial began with the presentation of the fixation display for a randomly varying interval of 400, 500, or 600 ms. The search array then appeared and remained on screen until a response was made or 800 ms had elapsed, after which the trial timed out. The search array was followed by a blank screen for 1000 ms, the reward feedback display for 1500 ms, and a 1000 ms inter-trial interval (ITI).

Participants made a forced-choice target identification by pressing the "z" and the "m" keys for the vertically and horizontally oriented bars within the target, respectively. They were instructed to respond both quickly and accurately. Correct responses were followed by monetary reward feedback in which a small amount of money was added to the participant's total earnings. Incorrect responses or responses that were too slow were followed by feedback indicating 0¢ had been earned. If the trial timed out, the computer emitted a 500 ms, 1000 Hz tone.

Test Phase

Stimuli

Each trial involved four different displays (see Figure 1B). The first display consisted of a color-word cue presented in the center of the screen. Each letter was rendered in the color indicated by the word, which designated the response-relevant color on that trial. In the second display, the fixation display, a white fixation cross (1.8° × 1.8° visual angle) appeared following a blank inter-stimulus-interval. The third display, the flanker display, consisted of two identical letters (2.75° × 1.4°) each presented 2.6° center-to-center from the fixation cross on the left and right. In the fourth display, the target display, a target letter (2.75° × 1.4°) replaced the fixation cross at the center of the screen while the flankers remained onscreen. Each trial was followed by a blank ITI. The cue, flankers, and target were either red or green in color. The letters that were used for the flankers and target were A and X.

Design

The experiment consisted of 3 blocks of 96 trials. Within each block, cue color, target color, target identity, flanker color, and flanker compatibility were fully crossed and counterbalanced, and trials were presented in a random order. Thus, 50% of the trials were go trials, and 50% were no-go trials.
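A sketch of how the fully crossed, counterbalanced trial list for one 96-trial block could be generated (2 cue colors × 2 target colors × 2 target identities × 2 flanker colors × 2 compatibility levels = 32 cells, repeated three times and shuffled). The variable names are illustrative; the authors' MATLAB implementation is not shown in the paper.

```python
# Sketch: build one fully crossed, shuffled 96-trial block for the test phase.
import itertools
import random

colors = ["red", "green"]
letters = ["A", "X"]

cells = list(itertools.product(colors,            # cue color
                               colors,            # target color
                               letters,           # target identity
                               colors,            # flanker color
                               [True, False]))    # flanker compatible?
block = cells * 3                                  # 32 cells x 3 = 96 trials
random.shuffle(block)

# A trial is "go" when the target color matches the cued color.
n_go = sum(cue == target for cue, target, _, _, _ in block)
print(len(block), "trials;", n_go, "go trials")   # 96 trials; 48 go trials
```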

Procedure

Participants were instructed to respond as quickly as possible while minimizing errors, and to respond only when the target color matched the cue color (go trials). Participants were also informed that the flankers were irrelevant to the task and did not predict the upcoming target, and that they should focus exclusively on preparing for the upcoming target when the flankers appeared.

Each trial began with the presentation of the color-word cue for 1000 ms, followed by a 500 ms blank screen and then by the fixation display for a randomly varying period of 400, 500, or 600 ms. After this period, two identical flankers were presented along with the fixation cross for 200 ms. Following the flanker display, the central fixation cross was replaced with the target letter while the flankers remained onscreen for 100 ms. The screen then went blank until the participant responded or 1200 ms had elapsed, after which the trial timed out. Each trial was followed by a blank ITI lasting 1000 ms.

If the target color matched the cue color on that trial, participants were instructed to identify it as an "A" by pressing the "m" key and as an "X" by pressing the "z" key, and to withhold responding if the colors did not match. False alarms (responses to no-go targets), misses (failing to respond to go targets), and incorrect responses to go targets were all considered errors. The computer emitted a 500 ms, 1000 Hz tone to inform the participant when an error occurred. The experiment began with 40 practice trials.
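For clarity, a small sketch of how a test-phase trial outcome might be scored under the rules just described (false alarm, miss, incorrect, or correct). The trial and response representation is a simplification introduced here for illustration.

```python
# Sketch: classify a single test-phase trial outcome under the go/no-go rules.
# "Go" trials require an A -> "m" / X -> "z" response; no-go trials require
# withholding. The data structures here are illustrative simplifications.
def score_trial(cue_color, target_color, target_letter, key_pressed):
    go_trial = (cue_color == target_color)
    correct_key = {"A": "m", "X": "z"}[target_letter]
    if not go_trial:
        return "false alarm" if key_pressed is not None else "correct rejection"
    if key_pressed is None:
        return "miss"
    return "correct" if key_pressed == correct_key else "incorrect"

print(score_trial("red", "red", "A", "m"))    # correct
print(score_trial("red", "green", "X", "z"))  # false alarm (no-go trial)
```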

Results

Mean correct response times (RTs) on go trials for the test phase are shown in Figure 2A. For each participant, trials were coded with respect to the reward contingencies experienced in the training phase, such that flankers were classified as appearing in either the high or low reward color. RTs were entered into a 2 × 2 × 2 repeated measures analysis of variance (ANOVA) with flanker color response association (go vs. no-go), flanker color reward association (high vs. low reward), and flanker compatibility (compatible vs. incompatible) as factors. The only significant result was the main effect of flanker compatibility, with compatible flankers producing significantly faster RTs than incompatible flankers (517 ms vs. 529 ms, respectively; F(1,18) = 5.47, p = .031, ηp2 = .233). No other main effects or interactions were significant, Fs < 4.30, ps > .05. Most notably, there was no interaction between flanker color response association and flanker compatibility, F(1,18) = 0.99, p = .333. This is in stark contrast to previous results from this flanker task (i.e., without reward training), which typically show a negative compatibility effect for trials on which the flankers appear in the no-go color (Anderson & Folk, 2012a, 2014).
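A hedged sketch of the 2 × 2 × 2 repeated measures ANOVA on mean correct go-trial RTs, using statsmodels' AnovaRM on a long-format table of per-participant condition means. The file and column names are our own assumptions; the original analysis was presumably run in standard statistics software.

```python
# Sketch: 2 (response association) x 2 (reward association) x 2 (compatibility)
# repeated measures ANOVA on per-participant mean correct go-trial RTs.
# The DataFrame layout and column names are illustrative assumptions.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# long_data: one row per participant x condition cell, with a mean RT per cell.
# Columns: subject, response_assoc ('go'/'nogo'), reward ('high'/'low'),
#          compatibility ('comp'/'incomp'), rt
long_data = pd.read_csv("test_phase_cell_means.csv")   # hypothetical file

anova = AnovaRM(long_data, depvar="rt", subject="subject",
                within=["response_assoc", "reward", "compatibility"]).fit()
print(anova)   # F and p values for main effects and interactions
```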

Figure 2. Mean response time by distractor condition in the test phase of Experiments 1 (A), 2a (B), and 2b (C). Error bars reflect the within-subjects S.E.M.

Mean overall accuracy on go trials was 91%. Accuracy data were subjected to the same ANOVA as the RTs. Only the main effect of compatibility was significant, with compatible flankers yielding higher error rates (10.2%) than incompatible flankers (7.4%), F(1,18) = 9.67, p = .006, ηp2 = .350; other Fs < 2.50, ps > .13. On no-go trials, the rate of commission errors was 4.8%. Commission errors occurred numerically more often when the flankers were presented in the high-value (5.3%) compared to the low-value (4.3%) color, although this difference was not significant, t(18) = 1.29, p = .215. On the first half of trials, however, when effects of prior reward might be expected to be strongest, this difference was reliable, 6.9% vs 4.8%, t(18) = 2.85, p = .011, d = .65.
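The follow-up comparison of commission-error rates for high- versus low-value flanker colors is a paired t-test. Below is a minimal sketch with SciPy, assuming per-participant error rates have already been computed; the simulated values are placeholders keyed only to the reported condition means, not the actual data.

```python
# Sketch: paired t-test on per-participant commission-error rates for high- vs.
# low-value flanker colors (e.g., restricted to the first half of the test phase).
# The simulated rates below are placeholders, not the reported data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
high_value_errors = rng.normal(0.069, 0.03, size=19)   # hypothetical per-participant rates
low_value_errors = rng.normal(0.048, 0.03, size=19)

t, p = stats.ttest_rel(high_value_errors, low_value_errors)
print(f"t({len(high_value_errors) - 1}) = {t:.2f}, p = {p:.3f}")
```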

Discussion

The flankers exerted a similar effect on performance regardless of their relation to the color cue. Even when presented in the response-irrelevant (no-go) color, the flankers produced a positive compatibility effect indicative of the activation of their associated response. This pattern can be contrasted with the robust reverse-compatibility effect, indicative of response inhibition, previously observed in this paradigm without a preceding training phase (Anderson & Folk, 2014; see also Anderson & Folk, 2012a). Our findings suggest that the prior reward learning qualitatively changed the manner in which the colored flankers were processed. Trial-by-trial adjustments in the goal-state of the participants were ineffective at modulating the priority afforded to response codes generated by previously reward-associated stimuli. There was also some evidence that flankers previously associated with high reward were more likely to elicit a false alarm (commission error).

Experiment 2

The results of Experiment 1 demonstrate that when previously associated with reward, task-irrelevant flankers generate a response signal that interferes with response selection regardless of top-down goals concerning the relevance of such signals. The cues indicating the response-relevance of the colors were ineffective at modulating behavior. However, relative reward (high vs. low value) had no measurable effect on flanker compatibility effects. As such, before the observed response conflict on no-go trials can be attributed to associative reward learning, two alternative possibilities need to be ruled out.

One alternative possibility is that selection history, rather than reward history per se, is creating a response bias that impairs inhibitory processing. Perhaps previously responding to any stimulus creates a bias against inhibiting responses associated with that stimulus in the future. To examine this possibility, in Experiment 2a participants completed an otherwise identical version of the task in which the reward feedback was removed from the training phase.

Another alternative possibility is that reward feedback and related motivation, independent of any associations with particular stimulus features, create a general approach bias that subsequently interferes with inhibitory processes. Perhaps participants learn that responding rapidly is rewarded and so are inclined to carry this strategy into a new task. To examine this possibility, in Experiment 2b participants first completed a training phase in which the target colors that were rewarded were different from those used in the test phase (blue and yellow). Thus, participants were just as motivated by reward as in Experiment 1, but this reward was not related to the specific colors that participants were sometimes instructed to ignore during test.

Methods

Participants

Seventeen participants were recruited from the Villanova University community for Experiment 2a, and 17 from the Johns Hopkins University community for Experiment 2b. None of the participants had participated in Experiment 1.

Apparatus

Experiment 2b was run on a Mac Mini with an Asus VE247 monitor positioned in a testing booth. Otherwise, the apparatus was identical to Experiment 1.

Stimuli and Procedure

The stimuli and procedure for Experiment 2a were identical to those for Experiment 1, with the exception that the reward feedback display was omitted from the training phase and instructions concerning reward feedback were removed. Instead, the word "Incorrect" was centrally presented in white font for 1000 ms following the offset of the search array in the event of an incorrect response. The stimuli and procedure for Experiment 2b were identical to those for Experiment 1, with the exception that different colors were used during training. The target was blue (RGB: 0 0 255) or yellow (RGB: 255 255 0), and the non-targets were purple (RGB: 112 48 160), orange (RGB: 255 127 0), teal (RGB: 150 255 200), brown (RGB: 152 72 0), pink (RGB: 255 175 175) and white (RGB: 255 255 255).

Results

Experiment 2a

Mean correct RTs on go trials as a function of flanker color response association and flanker compatibility for the test phase are shown in Figure 2B. RTs were entered into a 2 × 2 repeated measures ANOVA with flanker color response association (go vs. no-go) and flanker compatibility (compatible vs. incompatible) as factors. Unlike Experiment 1, there was no main effect of compatibility, F(1,16) = 3.91, p = .066, nor was there a main effect of flanker color response association, F(1,16) = 2.80, p = .114, but the interaction between flanker color response association and flanker compatibility was significant, F(1,16) = 17.8, p = .001, ηp2 = .526. Simple effects analyses revealed a significant 31 ms compatibility effect for go-colored flankers, F(1,16) = 27.4, p < .001, ηp2 = .631, and a non-significant 11 ms reverse compatibility effect for no-go-colored flankers, F(1,16) = 1.80, p = .198.

Mean overall accuracy on go trials was 88%. Accuracy data were subjected to the same ANOVA as the RT data, which revealed a marginal effect of flanker color response association, F(1,16) = 4.51, p = .05, ηp2 = .220, and an interaction between flanker color response association and compatibility, F(1,16) = 4.55, p = .049, ηp2 = .222; other Fs < 2.65, ps > .14. The marginal effect of flanker color response association reflected more errors on trials in which flankers were presented in the no-go (13.6%) compared to the go (10.5%) color. The interaction mirrored the pattern in RT, with errors occurring more frequently on incompatible (11.5%) compared to compatible (9.5%) trials when the flankers were presented in the go color, and more frequently on compatible (15.2%) compared to incompatible (11.9%) trials when the flankers were presented in the no-go color. On no-go trials, the rate of commission errors was 4.9%.

Experiment 2b

Mean correct RTs on go trials as a function of flanker color response association and flanker compatibility for the test phase are shown in Figure 2C. RTs were entered into a 2 × 2 repeated measures ANOVA with flanker color response association (go vs. no-go) and flanker compatibility (compatible vs. incompatible) as factors. As in Experiment 2a, there was no main effect of compatibility, F(1,16) = 0.7, p = .415, or flanker color response association, F(1,16) = 2.87, p = .109, but the interaction between flanker color response association and flanker compatibility was significant, F(1,16) = 19.72, p < .001, ηp2 = .552. Simple effects analyses revealed a significant 26 ms compatibility effect for go-colored flankers, F(1,16) = 12.09, p = .003, ηp2 = .430, and a significant 19 ms reverse compatibility effect for no-go-colored flankers, F(1,16) = 11.32, p = .004, ηp2 = .414. To examine the time course of this reverse-compatibility effect, fast and slow responses were analyzed separately via a median split, which revealed evidence for inhibition during both fast (M = 16 ms), t(16) = 2.44, p = .026, d = .59, and slow responses (M = 19 ms), t(16) = 2.08, p = .054, d = .50.
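A sketch of the median-split time-course analysis: within each participant, correct no-go-color flanker trials are split into fast and slow halves at that participant's median RT, and the compatibility effect (incompatible minus compatible RT) is computed within each half. The input file and column names are illustrative assumptions.

```python
# Sketch: per-participant median split of correct go-trial RTs on no-go-colored
# flanker trials, then the compatibility effect within fast and slow halves.
# Column names and the input file are illustrative assumptions.
import pandas as pd

trials = pd.read_csv("test_phase_trials.csv")          # hypothetical trial-level data
nogo = trials[(trials.flanker_assoc == "nogo") & trials.correct]

def compat_effect(df):
    """Incompatible minus compatible mean RT (negative = reverse compatibility)."""
    means = df.groupby("compatibility")["rt"].mean()
    return means["incomp"] - means["comp"]

effects = []
for subj, d in nogo.groupby("subject"):
    median_rt = d["rt"].median()
    effects.append({"subject": subj,
                    "fast": compat_effect(d[d.rt <= median_rt]),
                    "slow": compat_effect(d[d.rt > median_rt])})
print(pd.DataFrame(effects).mean(numeric_only=True))
```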

Mean overall accuracy on go trials was 87%. Accuracy data were subjected to the same ANOVA as the RT data, which revealed no main effects or interactions, Fs < 1.72, ps > .20. On no-go trials, the rate of commission errors was 6.1%.

Discussion

The results of Experiments 2a and 2b are clear and rule out alternative explanations for the observed failure to inhibit response codes generated by response-irrelevant stimuli in Experiment 1. Experiment 2a shows that without associated reward feedback, flankers rendered in a prior target color no longer produce compatibility effects when presented in the response-irrelevant color, resulting in a robust goal-directed modulation of information processing. Although the 11 ms reverse compatibility effect was not statistically reliable, the significant interaction coupled with the fact that no positive compatibility effect was observed in this condition demonstrates unambiguous goal-contingent modulation and is consistent with some degree of inhibition of the response associated with flankers presented in the no-go color. The critical interaction was also evident in error rates. Experiment 2b shows the same pattern following training in which participants were rewarded for responding to targets in colors not appearing in the test phase. In both of these cases, the results closely resemble the pattern observed without any prior training at all (Anderson & Folk, 2014).

Experiment 3

Experiment 2 rules out selection history, independent of reward, and general motivational factors tied to reward feedback as explanations for the observed failure to inhibit response-irrelevant (cued no-go) information. Instead, it appears that learning a specific stimulus–reward association changes how response codes generated by the stimulus are processed in the future, creating an approach bias that circumvents inhibitory processing. In order to further examine this possibility, and replicate the critical results in a single experiment, we modified the training phase of Experiment 1 such that one color target never yielded a reward (as in, e.g., Failing & Theeuwes, 2014). This allowed us to compare inhibition of flankers in the response-irrelevant color when the color was (a) previously used as a target but without value and (b) previously predictive of reward as a target. A failure to inhibit (b) but not (a) on no-go flanker trials would provide further evidence that learned stimulus–reward associations influence the processing of response information.

Methods

Participants

Twenty-five participants were recruited from the Villanova University community and 10 from the Johns Hopkins University community. Given the failure to observe an interaction with reward value in Experiment 1, we increased our sample size. None of the participants had participated in any of the prior experiments. Using the effect size for the impact of learned value on flanker compatibility effects in Anderson et al. (2012), the current sample size yields power > 0.90 with d = .57 and α = .05 to detect the predicted interaction on no-go flanker trials (G*Power; http://www.gpower.hhu.de/).

Apparatus

The apparatus was identical to Experiment 2a for participants run at Villanova University, and identical to Experiment 2b for participants run at Johns Hopkins University.

Stimuli and Procedure

The stimuli and procedure were identical to Experiment 1, with the exception that the low-reward target now yielded 0¢ on every correct trial (here referred to as the unrewarded target).

Results

Mean correct RTs on go trials as a function of flanker color response association and flanker compatibility for the test phase are shown in Figure 3. For each participant, trials were coded with respect to the reward contingencies experienced in the training phase, such that flankers were classified as appearing in either the rewarded or unrewarded color. Given our predictions concerning inhibition, we focused specifically on trials in which flankers were presented in the no-go color1 (go-colored flanker trials produced only the predicted main effect of compatibility, F(1,34) = 4.69, p = .037, ηp2 = .121). No-go colored flanker RTs were subjected to a 2 × 2 ANOVA with flanker color reward association (rewarded vs. unrewarded) and flanker compatibility (compatible vs. incompatible) as factors. In addition to significant main effects of compatibility and reward, F(1,34) = 5.67, p = .023, ηp2 = .143 and F(1,34) = 6.98, p = .012, ηp2 = .170, respectively, there was a significant interaction, F(1,34) = 5.49, p = .025, ηp2 = .139. Simple effects analyses of the interaction yielded a significant 21 ms reverse compatibility effect for flankers in an unrewarded color, F(1,34) = 9.84, p = .004, ηp2 = .224, and no significant compatibility effect for flankers in a rewarded color, F(1,34) = 0.13, p = .726. The reverse-compatibility effect for flankers in the unrewarded color could be seen in both fast (M = 10 ms), t(34) = 1.49, p = .146, d = .25, and slow responses (M = 32 ms), t(34) = 2.88, p = .007, d = .49, although only in the latter case was it statistically reliable. Flankers in the rewarded color showed no evidence of a reverse-compatibility effect even for slow responses (M = 6 ms), t(34) = 0.58, p = .563.

Figure 3. Mean response time by distractor condition in the test phase of Experiment 3. Error bars reflect the within-subjects S.E.M.

Mean overall accuracy on go trials was 89%. Accuracy data were subjected to the same ANOVAs as the response time data. None of the main effects or interactions were significant, Fs < 2.79, ps > .07, with the trend being towards a main effect of compatibility on no-go colored flanker trials in which error rate was greater on compatible trials (interaction: F < 1). On no-go trials, the rate of commission errors was 4.9%. Commission errors were unrelated to the value of the flankers both during the entire test phase and when only considering the first half of the test phase, ts < 0.17, ps > .66.

Discussion

As in Experiment 1, flankers rendered in a previously reward-associated color showed no evidence of inhibition (reverse-compatibility effect) when presented in the response-irrelevant color. As in Experiment 2a, flankers rendered in a prior target color not previously associated with reward produced robust inhibition when presented in the response-irrelevant color, resulting in the critical interaction on no-go trials. This provides further evidence that the response codes generated by stimuli previously associated with reward can escape or otherwise circumvent goal-directed inhibitory processing, reflecting an automatic approach bias.

Interestingly, although inhibition was not observed for the previously reward-associated flankers when presented in the response-irrelevant color, they did not produce a positive compatibility effect either, in contrast to Experiment 1. The reasons for this difference are unclear and may reflect differences in the reward structure experienced during training. The explicit lack of reward for responding to targets on certain trials during training in Experiment 3 may have discouraged approach behavior more generally, making the goal of withholding responses easier to execute. It is clear, however, that the previously reward-associated flankers were unaffected by the same inhibitory processing that robustly influenced the processing of previously unrewarded flankers.

General Discussion

Associative reward learning gives rise to automatic attentional biases (Anderson, in press). Evidence from the clinical literature suggests that such attentional biases might be related to corresponding biases in the selection of overt behavior, facilitating automatic approach (e.g., Anderson et al., 2013; Anderson et al., in press; Carpenter et al., 2006; Field & Cox, 2008; Marissen et al., 2006). A mechanism linking associative reward learning to a general behavioral approach bias, however, is lacking. In the present study, we show that previously reward-associated stimuli escape goal-directed inhibitory processing. Using a well-established training procedure known to produce robust value-based attentional biases (e.g., Anderson et al., 2011a, 2011b; Anderson, 2013), we examined the consequence of associative reward learning on response conflict in a subsequent flankers task in which one color was designated as response-relevant and another as response-irrelevant (go/no-go) on each trial (as in Anderson & Folk, 2014). When the flankers were rendered in a color previously associated with reward, participants showed no evidence of inhibiting the response code generated by these stimuli. This contrasts with previously published results (Anderson & Folk, 2012a, 2014), which we replicated here under conditions in which the flanker colors were not previously associated with reward, while controlling for other potential influences of training.

Our findings suggest a mechanism by which the response codes generated by previously reward-associated stimuli are automatically afforded high priority. Akin to habitual responding (Graybiel, 2008), these response codes bypass goal-directed cognitive processing stages involved in evaluating the task-relevance of the behavior and suppressing that behavior when it is deemed task-irrelevant. Such priority helps to ensure that actions that have proven rewarding in the past will be selected in the future, at the expense of the ability to exert strategic control over such behavior. Perhaps most strikingly, our findings reveal that such a shift in behavioral priority can reflect a general approach bias tied to a previously reward-associated stimulus. Both the specific stimuli (colored letters) and the stimulus-to-response mappings were new to participants in the test phase, such that the participants were never reinforced for making the specific response selections that were required in this task. As such, the failure to inhibit flanker-evoked responses when the flankers were rendered in the response-irrelevant (but previously reward-associated) color cannot be explained by a low-level form of stimulus–response learning. Instead, associative reward learning qualitatively changes the manner in which the process of deciding whether to act towards the stimulus unfolds, giving that stimulus an edge in the competition for behavior. Further consistent with an approach bias, participants were more likely to false alarm on trials containing a flanker previously associated with high reward in Experiment 1, although this was not replicated in Experiment 3. In general, false alarms occur infrequently in our task and were not a measure of interest.

It is important to note that the observed pattern of results reflects processes surrounding the selection of behavior and cannot be explained by reward impacting sensory processing of the flankers. Simply enhancing the strength of the sensory signals generated by the flankers without changing the underlying process of response selection would result in a corresponding magnification of inhibition when the flankers were presented in the response-irrelevant color. Although the corresponding response signal might provide a stronger source of evidence for behavior selection in this case, it would be evidence in favor of inhibiting the response rather than executing it. This would be consistent with how the motivational aspects of reward modulate inhibitory processing in a stop-signal task (Schevernels et al., 2015) and a go/no-go task (O'Connor, Upton, Moore, & Hester, 2015). Participants had ample time to prepare how they would react to stimuli of a particular color upon seeing the color cue at the beginning of each trial, but were unable to engage inhibitory processes in accordance with task goals when the stimuli were previously associated with reward. Such inhibitory processing is normally rapid, being evident in fast as well as slow responses, but even slow responses showed no evidence of inhibition when the flankers were presented in a previously reward-associated color.

In both Experiments 1 and 3, when the flankers were presented in the response-relevant color, their impact on performance did not differ according to their associated value. This stands in contrast to the value-modulation of flanker compatibility effects observed in Anderson et al. (2012). However, Anderson et al. (2012) did not involve a go/no-go manipulation, which, in the current study, may have encouraged greater engagement of cognitive control processes that could counteract automatic attentional biases. Moreover, the lack of a go/no-go manipulation in Anderson et al. (2012) rendered the color of the flankers completely task-irrelevant, whereas the color of go-colored flankers in the present study was clearly task-relevant, which may therefore have engaged goal-directed selection mechanisms, washing out any value-driven biases. Differential effects of associated value can be difficult to detect when the reward-associated stimuli are task-relevant (e.g., Anderson et al., 2011a, 2012; Anderson et al., 2013; Anderson, Leal, et al., 2014).

Also relevant to this issue, there is a complexity in the design of the current experiment that complicates comparison across relative value when the flankers are presented in the response-relevant (go) color. The go/no-go manipulation required that the same colors be used for both flankers and targets, as response relevance must be defined in relation to the target. Furthermore, the effect of the flankers on response selection can only be measured on go trials, when the participant is required to make a response. On such trials, when the flankers are presented in the response-relevant color, the target is necessarily presented in the same (also response-relevant) color. Therefore, to the degree that an attentional bias enhances the response signal associated with the flankers, it will also enhance the response signal associated with the target, creating potentially offsetting biases. Importantly, however, the associated value of the target would not be expected to influence how the flankers are processed with regards to response-relevance, such that the color cue would be ineffective at modulating approach vs. inhibition. Also, to the degree that the target response was better processed when presented in the previously high-value color, this would only work against the value-modulation of compatibility effects for response-irrelevant flankers observed in Experiment 3 (the target is high-value when the flankers are presented in the previously unrewarded color, which would reduce any compatibility effects on these trials rather than vice versa).

Although approach biases for previously reward-associated stimuli that are response-relevant may be detectable under different experimental conditions, another possibility is that learned value biases approach behavior primarily by blocking the ability to inhibit such stimuli when task-irrelevant. Inhibitory mechanisms are being increasingly recognized as critical to the selection process (e.g., Anderson & Folk, 2012b; Moher et al., 2014, 2015). In this way, previously reward-associated stimuli would be ensured consideration in response selection even when they might otherwise go unnoticed, at the expense of the ability to effectively inhibit them when entirely task-irrelevant.

On the trait level, the behavioral activation system has been validated as a useful framework for understanding the degree to which behavior is affected by rewards (Carver & White, 1994). Scores on the reward-drive subcomponent of this construct, which reflect the strength with which reward motivates the behavior of the individual, are positively correlated with the effect of reward on attention (Hickey, Chelazzi, & Theeuwes, 2010b). A hypersensitive behavioral activation system is also thought to contribute to a range of problematic behaviors that include addictions (e.g., Franken & Muris, 2006). Our findings provide evidence that, in addition to trait-level variance, recruitment of the behavioral activation system can reflect a learned response to particular stimuli. Such recruitment may reflect an important component of incentive salience, whereby learned reward cues evoke a sense of "wanting" that motivates approach and consummatory behaviors (Berridge & Robinson, 1998; Robinson & Berridge, 1993).

Acknowledgments

This research was supported in part by NIH grant R01-DA013165.

Footnotes

1. We chose to focus specifically on trials in which the flankers were presented in the no-go color based on the results of the prior experiments, which showed an effect on these trials only. The purpose was to replicate the critical results from the prior experiments. In order for the experiment to work, though, both go and no-go colors must be used, so all of the conditions had to be included in the design regardless of our hypotheses.

Contributor Information

Brian A. Anderson, Johns Hopkins University.

Charles L. Folk, Villanova University.

Rebecca Garrison, Villanova University.

Leeland Rogers, Villanova University.

References

1. Anderson BA. A value-driven mechanism of attentional selection. Journal of Vision. 2013;13(3):7, 1–16. doi: 10.1167/13.3.7.
2. Anderson BA. The attention habit: How reward learning shapes attentional selection. Annals of the New York Academy of Sciences. (in press) doi: 10.1111/nyas.12957.
3. Anderson BA, Faulkner ML, Rilee JJ, Yantis S, Marvel CL. Attentional bias for non-drug reward is magnified in addiction. Experimental and Clinical Psychopharmacology. 2013;21:499–506. doi: 10.1037/a0034575.
4. Anderson BA, Folk CL. Contingent involuntary motoric inhibition: The involuntary inhibition of a motor response contingent on top-down goals. Journal of Experimental Psychology: Human Perception and Performance. 2012a;38:1348–1352. doi: 10.1037/a0030514.
5. Anderson BA, Folk CL. Dissociating location-specific inhibition and attention shifts: Evidence against the disengagement account of contingent capture. Attention, Perception, and Psychophysics. 2012b;74:1183–1198. doi: 10.3758/s13414-012-0325-9.
6. Anderson BA, Folk CL. Conditional automaticity in response selection: Contingent involuntary response inhibition with varied stimulus-response mapping. Psychological Science. 2014;25:547–554. doi: 10.1177/0956797613511086.
7. Anderson BA, Kronemer SI, Rilee JJ, Sacktor N, Marvel CL. Reward, attention, and HIV-related risk in HIV+ individuals. Neurobiology of Disease. (in press) doi: 10.1016/j.nbd.2015.10.018.
8. Anderson BA, Kuwabara H, Wong DF, Gean EG, Rahmim A, Brasic JR, George N, Frolov B, Courtney SM, Yantis S. The role of dopamine in value-based attentional orienting. Current Biology. 2016;26:550–555. doi: 10.1016/j.cub.2015.12.062.
9. Anderson BA, Laurent PA, Yantis S. Learned value magnifies salience-based attentional capture. PLoS ONE. 2011a;6(11):e27926. doi: 10.1371/journal.pone.0027926.
10. Anderson BA, Laurent PA, Yantis S. Value-driven attentional capture. Proceedings of the National Academy of Sciences, USA. 2011b;108:10367–10371. doi: 10.1073/pnas.1104047108.
11. Anderson BA, Laurent PA, Yantis S. Generalization of value-based attentional priority. Visual Cognition. 2012;20:647–658. doi: 10.1080/13506285.2012.679711.
12. Anderson BA, Laurent PA, Yantis S. Value-driven attentional priority signals in human basal ganglia and visual cortex. Brain Research. 2014;1587:88–96. doi: 10.1016/j.brainres.2014.08.062.
13. Anderson BA, Leal SL, Hall MG, Yassa MA, Yantis S. The attribution of value-based attentional priority in individuals with depressive symptoms. Cognitive, Affective, and Behavioral Neuroscience. 2014;14:1221–1227. doi: 10.3758/s13415-014-0301-z.
14. Anderson BA, Yantis S. Persistence of value-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance. 2013;39:6–9. doi: 10.1037/a0030860.
15. Berridge KC, Robinson TE. What is the role of dopamine in reward: hedonics, learning, or incentive salience? Brain Research Reviews. 1998;28:309–369. doi: 10.1016/s0165-0173(98)00019-8.
16. Brainard D. The psychophysics toolbox. Spatial Vision. 1997;10:433–436.
17. Bucker B, Belopolsky AV, Theeuwes J. Distractors that signal reward attract the eyes. Visual Cognition. 2015;23:1–24.
18. Carpenter KM, Schreiber E, Church S, McDowell D. Drug Stroop performance: relationships with primary substance of use and treatment outcome in a drug-dependent outpatient sample. Addictive Behaviors. 2006;31:174–181. doi: 10.1016/j.addbeh.2005.04.012.
19. Carver CS, White TL. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: The BIS/BAS scales. Journal of Personality and Social Psychology. 1994;67:319–333.
20. Della Libera C, Chelazzi L. Learning to attend and to ignore is a matter of gains and losses. Psychological Science. 2009;20:778–784. doi: 10.1111/j.1467-9280.2009.02360.x.
21. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annual Review of Neuroscience. 1995;18:193–222. doi: 10.1146/annurev.ne.18.030195.001205.
22. Failing MF, Theeuwes J. Exogenous visual orienting by reward. Journal of Vision. 2014;14(5):6, 1–9. doi: 10.1167/14.5.6.
23. Field M, Cox WM. Attentional bias in addictive behaviors: a review of its development, causes, and consequences. Drug and Alcohol Dependence. 2008;97:1–20. doi: 10.1016/j.drugalcdep.2008.03.030.
24. Field M, Mogg K, Zetteler J, Bradley BP. Attentional biases for alcohol cues in heavy and light social drinkers: the roles of initial orienting and maintained attention. Psychopharmacology (Berlin). 2004;173:116–123. doi: 10.1007/s00213-004-1855-1.
25. Folk CL, Remington RW, Johnston JC. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance. 1992;18:1030–1044.
26. Franken IHA, Muris P. BIS/BAS personality characteristics and college students' substance use. Personality and Individual Differences. 2006;40:1497–1503.
27. Graybiel AM. Habits, rituals, and the evaluative brain. Annual Review of Neuroscience. 2008;31:359–387. doi: 10.1146/annurev.neuro.29.051605.112851.
28. Guitart-Masip M, Fuentemilla L, Bach DR, Huys QJM, Dayan P, Dolan RJ, Duzel E. Action dominates valence in anticipatory representations in the human striatum and dopaminergic midbrain. Journal of Neuroscience. 2011;31:7867–7875. doi: 10.1523/JNEUROSCI.6376-10.2011.
29. Hickey C, Chelazzi L, Theeuwes J. Reward changes salience in human vision via the anterior cingulate. Journal of Neuroscience. 2010a;30:11096–11103. doi: 10.1523/JNEUROSCI.1026-10.2010.
30. Hickey C, Chelazzi L, Theeuwes J. Reward guides vision when it's your thing: Trait reward-seeking in reward-mediated visual priming. PLoS ONE. 2010b;5:e14087. doi: 10.1371/journal.pone.0014087.
31. Hickey C, Peelen MV. Neural mechanisms of incentive salience in naturalistic human vision. Neuron. 2015;85:512–518. doi: 10.1016/j.neuron.2014.12.049.
32. Itthipuripat S, Cha K, Rangsipat N, Serences JT. Value-based attentional capture influences context-dependent decision-making. Journal of Neurophysiology. 2015;114:560–569. doi: 10.1152/jn.00343.2015.
33. Krebs RM, Boehler CN, Egner T, Woldorff MG. The neural underpinnings of how reward associations can both guide and misguide attention. Journal of Neuroscience. 2011;31:9752–9759. doi: 10.1523/JNEUROSCI.0732-11.2011.
34. Krebs RM, Boehler CN, Woldorff MG. The influence of reward associations on conflict processing in the Stroop task. Cognition. 2010;117:341–347. doi: 10.1016/j.cognition.2010.08.018.
35. Le Pelley ME, Pearson D, Griffiths O, Beesley T. When goals conflict with values: Counterproductive attentional and oculomotor capture by reward-related stimuli. Journal of Experimental Psychology: General. 2015;144:158–171. doi: 10.1037/xge0000037.
36. Lubman DI, Peters LA, Mogg K, Bradley BP, Deakin JFW. Attentional bias for drug cues in opiate dependence. Psychological Medicine. 2000;30:169–175. doi: 10.1017/s0033291799001269.
37. Marissen MAE, Franken IHA, Waters AJ, Blanken P, van den Brink W, Hendriks VM. Attentional bias predicts heroin relapse following treatment. Addiction. 2006;101:1306–1312. doi: 10.1111/j.1360-0443.2006.01498.x.
38. Mine C, Saiki J. Task-irrelevant stimulus-reward association induces value-driven attentional capture. Attention, Perception, and Psychophysics. 2015;77:1896–1907. doi: 10.3758/s13414-015-0894-5.
39. Mogg K, Bradley BP, Field M, De Houwer J. Eye movements to smoking-related pictures in smokers: relationship between attentional biases and implicit and explicit measures of stimulus valence. Addiction. 2003;98:825–836. doi: 10.1046/j.1360-0443.2003.00392.x.
40. Moher J, Anderson BA, Song J-H. Dissociable effects of salience on attention and goal-directed action. Current Biology. 2015;25:2040–2046. doi: 10.1016/j.cub.2015.06.029.
41. Moher J, Lakshmanan BM, Egeth HE, Ewen JB. Inhibition drives early feature-based attention. Psychological Science. 2014;25:315–324. doi: 10.1177/0956797613511257.
42. Muhle-Karbe PS, Krebs RM. On the influence of reward on action-effect binding. Frontiers in Psychology. 2012;3:450. doi: 10.3389/fpsyg.2012.00450.
43. O'Connor DA, Upton DJ, Moore J, Hester R. Motivationally significant self-control: Enhanced action withholding involves the right inferior frontal junction. Journal of Cognitive Neuroscience. 2015;27:112–123. doi: 10.1162/jocn_a_00695.
44. Pearson D, Donkin C, Tran SC, Most SB, Le Pelley ME. Cognitive control and counterproductive oculomotor capture by reward-related stimuli. Visual Cognition. 2015;23:41–66.
45. Robinson TE, Berridge KC. The neural basis of drug craving: An incentive-sensitization theory of addiction. Brain Research Reviews. 1993;18:247–291. doi: 10.1016/0165-0173(93)90013-p.
46. Roper ZJJ, Vecera SP, Vaidya JG. Value-driven attentional capture in adolescents. Psychological Science. 2014;25:1987–1993. doi: 10.1177/0956797614545654.
47. Sali AW, Anderson BA, Yantis S. The role of reward prediction in the control of attention. Journal of Experimental Psychology: Human Perception and Performance. 2014;40:1654–1664. doi: 10.1037/a0037267.
48. San Martin R, Appelbaum LG, Huettel SA, Woldorff MG. Cortical brain activity reflecting attentional biasing toward reward-predicting cues covaries with economic decision-making performance. Cerebral Cortex. 2016;26:1–11. doi: 10.1093/cercor/bhu160.
49. Schevernels H, Bombeke K, Van der Borght L, Hopf J-M, Krebs RM, Boehler CN. Electrophysiological evidence for the involvement of proactive and reactive control in a rewarded stop-signal task. NeuroImage. 2015;121:115–125. doi: 10.1016/j.neuroimage.2015.07.023.
50. Stromark KM, Field NP, Hugdahl K, Horowitz M. Selective processing of visual alcohol cues in abstinent alcoholics: an approach-avoidance conflict? Addictive Behaviors. 1997;22:509–519. doi: 10.1016/s0306-4603(96)00051-2.
51. Theeuwes J. Perceptual selectivity for color and form. Perception and Psychophysics. 1992;51:599–606. doi: 10.3758/bf03211656.
52. Yantis S, Johnston JC. On the locus of visual selection: Evidence from focused attention tasks. Journal of Experimental Psychology: Human Perception and Performance. 1990;16:135–149. doi: 10.1037//0096-1523.16.1.135.
53. Yantis S, Jonides J. Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance. 1984;10:350–374. doi: 10.1037//0096-1523.10.5.601.
