. Author manuscript; available in PMC: 2026 Mar 7.
Published before final editing as: J Cogn Neurosci. 2026 Jan 26:1–20. doi: 10.1162/JOCN.a.2435

Expectation Exerts Flexible and Context-Dependent Influence on Conscious Object Recognition

Yuan-hao Wu 1,*, Brandon Chen 1,*, Biyu J He 1,2,3,4,5
PMCID: PMC12965712  NIHMSID: NIHMS2147966  PMID: 41570199

Abstract

Prior expectation powerfully shapes perception, yet its effects have been notoriously difficult to characterize due to the confounding influence of attention. In this study, we systematically investigated expectation’s influence on conscious visual object recognition while carefully disentangling it from attentional effects. Across three experiments, we observed that expectation’s effect varied markedly depending on the experimental context. When expectation was manipulated in isolation, it enhanced recognition sensitivity, mirroring the effects of attention. However, when expectation and attention were orthogonally manipulated, a surprising interaction effect emerged whereby observers were less likely to report recognition of an expected stimulus in the unattended condition—an effect attributed to swap errors. Finally, a stronger expectation cue in both space and time reduced swap errors and increased the likelihood of observers reporting seeing the expected stimuli. These findings reveal a remarkable degree of flexibility and context dependence in expectation’s influence on perception and shed new light on how attention and expectation jointly shape conscious object recognition.

Keywords: Conscious perception, visual recognition, spatial attention, expectation, signal detection theory

Introduction

To navigate a complex and dynamic world, our perceptual system must transform overwhelming sensory inputs into interpretable and actionable perception. This transformation is guided by two powerful cognitive processes — selective attention and expectation.

Selective attention enables us to prioritize behaviorally relevant stimuli while suppressing irrelevant information. For example, when searching for a specific book on a crowded shelf, knowing its title and spine color helps identify potential matches while filtering out unrelated books, thereby increasing search efficiency. This attentional modulation enhances observers’ sensitivity for detecting or discriminating stimuli (Bashinski & Bacharach, 1980; Buschman & Kastner, 2015; Carrasco, 2006; Maunsell, 2015). Extensive research has established that attention influences the early stages of information processing, particularly through the enhancement of signal-to-noise ratio in the neural encoding of sensory information and selective tuning of neural responses to attended stimuli (Cohen & Maunsell, 2009; McAdams & Maunsell, 2000; Mitchell, Sundberg, & Reynolds, 2009; Reynolds, Pasternak, & Desimone, 2000).

By contrast, expectation allows us to leverage prior knowledge about statistical regularities in the environment to generate predictions about likely upcoming events or stimuli. For example, a phone charger is more likely to be on the desk than under the kitchen sink. Such probabilistic knowledge associated with specific stimuli and properties of the environment can profoundly impact information processing (F. P. De Lange, Heilbron, & Kok, 2018; Summerfield & De Lange, 2014; Summerfield & Egner, 2009).

Unlike attention, the precise mechanisms through which expectation modulates perception remain controversial. Classic theoretical frameworks such as signal detection theory (SDT) hold that probabilistic knowledge primarily modulates the decision criterion during post-perceptual processing rather than early sensory processing (Green & Swets, 1966; Hautus, Macmillan, & Creelman, 2021; Rungratsameetaweemana & Serences, 2019). Supporting this view, several studies have shown that expectation biases decisions toward the expected stimuli, primarily through modulation in the late post-perceptual processing stage (Bang & Rahnev, 2017; Rungratsameetaweemana, Itthipuripat, Salazar, & Serences, 2018). However, this classic framework has been challenged by two recent lines of work. First, there is evidence suggesting that expectation can also improve perceptual sensitivity (S. Cheadle, Egner, Wyart, Wu, & Summerfield, 2015; Stein & Peelen, 2015). Second, contrary to a decisional-bias account, a prominent body of work suggests that expectation modulates neural representations in the earliest stages of cortical processing of sensory inputs (Aitken et al., 2020; Esterman & Yantis, 2010; Kok, Jehee, & de Lange, 2012; Kok, Mostert, & de Lange, 2017).

Thus, existing literature paints an inconsistent picture of expectation’s influence on perception—variably influencing bias or sensitivity, and early (sensory) or late (decisional) processing. At present, it is unclear under what circumstances expectation exerts one or the other influence on perceptual processing. By one account, the discrepancy across these experimental findings might have stemmed from experimental designs where probabilistic expectation cues signaled both the expected stimulus and its relevance, conflating the effects of expectation and attention (Rungratsameetaweemana & Serences, 2019; Summerfield & Egner, 2009; Zhao, Al-Aidroos, & Turk-Browne, 2013).

A promising approach to overcome this limitation is to manipulate attention and expectation in an orthogonal manner, rather than assessing each of them in isolation (Summerfield & De Lange, 2014). Distinct effects of attention and expectation on perception have been revealed in studies employing such an approach. For example, in a study where observers judged the presence of faint grating stimuli, attention enhanced sensitivity (d′), whereas expectation of stimulus presence led to a more liberal decision criterion (c) (Wyart, Nobre, & Summerfield, 2012). However, expectation has also been shown to enhance discrimination sensitivity (d′) (S. Cheadle et al., 2015). Yet other studies suggest that attention and expectation influence perception in an interactive manner rather than independently, with expectation’s effects on behavior and brain responses in sensory areas significantly mediated by attention (Jiang, Summerfield, & Egner, 2013) or attention reversing expectation’s suppression effect on sensory activation (Kok, Rahnev, Jehee, Lau, & de Lange, 2012).

In sum, despite great interest, a clear understanding of the relationship between the effects of attention and expectation on perception remains elusive. Previous studies have reported discrepant findings, suggesting that the two processes act either independently or interactively, and the reported effects of expectation on perception have likewise been inconsistent. We reasoned that the specific context and manner in which expectation is manipulated might have contributed to this range of effects. Our study therefore aimed to fill this gap by manipulating attention and expectation either in isolation or orthogonally, and by varying the specific properties of the expectation cue.

Building on recent studies examining conscious recognition of high-level objects at liminal contrasts (Levinson, Podvalny, Baete, & He, 2021; Podvalny, Flounders, King, Holroyd, & He, 2019; Wu, Podvalny, Levinson, & He, 2024), we conducted three experiments to interrogate attention and expectation’s influences on conscious object recognition. In Experiment 1, we examined the independent and interactive effects of attention and expectation on object recognition using an orthogonal-manipulation task design. Experiment 2 examined whether attention and expectation’s effects would change when each is manipulated in isolation. In Experiment 3, we tested how expectation’s effect on perception might vary with temporal and spatial features of the expectation cue. Our results revealed a striking degree of flexibility in expectation’s influence on perception, depending on the specific task condition and the expectation cue itself. These findings help to contextualize and reconcile previously discrepant findings and provide a more nuanced and comprehensive framework to understand expectation’s influence on conscious perception.

Results

General Methods

Across all three experiments, participants performed a threshold-level object recognition task using real-world object stimuli presented at liminal contrasts (Fig 1A) across two sessions (for details, see Methods). On each trial, two stimuli were concurrently presented in placeholders flanking the central fixation. The stimulus set consisted of 16 real images from four categories (human faces, houses, man-made objects, and animals) and their phase-scrambled counterparts lacking any meaningful content (examples shown in Fig 1B). Participants’ task was to report the stimulus category and their subjective recognition experience for the stimulus in one of the placeholders, as indicated by the probe cue during the response period. The basic task structure followed a well-established paradigm for investigating threshold-level object recognition used in previous neuroimaging studies (Levinson et al., 2021; Podvalny et al., 2019; Wu, Podvalny, & He, 2023; Wu et al., 2024). Here, instead of presenting a single stimulus at central fixation as in these previous studies, we presented two stimuli simultaneously, which allowed us to manipulate spatial attention and expectation independently via separate attention and expectation cues, following a well-established approach (Wyart et al., 2012).

Fig 1. Experimental paradigm and stimuli.


A Trial structure in Experiment 1. B Example real and scrambled images from each category. C Expectation and attention cues. The luminance of the square placeholders indicated the probability of a real versus scrambled image occurring at each location with 67% validity. The attention cue predicted the location of the task-relevant stimulus (target) with 67% validity. D Trial types and their classification into behavioral metrics. C: correct, I: incorrect, FA: false alarm, CR: correct rejection. Categorization reports for scrambled images were scored as correct or incorrect based on the category of the image from which the scrambled image was generated, reflecting the fact that category-specific low-level image features are preserved by the phase-scrambling procedure even though these images are devoid of any meaningful content. A–D Due to copyright restrictions, the original experimental stimuli are not displayed in the figure. The images shown here are representative examples sourced from http://www.pexels.com.

The categorization report involved a four-alternative forced-choice task, wherein we instructed participants to make a genuine guess if they could not identify the object. For the recognition report, participants indicated whether they perceived meaningful content in the stimulus (“yes”) or only low-level features, such as lines or abstract patterns (“no”). In addition, participants rated their confidence in their recognition report. Thus, this task probed both objective discrimination performance (4AFC task) and subjective perception of meaningful object content (object recognition task) (Levinson et al., 2021; Podvalny et al., 2019; Wu et al., 2024; Wyart et al., 2012), as well as a metacognitive confidence judgment of the subjective report.

To establish individual recognition thresholds, we first conducted an adaptive staircase procedure, using the QUEST approach (Watson & Pelli, 1983), which titrated the contrast of each exemplar real image individually until participants reported recognizing its content in approximately 50% of trials. Scrambled images were created by phase-shuffling the original images, preserving category-specific low-level features while eliminating any meaningful content (Podvalny et al., 2019). In the main task, each exemplar image was presented repeatedly across trials, and scrambled images were presented at the same contrasts as their matching original images.
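The phase-scrambling procedure described above can be sketched as follows. This is a minimal illustration of the general technique (function name and details are ours, not the authors' code), assuming grayscale images: the Fourier amplitude spectrum is kept, preserving category-specific low-level features, while phases are randomized, destroying recognizable object content.

```python
import numpy as np

def phase_scramble(img, rng=None):
    """Phase-scramble a grayscale image (illustrative sketch).

    Keeps the Fourier amplitude spectrum (contrast energy at each
    spatial frequency and orientation) while randomizing phase,
    which eliminates recognizable object content.
    """
    rng = np.random.default_rng(rng)
    # Random phases taken from the FFT of real-valued white noise are
    # Hermitian-symmetric, so the inverse transform stays (numerically) real.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    spectrum = np.fft.fft2(img)
    scrambled = np.fft.ifft2(
        np.abs(spectrum) * np.exp(1j * (np.angle(spectrum) + noise_phase))
    )
    return np.real(scrambled)
```

Because only unit-modulus phase factors are applied, the scrambled image has exactly the same amplitude spectrum as the original, matching the paper's requirement that scrambled images preserve low-level features at identical contrast.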

Experiment 1

In Experiment 1, we aimed to dissociate attention and expectation’s effects on high-level object recognition. To this end, we collected data from 19 participants and orthogonally manipulated attention and expectation during the object recognition task described above. Each participant completed 576 trials organized into 8 blocks with 72 trials each.

Following previous work (Summerfield & De Lange, 2014; Wyart et al., 2012), we operationalized attention as knowledge about the task relevance of stimuli and expectation as knowledge about the probability of object presence within the stimuli. These factors were manipulated using visual cues presented before stimulus presentation. Placeholders of different luminance levels (black versus white) indicated whether a real or scrambled image was more likely to appear at each location, with 2/3 probability (Fig 1C, left). The association between placeholder luminance and image type was counterbalanced across participants: for half of the participants, black placeholders signaled a 2/3 probability of the stimulus being a real image, and for the other half, white placeholders did. In this experiment, the two locations always had placeholders of opposite luminance (i.e., one white, one black). The spatial arrangement of these placeholders was held constant within each block and balanced across blocks, with blocks presented in randomized order; that is, expectation was manipulated at the block level.

Following the 3-s expectation cue, an arrow placed immediately to the left or right of the central fixation (‘attention cue’) indicated the placeholder where the task-relevant stimulus would likely appear, also with 67% validity (Fig 1C, right). After the stimulus offset, a central arrow in the response display (‘probe cue’) indicated the location of the stimulus participants should report on for a given trial (henceforth ‘target’). Consequently, the combination of the two cues provided four task conditions in which the response to a real/scrambled image was given: 1) attended and expecting real images; 2) attended and expecting scrambled images; 3) unattended and expecting real images; and 4) unattended and expecting scrambled images.

Following the SDT framework, we hypothesized that attention would primarily enhance participants’ ability to discriminate between signals (real images) and noise (scrambled images) in the probed stimulus, resulting in a higher sensitivity (d’), while the expectation of encountering a real image would increase participants’ propensity to report recognizing an object, resulting in a more liberal criterion (c).

Attention and expectation influence conscious perception in an interactive manner

For each attention-expectation condition listed above, we computed the proportion of “yes” responses to the recognition question for targets containing real and scrambled images, which constitute the hit rate (HR) and false alarm rate (FAR), respectively (Fig 1D; see Tables S1–S2). These two measures were used to compute sensitivity d′ (the ability to distinguish between real and scrambled images) and criterion c (the propensity to report seeing meaningful content), following the SDT framework. These behavioral metrics were subsequently entered into 2×2 repeated-measures ANOVAs with attention and expectation conditions as the two factors.
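Under SDT, d′ and c follow directly from HR and FAR: d′ = z(HR) − z(FAR) and c = −[z(HR) + z(FAR)]/2, where z is the inverse of the standard normal CDF. A minimal Python sketch of this computation is below; the log-linear correction for extreme rates is a common convention we assume for illustration, not necessarily the authors' choice.

```python
from statistics import NormalDist

def sdt_measures(n_hit, n_miss, n_fa, n_cr):
    """Compute sensitivity (d') and criterion (c) from trial counts.

    Adds 0.5 to each cell (log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 do not yield infinite z-scores;
    this correction is an assumption of this sketch.
    """
    z = NormalDist().inv_cdf                      # inverse standard normal CDF
    hr = (n_hit + 0.5) / (n_hit + n_miss + 1.0)   # hit rate
    far = (n_fa + 0.5) / (n_fa + n_cr + 1.0)      # false alarm rate
    d_prime = z(hr) - z(far)                      # discriminability
    criterion = -0.5 * (z(hr) + z(far))           # response criterion
    return d_prime, criterion
```

Higher d′ indicates better separation of real from scrambled images; a more positive c indicates a more conservative tendency to report recognition.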

The ANOVA on HR detected significant main effects of both attention and expectation (Fig 2A). HR was significantly higher in the attended condition (F(1,18) = 26.62, p = 6.6 × 10⁻⁵, ηp² = 0.60), i.e., when participants’ attention was directed to the target location (mean ± 95% CI: 0.63 ± 0.07), as compared to the nontarget location (0.43 ± 0.09). However, the direction of the expectation effect was surprising. HR was significantly lower when targets were expected to contain real images (0.26 ± 0.06) than when expected to contain scrambled images (0.30 ± 0.06; F(1,18) = 9.04, p = 7.58 × 10⁻³, ηp² = 0.33). This effect was primarily observed in the unattended condition, wherein participants’ attention was directed away from the targets, as indicated by a significant interaction (F(1,18) = 23.27, p = 1.36 × 10⁻⁴, ηp² = 0.56).

Fig 2. Perceptual behaviors in Experiment 1.


A From left to right: mean hit rate (HR), false alarm rate (FAR), sensitivity (d′), and criterion (c) for targets at the attended location (blue) or unattended location (orange), under the expecting-real-image (right column) or expecting-scrambled-image (left column) condition. The horizontal dashed line in the HR plot indicates the threshold recognition rate (50%). B From left to right: mean categorization accuracy in hit, false alarm (FA), miss, and correct rejection (CR) trials. Horizontal dashed lines indicate chance-level accuracy (25%). C Same as B but for mean confidence ratings. Error bars indicate 95% confidence intervals of means. *p < 0.05, **p < 0.01, ***p < 0.001.

FAR was significantly lower when participants’ attention was directed toward targets (0.24 ± 0.05) compared to when it was directed away from them (0.33 ± 0.08; F(1,18) = 10.32, p = 4.84 × 10⁻³, ηp² = 0.36). Mirroring the effect on HR, expectation also influenced FAR in an unanticipated way: FAR was significantly lower when real images were expected at the probed location (0.261 ± 0.055) than when scrambled images were expected (0.30 ± 0.06; F(1,18) = 7.62, p = 2.39 × 10⁻³, ηp² = 0.30). Again, there was a significant interaction between attention and expectation (F(1,18) = 12.45, p = 0.002, ηp² = 0.41), with expectation primarily influencing FAR when participants’ attention was directed away from the targets.

These results hint at the possibility that attention enhanced sensitivity by increasing HR and reducing FAR, while expecting a real stimulus at the probed location counterintuitively resulted in a more conservative (i.e., larger) criterion by reducing both HR and FAR, specifically in the unattended condition. We next directly confirmed these impressions by assessing attention and expectation’s effects on d′ and c.

The 2×2 ANOVA on sensitivity (d′) revealed a significant main effect of attention (Fig 2A), with higher d′ when targets were within the locus of attention (1.14 ± 0.21) than outside of it (0.31 ± 0.19; F(1,18) = 41.71, p = 4 × 10⁻⁶, ηp² = 0.699). In contrast, neither a main effect of expectation (F(1,18) = 0.03, p = 0.86, ηp² = 0.002) nor an interaction effect (F(1,18) = 0.22, p = 0.65, ηp² = 0.01) was observed.

We found no evidence for a main effect of attention on criterion (F(1,18) = 2.78, p = 0.11, ηp² = 0.13). Instead, a significant main effect of expectation was identified, with the criterion shifting higher when targets were expected to contain real images (0.33 ± 0.16) than scrambled images (0.22 ± 0.18; F(1,18) = 16.16, p = 8.04 × 10⁻⁴, ηp² = 0.47). This effect was primarily driven by trials where attention was directed away from the targets, as indicated by the significant interaction effect (F(1,18) = 17.97, p = 4.93 × 10⁻⁴, ηp² = 0.50).

Together, these analyses of sensitivity and criterion confirm the above impressions from HR and FAR: while attention enhanced sensitivity as predicted, expecting a real stimulus at the target location paradoxically increased the criterion under inattention, leading to fewer reports of recognizing meaningful content.

Attention and expectation’s influences on discrimination performance and confidence

Next, we assessed attention and expectation’s influences on discrimination performance. To this end, we conducted 2×2 ANOVAs on categorization accuracy, separately for hit, miss, false alarm (FA), and correct rejection (CR) trials. These conditions are defined by the combination of image type (real vs. scrambled) and subjective recognition response (yes vs. no) (see Fig 1D). Because scrambled images are devoid of any meaningful content, responding “yes” to the recognition question constitutes a false alarm. In addition, because these images preserve low-level features that differ between categories, previous studies have shown that categorization accuracy—as scored according to the original image used to generate the scrambled image—can nevertheless be above chance level, suggesting that low-level image features likely biased false perceptions in FA trials (Levinson et al., 2021; Podvalny et al., 2019).
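The trial classification and scoring logic described above amounts to crossing image type with the recognition report, and scoring categorization against the source image's category. A minimal sketch (function and variable names are ours, for illustration only):

```python
def classify_trial(target_is_real, reported_yes):
    """Map image type x recognition report onto SDT trial types (cf. Fig 1D)."""
    if target_is_real:
        return "hit" if reported_yes else "miss"
    return "FA" if reported_yes else "CR"

def score_categorization(reported_category, source_category):
    """Score a 4AFC categorization report.

    Scrambled images are scored against the category of the original
    image they were generated from, since phase-scrambling preserves
    category-specific low-level features.
    """
    return reported_category == source_category
```

For example, a "yes" response to a scrambled image is classified as a false alarm, yet its categorization report can still be scored, which is what allows above-chance accuracy on FA trials to reveal feature-driven false perceptions.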

For hit trials, there was a significant main effect of attention (F(1,18) = 43.94, p = 0.3 × 10⁻⁵, ηp² = 0.71; Fig 2B, column 1), with higher categorization accuracy when participants’ attention was directed toward the targets (0.81 ± 0.07) compared to when it was directed away from them (0.44 ± 0.12). In contrast, there was neither a significant effect of expectation (F(1,18) = 0.71, p = 0.41, ηp² = 0.04) nor an interaction effect (F(1,18) = 3.21, p = 0.09, ηp² = 0.15).

A similar pattern was observed in miss trials (Fig 2B, column 3): Categorization accuracy was significantly higher when the target stimuli were within the locus of attention (0.39 ± 0.06) than outside of it (0.27 ± 0.03; F(1,18) = 12.95, p = 0.002, ηp² = 0.42). Again, neither expectation effects (F(1,18) = 0.83, p = 0.37, ηp² = 0.04) nor interaction effects (F(1,18) = 2.10, p = 0.17, ηp² = 0.10) were observed. For both false alarm and correct rejection trials (Fig 2B, columns 2 and 4), no significant main or interaction effects were observed.

These results suggest that spatial attention improved stimulus processing, leading to enhanced categorization accuracy whether subjective recognition was successful or not, but did not influence whether preserved low-level features in scrambled images guided categorization responses.

Lastly, we assessed the effects of attention and expectation on confidence. For trials in which participants gave objectively correct recognition reports (hits and correct rejections), confidence ratings were significantly higher under the attended condition than the unattended condition (Fig 2C, columns 1 and 4; hit: attended: 3.11 ± 0.26, unattended: 2.85 ± 0.22, F(1,18) = 15.24, p = 0.001, ηp² = 0.46; CR: attended: 2.72 ± 0.39, unattended: 2.59 ± 0.41, F(1,18) = 4.78, p = 0.042, ηp² = 0.21). By contrast, in false alarm trials, confidence ratings decreased under the attended condition (2.35 ± 0.28) compared to the unattended condition (2.64 ± 0.30, F(1,18) = 7.59, p = 0.013, ηp² = 0.30; Fig 2C, column 2). No significant attention effect was observed in miss trials (Fig 2C, column 3). In addition, no significant effects of expectation or significant interaction effects were observed in any trial type.

Since we asked participants to rate confidence about their subjective recognition responses, these results show that attention improved metacognitive accuracy, boosting confidence for objectively correct recognition responses (Hit and CR) and decreasing confidence for objectively incorrect recognition responses (FA).

Together, these findings reveal that attention influences multiple aspects of object recognition. Attention enhanced sensitivity, enabling observers to better distinguish between real and scrambled images when attention was directed toward them; it also improved the ability to assign targets to the correct stimulus categories and enhanced confidence for objectively correct responses. These results align well with the framework that attention facilitates sensory processing by improving the signal-to-noise ratio (Buschman & Kastner, 2015; Maunsell, 2015), but they additionally reveal that attention’s effects permeate subjective recognition, objective discrimination, and metacognitive judgments.

In contrast, in this experiment, expectation influenced object recognition primarily through its interaction with attention. Notably, expectation affected object recognition when attention was directed away from the targets. Under these conditions, observers were less likely to report recognizing meaningful content when the target had a higher probability (67%) of being a real image as opposed to a scrambled image, a paradoxical finding. Indeed, this observation was opposite to our initial hypothesis that expecting the target to contain a real image would result in a more liberal criterion (i.e., more “recognized” answers). Given that this puzzling effect occurred when attention was directed away from the targets, one possible explanation is that perceptual reports were biased by the nontargets that were initially attended to. This possibility is explored in the next section.

Perceptual reports are influenced by the task-irrelevant nontarget stimuli

To examine whether the observed interaction effects were mediated by the nontargets, we divided trials based on whether the concurrently presented target (i.e., the probed stimulus) and nontarget (i.e., the non-probed stimulus) were real or scrambled images. This resulted in four groups of trials: 1) real target & real nontarget; 2) real target & scrambled nontarget; 3) scrambled target & real nontarget; and 4) scrambled target & scrambled nontarget. If nontargets indeed influenced recognition reports, then recognition reports on unattended trials should vary depending on the nontarget image type. To test this, we computed recognition rates for each group of trials and examined the effects of attention and expectation in each.

When both targets and nontargets were real images, recognition rates were higher in the attended (0.631 ± 0.069) than unattended condition (0.52 ± 0.10; F(1,18) = 5.87, p = 0.026, ηp² = 0.25; Fig 3A, column 1). This attention effect was also observed when real targets were paired with scrambled nontargets (attended: 0.63 ± 0.08, unattended: 0.34 ± 0.11; F(1,18) = 31.45, p = 2.5 × 10⁻⁵, ηp² = 0.64; Fig 3A, column 2). Importantly, when comparing recognition rates between real targets paired with real or scrambled nontargets (Fig 3A, columns 1–2), recognition rates for attended targets remained stable regardless of nontarget type (t(18) = 0.09, p = 0.93); however, the recognition rate for unattended targets was significantly higher when paired with real nontargets than when paired with scrambled nontargets (t(18) = 4.06, p = 0.7 × 10⁻³). This suggests that in the unattended condition, recognition of real targets was influenced by the nontarget stimuli that were initially attended.

Fig 3. Nontargets biased perceptual reports.


A Mean proportion of “yes” responses collapsed by target-nontarget pairing. Columns from left to right depict results for trials with real target – real nontarget, real target – scrambled nontarget, scrambled target – real nontarget, and scrambled target – scrambled nontarget pairings. B Same as A, but for mean categorization accuracy scored based on the object category of the concurrently presented nontargets.

We observed a similar pattern of attention effect for scrambled targets. The recognition rate for attended targets (0.23 ± 0.06) was significantly lower than that for unattended targets (0.43 ± 0.11; F(1,18) = 17.38, p = 0.58 × 10⁻⁴, ηp² = 0.49; Fig 3A, column 3), consistent with attention’s effect of enhancing d′. Crucially, when comparing recognition rates for scrambled targets based on nontarget type (Fig 3A, columns 3–4), we observed a similar pattern as seen earlier. While the recognition rate for attended targets did not differ with nontarget type (real nontarget: 0.23 ± 0.06; scrambled nontarget: 0.24 ± 0.05; t(18) = −0.29, p = 0.774), the recognition rate for unattended targets was significantly higher when paired with real nontargets (0.431 ± 0.106) than when paired with scrambled nontargets (0.22 ± 0.05; t(18) = 4.85, p = 1.29 × 10⁻⁴). Across all conditions, no significant main effects of expectation or significant interaction effects were observed.

These findings support the idea that recognition reports for the unattended targets were biased by nontargets that were initially attended to. Specifically, participants were more likely to report seeing meaningful contents in targets if nontargets were real images. This raised the question of whether such response biases were also prevalent in categorization reports. To assess this possibility, we recomputed categorization accuracy according to the category of the nontarget. Since the nontarget category was chosen randomly and independently from the target category, significant deviations from chance level would indicate that categorization reports were influenced by nontargets.

When targets were attended, the recomputed categorization accuracy did not deviate from chance level regardless of the target-nontarget pairing (Fig 3B, columns 1–4, blue), indicating that categorization reports for attended targets were not influenced by the nontarget category.

Notably, when targets were unattended and paired with real nontargets, the recomputed categorization accuracy was significantly higher than chance level. This effect was consistent regardless of whether targets were real images (mean accuracy: 0.60 ± 0.12; t(18) = 5.97, p = 1.19 × 10⁻⁵; Fig 3B, column 1) or scrambled images (0.59 ± 0.13; t(18) = 5.55, p = 2.9 × 10⁻⁵; Fig 3B, column 3). This suggests that in the unattended condition, participants tended to give categorization reports that aligned with the task-irrelevant nontarget that was initially attended to.

In contrast, when unattended targets were paired with scrambled nontargets, categorization accuracies remained indistinguishable from those for attended targets (Fig 3B, columns 2 and 4; p = 0.813 and p = 0.794, respectively) and were close to chance level. Across all conditions, no significant expectation effects were observed.

The results above demonstrate that attentional misallocation systematically influences perceptual judgments: participants’ object category reports were biased toward the contents in the attended but task-irrelevant stimuli. Crucially, these response biases cannot be attributed to improper task execution. Participants’ confidence ratings exhibited the expected pattern: higher confidence for correct recognition of attended targets compared to unattended ones, indicating proper task understanding and engagement (Fig 2C, hit and CR trials). This alignment between confidence patterns and attention conditions indicates that the observed response biases reflect genuine experimental effects rather than task confusion.

Instead, we interpret the observed response biases as swap errors (Bays, 2016; Bays, Schneegans, Ma, & Brady, 2024)—a phenomenon well-documented in working memory research where features from task-irrelevant items are misattributed to task-relevant items due to failures in stimulus encoding or feature binding. The predominant occurrence of swap errors in the unattended condition herein reveals how attentional capture by task-irrelevant stimuli can fundamentally alter perceptual judgments.

Swap errors also explain the previously puzzling attention × expectation interaction effect on criterion (Fig 2A, column 4). Because the expectation cues for the two locations were anticorrelated in this experiment, in the ‘expect real’ condition the placeholders indicated a high probability of a real image at the probed location but a low probability at the non-probed location. Therefore, on unattended trials, participants might have misattributed their expectation of seeing a scrambled image in the initially attended (but non-probed) location to the unattended (but probed) target. This misattribution would lead to a conservative criterion under the ‘expect real’ but unattended condition, contrary to the optimal strategy of lowering the criterion given the probed location’s higher probability of containing a real image.

Experiment 2

Experiment 1 revealed that expectation modulates object recognition mainly through its interaction with attention rather than through independent effects. This finding is at odds with previous studies that demonstrated significant expectation-driven effects (Esterman & Yantis, 2010; Kok, Jehee, et al., 2012; Kok et al., 2017; Stein & Peelen, 2015). We speculated that this discrepancy may be due to differences in task designs. In particular, these previous studies typically manipulated only expectation, but it has been suggested that expectation’s effect in such scenarios is often confounded by attention (Rungratsameetaweemana et al., 2018; Rungratsameetaweemana & Serences, 2019). In contrast, our Experiment 1 explicitly disentangled the effects of attention and expectation on recognition behavior.

To test this hypothesis, we conducted Experiment 2 in six additional participants (N=6) using a modified design that manipulated attention and expectation in separate blocks while maintaining the basic trial structure of Experiment 1. Thus, while Experiment 1 manipulated attention and expectation orthogonally, Experiment 2 manipulated them independently. Each participant completed 6 attention blocks and 6 expectation blocks of 48 trials each, totaling 576 trials.

In attention blocks, we eliminated the expectation manipulation by presenting gray placeholders of equal luminance at both the left and right locations on all trials (Fig 4A). The gray placeholders signaled that, at each location, the stimulus was equally likely (0.5) to be a real or a scrambled image. Because this equal probability held at both locations, it eliminated any expectation of seeing a particular image type at a given location. The attention manipulation used in Experiment 1 was preserved: prior to stimulus presentation, an arrow to the left or right of the fixation cross directed attention to the likely target location with 67% validity.

Fig 4. Experimental paradigms used in Experiment 2.


A Task structure in attention blocks. Expectation manipulation was removed by using placeholders that indicated an equal probability of a real or scramble image appearing in either visual field. B Task structure in expectation blocks. Attention manipulation was eliminated by removing the attention cue, while the black and white expectation cues seen in Experiment 1 remained preserved.

In expectation blocks, the attention manipulation was removed by omitting the attention cue, while the expectation manipulation used in Experiment 1 was preserved (Fig 4B). On each trial, placeholders of opposite luminance (black vs white) were presented simultaneously in the left and right visual fields. The luminance of each placeholder predicted the likely image type (real vs scrambled) at its location with 2/3 probability. As in Experiment 1, the placeholders always had opposite luminance, ensuring that they predicted different image types; the spatial arrangement of black and white placeholders was held constant within each block, balanced across blocks, and presented in a randomized order. The probe cue pointed to either location with equal probability.

Attention and expectation induce similar behavioral patterns

Data from the attention blocks revealed effects consistent with findings from Experiment 1 (Fig 5Ai). When attention was directed toward targets, participants showed higher HR (attended: 0.72 ± 0.18, unattended: 0.54 ± 0.08, W = 0, p=0.031,BF10=8.92) and lower FAR (attended: 0.27 ± 0.04, unattended: 0.49 ± 0.05, W = 0, p=0.031,BF10=338.74) than when attention was directed away. This resulted in higher recognition sensitivity for attended targets (1.33 ± 0.63) compared to unattended ones (0.13 ± 0.11, W = 0,p=0.031,BF10=20.03), while criterion remained unchanged (attended: −0.03 ± 0.34, unattended: −0.03 ± 0.16, W = 8, p=0.688,BF10=0.37). Attention also significantly improved categorization accuracy in hit trials (attended: 0.84 ± 0.19, unattended: 0.33 ± 0.1, W = 0, p=0.031,BF10=2342.72), with no significant effects in FA, miss, or CR trials (Fig 5Aii).
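The sensitivity and criterion values reported here follow the standard signal detection theory definitions, d’ = z(HR) − z(FAR) and c = −(z(HR) + z(FAR))/2, where z is the inverse standard normal CDF. As a minimal illustration (this is not code from the study, and per-subject values were computed before averaging, so plugging in the group-mean rates will not exactly reproduce the reported group means):

```python
from statistics import NormalDist

def sdt_metrics(hr, far):
    """Compute d' and criterion c from hit rate and false alarm rate.

    d' = z(HR) - z(FAR);  c = -(z(HR) + z(FAR)) / 2,
    where z is the inverse of the standard normal CDF.
    """
    z = NormalDist().inv_cdf
    return z(hr) - z(far), -0.5 * (z(hr) + z(far))

# Illustrative input: the group-mean rates from the attended condition
# above (HR = 0.72, FAR = 0.27).
d_prime, c = sdt_metrics(0.72, 0.27)
```

Note that hit or false alarm rates of exactly 0 or 1 make z undefined, so in practice a correction (e.g., a log-linear adjustment of the counts) is typically applied before computing these measures.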

Fig 5. Perceptual behavior in Experiment 2.


A i) Mean HR, FAR, d’ and c for attended (blue) and unattended targets (orange). Horizontal dashed lines indicate threshold recognition rate (50%). ii) Mean categorization accuracy in hit, FA, miss, and CR trials under attended and unattended conditions. Horizontal dashed lines display chance accuracy level. B Same as A, but plotted as a function of expectations (real (green) vs scrambled (purple)). Error bars indicate 95% confidence interval of means.

Analysis of expectation blocks revealed a somewhat similar pattern (Fig 5Bi). Expecting a real image slightly enhanced hit rates and slightly reduced false alarm rates, though neither effect reached statistical significance (HR: W = 2, p=0.094,BF10=0.91; FAR: W = 3, p=0.156,BF10=0.93). Unlike Experiment 1, we observed a trend toward increased sensitivity (d’) when targets were expected to be real (0.854 ± 0.610) versus scrambled (0.46 ± 0.33), though this effect did not reach statistical significance (W=2, p=0.094,BF10=2.83). No significant effects were observed for criterion (expect real: 0.14 ± 0.23, expect scrambled: 0.03 ± 0.16, W = 8, p=0.688,BF10=0.53). In addition, expectation did not have a significant effect on categorization accuracy for any trial type (Fig 5Bii).

Experiment 2 revealed two key findings. First, it confirmed the robust effects of attention on perceptual sensitivity and categorization accuracy observed in Experiment 1, demonstrating that these effects persist when attention is manipulated in isolation. Second, it revealed that, when manipulated alone, the expectation of encountering a real image showed a trend toward enhancing perceptual sensitivity d’ while having no effect on criterion, a pattern that closely paralleled that of attention.

Notably, this expectation-driven enhancement of d was absent in Experiment 1 when attention was properly controlled for through orthogonal manipulation, suggesting that the effect of expectation observed in Experiment 2 likely stemmed from an attention confound such that the location associated with a higher probability of having a real image presented received more spatial attention. Overall, our results are consistent with the idea that when manipulated alone, expectation effects may be confounded by attention (Rungratsameetaweemana et al., 2018; Rungratsameetaweemana & Serences, 2019).

Experiment 3

Experiments 1 and 2 revealed that the experimental setting significantly influences how expectation affects threshold-level object recognition. When expectation was manipulated alone (Experiment 2), it had an effect similar to attention, enhancing sensitivity. When expectation was manipulated orthogonally to attention (Experiment 1), expecting a real image had the counterintuitive effect of increasing the detection criterion on unattended trials—an effect that further analysis attributed to swap errors, because the expectation cue was anti-correlated between the two spatial locations.

To understand the task conditions under which swap errors can occur, and to reconcile the above findings with previous work showing that expectation of a stimulus lowers the detection criterion under an orthogonal manipulation of attention and expectation cues (Wyart et al., 2012), we conducted Experiment 3 (N=6) to examine how changes in the temporal and spatial properties of expectation cues affect recognition behavior.

Experiment 3 incorporated two key modifications to Experiment 1. First, the placeholders remained visible during the response period, rather than disappearing after stimulus presentation. This modification aligned with the earlier study (Wyart et al., 2012). In designing Experiment 1, we had reasoned that removing the placeholders during the response period would prevent participants from using the placeholders—which provided probabilistic information—to directly inform their response strategies, thereby providing a cleaner picture of how expectation influences perception. In Experiment 3, we tested whether this paradigm difference might have contributed to the opposite patterns of expectation’s influence on detection criterion between Experiment 1 and the earlier study.

The second modification addressed an interpretative limitation in Experiment 1’s design. Similar to the earlier study (Wyart et al., 2012), Experiment 1 employed local expectation cues with anti-correlated placeholders at the two locations. This design choice created an ambiguity: observed effects could have stemmed from either the expectation of a real image at the high-probability location or the expectation of a scrambled image at the low-probability location.

To remove this ambiguity, Experiment 3 included two distinct cueing approaches: local and global. The design of the local expectation (LE) cue blocks was identical to that of Experiment 1 except for the continued presentation of expectation cues during the response period (Fig 6A). As in Experiment 1, expectation cues of opposite luminance (black vs white) were presented at the two locations, with each cue predicting the occurrence of either a real or a scrambled image at its respective location with 2/3 probability. The expectation cues were fixed within a block and balanced across LE blocks. Accordingly, half of the LE blocks featured a high probability of real images on the left and a low probability of real images on the right, while the other half had the reverse configuration.

Fig 6. Experimental paradigms used in Experiment 3.


A Task structure in local expectation cue blocks. As in Experiment 1, probabilities of encountering real or scrambled images were always anti-correlated across visual fields. B Task structure in global expectation cue blocks. Placeholders had the same luminance (either both black or both white) across both locations, indicating equal probability of encountering a particular image type. The placeholders’ luminance was fixed within blocks and balanced across blocks.

In contrast, the global expectation (GE) cue blocks also featured sustained expectation cues, but the two placeholders had identical luminance, providing no spatially specific information. Instead, the uniform luminance signaled the same likelihood of encountering a real image or a scrambled image across both locations. On each trial, placeholders were either both black or both white, predicting the occurrence of the associated image type (real or scramble) with a 2/3 probability. The expectation cues were kept constant across trials within a block and balanced across blocks. Consequently, half of the GE blocks were composed of trials with a high probability of real images at both locations, while the remaining blocks consisted of trials with a low probability of real images at both locations (Fig 6B).

As in Experiment 1, the association of luminance and image type (Fig 1C, left) was counterbalanced across participants. Each participant (N=6) completed a total of 576 trials in Experiment 3, consisting of 4 LE blocks and 4 GE blocks, with 72 trials per block.

Modifications of the expectation cue reversed the expectation effect seen in Experiment 1

Results from Experiment 3 revealed consistent patterns across both local (LE) and global (GE) expectation cue blocks, while also uncovering important differences from Experiment 1’s findings.

In LE blocks, we replicated the attention effect on recognition sensitivity d’ observed in Experiment 1. Participants showed significantly higher sensitivity in the attended (1.46 ± 0.7) than the unattended condition (0.49 ± 0.41; F1,5=15.92,p=0.01,ηp2=0.76,BF10=108.8; Fig 7Ai). This enhancement primarily stemmed from a higher HR in the attended than the unattended condition (F1,5=20.28,p=0.006,ηp2=0.8,BF10=311.7). In contrast, neither attention nor expectation had a significant effect on criterion (attention: F1,5=3.88,p=0.106,ηp2=0.44,BF10=3.07; expectation: F1,5=0.26,p=0.64,ηp2=0.05,BF10=0.35).

Fig 7. Perceptual behaviors in Experiment 3.


A Behavioral results in local expectation cue blocks. i) mean HR, FAR, d’ and c for targets under different attention (attended (blue) vs unattended (orange)) and expectation (real vs scrambled) conditions. Horizontal dashed lines in the first column indicate threshold recognition rate (50%). ii) mean categorization accuracy in hit, FA, miss, and CR trials. Horizontal dashed lines indicate chance level accuracy (25%). B Same as A, but for global expectation cue blocks. Error bars indicate 95% confidence intervals of means. *p<0.05,**p<0.01.

For categorization accuracy, as in Experiment 1, we observed a trend effect suggesting a higher categorization accuracy in the attended condition in hit trials (attended: 0.76 ± 0.19; unattended: 0.54 ± 0.21; F1,5=5.04,p=0.074,ηp2=0.5,BF10=4.34; Fig 7Aii). No attention effect was observed in the other trial types, and no expectation effect was observed in any trial type.

The GE blocks exhibited similar attention effects (Fig. 7B). Hit rates were significantly higher in the attended than unattended condition (0.81 ± 0.12 vs. 0.49 ± 0.13; F1,5=24.47,p=0.004,ηp2=0.83,BF10=238.5), while FA rates were similar between attention conditions, resulting in a marginally higher d’ in the attended (1.37 ± 0.99) than unattended condition (0.46 ± 0.41; F1,5=5.05,p=0.074,ηp2=0.50,BF10=6.21), as well as a more liberal criterion in the attended (−0.3 ± 0.29) than unattended condition (0.26 ± 0.2; F1,5=17.28,p=0.009,ηp2=0.78,BF10=111.3). Attention also significantly increased categorization accuracy in both hit (F1,5=7.23,p=0.043,ηp2=0.59,BF10=12.21) and FA trials (F1,5=13.21,p=0.015,ηp2=0.73,BF10=4.38).

Particularly noteworthy in the GE blocks was a reversal of the expectation effect compared to that observed in Experiment 1. Participants showed a trend toward a more liberal criterion when expecting real images (−0.161 ± 0.263) compared to scrambled images (0.12 ± 0.23; F1,5=4.51,p=0.087,ηp2=0.47,BF10=3.06; Fig 7Bi column 4). The liberal shift in criterion was largely driven by an increased FAR when expecting real images (0.41 ± 0.11) versus scrambled images (0.28 ± 0.1; F1,5=17.06,p=0.009,ηp2=0.77,BF10=19.66). As in the LE blocks, expectation did not have an effect on categorization accuracy (Fig 7Bii).

Overall, both LE and GE blocks yielded qualitatively similar attention effects, replicating Experiment 1’s findings on attention’s influence on recognition sensitivity d’. By contrast, the interaction effect of attention and expectation on detection criterion observed in Experiment 1 disappeared entirely in Experiment 3. In GE blocks, we even observed a reversal of this pattern: participants were more likely to report recognizing an object when expecting a real image, consistent with earlier reports of response bias toward the expected percepts (Bang & Rahnev, 2017; de Lange, Rahnev, Donner, & Lau, 2013; Rahnev, Lau, & de Lange, 2011; Wyart et al., 2012). This suggests that while sustained cue visibility eliminated the interaction effects seen in Experiment 1, the addition of a location-independent global cue produced an effect pattern that matched our initial predictions and previous work. Given that we attributed the interaction effects on criterion in Experiment 1 to swap errors, these new findings prompt questions about the extent to which swap errors played a role in Experiment 3.

Sustained expectation cues mitigate response biases induced by the task-irrelevant nontarget stimuli

To evaluate the potential role of swap errors in Experiment 3, we analyzed how recognition rates and categorization accuracy varied with nontargets, following the analytical approach from Experiment 1.
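The logic of this analysis can be sketched as follows (a schematic with made-up trial records and category labels, not the study's actual categories or analysis code): each categorization response is scored twice, once against the target's category and once against the nontarget's, and swap errors surface as above-chance accuracy under the nontarget-based scoring.

```python
from statistics import mean

# Hypothetical trial records: the reported category, the target's true
# category, and the nontarget's category (all labels illustrative).
trials = [
    {"report": "face",   "target": "house",  "nontarget": "face"},
    {"report": "animal", "target": "animal", "nontarget": "house"},
    {"report": "house",  "target": "face",   "nontarget": "house"},
]

def accuracy(trials, key):
    """Proportion of reports matching the category given by `key`."""
    return mean(t["report"] == t[key] for t in trials)

target_acc = accuracy(trials, "target")        # scored against targets
nontarget_acc = accuracy(trials, "nontarget")  # scored against nontargets
# Swap errors show up as nontarget_acc exceeding chance (0.25 with four
# categories), as in Experiment 1's unattended trials with real nontargets.
```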

In LE blocks, as in Experiment 1, attention enhanced the recognition rates for real targets (Fig. 8Ai, columns 1–2), which reached significance when real targets were paired with scrambled nontargets (F1,5=29.67,p=0.003,ηp2=0.86,BF10=269.6). Importantly, unlike Experiment 1, recognition rates for real targets remained stable regardless of whether paired with real or scrambled nontargets (attended: W=2,p=0.56,BF10=0.52; unattended: W=7,p=0.56,BF10=0.64).

Fig 8. Nontargets’ influence on perceptual behaviors in Experiment 3.


A Local expectation cue blocks: i) Proportion of “yes” responses and ii) categorization accuracy scored based on nontargets’ categories, conditioned on target-nontarget combination. Columns from left to right depict results for trials with real target – real nontarget, real target – scrambled nontarget, scrambled target – real nontarget, and scrambled target – scrambled nontarget pairings. B Same as A, but for global expectation cue blocks. Error bars indicate 95% confidence intervals of means. *p<0.05,**p<0.01

For scrambled image targets, no significant effect of attention was observed (Fig. 8Ai, columns 3–4). Similar to real targets, recognition rates remained stable across nontarget types for both attended (W=2, p=0.09,BF10=1.57) and unattended (W=8,p=0.69,BF10=0.42) conditions. These results suggest that recognition rates for both real and scrambled targets were not influenced by nontargets regardless of the locus of attention, unlike the pattern seen in Experiment 1.

When categorization responses were scored based on nontarget categories, accuracies did not differ significantly from chance level across attended and unattended conditions for all target-nontarget pairings (Fig 8Aii). Importantly, in contrast to Experiment 1, accuracies in trials where targets were unattended and paired with real nontargets also remained near chance level (real targets: 0.39 ± 0.34, W=8,p=0.69,BF10=0.68; scrambled targets: 0.35 ± 0.33, W=3,p=0.58,BF10=0.58, Fig 8Aii, columns 1 and 3), indicating markedly reduced swap errors relative to Experiment 1 (see Fig 3B, columns 1 and 3). Since the LE blocks were identical to Experiment 1 except for the presence of placeholders during the response period, this single manipulation appears to have substantially diminished swap errors.

Analysis of GE blocks yielded a similar pattern of results. Attention enhanced the recognition rates for real image targets (when paired with real nontargets: F1,5=12.97,p=0.015,ηp2=0.72,BF10=35.25; when paired with scrambled nontargets: F1,5=24.19,p=0.004,ηp2=0.83,BF10=130.4; Fig 8Bi, columns 1–2). Similar to LE blocks, recognition rates remained stable across nontarget types in both the attended (W=4,p=0.22,BF10=0.77) and unattended (W=6,p=0.44,BF10=0.6) conditions. For scrambled image targets, no attention effect on recognition rate was observed (Fig 8Bi, columns 3–4). Again, recognition rate did not vary with nontarget type in either the attended (W=6,p=0.44,BF10=0.52) or unattended (W=3,p=0.17,BF10=0.98) condition. When analyzing categorization responses based on nontarget categories, no significant evidence for swap errors was observed: accuracies for unattended targets paired with real nontargets did not differ significantly from chance (real targets: 0.47 ± 0.29, W=2,p=0.09,BF10=1.39; scrambled targets: 0.38 ± 0.34, W=3.5,p=0.34,BF10=0.62; Fig 8Bii, columns 1 and 3). Accuracies were also nonsignificant across all other target-nontarget pairings in both attended and unattended conditions (Fig 8Bii).

The modifications of expectation manipulation in Experiment 3 yielded two key findings. First, they eliminated and reversed the counterintuitive effect of expectation on criterion seen in Experiment 1, restoring the classic pattern of expectation effect operating independently of attention and reducing—rather than increasing—criterion, at least under the global expectation cues. Second, swap errors seen in the unattended condition in Experiment 1 were significantly reduced.

Because the LE blocks in Experiment 3 differed from Experiment 1 only by presenting the expectation cue during the response period, and the GE blocks differed from the LE blocks only in making the expectation cues stronger and non-location-dependent, these findings carry two important messages. First, the classic pattern of expectation’s effect (operating independently from attention and lowering criterion) might be driven primarily by a response bias. Second, expectation’s effect on perception can be quite subtle and easily influenced by attention (thereby producing the interaction effect in Experiment 1). This second point is also consistent with our Experiment 2, which showed that when manipulated in isolation, expectation had an effect similar to attention, possibly because participants paid more spatial attention to the location where a real image was more probable.

Discussion

In this study, we investigated how attention and expectation shape conscious recognition of real-world objects. Across three experiments, we found a robust effect of attention on observers’ detection sensitivity, consistent with earlier studies. In contrast, expectation primarily influenced object recognition by shifting the criterion, with its effect varying substantially depending on the specific experimental condition. Below, we summarize and discuss the key findings in detail.

Across all three experiments, attention robustly enhanced sensitivity (d’) by both increasing HR and decreasing FAR in object recognition. Beyond these measures, attention also improved the precision of object categorization, particularly in trials where object recognition was successful. These findings align well with attention’s established role in enhancing sensory processing, which has been extensively documented in both behavioral and neuroscientific studies (Bashinski & Bacharach, 1980; Buschman & Kastner, 2015; Carrasco, 2006). Moreover, the observed attention effects closely parallel those reported in previous research aimed at dissociating attention from expectation in perceptual processing (Cheadle et al., 2014; Wyart et al., 2012). By demonstrating these attentional effects in a high-level object recognition task, our study extends previous findings to the context of threshold-level object recognition.

In contrast to attention effects, our study revealed distinct patterns of expectation effects across experiments. Experiment 1 suggests that expectation influences the detection criterion primarily in the unattended condition, rather than exerting an independent effect. Strikingly, we observed an effect opposite to the typical assumption about expectation’s effect: participants were less likely to report recognizing objects in stimuli where they expected object information, but only when their attention was initially directed away.

We found that this counterintuitive effect could be explained by the presence of swap errors, as evidenced by the observation that recognition and categorization reports were strongly biased toward the attended but task-irrelevant nontargets. Behaviorally, such errors are often attributed to limited processing capacity that leads to imperfect representation of task-relevant stimuli (Bays, 2016; Bays, Catalao, & Husain, 2009; Bays et al., 2024). Under this account, the misallocation of attentional resources to task-irrelevant stimuli compromised how task-relevant features outside the attentional focus were encoded and bound, allowing task-irrelevant inputs to dominate perceptual processing and drive perceptual reports. An alternative explanation is that swap errors arise from educated guessing strategies (Mallett, Lorenc, & Lewis-Peacock, 2022; Pratte, 2019), whereby observers default to reporting recently attended stimuli when uncertain.

Furthermore, recent neural evidence highlights selection errors as another major factor contributing to swap errors (Alleman, Panichello, Buschman, & Johnston, 2024; Mallett et al., 2022). Studies in humans and monkeys show that concurrent memory representations of both target and nontarget stimuli remain accurate during the delay period, yet noise or bias in the control processes that route information for report can lead to the wrong item being selected (Alleman et al., 2024). Accordingly, our findings that swap errors emerged mainly when attention was directed to real task-irrelevant stimuli suggest that selection-related mechanisms are particularly susceptible to meaningful object information presented at the initially attended location that turned out to be task-irrelevant. Notably, the above-mentioned mechanisms are not mutually exclusive and may work in concert to produce the observed swap errors, as part of the complex processing bridging stimulus encoding and response generation.

In addition to these mechanistic accounts, the specific demands of our task likely increased susceptibility to swap errors. Unlike prior studies using objective tasks (change detection or signal detection) on low-level Gabor stimuli (Sunder, Rajendran, Biswas, & Sridharan, 2025; Wyart et al., 2012), our paradigm required subjective object recognition judgements on high-level, visually complex images presented at liminal contrast—a context that likely introduced heightened perceptual and decisional uncertainty. Such uncertainty may have increased vulnerability to misattribution when the target lay outside the locus of attention, accounting for why swap errors—and the resulting expectation-dependent shifts in criterion—were most prominent in the unattended condition.

Experiment 2 investigated the influences of attention and expectation on object recognition when these factors were manipulated in isolation. The key finding was that the expectation of real images enhanced sensitivity (d’), mirroring the attention-related benefits consistently observed across all three experiments. This accords with earlier studies showing that expectation enhances perceptual behavior by modulating early sensory processing in a similar manner to attention (Aitken et al., 2020; Kok, Jehee, et al., 2012; Kok et al., 2017). However, we interpret this parallel cautiously, because a comparable expectation effect was notably absent in Experiments 1 and 3 when attention and expectation were orthogonally manipulated. This pattern of results suggests that the observed expectation effect in Experiment 2 likely emerged from an inherent confound with attentional processes.

This interpretation is supported by the close relationship between attention and expectation, with attentional deployment often guided by prior expectations (Summerfield & De Lange, 2014; Summerfield & Egner, 2009). Particularly relevant is the finding that attention is implicitly biased toward expected stimuli, even when these items are task-irrelevant (Zhao et al., 2013). Our findings thus underscore the importance of considering the potential confound of attention when expectation is manipulated alone. Nevertheless, this interpretation does not preclude the possibility that expectation can influence perception without being mediated by attention, nor does it suggest that orthogonal manipulation is the only valid approach to disentangling attention and expectation. Recent studies employing other types of experimental designs have demonstrated viable approaches beyond orthogonal manipulation for dissociating these cognitive processes (Rungratsameetaweemana et al., 2018; Zivony & Eimer, 2024).

Consistent with the classic prediction of the SDT framework and earlier work (Wyart et al., 2012), Experiment 3 revealed independent effects of attention and expectation on perceptual behavior, with attention enhancing recognition sensitivity d and expectation inducing a more lenient detection criterion. This finding stands in stark contrast to Experiments 1 and 2, where the classic pattern of expectation effect was either absent or reversed, highlighting the context-dependent and flexible nature of expectation’s influence on perception.

While the GE blocks exhibited a more robust expectation effect on detection criterion, closer examination revealed that both LE and GE blocks gave rise to qualitatively similar behavioral patterns. Specifically, although the LE blocks did not yield a clear shift in detection criterion, they successfully eliminated the counterintuitive expectation effect observed in Experiment 1. These findings suggest that both the temporal persistence and spatial prevalence of the expectation cue influence expectation’s effects on perceptual behavior. The reduction in swap errors in Experiment 3 relative to Experiment 1 indicates that sustained visibility of the expectation cue helps protect against interference by irrelevant environmental stimuli—though this may reflect a decisional bias rather than a perceptual bias (see below). Moreover, the stronger criterion shift observed in the presence of GE cues suggests that spatially distributed, location-independent cues facilitate the integration of prior expectations into perceptual decision-making. This interpretation aligns with real-world scenarios where persistent and spatially distributed contextual cues are often more effective than those that are brief or spatially constrained.

The expectation effects observed herein inform the debate on whether expectation primarily modulates sensory or decisional processing (Kok, Jehee, et al., 2012; Rungratsameetaweemana & Serences, 2019). Our results showing that the temporal persistence of the expectation cue has a major impact on detection criterion (compare column 4 of Fig. 8ABi from Experiment 3 with column 4 of Fig. 2A from Experiment 1) support an interpretation of expectation’s effect as primarily modulating late decisional processing. The classic expectation effect—shifting criterion in line with expectation regardless of attentional state (Wyart et al., 2012)—was observed only in Experiment 3, when the expectation cue persisted into the response period. This interpretation is consistent with prior behavioral findings showing that post-stimulus expectation cues—too late to influence sensory processing—were sufficient to shift detection criterion and fully accounted for the behavioral effects induced by pre-stimulus cues (Bang & Rahnev, 2017). Our finding is also consistent with an earlier EEG study showing that expectation exerts minimal influence on early sensory responses but modulates later, post-perceptual components associated with decision-related processes (Rungratsameetaweemana et al., 2018). As such, our observations contribute to a growing body of evidence suggesting that expectation might primarily shape perceptual behavior by influencing decision-making processes rather than altering early sensory representations.

Note that, although our fixed-duration paradigm does not permit examination of the temporal dynamics of evidence accumulation, our findings can nevertheless be situated within the broader context of sequential sampling frameworks such as the drift–diffusion model and the linear ballistic accumulator. Such models typically distinguish between influences on evidence quality—often associated with attention (Kelly & O’Connell, 2013; Rangelov, West, & Mattingley, 2021; Wyart, Myers, & Summerfield, 2015)—and influences on decisional bias, which are commonly captured as shifts in the starting point toward the more likely response option (Mulder, Wagenmakers, Ratcliff, Boekel, & Forstmann, 2012; Ratcliff & McKoon, 2008; Ratcliff & Smith, 2004). This conceptual mapping parallels the dissociation we observed between attention-related improvements in sensitivity and expectation-related shifts in criterion in GE blocks. Future work incorporating response-time measures or variable viewing durations could clarify how attention and expectation jointly influence evidence accumulation dynamics in perceptual decision-making.
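To make this mapping concrete, the starting-point account of expectation can be illustrated with a toy random-walk accumulator (all parameters are purely illustrative and not fitted to our data): with zero drift, shifting the starting point toward the 'yes' bound biases the response proportions without any change in evidence quality, analogous to a criterion shift.

```python
import random

def diffusion_trial(drift, start, rng, bound=1.0, noise=0.3, dt=0.01):
    """One accumulator trial; returns True for a 'yes' (upper-bound) response."""
    x = start * bound  # starting point expressed as a fraction of the bound
    while abs(x) < bound:
        # Euler step of a drift-diffusion process: deterministic drift
        # plus Gaussian noise scaled by sqrt(dt).
        x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
    return x > 0

def p_yes(drift, start, n=500, seed=0):
    """Proportion of 'yes' responses over n simulated trials."""
    rng = random.Random(seed)
    return sum(diffusion_trial(drift, start, rng) for _ in range(n)) / n

# Expectation-like bias: a starting point shifted toward 'yes' raises
# P(yes) even with zero drift. An attention-like change would instead
# increase the drift rate (evidence quality).
p_biased = p_yes(drift=0.0, start=0.3)
p_neutral = p_yes(drift=0.0, start=0.0)
```

For a driftless walk, the probability of reaching the upper bound first is determined by the starting point alone, so `p_biased` lands well above the roughly 50% rate of `p_neutral`.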

In conclusion, our study reveals distinct and complex effects of attention and expectation on conscious visual object recognition. Attention consistently enhances recognition sensitivity, while expectation influences object recognition primarily by shifting detection criteria and does so in a highly context-dependent manner (including whether it is orthogonally or independently manipulated from attention, and the spatial and temporal properties of the expectation cue). These findings underscore the flexible nature of expectation’s influence and help reconcile discrepancies in previous research, thus offering a more comprehensive and nuanced understanding of how attention and expectation shape the conscious perceptual experience.

Methods

Participants

A total of 31 participants took part in the three experiments. Each participant performed two sessions (~3 hr total). Written informed consent was obtained from all participants before the experiments. The study protocol (#15-01323) was approved by the Institutional Review Board of New York University School of Medicine and adhered to the Declaration of Helsinki. Participants received monetary compensation for their time.

Experiment 1 included 19 participants (12 females) with a mean age of 29.53 years (range: 20–66 years). In Experiment 2, six participants (2 females) were involved, with a mean age of 33.83 years and an age range of 26–47 years. Experiment 3 included six participants, with a mean age of 25.67 years (range: 21–32 years). Gender was determined by self-reports. All participants reported normal or corrected-to-normal vision and fluency in English. No individual participated in more than one experiment.

Experimental procedure

Each experiment comprised two sessions conducted on separate days. The first session involved an image contrast staircase procedure to determine individual threshold contrasts. Prior to the staircase, participants completed practice blocks to get familiar with the procedure. The session lasted approximately one hour.

At the beginning of the second session, participants completed a brief recognition task to validate the previously estimated thresholds. If recognition performance fell outside the acceptable range (recognition rate: 0.3–0.8), a shortened staircase procedure was administered to recalibrate the contrast levels. Participants then completed practice blocks to become familiar with the main task before proceeding to the main experimental blocks. The second session lasted approximately two hours.

Apparatus and stimuli

The general experimental setup was consistent across all three experiments. Visual stimuli were displayed on a gamma-corrected LED monitor (ViewSonic V3D245, 50 × 35 cm) with a resolution of 1920 × 1080 pixels and a refresh rate of 120 Hz. The experiments were conducted in a dimly lit room, with custom scripts written in Python using PsychoPy (Peirce et al., 2019) v2020.2.10. Participants’ heads were stabilized with a chin-rest positioned approximately 55 cm from the screen.

Participants provided behavioral responses via manual button presses, recorded using a NAtA button pad. Eye gaze and pupil data were tracked using an eye-tracking system (Eyelink 1000 Plus; SR Research).

The stimuli consisted of 16 unique images, derived from four categories: faces, houses, man-made objects, and animals (four exemplars per category). These images were obtained from public domain datasets or the Psychological Image Collection at Stirling (PICS, http://pics.psych.stir.co.uk) and were converted to grayscale. Each image was then resized to 300 × 300 pixels, normalized by subtracting the mean pixel intensity and dividing by the standard deviation, and smoothed using a two-dimensional Gaussian kernel (standard deviation: 1.5 pixels, kernel size: 7 × 7 pixels). Scrambled versions of each image were created by shuffling the phase of their 2D Fourier-transformed counterparts.
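The normalization, smoothing, and phase-scrambling steps described above can be sketched with NumPy and SciPy. This is an illustrative reconstruction, not the authors' code; the function names and the `truncate=2.0` setting (which yields a 7 × 7 kernel at sigma = 1.5) are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img):
    """Z-score an image's pixel intensities, then smooth with a 2D
    Gaussian kernel (sigma = 1.5 px; truncate=2.0 gives a 7x7 kernel)."""
    z = (img - img.mean()) / img.std()
    return gaussian_filter(z, sigma=1.5, truncate=2.0)

def phase_scramble(img, rng=None):
    """Phase-scramble an image: keep the 2D amplitude spectrum but
    replace the phase spectrum with that of random noise (whose FFT has
    the Hermitian symmetry required for a real-valued result)."""
    rng = np.random.default_rng() if rng is None else rng
    amp = np.abs(np.fft.fft2(img))
    rand_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.real(np.fft.ifft2(amp * np.exp(1j * rand_phase)))
```

Because only the phase spectrum is shuffled, the scrambled image preserves the original's amplitude spectrum (and hence its total spectral power and low-level contrast statistics) while destroying recognizable structure.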

Stimuli were presented on a uniform gray background. To blend the edges of the stimuli into the background, their intensities were multiplied by a Gaussian window with a standard deviation of 0.2. Each stimulus measured 4 × 4 degrees of visual angle (d.v.a.) and was horizontally offset by 4 d.v.a. to the left or right of the screen center. The contrast of the stimuli was incrementally increased from 0.001 to the threshold intensity over ~66 ms (8 frames at 120 Hz).

Task design

Overview

Across all experiments, trials followed a consistent structure unless otherwise noted. Each trial began with the presentation of two square placeholders (serving as expectation cues) in the left and right visual fields for 3 seconds. These placeholders indicated the locations of upcoming visual stimuli and were used to manipulate participants’ expectations of encountering real versus scrambled images in each location. Depending on the experimental conditions, the placeholders either differed in luminance (black vs. white) to signal location-specific expectations or matched in luminance to signal global (location-independent) expectations. Following a blank screen interval of either 1 or 1.5 seconds, a white arrow (the attention cue) appeared for 50 ms either to the left or right of the central fixation cross, pointing toward the placeholder likely to contain the stimulus that would later be probed (target stimulus). The cue was valid on 67% of the trials. After a 0.9-s blank screen, two stimuli were simultaneously presented within the placeholders for 66.67 ms, followed by a 0.4-s blank screen. Each trial concluded with three sequential question screens, each displaying a central white arrow (the target cue) indicating the location of the target stimulus. Participants had 3 seconds to respond to each question using a right-hand keypad.

Stimuli were drawn from 4 categories, each containing 4 unique exemplars, shown in both real and scrambled form. Each exemplar-image type combination was repeated 18 times, resulting in 576 trials per experiment. Trial identities were shuffled independently for each location, such that the stimulus in one placeholder did not predict the identity of the stimulus in the other. Moreover, experimental elements, including stimulus exemplars, categories, image types, target locations, and experimental conditions, were balanced across spatial locations.

Experiment 1

The experiment comprised 576 trials, divided into 8 blocks of 72 trials each. Each trial included an attention cue and an expectation cue. Placeholders of opposite luminance (black vs. white) signaled opposite probabilities of encountering a real versus a scrambled image in the two locations (i.e., a 2/3 probability in favor of a real image on one side and a scrambled image on the other).

The combination of attention (valid vs. invalid) and expectation (real vs. scrambled) cues yielded four experimental conditions: 1) attended and expecting real images (192 trials); 2) attended and expecting scrambled images (192 trials); 3) unattended and expecting real images (96 trials); and 4) unattended and expecting scrambled images (96 trials). Trials were shuffled before being divided into 8 blocks.

Within each participant, the spatial arrangement of black and white placeholders was fixed within blocks and balanced across blocks (i.e., black-left/white-right in four blocks, reversed in the remaining four). Block order was randomized. Luminance–probability contingencies were counterbalanced across participants: For half of the participants, black placeholders indicated a 2/3 probability of real images; for the other half, this mapping was reversed.

Experiment 2

The experiment consisted of 12 blocks in total: 6 attention blocks and 6 expectation blocks, each containing 48 trials, for a total of 576 trials. Attention and expectation blocks were presented in randomized order.

Attention blocks followed the same trial structure as in Experiment 1, with one key difference: the expectation manipulation was removed. Instead of black and white placeholders, both locations displayed gray placeholders of equal luminance, indicating an equal (0.5/0.5) probability of real and scrambled images at each location. The attention cue (white arrow) remained present and 67% valid. This yielded two attention conditions: 1) attended (192 trials) and 2) unattended (96 trials), with trials randomly shuffled across the 6 attention blocks.

In contrast, expectation blocks retained the expectation manipulation from Experiment 1 but removed the attention cue. Black and white placeholders signaled the probability of encountering a real versus scrambled image at each location, as before, but no attention cue was presented. This resulted in two expectation conditions: 1) expecting real images (192 trials) and 2) expecting scrambled images (96 trials). Trials were randomized across the 6 expectation blocks. As in Experiment 1, the spatial arrangement of black and white placeholders was fixed within blocks and balanced across blocks, and the blocks were presented in randomized order.

For four of the six participants, luminance–probability contingencies were fixed and counterbalanced across individuals. For the remaining two, contingencies were fixed within each block but reversed across blocks: These two participants completed four blocks in which black placeholders indicated a 2/3 probability of real images and white placeholders a 2/3 probability of scrambled images; in the other two blocks, the mapping was reversed.

Experiment 3

The experiment comprised 8 blocks of 72 trials each, totaling 576 trials. Four blocks used local expectation (LE) cues and four used global expectation (GE) cues, presented in randomized order.

The design of the LE blocks was largely identical to that of Experiment 1, with one key difference: In LE blocks, the expectation cues (placeholders) remained visible throughout the three question screens, whereas in Experiment 1 they disappeared after stimulus presentation. All other trial elements were the same as in Experiment 1.

GE blocks also used sustained placeholders but differed from Experiment 1 in one other key aspect: Instead of using placeholders of opposite luminance to manipulate location-specific expectations, GE blocks used placeholders of identical luminance across both visual fields (either both black or both white). Specifically, one luminance level signaled a 2/3 probability of real images and a 1/3 probability of scrambled images in both locations; the other luminance level indicated the reverse (2/3 scrambled, 1/3 real). The luminance of the placeholders was fixed within each block and balanced across blocks.

Across the four blocks of each type, the LE and GE conditions each included 96 trials in both the “attended and expecting real images” and “attended and expecting scrambled images” conditions, and 48 trials in both the “unattended and expecting real images” and “unattended and expecting scrambled images” conditions, with trials randomized within each block type. The four LE and four GE blocks were presented in a randomized order. As in the previous experiments, luminance–probability contingencies in both the LE and GE blocks were counterbalanced across participants.

Image Contrast Staircase

The QUEST adaptive staircase method (Watson & Pelli, 1983) was used to determine the specific contrast level for each individual real image exemplar that would yield a recognition rate of 50% (the proportion of “yes” responses to the recognition question). Image pixel intensity (I) at a given contrast level (c) was computed using the following equation:

I = b × (1 + c × I_scaled)

where b represents the baseline background intensity, set to 127, and I_scaled refers to the normalized pixel intensities ranging between −1 and 1. This ensured that the brightest pixel value in the image was I_max = b × (1 + c), while the darkest was I_min = b × (1 − c). The contrast of the image (c) was then defined as:

c = (I_max − I_min) / (2b)

This definition ensured that contrast values fell between 0 and 1, providing a standardized measure across images.
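Expressed as code (the helper names are ours, not from the study), the two definitions are mutually consistent: substituting I_max = b(1 + c) and I_min = b(1 − c) into the contrast formula recovers c.

```python
B = 127  # baseline background intensity b

def pixel_intensity(i_scaled, c, b=B):
    """I = b * (1 + c * I_scaled), with I_scaled in [-1, 1]."""
    return b * (1 + c * i_scaled)

def image_contrast(i_max, i_min, b=B):
    """c = (I_max - I_min) / (2 * b); falls between 0 and 1."""
    return (i_max - i_min) / (2 * b)
```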

The staircase procedure maintained the core structure of the main tasks with several key modifications. Trials proceeded at a faster pace than in the main tasks, and each stimulus presentation contained a real image alongside a scrambled counterpart (matched in contrast). Unlike the main tasks, there were no expectation or attention manipulations. Instead, colored placeholders indicated the locations of real and scrambled images with 100% validity, and participants were instructed to respond only to real images. During the response period, an arrow next to the central fixation directed participants to the location of the real image requiring their response.

Each of the 16 image exemplars underwent 35–40 staircase trials, totaling 560–640 trials per participant. All staircases were individually monitored to verify convergence to appropriate contrast levels. In cases where staircases failed to converge, they were repeated individually for the affected participants.
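For intuition, the logic of a QUEST-style procedure can be sketched in plain Python: a posterior over candidate log10 thresholds is updated after each response, and the next trial is placed at the posterior mean. This is a simplified stand-in, not the implementation used in the study; the grid range and the Weibull parameters (beta, gamma, lapse) are illustrative assumptions.

```python
import numpy as np

def quest_like(responses_fn, n_trials=40, beta=3.5, gamma=0.02, lapse=0.02):
    """Minimal QUEST-style staircase (illustrative sketch).
    responses_fn(x) returns 1 ("recognized") or 0 for a trial shown at
    log10 contrast x. Returns the final threshold estimate (contrast)."""
    grid = np.linspace(-3, 0, 301)              # candidate log10 thresholds
    posterior = np.ones_like(grid) / grid.size  # flat prior
    for _ in range(n_trials):
        x = np.sum(grid * posterior)            # test at the posterior mean
        # Weibull psychometric function: p("yes" | threshold t) at level x;
        # equals ~0.5 when x matches the threshold, as needed for a 50% target
        p_yes = gamma + (1 - gamma - lapse) * (1 - 2.0 ** (-10.0 ** (beta * (x - grid))))
        r = responses_fn(x)
        posterior = posterior * (p_yes if r else (1 - p_yes))
        posterior /= posterior.sum()            # renormalize
    return 10 ** np.sum(grid * posterior)       # posterior-mean threshold
```

An observer who always responds "yes" drives the estimate toward the bottom of the grid, and one who never responds "yes" drives it toward the top, matching the intended behavior of an adaptive threshold procedure.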

Behavioral Analysis

Recognition reports were evaluated using parameters from the signal detection theory (SDT) framework (Green & Swets, 1966). We calculated hit rate (HR) as the proportion of real-image trials in which participants reported recognizing a particular content. False alarm rate (FAR) measured the proportion of scrambled-image trials in which participants reported recognizing a particular content where none existed. To handle extreme values of HR and FAR (1 and 0), we applied the Macmillan and Kaplan adjustment (Macmillan & Kaplan, 1985), replacing HRs of 1 with 1 − 1/(2N_real) and FARs of 0 with 1/(2N_scr), where N_real and N_scr represent the numbers of real and scrambled image trials, respectively.

From these measures, we derived the decision criterion (c) and sensitivity (d’) using these formulas:

c = −(1/2)(Z(HR) + Z(FAR)) (1)
d′ = Z(HR) − Z(FAR) (2)

with Z representing the inverse normal cumulative distribution function.
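The computation above can be sketched in a few lines using the standard library's inverse normal CDF; the function name is ours, and only the two corrections stated in the text (HR of 1, FAR of 0) are applied.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse normal cumulative distribution function

def sdt_measures(n_hit, n_real, n_fa, n_scr):
    """Criterion c and sensitivity d' from trial counts, with the
    Macmillan & Kaplan adjustment for extreme rates."""
    hr = n_hit / n_real
    far = n_fa / n_scr
    if hr == 1.0:                  # replace HR of 1 with 1 - 1/(2*N_real)
        hr = 1 - 1 / (2 * n_real)
    if far == 0.0:                 # replace FAR of 0 with 1/(2*N_scr)
        far = 1 / (2 * n_scr)
    c = -0.5 * (Z(hr) + Z(far))
    d_prime = Z(hr) - Z(far)
    return c, d_prime
```

For example, symmetric performance (HR = 0.8, FAR = 0.2) yields c = 0 (no bias) and a positive d′, while a perfect hit rate with zero false alarms remains finite thanks to the adjustment.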

Categorization accuracy was calculated as the proportion of trials where participants assigned the content in the target stimulus to the correct object category. For scrambled images, categorization accuracy was determined based on the categories from which the scrambled images were generated.

To analyze the influence of nontarget stimuli, recognition rate was calculated as the proportion of “yes” responses to the recognition question across all trials. Categorization accuracy in this context was scored based on the object category of the concurrently shown nontarget stimulus. This measure was computed from trials in which participants reported recognizing objects, and the analysis was limited to trials where target and nontarget stimuli were from different categories.

The main and interaction effects of attention and expectation on behavioral metrics were assessed using 2 × 2 repeated-measures ANOVAs. Pairwise differences between conditions were assessed using two-tailed paired t-tests in Experiment 1 and two-tailed Wilcoxon signed-rank tests in Experiments 2 and 3. Statistical significance was defined as p < 0.05. Due to the smaller sample sizes in Experiments 2 and 3, we additionally report Bayes factors to quantify the strength of evidence. Bayes factors (BF10) equal to or greater than 3 were taken as moderate evidence for the alternative hypothesis, whereas values smaller than 1/3 were taken as moderate evidence for the null (Jeffreys, 1998).
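The paired comparisons described above map directly onto standard SciPy routines. The sketch below uses simulated data (not the study's data) to illustrate the two tests; the variable names and effect sizes are our assumptions.

```python
import numpy as np
from scipy import stats

# Simulated within-subject data for illustration only: e.g., d' in the
# attended vs. unattended condition for 19 participants (as in Exp. 1)
rng = np.random.default_rng(1)
attended = rng.normal(1.5, 0.3, size=19)
unattended = attended - rng.normal(0.3, 0.2, size=19)

# Two-tailed paired t-test (used in Experiment 1)
t_stat, p_t = stats.ttest_rel(attended, unattended)

# Two-tailed Wilcoxon signed-rank test (used in Experiments 2 and 3)
w_stat, p_w = stats.wilcoxon(attended, unattended)
```

The Wilcoxon test relaxes the normality assumption of the paired t-test, which is why it is the more conservative choice for the small samples of Experiments 2 and 3.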

Supplementary Material


Acknowledgements

This work was supported by U.S. National Institutes of Health/National Eye Institute (R01EY032085; PI: B.J.H.).

Footnotes

Competing interest Statement

The authors declare no competing interests.

References

  1. Aitken F, Menelaou G, Warrington O, Koolschijn RS, Corbin N, Callaghan MF, & Kok P (2020). Prior expectations evoke stimulus-specific activity in the deep layers of the primary visual cortex. PLoS Biol, 18(12), e3001023. 10.1371/journal.pbio.3001023
  2. Alleman M, Panichello M, Buschman TJ, & Johnston WJ (2024). The neural basis of swap errors in working memory. Proceedings of the National Academy of Sciences, 121(33), e2401032121. 10.1073/pnas.2401032121
  3. Bang JW, & Rahnev D (2017). Stimulus expectation alters decision criterion but not sensory signal in perceptual decision making. Scientific Reports, 7(1), 17072. 10.1038/s41598-017-16885-2
  4. Bashinski HS, & Bacharach VR (1980). Enhancement of perceptual sensitivity as the result of selectively attending to spatial locations. Perception & Psychophysics, 28, 241–248. 10.3758/BF03204380
  5. Bays PM (2016). Evaluating and excluding swap errors in analogue tests of working memory. Scientific Reports, 6(1), 19203. 10.1038/srep19203
  6. Bays PM, Catalao RF, & Husain M (2009). The precision of visual working memory is set by allocation of a shared resource. Journal of Vision, 9(10), 7. 10.1167/9.10.7
  7. Bays PM, Schneegans S, Ma WJ, & Brady TF (2024). Representation and computation in visual working memory. Nature Human Behaviour, 1–19. 10.1038/s41562-024-01871-2
  8. Buschman TJ, & Kastner S (2015). From behavior to neural dynamics: an integrated theory of attention. Neuron, 88(1), 127–144. 10.1016/j.neuron.2015.09.017
  9. Carrasco M (2006). Covert attention increases contrast sensitivity: Psychophysical, neurophysiological and neuroimaging studies. Progress in Brain Research, 154, 33–70. 10.1016/S0079-6123(06)54003-8
  10. Cheadle S, Egner T, Wyart V, Wu C, & Summerfield C (2015). Feature expectation heightens visual sensitivity during fine orientation discrimination. Journal of Vision, 15(14), 14. 10.1167/15.14.14
  11. Cheadle S, Wyart V, Tsetsos K, Myers N, De Gardelle V, Castañón SH, & Summerfield C (2014). Adaptive gain control during human perceptual choice. Neuron, 81(6), 1429–1441. 10.1016/j.neuron.2014.01.020
  12. Cohen MR, & Maunsell JH (2009). Attention improves performance primarily by reducing interneuronal correlations. Nature Neuroscience, 12(12), 1594–1600. 10.1038/nn.2439
  13. De Lange FP, Heilbron M, & Kok P (2018). How do expectations shape perception? Trends in Cognitive Sciences, 22(9), 764–779. 10.1016/j.tics.2018.06.002
  14. de Lange FP, Rahnev DA, Donner TH, & Lau H (2013). Prestimulus oscillatory activity over motor cortex reflects perceptual expectations. The Journal of Neuroscience, 33(4), 1400–1410. 10.1523/JNEUROSCI
  15. Esterman M, & Yantis S (2010). Perceptual expectation evokes category-selective cortical activity. Cerebral Cortex, 20(5), 1245–1253. 10.1093/cercor/bhp188
  16. Green DM, & Swets JA (1966). Signal detection theory and psychophysics (Vol. 1). New York: Wiley.
  17. Hautus MJ, Macmillan NA, & Creelman CD (2021). Detection theory: A user’s guide. Routledge. 10.4324/9781003203636
  18. Jeffreys H (1998). The theory of probability. Oxford University Press. 10.1093/oso/9780198503682.001.0001
  19. Jiang J, Summerfield C, & Egner T (2013). Attention sharpens the distinction between expected and unexpected percepts in the visual brain. The Journal of Neuroscience, 33(47), 18438. 10.1523/JNEUROSCI.3308-13.2013
  20. Kelly SP, & O’Connell RG (2013). Internal and external influences on the rate of sensory evidence accumulation in the human brain. Journal of Neuroscience, 33(50), 19434–19441. 10.1523/JNEUROSCI
  21. Kok P, Jehee JF, & de Lange FP (2012). Less is more: expectation sharpens representations in the primary visual cortex. Neuron, 75(2), 265–270. 10.1016/j.neuron.2012.04.034
  22. Kok P, Mostert P, & de Lange FP (2017). Prior expectations induce prestimulus sensory templates. Proceedings of the National Academy of Sciences, 114(39), 10473–10478. 10.1073/pnas.1705652114
  23. Kok P, Rahnev D, Jehee JF, Lau HC, & de Lange FP (2012). Attention reverses the effect of prediction in silencing sensory signals. Cerebral Cortex, 22(9), 2197–2206. 10.1093/cercor/bhr310
  24. Levinson M, Podvalny E, Baete SH, & He BJ (2021). Cortical and subcortical signatures of conscious object recognition. Nature Communications, 12(1), 2930. 10.1038/s41467-021-23266-x
  25. Macmillan NA, & Kaplan HL (1985). Detection theory analysis of group data: estimating sensitivity from average hit and false-alarm rates. Psychological Bulletin, 98(1), 185–199. 10.1037/0033-2909.98.1.185
  26. Mallett R, Lorenc ES, & Lewis-Peacock JA (2022). Working memory swap errors have identifiable neural representations. Journal of Cognitive Neuroscience, 34(5), 776–786. 10.1162/jocn_a_01831
  27. Maunsell JH (2015). Neuronal mechanisms of visual attention. Annual Review of Vision Science, 1(1), 373–391. 10.1146/annurev-vision-082114-035431
  28. McAdams CJ, & Maunsell JH (2000). Attention to both space and feature modulates neuronal responses in macaque area V4. Journal of Neurophysiology, 83(3), 1751–1755. 10.1152/jn.2000.83.3.1751
  29. Mitchell JF, Sundberg KA, & Reynolds JH (2009). Spatial attention decorrelates intrinsic activity fluctuations in macaque area V4. Neuron, 63(6), 879–888. 10.1016/j.neuron.2009.09.01
  30. Mulder MJ, Wagenmakers E-J, Ratcliff R, Boekel W, & Forstmann BU (2012). Bias in the brain: a diffusion model analysis of prior probability and potential payoff. Journal of Neuroscience, 32(7), 2335–2343. 10.1523/JNEUROSCI
  31. Peirce J, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, … Lindeløv JK (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203. 10.3758/s13428-018-01193-y
  32. Podvalny E, Flounders MW, King LE, Holroyd T, & He BJ (2019). A dual role of prestimulus spontaneous neural activity in visual object recognition. Nature Communications, 10(1), 3910. 10.1038/s41467-019-11877-4
  33. Pratte MS (2019). Swap errors in spatial working memory are guesses. Psychonomic Bulletin & Review, 26, 958–966. 10.3758/s13423-018-1524-8
  34. Rahnev D, Lau H, & de Lange FP (2011). Prior expectation modulates the interaction between sensory and prefrontal regions in the human brain. The Journal of Neuroscience, 31(29), 10741–10748. 10.1523/JNEUROSCI
  35. Rangelov D, West R, & Mattingley JB (2021). Stimulus reliability automatically biases temporal integration of discrete perceptual targets in the human brain. Journal of Neuroscience, 41(36), 7662–7674. 10.1523/JNEUROSCI
  36. Ratcliff R, & McKoon G (2008). The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4), 873–922. 10.1162/neco.2008.12-06-420
  37. Ratcliff R, & Smith PL (2004). A comparison of sequential sampling models for two-choice reaction time. Psychological Review, 111(2), 333. 10.1037/0033-295X.111.2.333
  38. Reynolds JH, Pasternak T, & Desimone R (2000). Attention increases sensitivity of V4 neurons. Neuron, 26(3), 703–714. 10.1016/S0896-6273(00)81206-4
  39. Rungratsameetaweemana N, Itthipuripat S, Salazar A, & Serences JT (2018). Expectations do not alter early sensory processing during perceptual decision-making. The Journal of Neuroscience, 38(24), 5632. 10.1523/JNEUROSCI
  40. Rungratsameetaweemana N, & Serences JT (2019). Dissociating the impact of attention and expectation on early sensory processing. Current Opinion in Psychology, 29, 181–186. 10.1016/j.copsyc.2019.03.014
  41. Stein T, & Peelen MV (2015). Content-specific expectations enhance stimulus detectability by increasing perceptual sensitivity. Journal of Experimental Psychology: General, 144(6), 1089–1104. 10.1037/xge0000109
  42. Summerfield C, & De Lange FP (2014). Expectation in perceptual decision making: neural and computational mechanisms. Nature Reviews Neuroscience, 15(11), 745–756. 10.1038/nrn3838
  43. Summerfield C, & Egner T (2009). Expectation (and attention) in visual cognition. Trends in Cognitive Sciences, 13(9), 403–409. 10.1016/j.tics.2009.06.003
  44. Sunder S, Rajendran K, Biswas M, & Sridharan D (2025). Neural mechanisms of attention, not expectation, govern spatial selection by probabilistic cueing. NeuroImage, 121412. 10.1016/j.neuroimage.2025.121412
  45. Watson AB, & Pelli DG (1983). QUEST: A Bayesian adaptive psychometric method. Perception & Psychophysics, 33(2), 113–120. 10.3758/BF03202828
  46. Wu Y. h., Podvalny E, & He BJ (2023). Spatiotemporal neural dynamics of object recognition under uncertainty in humans. eLife, 12, e84797. 10.7554/eLife.84797
  47. Wu Y. h., Podvalny E, Levinson M, & He BJ (2024). Network mechanisms of ongoing brain activity’s influence on conscious visual perception. Nature Communications, 15(1), 5720. 10.1038/s41467-024-50102-9
  48. Wyart V, Myers NE, & Summerfield C (2015). Neural mechanisms of human perceptual choice under focused and divided attention. Journal of Neuroscience, 35(8), 3485–3498. 10.1523/JNEUROSCI
  49. Wyart V, Nobre AC, & Summerfield C (2012). Dissociable prior influences of signal probability and relevance on visual contrast sensitivity. Proceedings of the National Academy of Sciences, 109(9), 3593–3598. 10.1073/pnas.1120118109
  50. Zhao J, Al-Aidroos N, & Turk-Browne NB (2013). Attention is spontaneously biased toward regularities. Psychological Science, 24(5), 667–677. 10.1177/0956797612460407
  51. Zivony A, & Eimer M (2024). A dissociation between the effects of expectations and attention in selective visual processing. Cognition, 250, 105864. 10.1016/j.cognition.2024.105864
