Published in final edited form as: Conscious Cogn. 2016 May 21;47:63–74. doi: 10.1016/j.concog.2016.05.003

Predictions penetrate perception: Converging insights from brain, behaviour and disorder

Claire O’Callaghan a,b,c, Kestutis Kveraga d, James M Shine e,f, Reginald B Adams Jr g, Moshe Bar h
PMCID: PMC5764074  NIHMSID: NIHMS932440  PMID: 27222169

Abstract

It is argued that during ongoing visual perception, the brain generates top-down predictions to facilitate, guide and constrain the processing of incoming sensory input. Here we demonstrate that these predictions are drawn from a diverse range of cognitive processes, in order to generate the richest and most informative prediction signals. This is consistent with a central role for cognitive penetrability in visual perception. We review behavioural and mechanistic evidence indicating that a wide spectrum of domains—including object recognition, contextual associations, cognitive biases and affective state—can directly influence visual perception. We combine these insights from the healthy brain with novel observations from neuropsychiatric disorders involving visual hallucinations, which highlight the consequences of imbalance between top-down signals and incoming sensory information. Together, these lines of evidence converge to indicate that predictive penetration, be it cognitive, social or emotional, should be considered a fundamental framework that supports visual perception.

Keywords: Cognitive penetration, prediction, visual perception, object recognition, context, top-down, orbitofrontal cortex, parahippocampal cortex, visual hallucinations

Introduction

Visual perception is not a passive or exclusively stimulus-driven process. Instead, there is a proactive interplay between incoming stimuli and predictions based on internally generated models, which shapes our conscious perception of the world around us (Bar, 2004; Bullier, 2001; Engel, Fries, & Singer, 2001). This enables our perceptual system to harness a lifetime of experience with the world, leveraging our past to aid our interpretation of the present. Along these lines, a predictive coding framework for visual perception is well established, fitting with a broader general principle of the brain as a predictive machine (Clark, 2013; Dayan, Hinton, Neal, & Zemel, 1995; Friston, 2010). Echoed across perceptual modalities, motor outputs and learning systems, hierarchically organised top-down pathways convey information about what to expect based on our prior experience. These predictions are combined with incoming sensory information to sculpt the most likely interpretation of the world around us. Sparingly, only mismatches between the descending predictions and lower-level sensory information are carried forward in the form of prediction error signals, to be reconciled at higher levels of the processing hierarchy, where new hypotheses are generated to accommodate the incoming input (Friston, Stephan, Montague, & Dolan, 2014).
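
As a rough, schematic illustration of this message-passing scheme, the sketch below runs a single prediction-error loop between two levels; the linear generative model, layer sizes and learning rate are arbitrary assumptions made for illustration and are not drawn from the cited work.

```python
import numpy as np

# Minimal two-level predictive coding sketch (illustrative only; the linear
# generative model, sizes and learning rate are assumptions, not values from
# the studies reviewed here).

rng = np.random.default_rng(0)

n_high, n_low = 4, 8
W = rng.normal(scale=0.5, size=(n_low, n_high))  # top-down (generative) weights
r_high = np.zeros(n_high)                        # higher-level hypothesis
sensory_input = rng.normal(size=n_low)           # incoming lower-level signal
lr = 0.1

for step in range(50):
    prediction = W @ r_high             # top-down prediction of the input
    error = sensory_input - prediction  # only the mismatch is carried forward
    r_high += lr * (W.T @ error)        # higher level revises its hypothesis
                                        # to reduce the prediction error

print(f"residual prediction error: {np.linalg.norm(sensory_input - W @ r_high):.3f}")
```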

Predictive coding is both a biologically and computationally plausible description of information flow in the brain, yet the intimately related process of cognitive penetration in perception is hotly debated (Firestone & Scholl, 2015, in press; Macpherson, 2012; O’Callaghan, Kveraga, Shine, Adams, & Bar, in press; Raftopoulos, 2014; Stokes, 2013; Vetter & Newen, 2014; Zadra & Clore, 2011). Cognitive penetration entails that information from a myriad of other modalities can influence the earlier stages of the perceptual process. Access to a wide variety of information processing systems when generating predictions ensures that those predictions are rich and maximally effective for minimising global prediction error (Lupyan, 2015), emphasising a central role for cognitive penetration in visual perception.

The predictive nature of the visual processing system itself is apparent from the basic tenets of object perception (Bar et al., 2006; Enns & Lleras, 2008), but there is less clarity regarding the extent to which other cognitive systems shape perception. Here we start by using object recognition to demonstrate the mechanisms of top-down influence on perception. We then extend this framework to show that beyond features intrinsic to an object itself, contextual factors, as well as other cognitive and affective processes, all converge to provide a source of top-down prediction that influences visual perception. We explore these patterns in both social and affective neuroscience, and then describe novel insights into how a failure of this system can manifest in neuropsychiatric symptoms such as visual hallucinations.

Object recognition

The premise for a predictive coding account of visual perception relies on descending neural pathways throughout the visual processing system (Angelucci et al., 2002; Bullier, 2001). This structure allows for feed-forward projections, originating in the primary visual cortex and ascending via dorsal and ventral streams, to be matched with reciprocal feedback connections (Felleman & Van Essen, 1991; Gilbert & Li, 2013; Salin & Bullier, 1995). Under this scheme, top-down predictions are generated at higher levels and compared with lower-level representations of sensory input, with any resultant mismatch coded as a prediction error (Clark, 2013; Friston, 2005b; Friston et al., 2014; Rao & Ballard, 1999). In the case of object recognition, the orbitofrontal cortex has emerged as a source of such top-down expectations (Bar et al., 2006).

Earlier work characterised the ascending dorsal and ventral visual pathways as dominated respectively by magnocellular and parvocellular cells (Goodale & Milner, 1992; Ungerleider & Mishkin, 1982). The magnocellular (M) pathway rapidly carries achromatic, low spatial frequency information and is sensitive to high temporal frequencies, whereas the parvocellular (P) pathway transmits fine detail in the luminance channel and low spatial frequency information in the colour channel, has slower conduction velocities and can only process lower temporal frequencies (Maunsell, Nealey, & DePriest, 1990; Nowak & Bullier, 1997; Tootell, Switkes, Silverman, & Hamilton, 1988). Studies have since capitalised on the different properties of the M and P pathways to reveal the time course and sources of top-down modulation of vision. Using low spatial frequency (LSF) stimuli to preferentially recruit M pathways, a combined magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) approach revealed early activity in the orbitofrontal cortex ~130 ms after stimulus presentation, well before object recognition-related activity peaks in the ventrotemporal cortex (Bar et al., 2006). A similar time window emerged for top-down responses when categorising LSF versus high spatial frequency (HSF) scenes: fMRI showed increased activity in the left prefrontal cortex and left middle temporal cortex, associated with ERP activation in the 140–160 ms period (Peyrin et al., 2010). Although these time scales do not capture the earliest temporal component of visual processing (i.e., 0–100 ms), they fall within the short-latency window of 100–200 ms, suggesting that these top-down influences on object recognition modulate processing from reasonably early stages.
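
For readers unfamiliar with the stimulus manipulation, the snippet below is a minimal sketch of one common way to derive LSF and HSF versions of an image (simple Gaussian low-pass filtering); the filter widths are arbitrary, and this is not the calibration procedure used in the cited studies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch only: Gaussian filtering is one simple way to bias an image toward
# low or high spatial frequencies; the sigma values here are arbitrary.

rng = np.random.default_rng(0)
image = rng.random((256, 256))                        # stand-in for a greyscale image

lsf_image = gaussian_filter(image, sigma=8)           # keeps only coarse structure
hsf_image = image - gaussian_filter(image, sigma=2)   # residual fine detail

print(lsf_image.shape, hsf_image.shape)
```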

Providing a clearer picture of the spatiotemporal trajectory of top-down feedback, an fMRI study using dynamic causal modelling confirmed that early activation in the orbitofrontal cortex in response to M-biased stimuli initiated top-down feedback to the fusiform gyrus (Kveraga, Boshyan, & Bar, 2007). P-biased stimuli in the same study were associated with a different connectivity pattern, in which only feedforward information flow increased between the occipital cortex and the fusiform gyrus. By comparison, orbitofrontal cortex activity predicted recognition of M-biased but not P-biased stimuli, and M-biased stimuli were recognised ~100 ms faster. Similar evidence of early orbitofrontal activation preceding temporal lobe activity has been found when individuals make initial judgements about the coherence of degraded visual stimuli (Horr, Braun, & Volz, 2014), as well as during categorisation of face versus non-face images (Summerfield et al., 2006). These studies highlight the orbitofrontal cortex as an origin of top-down effects that can modulate processing in earlier visual areas.

The above findings support a predictive coding account of object recognition, whereby cursory visual information is sufficient to engage a memory store of information about the basic category of the object, used to generate the most likely prediction about the object’s identity (Bar, 2003; Oliva & Torralba, 2007). Rapidly extracted predictions about the general category of an object are fed back to lower levels of the visual processing hierarchy to enable faster recognition by constraining the number of possible interpretations. Fitting with the predictive coding principle that we generate top-down hypotheses based on our knowledge of stored regularities in the world, another fMRI study revealed that this orbitofrontal cortex facilitation of object recognition was triggered only for meaningful LSF images (Chaumon, Kveraga, Barrett, & Bar, 2014). Meaningful LSF images resulted in a top-down feedback signature of increased functional connectivity between the orbitofrontal cortex and the ventral visual pathway, whereas meaningless images did not. This suggests that generating strong top-down predictions requires connection with a learned representation of an object. In typical circumstances, guiding perception with top-down expectations facilitates the accuracy and speed of object recognition. When sensory input is ambiguous, top-down guidance is even more important for supplying information to resolve the ambiguity (O’Reilly, Wyatte, Herd, Mingus, & Jilk, 2013; Panichello, Cheung, & Bar, 2012). In a predictive coding framework, the integration of top-down signals with the feed-forward information flow is an iterative process geared toward progressive error reduction. Initial feed-forward sweeps associated with coarse (LSF) categorical processing are followed by graded HSF recurrent processing, which reflects the formation of increasingly detailed recognition judgements (Clarke, Taylor, & Tyler, 2011; Poch et al., 2015). Such a pattern supports the idea that more complex information may require further iteration in the error reduction process.

Over and above this predictive mechanism characterising visual perception, considering the orbitofrontal cortex as an origin site for top-down effects reveals how diverse information streams may penetrate visual processing from the initial stages. The orbitofrontal cortex is well connected with the visual system, receiving projections from the temporal visual regions (inferior temporal cortex, superior temporal sulcus and temporal pole) (Barbas, 1988, 1995; Carmichael & Price, 1995) and is a target of visual information relayed via the pulvinar (Bridge, Leopold, & Bourne, 2015; Guillery & Sherman, 2002). Recordings in non-human primates reveal neuronal responses to visual stimuli in the orbitofrontal cortex (Rolls & Baylis, 1994; Thorpe, Rolls, & Maddison, 1983). Further to this, however, the orbitofrontal cortex is a richly connected association region, receiving inputs from visceral, limbic and other sensory modalities (Rolls, 2004). The orbitofrontal cortex is therefore ideally positioned to integrate cross-modal information relevant to the interpretation of incoming visual stimuli. Studies relating orbitofrontal activation to top-down modulation of object recognition chiefly identified activation in the inferior orbitofrontal cortex (Bar et al., 2006; Bar et al., 2001; Chaumon et al., 2014). This region overlaps with the lateral orbitofrontal region (area 12) where sensory input from the visual system terminates (Kringelbach & Rolls, 2004).

The orbitofrontal cortex has been associated with an array of functions, including inhibition, emotion regulation and reward processing. Evidence suggests that the unifying computational role of the orbitofrontal cortex is to signal the predicted value of choice or action outcomes, by encoding both the sensory features of outcomes and their biological value relative to the current state of the organism (Passingham & Wise, 2012; Rudebeck & Murray, 2014; Schoenbaum, Roesch, Stalnaker, & Takahashi, 2009). Exactly how this computational process supports early categorical predictions in object recognition is not yet clear, but activity related to expected versus observed outcomes during perceptual decision making has been observed in the orbitofrontal cortex (Summerfield & Koechlin, 2008), consistent with a comparative process extending to visual predictions. The inter-connectedness of the orbitofrontal cortex may allow object interpretations to be evaluated on the combined basis of sensory information, emotional information and information regarding an organism’s current internal state and environmental context. The convergence of sensory, emotional and contextual information is consistent with more information-rich prediction signals being the most effective means of minimising global prediction error. The winning interpretation is then fed back to bias processing in earlier visual regions (Trapp & Bar, 2015). In the following section we consider contextual associations as one of the critical sources of information harnessed in ongoing visual processing.

Contextual processing

A fundamental assumption is that our cognitive and perceptual processes do not operate in a vacuum, but are subject to the constraints of a changeable environment that requires flexible and context-dependent behaviour (Engel et al., 2001). Responses to contextual cues dictate much of human behaviour, from influencing our social interaction to determining levels of reward seeking or avoidant behaviours (Bartz, Zaki, Bolger, & Ochsner, 2011; Koob & Volkow, 2010; Pennartz, Ito, Verschure, Battaglia, & Robbins, 2011). An ability to both encode contextual associations and utilise them to guide behaviour is hard-wired across animals and humans, and visual perception leverages this same mechanism. In this way, scene perception can be understood in terms of the broader cognitive mechanism of associative processing (Aminoff & Tarr, 2015; Bar, 2004; Summerfield & Egner, 2009).

Visual objects tend to occur in stereotypical settings. For example, before even walking into a bathroom we could predict seeing certain objects – a shower, basin, lavatory, towels, and so on. Experience affords the opportunity to learn such associations, which can in turn become a rich source of information to guide predictions about object identification. In the case of ambiguous visual input, information about the surrounding context facilitates its interpretation (Bar & Ullman, 1996). For example, the same object can be perceived as a hairdryer or a drill, depending on whether it appears in a bathroom or a workshop context (Bar, 2004). Regions anchored in the ventral visual stream, such as the parahippocampal cortex, along with hippocampal and retrosplenial complexes, have a well-established shared involvement in place-processing and episodic memory (Burgess, Maguire, & O’Keefe, 2002; Epstein, Harris, Stanley, & Kanwisher, 1999). Studies exploring contextual processing in visual perception have unified these disparate functions to reveal their role in penetrating the visual processing system, as described next.
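
As a toy illustration of the hairdryer-versus-drill example above, the sketch below shows how a contextual prediction can narrow the candidate interpretations of an ambiguous object before detailed matching; the context labels, object lists and selection rule are purely hypothetical and are not drawn from the cited studies.

```python
# Toy illustration (not a model from the reviewed studies): a contextual
# prediction narrows the candidate interpretations of an ambiguous object.

CONTEXT_ASSOCIATIONS = {
    "bathroom": ["hairdryer", "towel", "basin"],
    "workshop": ["drill", "saw", "vice"],
}

def interpret(ambiguous_candidates, scene_context):
    """Return the interpretation most consistent with the predicted context."""
    predicted = set(CONTEXT_ASSOCIATIONS.get(scene_context, []))
    # Prefer interpretations that match the contextual prediction; fall back
    # to the first candidate if none match (i.e., rely on bottom-up input alone).
    for candidate in ambiguous_candidates:
        if candidate in predicted:
            return candidate
    return ambiguous_candidates[0]

# The same ambiguous shape is read differently depending on the scene context.
print(interpret(["hairdryer", "drill"], "bathroom"))   # -> hairdryer
print(interpret(["hairdryer", "drill"], "workshop"))   # -> drill
```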

By contrasting stimuli that are strongly versus weakly associated with a specific context, or by training associations between novel visual stimuli, a network of activation across parahippocampal, retrosplenial, and medial orbitofrontal cortices has been uniquely associated with contextual processing (Aminoff, Gronau, & Bar, 2007; Aminoff, Kveraga, & Bar, 2013; Bar & Aminoff, 2003). In contrast to the sites associated with early processing of LSF information, more medial areas of the orbitofrontal cortex were identified in relation to contextual processing. Although broadly involved in the computational processes outlined earlier, a more fine-grained distinction is that lateral orbitofrontal regions evaluate options independently of each other, whereas medial regions compare the options to directly guide decisions (Rudebeck & Murray, 2014). Consistent with a comparative process that draws upon multiple information streams, activity in overlapping regions of the medial orbitofrontal cortex tracks both an object’s valence and its associative strength (Shenhav, Barrett, & Bar, 2013). Continued careful examination of the exact role of the orbitofrontal cortex in generating predictions to guide visual processing may reveal a lateral–medial gradient related to the information processing demands of the incoming visual stimulus. The significance of context in object processing is further highlighted by the finding that identical objects included in different task sets (i.e., manipulating the context) resulted in different patterns of activity in the ventral temporal and prefrontal cortices (Harel, Kravitz, & Baker, 2014). In this case, activation in higher processing regions reflected the nature of the task set (context) rather than object identity.

Critically for a predictive coding account, MEG with phase synchrony analyses revealed that top-down contextual influences occur during the formation stages of a visual percept, extending all the way back to early visual cortex (Kveraga et al., 2011). Substantiating an account of top-down modulation drawn from associative processing, responses in early visual areas are reduced for cued or expected visual stimuli (Marois, Leung, & Gore, 2000; Summerfield & Egner, 2009; Yoshiura et al., 1999), yet the representations of those expected stimuli in early regions are sharper (Kok, Jehee, & de Lange, 2012), suggesting that the learnt association directly affects the perceptual process. This has been interpreted as the neural activation evoked by expected events being silenced, or constrained, by prediction feedback from higher-level regions (Murray, Kersten, Olshausen, Schrater, & Woods, 2002; Summerfield & de Lange, 2014).

In object recognition, the orbitofrontal cortex emerges as an origin of neural prediction that incorporates cross-modal information and utilises associative memory, in concert with parahippocampal and retrosplenial complexes. Top-down information from these higher-level regions is consistent with an initial global ‘context signal’, which incorporates a range of information from LSF input and learned associations to facilitate perceptual processing in earlier visual areas (Trapp & Bar, 2015). This rich set of information and associations penetrates the earlier processing stream in the form of top-down predictions about the most likely interpretation of the visual world. However, the scope of information that penetrates visual perception is by no means limited to features intrinsic to an object or scene. Information is drawn from a range of sources in order to formulate the predictions that mould ongoing visual processing. Predictions in visual perception are drawn not only from learned associations and context, but also from cognitive biases, interoceptive feedback and affective states, as emphasised by the roles of social cognition and emotion in visual processing.

Social cognitive penetration

There is a long history in social psychology of empirical work in favour of the claim that social, cultural and motivational factors can directly influence visual perception (Bruner, 1957). For instance, social status is found to affect perceptions of the size of coins (Bruner & Goodman, 1947). Motivational factors that centre on how desirable an object is to the perceiver are suggested to influence perception of the size, distance, steepness, salience, and brightness of visual stimuli (Alter & Balcetis, 2011; Balcetis & Dunning, 2006; Banerjee, Chatterjee, & Sinha, 2012; Bhalla & Proffitt, 1999; Den Daas, Häfner, & de Wit, 2013; Radel & Clément-Guillotin, 2012; Song, Vonasch, Meier, & Bargh, 2012). Social categorisation, including judgements about gender and race, is also thought to be strongly influenced by well-learnt cultural stereotypes (Correll, Wittenbrink, Crawford, & Sadler, 2015; Levin & Banaji, 2006; MacLin & Malpass, 2001; Macrae & Martin, 2007; Payne, 2006).

Many of these behavioural findings, however, have been effectively criticised as failing to provide true evidence of dynamic cognitive penetration of visual perception. Pitfalls in the methodological approaches of many of these studies mean that task-specific effects of judgement, memory and response bias cannot be ruled out as potential explanations of the findings, as opposed to being true examples of top-down effects altering visual perception (Firestone & Scholl, 2014, in press). This criticism highlights the continued need to validate behavioural evidence of cognitive penetration in vision against neural markers, to establish whether they are indeed consistent with top-down effects on ongoing visual processing.

Electrophysiological and fMRI studies of face perception, which is central to social interactions, go some way to support a role for diverse social cognitive effects on visual perception. When making judgements about the gender of a face, activity in the fusiform cortex tracks closely with the linear gradations between morphed male and female faces, whereas orbitofrontal cortex responses instead reflect subjective perceptions of face gender (Freeman, Rule, Adams, & Ambady, 2010). Greater neural activity in upstream top-down processing areas, including the fusiform face area of the ventral visual stream and the orbitofrontal cortex, has been found in response to faces from an individual’s own ‘group’ (i.e., their racial group, or a group they have been arbitrarily assigned to by an experimenter) (Golby, Gabrieli, Chiao, & Eberhardt, 2001; Van Bavel, Packer, & Cunningham, 2008). Neural activity in the fusiform face area during processing of ‘in-group’ faces is functionally distinct from the pattern of activity in lower-level processing regions of the primary visual cortex (Van Bavel, Packer, & Cunningham, 2011). Consistent with the effects of ‘in-group’ categorisation influencing visual processing from fairly early stages, the N170 component of the event-related potential (i.e., the earliest component reflecting perceptual processing of a face (Rousselet, Husk, Bennett, & Sekuler, 2008)) is larger for ‘in-group’ faces (Ratner & Amodio, 2013). Together, these findings support the view that social category cues modulate early visual processing of faces, and that the influence of these top-down effects is particularly evident in the fusiform gyrus of the ventral visual stream (Amodio, 2014). Importantly, these neural sites supporting top-down effects in facial processing overlap with those identified for object recognition and contextual processing.

It is perhaps not surprising, then, that social cues such as gender, culture, race and eye gaze have all been shown to influence the perception of, and neural responsiveness to, facial displays of threat (Adams, Gordon, Baird, Ambady, & Kleck, 2003; Adams, Hess, & Kleck, 2015; Adams & Kleck, 2005). Consistent with the notion that the visual system might be penetrable by a range of information processing streams, such effects have been found to be modulated by individual differences in trait anxiety and progesterone levels; this is presumably linked to threat vigilance, as demonstrated both in perceptual tasks (e.g., Conway et al., 2007; Fox, Mathews, Calder, & Yiend, 2007) and in amygdala responsiveness to threat cues (e.g., Ewbank, Fox, & Calder, 2010).

Other social factors that are claimed to influence ongoing visual perception, such as morality and social norms, remain to be validated against neuroimaging and electrophysiological markers to substantiate that they do exert top-down effects on visual processing (Gantman & Van Bavel, 2015). In time, social vision may emerge as a particularly salient example of cognitive penetration of visual processing, as social interactions are amongst the most information-rich human activities. Drawing from a wealth of cognitive resources to help interpret social situations would be an effective way of generating top-down predictions and lowering global prediction error. This is likely to be particularly relevant in social vision, where we are required to interpret complex, and often very subtle, social cues.

Emotional penetration

Until quite recently, emotion was thought to be independent of perception and cognition (and was usually studied separately) – something that came after the stimulus was recognised. Rather than being separate or competing processes, however, emotion and perception interact to maximise the probability of correctly identifying a stimulus. In this way, a person’s affective state is presumed to be a source of top-down penetration of visual perception (Barrett & Bar, 2009). Effects of valence during visual perception are detectable at a reasonably early time scale, with the earliest ERP modulations as a function of valence emerging at 120–160 ms (Carretié, Hinojosa, Martín-Loeches, Mercado, & Tapia, 2004; Olofsson, Nordin, Sequeira, & Polich, 2008; Schönwald & Müller, 2014; Smith, Cacioppo, Larsen, & Chartrand, 2003). Rapid extraction of affective information about a visual stimulus may be highly adaptive, signalling whether an object should be approached or avoided (Barrett & Bliss-Moreau, 2009). Multiple neural routes exist whereby the affective value of a stimulus can be rapidly processed, including the magnocellular system and projections to the orbitofrontal cortex (Pessoa & Adolphs, 2010).

Supporting the integration of affective state and conscious visual awareness, in a binocular rivalry paradigm pairing faces against houses, unseen faces with an affective value (i.e., scowling or smiling) modulated perception of neutral faces, such that they were imbued with affective value (Anderson, Siegel, White, & Barrett, 2012). The temporal course of visual information entering conscious awareness is also modulated by positive versus negative mood induction (Kuhbandner et al., 2009). These effects are apparent at the neural level, where negative mood induction produced lower recognition reaction times and decreased the latency of responses in a number of visual and affective brain regions, from the primary visual cortex, to fusiform, medial temporal and insular cortex, to the orbitofrontal cortex (Panichello, Kveraga, Chaumon, Bar, & Barrett, under revision). These effects of emotional valence became apparent at around 80–90 ms in V1 and LOC, suggesting at least some early-stage penetrability of early and mid-level visual regions (Panichello et al., under revision). Evidence from fMRI suggests that the orbitofrontal cortex is a site for triggering top-down predictions about the affective value of visual stimuli, as activity in overlapping regions in the medial orbitofrontal cortex indexes both an object’s associative strength and whether it has positive or negative valence (Shenhav et al., 2013). The amygdala may provide a critical source of input for orbitofrontal cortex computations about the affective value of stimuli, given the amygdala’s connectivity with the orbitofrontal cortex, its position as a target of visual pathways and its role in encoding emotionally salient information (Pessoa & Adolphs, 2010; Roy, Shohamy, & Wager, 2012). Consistent with the ERP time scales cited above for the time course of valence effects in human vision, response latencies in cellular recordings from the monkey amygdala to visual stimuli range from 100–200 ms (Gothard, Battaglia, Erickson, Spitler, & Amaral, 2007; Leonard, Rolls, Wilson, & Baylis, 1985; Nakamura, Mikami, & Kubota, 1992), with differential effects depending on valence detectable in the range of 120–250 ms (Gothard et al., 2007).

Affective predictions play an important role not just in facilitating visual recognition of familiar faces and objects, but also in quickly identifying danger. Viewing threatening visual stimuli is accompanied by increased activity in regions implicated in threat detection and in initiating “fight-or-flight” responses in the sympathetic system, such as the amygdala, periaqueductal grey, and the orbitofrontal cortex (Kveraga et al., 2015). However, activity in these regions, as well as behavioural threat ratings, was found to depend on the context of the threat: images of someone handling a gun at a shooting range versus being robbed at gunpoint evoked vastly different response patterns. Threatening objects or animals oriented towards the observer heightened the personal feeling of danger compared to when they were oriented towards someone else. Moreover, in a threat detection task, context influenced how quickly threatening objects or animals were recognised: they were recognised most quickly in direct-threat contexts (Kveraga et al., 2015, Supp. Mat.). These findings suggest that affective information in threatening situations is integrated with visual processing, modulating both how objects are interpreted and how quickly they are recognised.

Another avenue supporting the penetration of affective information into ongoing visual processing is derived from neuropsychiatric syndromes in which visual recognition is impaired in the context of reduced emotional certainty about a decision. Patients with Capgras syndrome have apparently intact visual recognition abilities (though their recognition latencies are unknown), but insist their family members and friends (and sometimes, familiar objects) have been replaced by impostors or “doubles” (Anderson, 1988; Ellis & Lewis, 2001; Ellis, Young, Quayle, & De Pauw, 1997). Capgras syndrome can emerge in the context of varied disorders and its neural substrate is not precisely defined (Thiel, Studte, Hildebrandt, Huster, & Weerda, 2014). A consistent finding in these patients, however, is the absence of an increased skin conductance response when viewing familiar faces, indicating no covert recognition by their autonomic system (Ellis et al., 1997). This highlights an integrative role for affective responses in visual processing, facilitating complete recognition – that is, not merely recognising a physical likeness, but recognising a person or object as one that is known and familiar.

Visual hallucinations

It becomes clear from the preceding sections that, given the multiple avenues of top-down penetration into visual processing, accurate perception requires striking a balance between the weighting assigned to our predictions and that assigned to incoming sensory input. Visual hallucinations illustrate the consequences of an imbalance in this system. To be adaptive, the relative influence of top-down predictions versus bottom-up sensory input must be flexible and context-sensitive (Lupyan & Clark, 2015). For example, when navigating our homes in darkness we rely heavily on a stored model of the topography and less on sensory information, but in a novel, challenging environment – say, navigating precipitous terrain on a bushwalk – reliance on sensory information is paramount, though not exclusive. Predictive coding frameworks assign precision weightings that signal how much confidence should be placed in prediction errors; in effect, this allows information from the most relevant channels to be weighted more heavily (Friston et al., 2014). An emerging theme in neuropsychiatry is that imbalances in this system cause undue reliance on either top-down or incoming sensory information, contributing to a range of symptoms (Corlett, Honey, Krystal, & Fletcher, 2011; Fletcher & Frith, 2009; Stephan et al., 2015; Teufel et al., 2015). Hallucinations are linked to a relative increase in the weighting of prior predictions (Adams, Stephan, Brown, Frith, & Friston, 2013; Friston, 2005a), such that top-down influences tend to dominate visual processing and erroneous perceptions prevail in the face of contradictory sensory evidence. The predictive coding framework provides a mechanistic account of how aberrations in the visual processing hierarchy result in hallucinations. However, considering the nature of cognitive penetration in visual perception gives a much richer picture of the qualitative aspects of visual hallucinations and clues to their underlying anatomy.
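
The idea of precision weighting can be made concrete with a standard Gaussian formulation of how a prior prediction and a sensory sample are combined; the expression below is a generic textbook illustration of the principle, not an equation taken from the cited studies.

```latex
% Precision-weighted combination of a top-down prediction and sensory input.
% \mu_p, \pi_p : mean and precision (inverse variance) of the prior prediction;
% x, \pi_s     : the sensory sample and its precision.
\[
  \hat{\mu} \;=\; \frac{\pi_p \mu_p + \pi_s x}{\pi_p + \pi_s}
            \;=\; \mu_p + \frac{\pi_s}{\pi_p + \pi_s}\,(x - \mu_p)
\]
% The percept \hat{\mu} moves toward the sensory sample only in proportion to
% the relative precision of the senses. Inflating \pi_p (or down-weighting
% \pi_s) shrinks the gain on the prediction error (x - \mu_p), so the prior
% prediction dominates; this is the imbalance proposed to favour
% hallucinatory percepts.
```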

Phenomenological enquiry in conditions that manifest visual hallucinations, including schizophrenia, psychosis, Parkinson’s disease and dementia with Lewy bodies, suggests that the content of hallucinatory percepts can be furnished by contextual associations drawn from the environment or by autobiographical memories. For example, identifiable faces or objects can be construed from ambiguous scenery, as is the case with pareidolic hallucinations (Uchiyama et al., 2012), and hallucinations of familiar people or pets are commonly reported (Barnes & David, 2001). Furthermore, the frequency and severity of visual hallucinations have been linked to mood and physiological states (e.g., stress, depression and fatigue), with mood also playing an important role in determining the content of hallucinations (e.g., when the image of a deceased spouse is perceived during a period of bereavement) (Waters et al., 2014). Together, the influence of contextual associations, autobiographical memories and emotion on the hallucinatory percept dovetails with the above sections to show that cognitive penetration of visual processing accommodates diverse information streams.

Convergent evidence across disorders suggests that abnormal activity in those regions supplying top-down influences during normal visual processing, identified in the previous sections to include medial temporal and prefrontal sites, is instrumental in generating hallucinatory visual percepts (Shine, O’Callaghan, Halliday, & Lewis, 2014). Hippocampal and parahippocampal regions of the medial temporal lobe, combined with midline posterior cingulate and medial frontal sites, comprise the default network (Andrews-Hanna, Reidler, Sepulcre, Poulin, & Buckner, 2010). This functionally defined network sits at the nexus of the dorsal and ventral attentional networks responsible for top-down direction of attention and orientation to salient events, respectively (Corbetta & Shulman, 2002; Fox et al., 2005). Across disorders with visual hallucinations, increased instability, or elevated activity and connectivity, within the default network has been identified (Franciotti et al., 2015; Jardri, Thomas, Delmaire, Delion, & Pins, 2013; Shine et al., 2015a; Yao et al., 2014). More broadly, this is in keeping with observations that an over-active or poorly suppressed default network may underlie many positive and ruminative symptoms in neuropsychiatric disorders (Anticevic et al., 2012; Whitfield-Gabrieli & Ford, 2012). Collectively, the default network is associated with memory, self-referential processing, construction of mental imagery and creative cognition, mind wandering and emotional processing (Beaty, Benedek, Silvia, & Schacter, 2015; O’Callaghan, Shine, Lewis, Andrews-Hanna, & Irish, 2015).

As outlined above, key nodes in the default network, including the medial prefrontal cortex and parahippocampal cortex, are sources of the top-down input that directly influences ongoing visual perception. It follows that abnormal activity in these input zones could conceivably produce the combination of visual, autobiographical, and emotive qualities that characterise complex visual hallucinations (O’Callaghan, Muller, & Shine, 2014). Importantly, the emergence of hallucinatory phenomena with abnormal activity in these sites has been verified independently in direct electrical stimulation studies conducted during neurosurgery. Stimulation of medial temporal areas, including parahippocampal regions, as well as of the prefrontal cortex, is capable of generating complex visual hallucinations (Blanke, Landis, & Seeck, 2000; Mégevand et al., 2014; Selimbeyoglu & Parvizi, 2010). Aside from abnormal activity in default network regions emerging as a neural signature, patients with visual hallucinations due to Parkinson’s disease and schizophrenia exhibit higher functional and structural connectivity from the visual cortex to medial temporal and medial prefrontal sites (Amad et al., 2014; Ford et al., 2014; Yao et al., 2015) and increased coupling between the visual cortex and the default network (Shine et al., 2015b). Hyperconnectivity of these pathways suggests a possible route whereby top-down recurrent feedback may be overactive, dominating early visual regions in people with chronic visual hallucinations.

Examining cognitive penetration in visual hallucinations at the macro scale of brain regions and large-scale neural networks gives important clues to the brain systems involved and explains the convergence of sensory, memory and emotion-based content that comprises hallucinations. More research is needed to marry this with neural function at the micro scale associated with predictive coding and precision weighting. Current evidence suggests that pyramidal cells are critically involved in comparing prediction error against top-down expectations at each level of the processing hierarchy, but neuromodulatory control flexibly influences the weighting (or precision) of these comparisons (Friston et al., 2014). Therefore, ‘leaky’ systems that result in improper weighting of incoming sensory versus top-down input could arise from neuropathology at different levels of the cortical hierarchy or from imbalances in neuromodulatory systems. Emerging evidence from schizophrenia has begun to relate predictive coding disruptions to neural changes at high levels in the processing hierarchy, including the prefrontal cortex and medial temporal lobe, as well as to disruption of key neuromodulators (glutamate, GABA and dopamine) (Adams et al., 2013). A more complete understanding of the specific predictive coding deficits in visual hallucinations will help explain what causes the system to be so vulnerable to cognitive penetration of the visual perception process.

The timescale of top-down penetration in vision

Much of the empirical evidence we have discussed here has not directly addressed the timescale of cognitive penetration in vision, and where the timing of effects has been reported it falls in the vicinity of 120–200 ms post stimulus onset. Indeed, it has been argued that the first 100 ms of visual processing are cognitively impenetrable (Raftopoulos, 2009, 2014), and there is certainly scarcer evidence to support penetration of this very early processing stage. However, some investigations have highlighted that the C1 component (originating in V1, with onset around 50 ms post-stimulus and peak latencies before 100 ms) is penetrable by top-down influence (Rauss, Schwartz, & Pourtois, 2011). For example, C1 amplitude is increased when subjects actively attend to a location (Kelly, Gomez-Ramirez, & Foxe, 2008), and MEG with source localisation shows that modulation of V1 as a function of attentional engagement is evident within the first 100 ms post-stimulus (although not the first 50 ms) (Poghosyan & Ioannides, 2008; Poghosyan, Shibata, & Ioannides, 2005). Aside from attentional engagement, modulation of the C1 is also found in response to attentional load (Rauss, Pourtois, Vuilleumier, & Schwartz, 2012; Rauss, Pourtois, Vuilleumier, & Schwartz, 2009). These human findings are supported and extended by cell recordings in monkeys showing that effects of attention (Ito & Gilbert, 1999) and task context (Li, Piëch, & Gilbert, 2004) are evident in V1 from the very onset of the neural response. Such findings raise the possibility that V1 is not impenetrable, but rather that its cells are dynamic, that their functional properties and ensemble organisation are influenced by top-down effects (Gilbert & Li, 2013; Ramalingam, McManus, Li, & Gilbert, 2013), and that these effects are evident from the earliest stages.

It is important to note, however, that the detection of these very early top-down effects is much less robust than that of top-down effects detected after 100 ms (Rauss et al., 2011). Furthermore, support for very early top-down influences comes mostly from studies in which attention is found to have modulatory effects. Attention is, arguably, not necessarily an example of cognitive penetration, as its effect on perception may be limited to selecting the visual input we receive, rather than directly modulating how the visual processing stream operates (Firestone & Scholl, in press). Because attention could represent a pre-perceptual process, more thorough investigation of other cognitive processes is warranted to help resolve the current debate about cognitive penetrability in very early vision. Given the inconsistencies in detecting effects at this time-scale (Rauss et al., 2011), there may also be methodological challenges to be addressed. One possibility is that the effects of very early top-down modulation on V1 are mostly evident in the subtle tuning of cells’ functional properties (Gilbert & Li, 2013; Nienborg & Roelfsema, 2015), as opposed to the net increases in neural/BOLD activity that are documented in higher cortical regions to indicate the origins of top-down effects. Careful consideration of how to detect these potentially subtler effects in human studies is needed. However, one could speculate that, given the initial evidence and the fact that the neural architecture is in place to support penetration even at these earliest levels, additional evidence of cognitive penetration at this earliest stage may be uncovered.

Summary and outlook

The existence of top-down predictions in visual processing has received substantial empirical support, and fits with the broader notion of prediction as a framework for explaining much of the information processing in the brain. However, the penetration of other cognitive and behavioural processes into ongoing visual perception is often contested. Because the basis of top-down predictions is to utilise past experience to streamline visual processing, it follows that information-rich top-down signals will be best suited to forming accurate predictions. Cognitive penetration thus enables diverse sources of information to contribute to the formulation of the predictions that influence ongoing visual perception.

We have described examples at the behavioural level demonstrating that affective states and cognitive biases can alter visual perceptual judgements. At a mechanistic level, a prominent role for the orbitofrontal cortex emerges as a site for early generation of top-down predictions, alongside higher-level visual regions in the lateral temporal cortex, and subcortical memory and limbic structures (hippocampus and amygdala). Much of the work to date characterising orbitofrontal function has focused on value-based decision making. An important goal for future research in visual processing will be extending what is currently known about orbitofrontal computational mechanisms (signalling and evaluating potential outcomes or choices) to prediction in vision. The involvement of the orbitofrontal cortex in early top-down predictions for vision may ultimately be a similar computational process of signalling potential interpretations of a visual stimulus and selecting the most tenable. However, accommodating insights drawn from reinforcement theories of choice and behaviour into a predictive processing framework remains an ongoing challenge (Pezzulo, Rigoli, & Friston, 2015).

Going forward, the work reviewed here highlights the continued need to validate claims of cognitive penetration of vision against neural markers. This is particularly relevant for behavioural experiments in social and emotional vision, where methodological pitfalls have called into question whether these experiments reveal true top-down effects (Firestone & Scholl, 2014, in press). An obvious extension would be to combine these behavioural paradigms with temporally sensitive brain imaging and recording techniques, to establish neural evidence of top-down effects. The ongoing debate regarding the time course of cognitive penetration in vision emphasises a need to combine both spatial (e.g., fMRI) and temporal (e.g., EEG, MEG) information sources. Furthermore, work in non-human primates can make valuable contributions to this debate; however, future studies using simultaneous recordings in both higher-level regions and V1 sites are needed to assess both the origins and targets of top-down effects (Nienborg & Roelfsema, 2015).

The broad range of cognitive and affective information streams that converge on visual perception emphasises its highly dynamic and adaptive nature. Neuropsychiatric conditions, however, confirm that this system relies on a finely tuned balance between top-down predictions and sensory input, where imbalances can have significant consequences. The various lines of evidence discussed here suggest that predictions in visual perception are intimately tied to a process of cognitive penetrability, as the very formation of those predictions is based not only on features intrinsic to a visual stimulus, but also on prior knowledge and current affective state. Going forward, prediction and cognitive penetration should be considered complementary processes, and mechanistic frameworks to describe them must account for the vast range of information that is encapsulated in top-down predictions for vision.

Acknowledgments

We would like to thank Ishan Walpola for his helpful comments on the manuscript. CO is supported by a National Health and Medical Research Council Neil Hamilton Fairley Fellowship (GNT1091310); KK is supported by grants R01 MH101194 and BRAINS R01 MH107797; JMS is supported by a National Health and Medical Research Council CJ Martin Fellowship (GNT1072403); RBA is supported by grant R01 MH101194; and MB is supported by The Israeli Center of Research Excellence in Cognition, grant 51/11.

References

  1. Adams RA, Stephan KE, Brown HR, Frith CD, Friston KJ. The Computational Anatomy of Psychosis. Frontiers in Psychiatry. 2013;4:47. doi: 10.3389/fpsyt.2013.00047. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Adams RB, Gordon HL, Baird AA, Ambady N, Kleck RE. Effects of gaze on amygdala sensitivity to anger and fear faces. Science. 2003;300(5625):1536–1536. doi: 10.1126/science.1082244. [DOI] [PubMed] [Google Scholar]
  3. Adams RB, Hess U, Kleck RE. The intersection of gender-related facial appearance and facial displays of emotion. Emotion Review. 2015;7(1):5–13. [Google Scholar]
  4. Adams RB, Kleck RE. Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion. 2005;5(1):3. doi: 10.1037/1528-3542.5.1.3. [DOI] [PubMed] [Google Scholar]
  5. Alter AL, Balcetis E. Fondness makes the distance grow shorter: Desired locations seem closer because they seem more vivid. Journal of Experimental Social Psychology. 2011;47(1):16–21. [Google Scholar]
  6. Amad A, Cachia A, Gorwood P, Pins D, Delmaire C, Rolland B, Mondino M, Thomas P, Jardri R. The multimodal connectivity of the hippocampal complex in auditory and visual hallucinations. Molecular Psychiatry. 2014;19(2):184–191. doi: 10.1038/mp.2012.181. [DOI] [PubMed] [Google Scholar]
  7. Aminoff EM, Gronau N, Bar M. The parahippocampal cortex mediates spatial and nonspatial associations. Cerebral Cortex. 2007;17(7):1493–1503. doi: 10.1093/cercor/bhl078. [DOI] [PubMed] [Google Scholar]
  8. Aminoff EM, Kveraga K, Bar M. The role of the parahippocampal cortex in cognition. Trends in Cognitive Sciences. 2013;17(8):379–390. doi: 10.1016/j.tics.2013.06.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Aminoff EM, Tarr MJ. Associative Processing Is Inherent in Scene Perception. PLoS ONE. 2015;10(6):e0128840. doi: 10.1371/journal.pone.0128840. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Amodio DM. The neuroscience of prejudice and stereotyping. Nat Rev Neurosci. 2014;15(10):670–682. doi: 10.1038/nrn3800. [DOI] [PubMed] [Google Scholar]
  11. Anderson D. The delusion of inanimate doubles. Implications for understanding the Capgras phenomenon. The British Journal of Psychiatry. 1988;153(5):694–699. doi: 10.1192/bjp.153.5.694. [DOI] [PubMed] [Google Scholar]
  12. Anderson E, Siegel E, White D, Barrett LF. Out of sight but not out of mind: unseen affective faces influence evaluations and social impressions. Emotion. 2012;12(6):1210. doi: 10.1037/a0027514. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Andrews-Hanna JR, Reidler JS, Sepulcre J, Poulin R, Buckner RL. Functional-Anatomic Fractionation of the Brain’s Default Network. Neuron. 2010;65(4):550–562. doi: 10.1016/j.neuron.2010.02.005. doi: http://dx.doi.org/10.1016/j.neuron.2010.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Angelucci A, Levitt JB, Walton EJ, Hupe JM, Bullier J, Lund JS. Circuits for local and global signal integration in primary visual cortex. The Journal of Neuroscience. 2002;22(19):8633–8646. doi: 10.1523/JNEUROSCI.22-19-08633.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Anticevic A, Cole MW, Murray JD, Corlett PR, Wang XJ, Krystal JH. The role of default network deactivation in cognition and disease. Trends in Cognitive Sciences. 2012;16(12):584–592. doi: 10.1016/j.tics.2012.10.008. doi: http://dx.doi.org/10.1016/j.tics.2012.10.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Balcetis E, Dunning D. See what you want to see: motivational influences on visual perception. Journal of personality and social psychology. 2006;91(4):612. doi: 10.1037/0022-3514.91.4.612. [DOI] [PubMed] [Google Scholar]
  17. Banerjee P, Chatterjee P, Sinha J. Is it light or dark? Recalling moral behavior changes perception of brightness. Psychological Science. 2012 doi: 10.1177/0956797611432497. 0956797611432497. [DOI] [PubMed] [Google Scholar]
  18. Bar M. A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience. 2003;15(4):600–609. doi: 10.1162/089892903321662976. [DOI] [PubMed] [Google Scholar]
  19. Bar M. Visual objects in context. Nature Reviews Neuroscience. 2004;5(8):617–629. doi: 10.1038/nrn1476. [DOI] [PubMed] [Google Scholar]
  20. Bar M, Aminoff EM. Cortical analysis of visual context. Neuron. 2003;38(2):347–358. doi: 10.1016/s0896-6273(03)00167-3. [DOI] [PubMed] [Google Scholar]
  21. Bar M, Kassam KS, Ghuman AS, Boshyan J, Schmid AM, Dale AM, Hämäläinen M, Marinkovic K, Schacter D, Rosen B. Top-down facilitation of visual recognition. Proceedings of the National Academy of Sciences of the United States of America. 2006;103(2):449–454. doi: 10.1073/pnas.0507062103. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Bar M, Tootell RB, Schacter DL, Greve DN, Fischl B, Mendola JD, Rosen BR, Dale AM. Cortical mechanisms specific to explicit visual object recognition. Neuron. 2001;29(2):529–535. doi: 10.1016/s0896-6273(01)00224-0. [DOI] [PubMed] [Google Scholar]
  23. Bar M, Ullman S. Spatial context in recognition. Perception. 1996;25(3):343–352. doi: 10.1068/p250343. [DOI] [PubMed] [Google Scholar]
  24. Barbas H. Anatomic organization of basoventral and mediodorsal visual recipient prefrontal regions in the rhesus monkey. Journal of Comparative Neurology. 1988;276(3):313–342. doi: 10.1002/cne.902760302. [DOI] [PubMed] [Google Scholar]
  25. Barbas H. Anatomic basis of cognitive-emotional interactions in the primate prefrontal cortex. Neuroscience & Biobehavioral Reviews. 1995;19(3):499–510. doi: 10.1016/0149-7634(94)00053-4. [DOI] [PubMed] [Google Scholar]
  26. Barnes J, David AS. Visual hallucinations in Parkinson’s disease: a review and phenomenological survey. Journal of Neurology, Neurosurgery & Psychiatry. 2001;70(6):727–733. doi: 10.1136/jnnp.70.6.727. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Barrett LF, Bar M. See it with feeling: affective predictions during object perception. Philosophical Transactions of the Royal Society B: Biological Sciences. 2009;364(1521):1325–1334. doi: 10.1098/rstb.2008.0312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Barrett LF, Bliss-Moreau E. Affect as a psychological primitive. Advances in experimental social psychology. 2009;41:167–218. doi: 10.1016/S0065-2601(08)00404-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Bartz JA, Zaki J, Bolger N, Ochsner KN. Social effects of oxytocin in humans: context and person matter. Trends in Cognitive Sciences. 2011;15(7):301–309. doi: 10.1016/j.tics.2011.05.002. [DOI] [PubMed] [Google Scholar]
  30. Beaty RE, Benedek M, Silvia PJ, Schacter DL. Creative Cognition and Brain Network Dynamics. Trends in Cognitive Sciences. 2015 doi: 10.1016/j.tics.2015.10.004. doi: http://dx.doi.org/10.1016/j.tics.2015.10.004. [DOI] [PMC free article] [PubMed]
  31. Bhalla M, Proffitt DR. Visual–motor recalibration in geographical slant perception. Journal of Experimental Psychology: Human Perception and Performance. 1999;25(4):1076. doi: 10.1037//0096-1523.25.4.1076. [DOI] [PubMed] [Google Scholar]
  32. Blanke O, Landis T, Seeck M. Electrical Cortical Stimulation of the Human Prefrontal Cortex Evokes Complex Visual Hallucinations. Epilepsy and Behavior. 2000;1(5):356–361. doi: 10.1006/ebeh.2000.0109. doi: http://dx.doi.org/10.1006/ebeh.2000.0109. [DOI] [PubMed] [Google Scholar]
  33. Bridge H, Leopold DA, Bourne JA. Adaptive Pulvinar Circuitry Supports Visual Cognition. Trends in Cognitive Sciences. 2015 doi: 10.1016/j.tics.2015.10.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Bruner JS. On perceptual readiness. Psychological Review. 1957;64(2):123. doi: 10.1037/h0043805. [DOI] [PubMed] [Google Scholar]
  35. Bruner JS, Goodman CC. Value and need as organizing factors in perception. The journal of abnormal and social psychology. 1947;42(1):33. doi: 10.1037/h0058484. [DOI] [PubMed] [Google Scholar]
  36. Bullier J. Integrated model of visual processing. Brain Research Reviews. 2001;36(2):96–107. doi: 10.1016/s0165-0173(01)00085-6. [DOI] [PubMed] [Google Scholar]
  37. Burgess N, Maguire EA, O’Keefe J. The human hippocampus and spatial and episodic memory. Neuron. 2002;35(4):625–641. doi: 10.1016/s0896-6273(02)00830-9. [DOI] [PubMed] [Google Scholar]
  38. Carmichael S, Price JL. Sensory and premotor connections of the orbital and medial prefrontal cortex of macaque monkeys. Journal of Comparative Neurology. 1995;363(4):642–664. doi: 10.1002/cne.903630409. [DOI] [PubMed] [Google Scholar]
  39. Carretié L, Hinojosa JA, Martín-Loeches M, Mercado F, Tapia M. Automatic attention to emotional stimuli: neural correlates. Human Brain Mapping. 2004;22(4):290–299. doi: 10.1002/hbm.20037. [DOI] [PMC free article] [PubMed] [Google Scholar]
40. Chaumon M, Kveraga K, Barrett LF, Bar M. Visual predictions in the orbitofrontal cortex rely on associative content. Cerebral Cortex. 2014;24(11):2899–2907. doi: 10.1093/cercor/bht146.
41. Clark A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences. 2013;36(3):181–204. doi: 10.1017/S0140525X12000477.
42. Clarke A, Taylor KI, Tyler LK. The evolution of meaning: spatio-temporal dynamics of visual object recognition. Journal of Cognitive Neuroscience. 2011;23(8):1887–1899. doi: 10.1162/jocn.2010.21544.
43. Conway C, Jones B, DeBruine L, Welling L, Smith ML, Perrett D, Sharp MA, Al-Dujaili EA. Salience of emotional displays of danger and contagion in faces is enhanced when progesterone levels are raised. Hormones and Behavior. 2007;51(2):202–206. doi: 10.1016/j.yhbeh.2006.10.002.
44. Corbetta M, Shulman GL. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience. 2002;3(3):201–215. doi: 10.1038/nrn755.
45. Corlett PR, Honey GD, Krystal JH, Fletcher PC. Glutamatergic Model Psychoses: Prediction Error, Learning, and Inference. Neuropsychopharmacology. 2011;36(1):294–315. doi: 10.1038/npp.2010.163.
46. Correll J, Wittenbrink B, Crawford MT, Sadler MS. Stereotypic vision: How stereotypes disambiguate visual stimuli. Journal of Personality and Social Psychology. 2015;108(2):219. doi: 10.1037/pspa0000015.
47. Dayan P, Hinton GE, Neal RM, Zemel RS. The Helmholtz machine. Neural Computation. 1995;7(5):889–904. doi: 10.1162/neco.1995.7.5.889.
48. Den Daas C, Häfner M, de Wit J. Sizing Opportunity: Biases in Estimates of Goal-Relevant Objects Depend on Goal Congruence. Social Psychological and Personality Science. 2013;4(3):362–368.
49. Ellis HD, Lewis MB. Capgras delusion: a window on face recognition. Trends in Cognitive Sciences. 2001;5(4):149–156. doi: 10.1016/s1364-6613(00)01620-x.
50. Ellis HD, Young AW, Quayle AH, De Pauw KW. Reduced autonomic responses to faces in Capgras delusion. Proceedings of the Royal Society of London B: Biological Sciences. 1997;264(1384):1085–1092. doi: 10.1098/rspb.1997.0150.
51. Engel AK, Fries P, Singer W. Dynamic predictions: oscillations and synchrony in top–down processing. Nature Reviews Neuroscience. 2001;2(10):704–716. doi: 10.1038/35094565.
52. Enns JT, Lleras A. What's next? New evidence for prediction in human vision. Trends in Cognitive Sciences. 2008;12(9):327–333. doi: 10.1016/j.tics.2008.06.001.
53. Epstein R, Harris A, Stanley D, Kanwisher N. The parahippocampal place area: Recognition, navigation, or encoding? Neuron. 1999;23(1):115–125. doi: 10.1016/s0896-6273(00)80758-8.
54. Ewbank MP, Fox E, Calder AJ. The interaction between gaze and facial expression in the amygdala and extended amygdala is modulated by anxiety. Frontiers in Human Neuroscience. 2010;4. doi: 10.3389/fnhum.2010.00056.
55. Felleman DJ, Van Essen DC. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex. 1991;1(1):1–47. doi: 10.1093/cercor/1.1.1-a.
56. Firestone C, Scholl BJ. "Top-down" effects where none should be found: The El Greco fallacy in perception research. Psychological Science. 2014;25(1):38–46. doi: 10.1177/0956797613485092.
57. Firestone C, Scholl BJ. Can you experience 'top-down' effects on perception? The case of race categories and perceived lightness. Psychonomic Bulletin & Review. 2015;22(3):694–700. doi: 10.3758/s13423-014-0711-5.
58. Firestone C, Scholl BJ. Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and Brain Sciences. 1–72. doi: 10.1017/S0140525X15000965. (in press)
59. Fletcher PC, Frith CD. Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews Neuroscience. 2009;10(1):48–58. doi: 10.1038/nrn2536.
60. Ford JM, Palzes VA, Roach BJ, Potkin SG, van Erp TG, Turner JA, Mueller BA, Calhoun VD, Voyvodic J, Belger A. Visual Hallucinations Are Associated With Hyperconnectivity Between the Amygdala and Visual Cortex in People With a Diagnosis of Schizophrenia. Schizophrenia Bulletin. 2014:sbu031. doi: 10.1093/schbul/sbu031.
61. Fox E, Mathews A, Calder AJ, Yiend J. Anxiety and sensitivity to gaze direction in emotionally expressive faces. Emotion. 2007;7(3):478. doi: 10.1037/1528-3542.7.3.478.
62. Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, Raichle ME. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(27):9673–9678. doi: 10.1073/pnas.0504136102.
63. Franciotti R, Delli Pizzi S, Perfetti B, Tartaro A, Bonanni L, Thomas A, Weis L, Biundo R, Antonini A, Onofrj M. Default mode network links to visual hallucinations: A comparison between Parkinson's disease and multiple system atrophy. Movement Disorders. 2015;30(9):1237–1247. doi: 10.1002/mds.26285.
64. Freeman JB, Rule NO, Adams RB, Ambady N. The neural basis of categorical face perception: graded representations of face gender in fusiform and orbitofrontal cortices. Cerebral Cortex. 2010;20(6):1314–1322. doi: 10.1093/cercor/bhp195.
65. Friston K. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience. 2010;11(2):127–138. doi: 10.1038/nrn2787.
66. Friston KJ. Hallucinations and perceptual inference. Behavioral and Brain Sciences. 2005a;28(6):764–766.
67. Friston KJ. A theory of cortical responses. Philosophical Transactions of the Royal Society B: Biological Sciences. 2005b;360(1456):815–836. doi: 10.1098/rstb.2005.1622.
68. Friston KJ, Stephan KE, Montague R, Dolan RJ. Computational psychiatry: the brain as a phantastic organ. The Lancet Psychiatry. 2014;1(2):148–158. doi: 10.1016/S2215-0366(14)70275-5.
69. Gantman AP, Van Bavel JJ. Moral Perception. Trends in Cognitive Sciences. 2015;19(11):631–633. doi: 10.1016/j.tics.2015.08.004.
70. Gilbert CD, Li W. Top-down influences on visual processing. Nature Reviews Neuroscience. 2013;14(5):350–363. doi: 10.1038/nrn3476.
71. Golby AJ, Gabrieli JD, Chiao JY, Eberhardt JL. Differential responses in the fusiform region to same-race and other-race faces. Nature Neuroscience. 2001;4(8):845–850. doi: 10.1038/90565.
72. Goodale MA, Milner AD. Separate visual pathways for perception and action. Trends in Neurosciences. 1992;15(1):20–25. doi: 10.1016/0166-2236(92)90344-8.
73. Gothard KM, Battaglia FP, Erickson CA, Spitler KM, Amaral DG. Neural responses to facial expression and face identity in the monkey amygdala. Journal of Neurophysiology. 2007;97(2):1671–1683. doi: 10.1152/jn.00714.2006.
74. Guillery RW, Sherman SM. Thalamic Relay Functions and Their Role in Corticocortical Communication: Generalizations from the Visual System. Neuron. 2002;33(2):163–175. doi: 10.1016/s0896-6273(01)00582-7.
75. Harel A, Kravitz DJ, Baker CI. Task context impacts visual object processing differentially across the cortex. Proceedings of the National Academy of Sciences. 2014;111(10):E962–E971. doi: 10.1073/pnas.1312567111.
76. Horr NK, Braun C, Volz KG. Feeling before knowing why: The role of the orbitofrontal cortex in intuitive judgments—an MEG study. Cognitive, Affective, & Behavioral Neuroscience. 2014;14(4):1271–1285. doi: 10.3758/s13415-014-0286-7.
77. Ito M, Gilbert CD. Attention Modulates Contextual Influences in the Primary Visual Cortex of Alert Monkeys. Neuron. 1999;22(3):593–604. doi: 10.1016/s0896-6273(00)80713-8.
78. Jardri R, Thomas P, Delmaire C, Delion P, Pins D. The Neurodynamic Organization of Modality-Dependent Hallucinations. Cerebral Cortex. 2013;23(5):1108–1117. doi: 10.1093/cercor/bhs082.
79. Kelly SP, Gomez-Ramirez M, Foxe JJ. Spatial attention modulates initial afferent activity in human primary visual cortex. Cerebral Cortex. 2008;18(11):2629–2636. doi: 10.1093/cercor/bhn022.
80. Kok P, Jehee JFM, de Lange FP. Less Is More: Expectation Sharpens Representations in the Primary Visual Cortex. Neuron. 2012;75(2):265–270. doi: 10.1016/j.neuron.2012.04.034.
81. Koob GF, Volkow ND. Neurocircuitry of addiction. Neuropsychopharmacology. 2010;35(1):217–238. doi: 10.1038/npp.2009.110.
82. Kringelbach ML, Rolls ET. The functional neuroanatomy of the human orbitofrontal cortex: evidence from neuroimaging and neuropsychology. Progress in Neurobiology. 2004;72:341–372. doi: 10.1016/j.pneurobio.2004.03.006.
83. Kuhbandner C, Hanslmayr S, Maier MA, Pekrun R, Spitzer B, Pastötter B, Bäuml KH. Effects of mood on the speed of conscious perception: behavioural and electrophysiological evidence. Social Cognitive and Affective Neuroscience. 2009;4(3):286–293. doi: 10.1093/scan/nsp010.
84. Kveraga K, Boshyan J, Adams RB, Mote J, Betz N, Ward N, Hadjikhani N, Bar M, Barrett LF. If it bleeds, it leads: separating threat from mere negativity. Social Cognitive and Affective Neuroscience. 2015;10(1):28–35. doi: 10.1093/scan/nsu007.
85. Kveraga K, Boshyan J, Bar M. Magnocellular Projections as the Trigger of Top-Down Facilitation in Recognition. The Journal of Neuroscience. 2007;27(48):13232–13240. doi: 10.1523/jneurosci.3481-07.2007.
86. Kveraga K, Ghuman AS, Kassam KS, Aminoff EA, Hämäläinen MS, Chaumon M, Bar M. Early onset of neural synchronization in the contextual associations network. Proceedings of the National Academy of Sciences. 2011;108(8):3389–3394. doi: 10.1073/pnas.1013760108.
87. Leonard C, Rolls E, Wilson F, Baylis G. Neurons in the amygdala of the monkey with responses selective for faces. Behavioural Brain Research. 1985;15(2):159–176. doi: 10.1016/0166-4328(85)90062-2.
88. Levin DT, Banaji MR. Distortions in the perceived lightness of faces: the role of race categories. Journal of Experimental Psychology: General. 2006;135(4):501. doi: 10.1037/0096-3445.135.4.501.
89. Li W, Piëch V, Gilbert CD. Perceptual learning and top-down influences in primary visual cortex. Nature Neuroscience. 2004;7(6):651–657. doi: 10.1038/nn1255.
90. Lupyan G. Cognitive Penetrability of Perception in the Age of Prediction: Predictive Systems are Penetrable Systems. Review of Philosophy and Psychology. 2015:1–23. doi: 10.1007/s13164-015-0253-4.
91. Lupyan G, Clark A. Words and the World: Predictive Coding and the Language-Perception-Cognition Interface. Current Directions in Psychological Science. 2015;24(4):279–284.
92. MacLin OH, Malpass RS. Racial categorization of faces: The ambiguous race face effect. Psychology, Public Policy, and Law. 2001;7(1):98.
93. Macpherson F. Cognitive penetration of colour experience: Rethinking the issue in light of an indirect mechanism. Philosophy and Phenomenological Research. 2012;84(1):24–62.
94. Macrae CN, Martin D. A boy primed Sue: feature-based processing and person construal. European Journal of Social Psychology. 2007;37(5):793–805.
95. Marois R, Leung HC, Gore JC. A stimulus-driven approach to object identity and location processing in the human brain. Neuron. 2000;25(3):717–728. doi: 10.1016/s0896-6273(00)81073-9.
96. Maunsell J, Nealey TA, DePriest DD. Magnocellular and parvocellular contributions to responses in the middle temporal visual area (MT) of the macaque monkey. The Journal of Neuroscience. 1990;10(10):3323–3334. doi: 10.1523/JNEUROSCI.10-10-03323.1990.
97. Mégevand P, Groppe DM, Goldfinger MS, Hwang ST, Kingsley PB, Davidesco I, Mehta AD. Seeing Scenes: Topographic Visual Hallucinations Evoked by Direct Electrical Stimulation of the Parahippocampal Place Area. The Journal of Neuroscience. 2014;34(16):5399–5405. doi: 10.1523/jneurosci.5202-13.2014.
98. Murray SO, Kersten D, Olshausen BA, Schrater P, Woods DL. Shape perception reduces activity in human primary visual cortex. Proceedings of the National Academy of Sciences. 2002;99(23):15164–15169. doi: 10.1073/pnas.192579399.
99. Nakamura K, Mikami A, Kubota K. Activity of single neurons in the monkey amygdala during performance of a visual discrimination task. Journal of Neurophysiology. 1992;67(6):1447–1463. doi: 10.1152/jn.1992.67.6.1447.
100. Nienborg H, Roelfsema PR. Belief states as a framework to explain extra-retinal influences in visual cortex. Current Opinion in Neurobiology. 2015;32:45–52. doi: 10.1016/j.conb.2014.10.013.
101. Nowak LG, Bullier J. The timing of information transfer in the visual system. In: Extrastriate Cortex in Primates. Springer; 1997. pp. 205–241.
102. O'Callaghan C, Kveraga K, Shine JM, Adams RB, Bar M. Convergent evidence for top-down effects from the "predictive brain". Behavioral and Brain Sciences. doi: 10.1017/S0140525X15002599. (in press)
103. O'Callaghan C, Muller AJ, Shine JM. Clarifying the Role of Neural Networks in Complex Hallucinatory Phenomena. The Journal of Neuroscience. 2014;34(36):11865–11867. doi: 10.1523/jneurosci.2429-14.2014.
104. O'Callaghan C, Shine JM, Lewis SJG, Andrews-Hanna JR, Irish M. Shaped by our thoughts – A new task to assess spontaneous cognition and its associated neural correlates in the default network. Brain and Cognition. 2015;93:1–10. doi: 10.1016/j.bandc.2014.11.001.
105. O'Reilly RC, Wyatte D, Herd S, Mingus B, Jilk DJ. Recurrent Processing during Object Recognition. Frontiers in Psychology. 2013;4:124. doi: 10.3389/fpsyg.2013.00124.
106. Oliva A, Torralba A. The role of context in object recognition. Trends in Cognitive Sciences. 2007;11(12):520–527. doi: 10.1016/j.tics.2007.09.009.
107. Olofsson JK, Nordin S, Sequeira H, Polich J. Affective picture processing: an integrative review of ERP findings. Biological Psychology. 2008;77(3):247–265. doi: 10.1016/j.biopsycho.2007.11.006.
108. Panichello MF, Cheung OS, Bar M. Predictive feedback and conscious visual experience. Frontiers in Psychology. 2012;3. doi: 10.3389/fpsyg.2012.00620.
109. Panichello MF, Kveraga K, Chaumon M, Bar M, Barrett LF. Internal valence modulates the speed of object recognition. doi: 10.1038/s41598-017-00385-4. (under revision)
110. Passingham RE, Wise SP. The neurobiology of the prefrontal cortex: anatomy, evolution, and the origin of insight. Oxford University Press; 2012.
111. Payne BK. Weapon bias: Split-second decisions and unintended stereotyping. Current Directions in Psychological Science. 2006;15(6):287–291.
112. Pennartz CMA, Ito R, Verschure PFMJ, Battaglia FP, Robbins TW. The hippocampal–striatal axis in learning, prediction and goal-directed behavior. Trends in Neurosciences. 2011;34(10):548–559. doi: 10.1016/j.tins.2011.08.001.
113. Pessoa L, Adolphs R. Emotion processing and the amygdala: from a 'low road' to 'many roads' of evaluating biological significance. Nature Reviews Neuroscience. 2010;11(11):773–783. doi: 10.1038/nrn2920.
114. Peyrin C, Michel CM, Schwartz S, Thut G, Seghier M, Landis T, Marendaz C, Vuilleumier P. The neural substrates and timing of top–down processes during coarse-to-fine categorization of visual scenes: A combined fMRI and ERP study. Journal of Cognitive Neuroscience. 2010;22(12):2768–2780. doi: 10.1162/jocn.2010.21424.
115. Pezzulo G, Rigoli F, Friston K. Active Inference, homeostatic regulation and adaptive behavioural control. Progress in Neurobiology. 2015;134:17–35. doi: 10.1016/j.pneurobio.2015.09.001.
116. Poch C, Garrido MI, Igoa JM, Belinchón M, García-Morales I, Campo P. Time-Varying Effective Connectivity during Visual Object Naming as a Function of Semantic Demands. The Journal of Neuroscience. 2015;35(23):8768–8776. doi: 10.1523/JNEUROSCI.4888-14.2015.
117. Poghosyan V, Ioannides AA. Attention modulates earliest responses in the primary auditory and visual cortices. Neuron. 2008;58(5):802–813. doi: 10.1016/j.neuron.2008.04.013.
118. Poghosyan V, Shibata T, Ioannides AA. Effects of attention and arousal on early responses in striate cortex. European Journal of Neuroscience. 2005;22(1):225–234. doi: 10.1111/j.1460-9568.2005.04181.x.
119. Radel R, Clément-Guillotin C. Evidence of motivational influences in early visual perception: Hunger modulates conscious access. Psychological Science. 2012;23(3):232–234. doi: 10.1177/0956797611427920.
120. Raftopoulos A. Cognition and Perception. MIT Press; 2009.
121. Raftopoulos A. The cognitive impenetrability of the content of early vision is a necessary and sufficient condition for purely nonconceptual content. Philosophical Psychology. 2014;27(5):601–620.
122. Ramalingam N, McManus JNJ, Li W, Gilbert CD. Top-Down Modulation of Lateral Interactions in Visual Cortex. The Journal of Neuroscience. 2013;33(5):1773–1789. doi: 10.1523/jneurosci.3825-12.2013.
123. Rao RP, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience. 1999;2(1):79–87. doi: 10.1038/4580.
124. Ratner KG, Amodio DM. Seeing "us vs. them": Minimal group effects on the neural encoding of faces. Journal of Experimental Social Psychology. 2013;49(2):298–301. doi: 10.1016/j.jesp.2012.10.017.
125. Rauss K, Pourtois G, Vuilleumier P, Schwartz S. Effects of attentional load on early visual processing depend on stimulus timing. Human Brain Mapping. 2012;33(1):63–74. doi: 10.1002/hbm.21193.
126. Rauss K, Schwartz S, Pourtois G. Top-down effects on early visual processing in humans: A predictive coding framework. Neuroscience & Biobehavioral Reviews. 2011;35(5):1237–1253. doi: 10.1016/j.neubiorev.2010.12.011.
127. Rauss KS, Pourtois G, Vuilleumier P, Schwartz S. Attentional load modifies early activity in human primary visual cortex. Human Brain Mapping. 2009;30(5):1723–1733. doi: 10.1002/hbm.20636.
128. Rolls ET. The functions of the orbitofrontal cortex. Brain and Cognition. 2004;55(1):11–29. doi: 10.1016/S0278-2626(03)00277-X.
129. Rolls ET, Baylis LL. Gustatory, olfactory, and visual convergence within the primate orbitofrontal cortex. The Journal of Neuroscience. 1994;14(9):5437–5452. doi: 10.1523/JNEUROSCI.14-09-05437.1994.
130. Rousselet GA, Husk JS, Bennett PJ, Sekuler AB. Time course and robustness of ERP object and face differences. Journal of Vision. 2008;8(12):3. doi: 10.1167/8.12.3.
131. Roy M, Shohamy D, Wager TD. Ventromedial prefrontal-subcortical systems and the generation of affective meaning. Trends in Cognitive Sciences. 2012;16(3):147–156. doi: 10.1016/j.tics.2012.01.005.
132. Rudebeck PH, Murray EA. The Orbitofrontal Oracle: Cortical Mechanisms for the Prediction and Evaluation of Specific Behavioral Outcomes. Neuron. 2014;84(6):1143–1156. doi: 10.1016/j.neuron.2014.10.049.
133. Salin PA, Bullier J. Corticocortical connections in the visual system: structure and function. Physiological Reviews. 1995;75(1):107–154. doi: 10.1152/physrev.1995.75.1.107.
134. Schoenbaum G, Roesch MR, Stalnaker TA, Takahashi YK. A new perspective on the role of the orbitofrontal cortex in adaptive behaviour. Nature Reviews Neuroscience. 2009;10(12):885–892. doi: 10.1038/nrn2753.
135. Schönwald LI, Müller MM. Slow biasing of processing resources in early visual cortex is preceded by emotional cue extraction in emotion–attention competition. Human Brain Mapping. 2014;35(4):1477–1490. doi: 10.1002/hbm.22267.
136. Selimbeyoglu A, Parvizi J. Electrical stimulation of the human brain: perceptual and behavioral phenomena reported in the old and new literature. Frontiers in Human Neuroscience. 2010;4. doi: 10.3389/fnhum.2010.00046.
137. Shenhav A, Barrett LF, Bar M. Affective value and associative processing share a cortical substrate. Cognitive, Affective, & Behavioral Neuroscience. 2013;13(1):46–59. doi: 10.3758/s13415-012-0128-4.
138. Shine JM, Keogh R, O'Callaghan C, Muller AJ, Lewis SJG, Pearson J. Imagine that: elevated sensory strength of mental imagery in individuals with Parkinson's disease and visual hallucinations. Proceedings of the Royal Society of London B: Biological Sciences. 2015a;282(1798). doi: 10.1098/rspb.2014.2047.
139. Shine JM, Muller AJ, O'Callaghan C, Hornberger M, Halliday GM, Lewis SJG. Abnormal connectivity between the default mode and the visual system underlies the manifestation of visual hallucinations in Parkinson's disease: a task-based fMRI study. npj Parkinson's Disease. 2015b;1:15003. doi: 10.1038/npjparkd.2015.3.
140. Shine JM, O'Callaghan C, Halliday GM, Lewis SJG. Tricks of the mind: Visual hallucinations as disorders of attention. Progress in Neurobiology. 2014;116:58–65. doi: 10.1016/j.pneurobio.2014.01.004.
141. Smith NK, Cacioppo JT, Larsen JT, Chartrand TL. May I have your attention, please: Electrocortical responses to positive and negative stimuli. Neuropsychologia. 2003;41(2):171–183. doi: 10.1016/s0028-3932(02)00147-1.
142. Song H, Vonasch AJ, Meier BP, Bargh JA. Brighten up: Smiles facilitate perceptual judgment of facial lightness. Journal of Experimental Social Psychology. 2012;48(1):450–452.
143. Stephan KE, Binder EB, Breakspear M, Dayan P, Johnstone EC, Meyer-Lindenberg A, Schnyder U, Wang X-J, Bach DR, Fletcher PC. Charting the landscape of priority problems in psychiatry, part 2: pathogenesis and aetiology. The Lancet Psychiatry. 2015. doi: 10.1016/S2215-0366(15)00360-0.
144. Stokes D. Cognitive Penetrability of Perception. Philosophy Compass. 2013;8(7):646–663. doi: 10.1111/phc3.12043.
145. Summerfield C, de Lange FP. Expectation in perceptual decision making: neural and computational mechanisms. Nature Reviews Neuroscience. 2014;15(11):745–756. doi: 10.1038/nrn3838.
146. Summerfield C, Egner T. Expectation (and attention) in visual cognition. Trends in Cognitive Sciences. 2009;13(9):403–409. doi: 10.1016/j.tics.2009.06.003.
147. Summerfield C, Egner T, Greene M, Koechlin E, Mangels J, Hirsch J. Predictive codes for forthcoming perception in the frontal cortex. Science. 2006;314(5803):1311–1314. doi: 10.1126/science.1132028.
148. Summerfield C, Koechlin E. A Neural Representation of Prior Information during Perceptual Inference. Neuron. 2008;59(2):336–347. doi: 10.1016/j.neuron.2008.05.021.
149. Teufel C, Subramaniam N, Dobler V, Perez J, Finnemann J, Mehta PR, Goodyer IM, Fletcher PC. Shift toward prior knowledge confers a perceptual advantage in early psychosis and psychosis-prone healthy individuals. Proceedings of the National Academy of Sciences. 2015;112(43):13401–13406. doi: 10.1073/pnas.1503916112.
150. Thiel CM, Studte S, Hildebrandt H, Huster R, Weerda R. When a loved one feels unfamiliar: A case study on the neural basis of Capgras delusion. Cortex. 2014;52:75–85. doi: 10.1016/j.cortex.2013.11.011.
151. Thorpe S, Rolls E, Maddison S. Neuronal activity in the orbitofrontal cortex of the behaving monkey. Experimental Brain Research. 1983;49:93–115. doi: 10.1007/BF00235545.
152. Tootell RB, Switkes E, Silverman MS, Hamilton SL. Functional anatomy of macaque striate cortex. II. Retinotopic organization. The Journal of Neuroscience. 1988;8(5):1531–1568. doi: 10.1523/JNEUROSCI.08-05-01531.1988.
153. Trapp S, Bar M. Prediction, context and competition in visual recognition. Annals of the New York Academy of Sciences. 2015;1339:190–198. doi: 10.1111/nyas.12680.
154. Uchiyama M, Nishio Y, Yokoi K, Hirayama K, Imamura T, Shimomura T, Mori E. Pareidolias: complex visual illusions in dementia with Lewy bodies. Brain. 2012. doi: 10.1093/brain/aws126.
155. Ungerleider L, Mishkin M. Two Cortical Visual Systems. In: Ingle D, Goodale M, Mansfield R, editors. Analysis of Visual Behavior. Cambridge, MA: The MIT Press; 1982.
156. Van Bavel JJ, Packer DJ, Cunningham WA. The neural substrates of in-group bias: A functional magnetic resonance imaging investigation. Psychological Science. 2008;19(11):1131–1139. doi: 10.1111/j.1467-9280.2008.02214.x.
157. Van Bavel JJ, Packer DJ, Cunningham WA. Modulation of the fusiform face area following minimal exposure to motivationally relevant faces: evidence of in-group enhancement (not out-group disregard). Journal of Cognitive Neuroscience. 2011;23(11):3343–3354. doi: 10.1162/jocn_a_00016.
158. Vetter P, Newen A. Varieties of cognitive penetration in visual perception. Consciousness and Cognition. 2014;27:62–75. doi: 10.1016/j.concog.2014.04.007.
159. Waters F, Collerton D, Ffytche DH, Jardri R, Pins D, Dudley R, Blom JD, Mosimann UP, Eperjesi F, Ford S. Visual hallucinations in the psychosis spectrum and comparative information from neurodegenerative disorders and eye disease. Schizophrenia Bulletin. 2014;40(Suppl 4):S233–S245. doi: 10.1093/schbul/sbu036.
160. Whitfield-Gabrieli S, Ford J. Default mode network activity and connectivity in psychopathology. Annual Review of Clinical Psychology. 2012;8:49. doi: 10.1146/annurev-clinpsy-032511-143049.
161. Yao N, Pang S, Cheung C, Chang RS-k, Lau KK, Suckling J, Yu K, Mak HKF, McAlonan G, Ho SL, Chua S-e. Resting activity in visual and corticostriatal pathways in Parkinson's disease with hallucinations. Parkinsonism & Related Disorders. 2015;21(2):131–137. doi: 10.1016/j.parkreldis.2014.11.020.
162. Yao N, Shek-Kwan Chang R, Cheung C, Pang S, Lau KK, Suckling J, Rowe JB, Yu K, Ka-Fung Mak H, Chua SE. The default mode network is disrupted in Parkinson's disease with visual hallucinations. Human Brain Mapping. 2014;35(11):5658–5666. doi: 10.1002/hbm.22577.
163. Yoshiura T, Zhong J, Shibata DK, Kwok WE, Shrier DA, Numaguchi Y. Functional MRI study of auditory and visual oddball tasks. Neuroreport. 1999;10(8):1683–1688. doi: 10.1097/00001756-199906030-00011.
164. Zadra JR, Clore GL. Emotion and perception: the role of affective information. Wiley Interdisciplinary Reviews: Cognitive Science. 2011;2(6):676–685. doi: 10.1002/wcs.147.
