2016 May 25;30(11):1145–1155. doi: 10.1177/0269881116650087

Prediction error, ketamine and psychosis: An updated model

Philip R Corlett 1, Garry D Honey 2, Paul C Fletcher 3,4
PMCID: PMC5105325  PMID: 27226342

Abstract

In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms – which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty-mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis.

Keywords: Drug model, ketamine, psychosis, schizophrenia, delusions, hallucinations, computational psychiatry

Introduction

In 2007, in this journal, we outlined a theory of delusion formation expressed in terms of associative learning theory (Corlett et al., 2007a). It was not the first theory of delusions expressed in this framework (Gray et al., 1991; Miller, 1976), but it did implicate a specific psychological process (prediction error [PE]) and its neurochemical underpinnings in the genesis of delusions. We are very honoured to have been invited to revisit this article, and would like to take this opportunity to discuss the origins of the ideas, the key features of the theory, the evidence that has emerged to support and challenge it, and, importantly, the ways in which it has evolved in the context of a cognitive neuroscience field that has advanced rapidly and helped to shape its development.

We begin with a consideration of what attracted us to the central ideas outlined in the original article and what precisely we were hypothesising by invoking PE to account for how delusional thinking might emerge.

Delusions and PE: The key ideas

We, like many, feel the need for a perspective on the key features of psychosis – delusions and hallucinations – that links the perceptual and inferential aspects of these experiences with the underlying biological mechanisms. Without such a linking mechanism, biological explanations of mental illness will remain incomplete. Computational cognitive neuroscience offers the potential to unite multiple levels of explanation through deployment of computational models that can be plausibly related to activity of brain systems, the instantiation of cognitive processes and to high-level behaviour and experience (Corlett and Fletcher, 2014). Our early attempt to do this drew on a number of areas in the existing literature. In particular, we noted that, going back to Bleuler’s (1908) earliest formulations (Miller, 1976), the delusions characteristic of schizophrenia occur against a background of strange and sometimes bizarre associations and experiences. This theme – linking delusional thinking to associative learning (Hartley, 1749/1976) – was developed more formally (Dickinson, 2001) in the context of the theoretically and empirically rich field of animal reinforcement learning, itself given formal foundations by advances in machine learning (Sutton and Barto, 1998) and artificial intelligence (Widrow and Hoff, 1960). Both of these fields conceptualised a fundamental role of the brain as identifying and updating statistical regularities (or associations), in effect to build an internal model of the world (Conant and Ashby, 1970).

Ensuing and increasingly complex and sophisticated perspectives on this idea have envisaged that this challenge is met through an iterative process of predicting and updating (Adams et al., 2013; Corlett et al., 2009a; Fletcher and Frith, 2009). Briefly, the brain uses prior knowledge (prior experience of associative relationships) to predict what its next input will be, and any mismatch between the prediction and what actually ensues is signalled as a PE. This is sometimes referred to as predictive coding or predictive processing (Clark, 2013). PE is a key drive to new learning in so far as it indicates incorrect predictions and hence a model that may need to be updated.
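As an illustrative aside (ours, not part of the original articles; names and values are arbitrary), the predict-and-update cycle described above can be sketched as a simple delta rule, in which the PE proportionally updates an associative weight:

```python
# Illustrative sketch: PE-driven updating of a single associative weight
# via the delta rule. The function name, learning rate and outcome values
# are hypothetical choices for illustration only.

def update(weight, outcome, learning_rate=0.3):
    """One learning trial: the PE is the mismatch between the predicted
    and the actual outcome, and it drives the update of the model."""
    prediction_error = outcome - weight
    return weight + learning_rate * prediction_error

# Repeated cue-outcome pairings: PEs shrink as the prediction improves,
# so learning slows as the internal model comes to match the world.
w = 0.0
for trial in range(20):
    w = update(w, outcome=1.0)
# w is now close to 1.0 and further PEs are near zero
```

The point of the toy is simply that, in such schemes, learning is driven entirely by the mismatch term: a spuriously large or mistimed PE would update the model when nothing in the world has actually changed.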

This simple formulation provided the framework that supported our initial consideration of PE in psychosis. Moreover, it offered us an operationally defined and quantifiable parameter that could readily be applied in analyses of neuroimaging data in humans (Corlett et al., 2004; Corlett and Fletcher, 2012; Corlett et al., 2007b; Fletcher et al., 2001; Murray et al., 2008; Turner et al., 2004). If PE signalling is altered, arising inappropriately or with an anomalous degree of strength or precision, then there would ensue, we argued, a compelling sense that one’s existing model of the world was wrong, that something had changed. Perhaps there would be an enhanced tendency to see spurious associations between stimuli and events that, in reality, were not related. Perhaps too, there would arise a sense of a world that had changed, become more sinister and laden with meaning. And, having established a new set of associations, then these would form the framework dictating how attention might be diverted towards ensuing stimuli and shaping the inferences that arose. Put simply, an aberrant PE signal would lead to new learning, and new learning would engender new expectations that would themselves govern how the individual sampled and interpreted the world. In effect, this could lead to a developing change in which emerging beliefs created the evidence that supported them.

So it may be, we suggested, that delusions begin to emerge. Importantly, this explanatory framework, though far from comprehensive, offered the possibility of linking symptoms to brain processes because associative learning processes, including PE, had all been extensively studied in terms of their underlying neurobiology (Fletcher et al., 2001; Lavin et al., 2005; Schultz and Dickinson, 2000; Turner et al., 2004). Our attempts to establish this link focused on dopaminergic and glutamatergic systems and their interactions, given the evidence that these are critical to PE signalling (Lavin et al., 2005; Schultz and Dickinson, 2000). Since ketamine, an influential and compelling model of early psychosis (Krystal et al., 1994; Pomarol-Clotet et al., 2006), impacts both transmitter systems (Kegeles et al., 2000), this framework enabled a comprehensive account of the drug’s psychotomimetic effects.

This early model, as we review below, has proven useful in contributing to subsequent ideas and research in our groups and beyond. Moreover, it has garnered support from a number of experimental approaches. But, like all models, it was necessarily a simplification. In particular, it focused primarily on the formation of simple associations between stimuli or between cause and effect, and it drew out some basic ideas of how a perturbed PE signal might disrupt association formation as well as perception and attention. But it is important to move beyond deterministic first-order associations: the statistical regularities of the world exist at many levels and interact in complex ways. Just as the association between, for example, an apple and an apple tree is dependent on a higher level concept of seasons and weather, so the challenge faced by the brain is to represent associations in ways that are sensitive to context and to more remote second- and third-order associations. And within this framework, not only does the model become complex and intricate, but the circumstances under which it must be updated become ever more opaque. Consider seeing an apple tree with no apples: there are numerous reasons why this PE should not be a reason for updating our association between apples and apple trees, and the likelihood of modifying our belief about this association would depend on many factors. Computational models coming to grips with this complexity are producing insights that show great promise for informing and testing hypotheses about mechanisms by which psychosis – aberrant world modelling – may emerge (Adams et al., 2013; Corlett and Fletcher, 2014; Fletcher and Frith, 2009).

Thus, there were a number of important parameters that were not part of our model, and we consider these in the latter part of this article, showing how their inclusion enriches, without substantially changing, the basic ideas adumbrated in the original article. In particular, we consider how an aberrant PE signal, over time, could lead to adjustments in attention and perception, as well as the readiness to update one’s model of the world in response to new information (Adams et al., 2013; Corlett and Fletcher, 2014; Fletcher and Frith, 2009). In this richer and more nuanced consideration, we find promising ways of extending the model to explain other key aspects of delusions, as well as important accompanying features such as hallucinations and negative symptoms (Adams et al., 2013; Corlett, 2015; Corlett and Fletcher, 2014; Fletcher and Frith, 2009).

Prior to this, we focus on how the basic theory, as originally outlined, has fared empirically and how it relates to other accounts of delusions (Coltheart and Davies, 2000) and to schizophrenia more broadly.

Empirical approaches to the PE model of delusions

In 2007, we made the case that ketamine infusion in healthy people provided a window on a hitherto experimentally challenging situation: the emergence of psychotic experience and belief. With the advent of early intervention approaches in psychosis and studies of the psychosis prodrome, it became possible to study patients closer to this illness phase. Furthermore, the continuum from healthy beliefs through to delusions has been increasingly appreciated. Studying attenuated psychotic symptoms (in otherwise healthy subjects) has proven a fruitful avenue of inquiry for testing the model. It should be noted that the basic experimental design that we initially favoured, provoking PE signals in order to characterise neural responses using functional neuroimaging, has been challenged (Griffiths et al., 2014). Having responded to this challenge (Corlett and Fletcher, 2015), we do not propose to revisit the argument here, but do note that the fundamental ideas underlying the model were applauded and the overall evidence in their favour was considered robust (Griffiths et al., 2014). There is a more fundamental challenge relating to whether a PE signal disruption is a sufficient circumstance to engender delusions, and given that this relates to a long-standing dialogue among delusion theorists, we consider this in more detail below.

Before considering the challenges to the model and its shortcomings, it is reassuring to note that there is a good deal of support for the idea that psychosis is associated with altered PE signalling. Importantly, in a study of patients with first-episode psychosis, using an identical task, we observed a very similar pattern of PE responses in the right dorsolateral prefrontal cortex as that seen in healthy people administered ketamine (Corlett et al., 2006, 2007b). Crucially, these aberrant responses correlated with the severity of altered beliefs across subjects. In the same participants, using a different (reward-based) learning task, we observed a pattern of PE response that was clearly altered relative to controls (Murray et al., 2008). In both tasks, causal learning and reward PE signals in frontal, striatal and midbrain regions were inappropriately engaged. However, only rDLPFC PE during causal belief formation correlated with delusion scores in patients with first-episode psychosis. In healthy, non-psychotic people too, the degree to which their prefrontal PE response resembles that observed in patients with delusions correlates with the distress they feel with regard to their delusion-like ideas, though ventral striatal responses were associated with the ideas themselves, irrespective of accompanying distress (Corlett and Fletcher, 2012). That is, if you hold your beliefs like a patient with psychosis, your right frontal PE response approximates that observed in patients with delusions (Corlett and Fletcher, 2012). These data came from our own work. Others have similarly observed aberrant PE signals in striatum, amygdala and frontal cortex in patients with psychotic illness that correlate with the severity of delusions (Gradin et al., 2011; Romaniuk et al., 2010; Schlagenhauf et al., 2009; Waltz et al., 2010).

In short, empirical data from patients with psychosis have consistently linked aberrant PE signal (measured with functional magnetic resonance imaging in various task contexts; electroencephalogram [EEG] and magnetoencephalogram correlates of PE have yet to be related to delusions) to the severity of delusions. However, there have been a number of theoretical and empirical challenges to the model that we now go on to discuss.

Shortcomings of the 2007 model

When we presented the model in 2007, it was an initial sketch of how ketamine might give rise to delusion-like ideas. While it gave us some elbow room to begin carving a more complete explanation of clinical delusions in terms of mind and brain function, it was by no means complete. It did not, for example, address one of the cardinal features of delusions: their fixity in the face of contradictory evidence. Indeed, it could be argued that in positing a model that could explain how beliefs are too readily updated, we were inherently failing to explain why the new beliefs themselves – the delusions – are actually tenacious and seemingly immune to new and contradictory evidence.

Moreover, the model had little to say directly about the content of delusions and in particular their focus on the social realm (the fact that they are often about other people and one’s relationships to them). Nor did it deal with the affective charge of delusions. After all, why are delusional beliefs so deeply coloured by emotion and mood? Finally, delusions do not occur in isolation. They are frequently accompanied by hallucinations and often co-occur with negative symptoms such as social withdrawal, apathy and self-neglect. We consider each of these in turn.

The main advance since 2007 involves an appreciation that models of associative learning might pertain not just to animal conditioning and human beliefs but also to perception. This allows us to address many of the earlier model’s shortcomings. It entails conceiving of perception not as a passive process of sensory reception, but rather as active analysis by synthesis.

Our perceptual experience comprises not the actual sensory input but rather the most likely (based on prior experience) cause of this input. We infer this cause by exploiting past regularities (encapsulated in our world model). Hermann von Helmholtz called this ‘unconscious inference’ and made the provocative claim that all perception was a form of controlled hallucination (given how reliant perception is upon these prior regularities rather than raw sense data). Pavlov also appreciated the deep connection between conditioning and perception: ‘Evidently what the genius Helmholtz was referring to in unconscious inference, is the mechanism of the conditioned reflex’ (Pavlov, 1928).

At the neuro-computational level, these ideas are realised in the hierarchical organisation of the brain. Prior expectations are realised top-down via NMDA and GABA signalling. Any mismatch between those priors and incoming information is signalled bottom-up via AMPA receptors. The impact of a given PE on future predictions is governed by its precision (or inverse variance). This computational motif is recapitulated across successive layers in a hierarchical manner – moving away from raw sensory data, the representations become increasingly complex, multifaceted (Mesulam, 1998) and perhaps distant from the immediate evidence of perception. The priors from the level above impact the signals from the level below. Different neuromodulators (dopamine, acetylcholine, serotonin, oxytocin) may implement the precision of priors and PEs in particular processing hierarchies.
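The role of precision can be made concrete with a toy Gaussian example (our own sketch; the numbers are arbitrary): the posterior percept is a precision-weighted average of prior expectation and sensory input, so an overly precise prior lets expectation dominate what is perceived.

```python
# Toy illustration of precision-weighted inference for Gaussian beliefs.
# Precision is the inverse variance; the more precise of prior and input
# dominates the posterior. All values are hypothetical.

def fuse(prior_mean, prior_var, obs_mean, obs_var):
    pi_prior, pi_obs = 1.0 / prior_var, 1.0 / obs_var    # precisions
    post_mean = (pi_prior * prior_mean + pi_obs * obs_mean) / (pi_prior + pi_obs)
    post_var = 1.0 / (pi_prior + pi_obs)
    return post_mean, post_var

# A very precise prior combined with a noisy input: the percept sits
# near the expectation (~0.09), not the actual input (1.0) -- the
# expectation-dominated regime invoked above for hallucinations.
m, v = fuse(prior_mean=0.0, prior_var=0.1, obs_mean=1.0, obs_var=1.0)
```

The same arithmetic, run with the precisions reversed, yields the opposite pathology: an input-dominated percept in which even well-founded expectations fail to constrain perception.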

Is altered PE signalling enough to explain delusions?

The PE account of delusions can be considered a one-factor account in that it argues that disruption of a single process (or collection of interrelated processes) may suffice to explain such beliefs. This raises a conflict with neuropsychological models asserting that two factors are necessary for delusions to form (a perceptual disruption and a belief evaluation deficit). We argue that actually this conflict is more apparent than real and arises because the PE account is pitched at a different level of explanation to that of such neuropsychological accounts.

The argument for there being two necessary factors (disruptions) in delusions is compelling. It is often related to monothematic delusions following brain damage, although it has been applied to patients with schizophrenia who have delusions. The main focus of the two-factor theory is Capgras delusion – the belief that a loved one has been replaced by an imposter – and it has been suggested that it can only arise when two things happen. First, a person fails to show the normal autonomic response to a familiar person (leading to a lack of a feeling of familiarity, even though recognition itself is intact). This would fit with the unlikely explanation that the person has been replaced, but such an explanation would only be accepted (i.e. the delusion would only form) if the sufferer also had a deficit in their ability to evaluate and reject improbable beliefs. An advantage of this explanation is that it is based on standard neuropsychological methodology and draws on the existence of a single dissociation across the two factors (the experience of a lack of sense of familiarity and the emergence of the ensuing belief).

The two-factor theory implies a separation between perception and cognition. The essence of the predictive processing approach that underpins the PE model of delusions is that such separation, though it functions well at a descriptive level (some mental phenomena can meaningfully be described as beliefs and some as perceptions) does not require a separation at a deeper level. More specifically, it argues that both perceptions and beliefs are inferences (based upon an integration of upcoming data and existing prior expectation). Though they act at different levels within the hierarchy, the fundamental processes that underpin them and that may cause their perturbation may be common. In our model, hierarchy is key. Factor one (altered experience) could be specified lower in the hierarchy, and factor two (altered belief evaluation) higher up. But crucially, PE and its resolution unite them. Furthermore, enough disruption low down or high up can result in delusions.

It has been argued that the two-factor theory subsumes a PE model, but that the PE model alone is insufficient, since it only explains the abnormal experience but not the altered belief. We disagree with this perspective. While it is compelling to us that PE may offer a good explanation for abnormal experiences, we encourage the question of what the effects might be of a perturbed PE signal as we move up to higher, more abstract and complex levels of inference. There, we would see a disruption in the ability to evaluate and, where appropriate, reject, models of the world. In short, viewed at this deeper level, the same process that accounts for abnormal perception can also account for abnormal belief.

Simply put, the two explanations (two-factor and predictive processing) are cast at different explanatory levels. The two-factor theory is concerned with describing cognitive architectures. Predictive processing aims to unite brain, behavioural and phenomenological data for all delusions (neurological and those that occur in schizophrenia) and, as we argue presently, other psychotic symptoms such as hallucinations and amotivation.

The psychologist Kurt Lewin coined the aphorism ‘there is nothing as practical as a good theory’. Since 2007, this expanded hierarchical model has been applied to explain other aspects of delusions, other psychotic symptoms (hallucinations and negative symptoms) and the psychotomimetic effects of other interventions (serotonergic hallucinogens, tetrahydrocannabinol [THC]). We now enumerate some of those advances.

Why do delusions persist?

One remarkable feature of delusional beliefs is their elasticity: they expand and morph to include new contradictory data. The person with Capgras, claiming their spouse is an imposter, might respond to other family members who greet the spouse warmly by saying, ‘Of course they hug her [the impostor spouse] – they’re in on it!’ This is difficult to understand in at least two respects. First, the sufferer can often learn about other new things (they don’t have an all-encompassing deficit in learning), so why do they seem unable to incorporate new and often strongly contradictory evidence into their beliefs rather than so tenaciously holding the central delusional belief? Second, and related to this, the PE model of delusional emergence hypothesises an inappropriately enhanced tendency to develop new beliefs. Prima facie, this would surely militate against those beliefs being unshakeable.

In trying to answer these questions, we pursued two related lines of thought. The first concerns a disruption in the ways in which newly learned associations become updated. The idea here is that delusions differ from other beliefs in several ways that change their encoding and reconsolidation in memory. First, we suggest, their emergence occurs in response to a world that has become strange and mysterious. There is a puzzle to be solved. Compelling coincidences and seemingly significant events provoke a search for meaning, and the belief that eventually seems to resolve the ambiguity and uncertainty has a powerful function in relieving stress and anxiety. The fact that the belief is often unpleasant does not detract from this function, since it may be easier to bear and respond to a difficult certainty than a nameless and shapeless fear. Given their explanatory utility, they are rehearsed extensively. When delusions are questioned, bringing them to mind may actually serve to reinforce rather than to disrupt the memory (Corlett et al., 2010; Corlett et al., 2009b; Corlett et al., 2013). The idea here is that re-evocation of an association may, under certain circumstances (notably when PE signalling is inherently disrupted), strengthen a memory, even when it is not formally reinforced. We have modelled this process in humans with ketamine. By creating new associations (either appetitive or aversive) and then, a day after, reactivating them (in the absence of the original reinforcer) under ketamine, we observed that the memories became strengthened in comparison to the same procedure under placebo (Corlett et al., 2013). Indeed, the magnitude of this effect correlated with ketamine-induced psychosis and PE brain signal (Corlett et al., 2013). We replicated this memory-enhancing effect in rodents (Honsberger et al., 2015). Conditioned fear memories reactivated under ketamine are subsequently strengthened (Honsberger et al., 2015). 
This effect was blocked by ifenprodil infusion in the amygdala (a procedure commonly used to block memory destabilisation and updating in preclinical studies of reconsolidation; Honsberger et al., 2015).

The above, empirically supported but currently tentative explanation for how a belief begins to strengthen, even in the absence of objective evidence, can be considered alongside another important phenomenon relating to PE signal. Specifically, there is growing evidence that we are sensitive not just to the magnitude of PE but also to its variability. A high degree of variability can be encoded and leads in time to an adaptation of learning such that a given magnitude of a specific PE produces less updating (Diederen and Schultz, 2015; Preuschoff and Bossaerts, 2007; Preuschoff et al., 2008). In effect, we encode not just surprise or the unpredictability of a single event, but also keep a running tally of how likely we are to be able to predict the current environment, downregulating PE-dependent learning when our best predictions are unable to reduce PE. Thus, one can envisage that in the emergence of psychosis, a person is not just updating beliefs to suppress PE, but also, over a longer timescale, beginning to downregulate the importance of PE. This could lead to the possibility that early beliefs persist and can then remain relatively unchallenged as the person adapts learning rate (learning not to update). Again, this is speculative, but it perhaps demonstrates the enhanced power of the PE model when taking into account dynamic, evolving effects and, importantly, suggests further experiments.
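This learning-not-to-update idea can be caricatured in a few lines of code (a hypothetical sketch of our own, not a fitted model; the update rules and constants are assumptions made purely for illustration):

```python
# Hypothetical sketch: track a running estimate of how large PEs
# typically are, and downregulate the learning rate when the environment
# remains unpredictable despite our best predictions.

def simulate(outcomes, base_lr=0.5):
    w, expected_pe = 0.0, 0.5           # value estimate; expected |PE|
    learning_rates = []
    for outcome in outcomes:
        pe = outcome - w
        expected_pe += 0.1 * (abs(pe) - expected_pe)   # running tally
        lr = base_lr / (1.0 + expected_pe)   # persistent PE -> less updating
        learning_rates.append(lr)
        w += lr * pe
    return w, learning_rates

# In a predictable world the learning rate recovers over trials; in a
# world where even the best prediction (the mean, 1.0) cannot reduce PE,
# it stays low -- so early beliefs persist ("learning not to update").
_, lr_stable = simulate([1.0] * 100)
_, lr_noisy = simulate([0.0, 2.0] * 50)
```

In this caricature, the final learning rate in the unpredictable environment is roughly half that in the predictable one: the agent has learned to discount PEs, and any belief already in place becomes correspondingly harder to dislodge.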

What may account for the characteristic contents of delusions?

Our original article was primarily concerned with describing a mechanism by which delusions might arise and had little to say about the characteristic content of delusional beliefs. Here, we outline some ideas that may fill this gap in relation to four characteristics of delusional thinking. First, delusions are highly personal in one sense, reflecting the fears and preoccupations of the individual, while at the same time, there are more generic and common aspects to them, and they draw more broadly on the contents of that individual’s culture and era (Stompe et al., 2003). Second, delusions usually relate to agents and can often seem to reflect a fundamental alteration in the ability to attribute agency appropriately. Third, related to this, delusions are predominantly social (Bell, 2013; Fineberg and Corlett, 2016). They are often about people and their intentions or goals rather than merely physical entities. Thus, for example, a person may come to view an array of unusual coincidences and sinister experiences as arising from the actions of a persecutor (Kihlstrom and Hoyt, 1988). We note of course that there are some delusions with apparent positive content, and have recently argued that delusions may serve an adaptive function of facilitating continued engagement with an unpredictable environment (Fineberg and Corlett, 2016). Ultimately though, even grandiose delusions can be a source of distress and uncertainty (the person may feel themselves to be responsible for important events, including unpleasant ones, and may see themselves as vulnerable to envious and powerful enemies; Corlett et al., 2007a).

The first characteristic is, in one sense, very straightforward to understand. The delusion is seen as a person’s hypothesis about the origins of their perceptual experiences. It is, as Coltheart et al. (2010) have observed, an abductive inference in which data are used to infer their underlying cause. Given that such inference relies on a person’s best guess (Peirce, 1931–1958), then it follows that their own prior knowledge and expectation will necessarily determine the content of the emergent belief. And since their own expectations are, inter alia, socioculturally determined, there will be a strong overlap between those of the person and the time and culture they inhabit. The shifting nature of delusions across the decades testifies to this (Stompe et al., 2003).

A further idea that emerges from this model is that if the delusion arises to explain uncertainty, it seems feasible that certain features of our environment, being inherently more uncertain, may prove more likely to become the subject of delusional thoughts. Phenomena such as the intentions of others, their hidden goals, the import of their actions and their facial expressions may all be areas that are most likely to change in the face of altered PE (Corlett, 2015).

The question of agency in delusions, and psychosis more generally, is a very interesting one. While delusions are by no means uniquely about agents, they are nonetheless frequently preoccupied with the intentions and actions of other agents, and, in some cases, may explicitly entail a sense that another agent is exerting control over oneself (Frith, 2005a, 2005b). More broadly, other psychotic symptoms that can co-occur with delusions such as hearing voices have also been attributed to a failure to attribute agency correctly such that one’s own inner speech feels as though it has been externally generated (Ford, 2016; Ford and Mathalon, 2005; Ford et al., 2007). This has been framed in terms of a general source-monitoring deficit in schizophrenia (Keefe et al., 1999; Keefe and Kraus, 2009; Kraus et al., 2009).

In fact, there is a compelling body of theoretical literature relating prediction and expectation to the attribution and experience of agency. This comes from observations that even under normal circumstances, it is possible to produce a sense of agency in people who are not in fact the authors of an action and, conversely, to disavow agency for their own actions (Frith, 2005a, 2005b). Both of these phenomena can be produced by altering expectations and external cues, leading to the emerging view that sense of agency emerges from the integration of prior expectations with internal (e.g. proprioceptive) and external cues (Moore et al., 2011a; Moore and Fletcher, 2012). This has been very well-illustrated in the work of Daniel Wegner who shows that both expectations as well as external cues can profoundly alter the degree to which one sees oneself as the cause of events or attributes them to some external agent (Wegner and Wheatley, 1999).

In this sense, attribution of agency has the same status as other abductive inferences characterising delusional thinking (as discussed above), and perturbations within the system, as would be the case with disrupted PE signalling, could fundamentally alter the experience both of one’s own agency and of that attributed to events in the outside world. Put in simple terms, one’s own actions are characterised partly by the predictability of their consequences, so an action accompanied by an unpredictable consequence is perhaps more likely to originate externally (Frith, 2005a, 2005b).

There is some evidence for the above perspective. Ketamine enhances intentional binding (the perceived compression in time between action and outcome for deeds for which we feel agency; Moore et al., 2011b). Intentional binding is likewise enhanced in patients with first-episode psychosis (Hauser et al., 2011; Voss et al., 2010). PEs have been implicated in intentional binding effects; binding effects are subject to Kamin blocking (prior learning of action–event associations can block the intentional binding effect). The blocking is weaker in individuals with higher schizotypy scores (Moore et al., 2011a). Furthermore, one’s sense of agency for actions can be probed via forward modelling: predictions are compared to feedback sensations, and what was predicted is cancelled, guiding one’s sense of ownership for thoughts and actions. Specifically, we are more likely to feel ourselves to be agents when the consequences of actions are predicted and more likely to assume external agency when they are unpredicted. It has been shown that, for self-produced forces, we cancel out the (predicted) sensory consequences, such that when trying to match an external force that we have just experienced on our finger by pressing down on the same finger, we overcompensate. The extra force exerted is thought to overcome the cancellation effect of our anticipation prior to performing an action (Shergill et al., 2003, 2005).

Patients with schizophrenia do not overcompensate; they are more accurate (Shergill et al., 2005). Likewise, more accurate force matching (without predictive over-compensation) correlates with delusion-like ideation in healthy people (Teufel et al., 2010). Taken together, these data confirm that PE-driven inferences are central to a range of experiences. Ketamine and psychosis similarly perturb these PEs, and those perturbations relate to the severity of endogenous and ketamine-induced psychotic symptoms (Moore et al., 2011b).

PEs have been invoked to explain the sense of agency for our actions and ownership for our bodies. Ketamine augments experience of the rubber-hand illusion – the spurious sense of ownership of a prop-hand if the hand is stroked at the same time as one’s own hand (Morgan et al., 2011). People on ketamine get the illusion more strongly, and they experience it even in a control condition when the real and rubber hands are stroked asynchronously (Morgan et al., 2011). Patients with schizophrenia (Peled et al., 2003) and chronic ketamine abusers evince the same excessive experience of the illusion in the synchronous and asynchronous conditions (Tang et al., 2015). Activity in the right anterior insula cortex increases to the extent that individuals experience the illusion. Anil Seth and others have argued that the anterior insula is a key nexus for the PE-driven inferences that guide perceptions of bodily ownership and agency (Palmer et al., 2015; Seth, 2013; Seth et al., 2011).

In the remainder of the paper, we attempt to bring other symptoms – specifically, hallucinations and negative symptoms – into the explanatory fold. In so doing, we consider new dimensions of the theory (including its relationship to artificial intelligence and deep learning).

Predictive processing and hallucinations

Can predictive processing theory explain hallucinations? These have been related to aberrant PE signals in primary sensory cortices (Horga et al., 2014). Furthermore, prior theories of hallucinations can be cast in predictive processing terms. For example, auditory verbal hallucinations (AVH; ‘voices’) have been explained as aberrations of predictive forward models of inner speech (Ford, 2016; Ford and Mathalon, 2005; Ford et al., 2007). This corollary discharge explanation posits that thoughts and inner speech are prosecuted by means of an efferent copy of the motor acts that such inner speech would entail. This efferent copy, relayed from cerebellum to parietal cortex, is used to cancel the sensory consequences of actions (Blakemore et al., 1999). This mechanism may explain delusions of motor control, in which a person experiences their own actions as arising from, and being controlled by, an external agent.

The same theory has been applied to inner speech (Feinberg, 1978) – that we predict the consequences of speaking in our heads (Frith, 2005a, 2005b). Any mismatch leads to the perception of alien agency for the inner speech – it is perceived as external. Accordingly, having subjects open their mouth wide, preventing pre-articulatory motions of the facial muscles, may well attenuate AVHs, perhaps because of effects on these motor predictions (Bick and Kinsbourne, 1987).

Wilkinson (2014) recently cast this model in predictive processing terms: the phenomenology of AVH comes about because the individual infers that the best explanation for unexpected inner speech reaching the threshold for awareness is another individual talking inside one’s head. Despite its intuitive appeal, corollary discharge or efferent copy theory has not fared so well empirically (Ford, 2016). Corollary discharge processing, as measured by frontal cortex EEG signals during speech production and during perception of perturbed versions of one’s own speech, is impaired in patients who hear voices, but it is likewise impaired in patients with schizophrenia who do not have AVH (Ford, 2016). Furthermore, the severity of corollary discharge impairments does not correlate with AVH severity across subjects (Ford, 2016). Finally, recent work suggests that rather than a failure of prediction giving rise to aberrant PE, hallucinations may come about via an undue influence of priors on current processing: patients with an at-risk mental state are more likely to use prior visual information in making visual decisions (Teufel et al., 2015).

Taken together with observations of conditioned hallucinations (sensory experiences without stimuli that can be trained in the lab and to which patients with AVH are more sensitive; Kot and Serper, 2002), the empirical data suggest that hallucinations and delusions, whilst related, may be differently driven by the specification of prior expectations and how they shape subsequent processing. How can this be? In what follows, we focus on elaborations of the PE model that might explain the genesis of hallucinations, as well as the co-occurrence of positive and negative symptoms.

Reliability and uncertainty

To explain how hallucinations, delusions and perhaps even negative symptoms can co-occur despite being related to subtly different aspects of predictive processing, we turn to statistical learning theory, in particular to how learning theories deal with attentional allocation. Many different, sometimes opposing, kinds of events can be salient. Importantly, both unpredicted events and events that are consistent predictors of important outcomes are potentially salient, and these two types (predictive and unpredicted) have very different relationships to prediction. Statistical models of attentional allocation, unlike many formal learning theory models, allow for this. They simultaneously assess the relevance of stimuli for predicting outcomes (their ‘reliability’) and the relevance of their uncertainty, or failure to predict outcomes correctly, for adjusting the predictions (Dayan et al., 2000). Considering the simple example of the apple tree above, the tree might be a highly reliable predictor of the presence of apples, given that it does much better than other trees, but nonetheless a very uncertain one (given the influence of the seasons or weather).

Reliabilities lead to competition between stimuli for making the predictions (Mackintosh, 1975) and so are different from uncertainties in prediction, which lead to competition for learning the predictions (Pearce and Hall, 1980). Reliabilities are the statistical account of which stimuli we deem important predictors, whereas uncertainties quantify how well those predictions are known. Reliability and uncertainty have different neurobiological and neurochemical mechanisms (Yu and Dayan, 2005). They have opposing relationships to PE. We argue that ketamine-induced positive and negative symptoms may relate to impaired reliability and enhanced uncertainty processing, respectively.
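A toy simulation can make the distinction concrete. The sketch below is our own illustration rather than any published model: it combines a Rescorla–Wagner value update with a Pearce–Hall associability term, and all parameter values are arbitrary choices:

```python
import random

def simulate_cue(p_outcome, n_trials=200, lr=0.2, gamma=0.3, seed=0):
    """Learn the value of a single cue that predicts an outcome with
    probability p_outcome. The Pearce-Hall associability (alpha) tracks
    the recent absolute prediction error, so uncertain cues stay associable."""
    rng = random.Random(seed)
    value, alpha = 0.0, 0.5
    for _ in range(n_trials):
        outcome = 1.0 if rng.random() < p_outcome else 0.0
        pe = outcome - value
        value += lr * alpha * pe                      # learning scaled by associability
        alpha = (1 - gamma) * alpha + gamma * abs(pe)  # associability tracks |PE|
    return value, alpha

v_rel, a_rel = simulate_cue(p_outcome=0.95)  # reliable predictor
v_unc, a_unc = simulate_cue(p_outcome=0.50)  # uncertain predictor
# The reliable cue ends with a high learned value but low associability
# (little left to learn); the uncertain cue retains high associability and
# so would dominate new learning, as in the Pearce-Hall account.
```

Reliability-based attention for action (Mackintosh-style) would instead prioritise the cue with the higher learned value, so the two mechanisms select different cues from the same experience.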

Both reliability and uncertainty affect attentional allocation. Organisms attend to reliable predictors of salient events (Anderson et al., 2011; Mackintosh, 1975); on the other hand, PEs elicit surprise, so reliable predictors generate fewer PEs and, on that account, should be attended to less than unreliable predictors (Pearce and Hall, 1980). How can this be? Holland and Schiffino (2016) propose that the different ecological demands on attention in learning and in action selection may explain the difference. Action decisions are optimised through a bias towards attending to reliable predictors of future states; learning, on the other hand, is best served by focusing on the unknown. Pearce and Hall (1980) made this distinction by contrasting controlled versus automatic processing. The learning rate or associability of cues may not be equivalent to that for actions: attention in learning may be driven by PEs, whereas for action, predictions may dominate. For example, animals can attend to one element of a stimulus array for action guidance (based on its reliability) and to another, independent feature for learning (based on uncertainty). In the five-choice serial reaction time task, rats can be cued as to which action to select. Degrading those cues (by shortening them or decreasing their salience) increases both errors of commission and omission; the more reliable predictors of which action to commit garner the most attention. On the other hand, those same cues can be learned as probabilistic predictors of food outcomes. Here, uncertain Pavlovian predictors accrue attention and are thus subsequently more associable: they are more readily learned about. In rodents, lesions of medial prefrontal cortex and parietal cortex doubly dissociate reliability-based from uncertainty-based attention (Holland and Schiffino, 2016).

We believe that this distinction may be helpful in reconciling the co-occurrence of delusions, hallucinations and negative symptoms in the same patients. These disparate (seemingly contradictory) symptoms may be differentially reliant on impairments in reliability processing (action selection) or uncertainty processing (learning).

Given the association between negative symptoms and goal-directed action selection, we would predict that patients with schizophrenia show attenuated responses to cues that guide action selection (impaired reliability estimates), while patients with delusions show spurious responses at the time of the outcome (enhanced uncertainty). Data from the monetary incentive delay task support this assertion. Negative symptoms in people with schizophrenia correlate with attenuated striatal responses to action-eliciting cues that portend, for example, which action to select and its associated value (Waltz et al., 2010). On the other hand, positive symptoms (specifically delusions) correlate with aberrant PE signals in lateral prefrontal cortex (Corlett et al., 2007b; Waltz et al., 2010), medial prefrontal cortex (Schlagenhauf et al., 2009) and midbrain (Romaniuk et al., 2010) at the time of the outcome.

Predictive processing theory is cast in terms of the hierarchical arrangement of neural systems. Priors are specified top-down and PEs communicated bottom-up, but at each hierarchical level, there may be different relative precisions of predictions and PE (reliability and uncertainty, respectively). And these trade-offs may be different for visual, auditory, motor and other hierarchies. Thus, it may be that a global PE dysfunction (e.g. from disrupted excitatory inhibitory balance in the cortex; Bastos et al., 2012) may impact these hierarchies to different degrees, and to the extent that specific hierarchies (perceptual, motor) are disrupted, different symptoms (positive and negative) might arise.
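At a single hierarchical level, the precision trade-off amounts to a standard Bayesian (Gaussian) update in which the PE is weighted by the relative precision of the input. The numbers below are purely illustrative:

```python
def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """One step of precision-weighted belief updating (Gaussian case):
    the prediction error is scaled by how precise the input is relative
    to the prior expectation."""
    pe = obs - prior_mean
    gain = obs_precision / (prior_precision + obs_precision)
    return prior_mean + gain * pe, prior_precision + obs_precision

# Balanced precisions: the percept lands midway between prior and input.
print(precision_weighted_update(0.0, 1.0, 1.0, 1.0))   # (0.5, 2.0)
# An overly precise prior (one putative route to hallucination): the same
# input barely moves the percept away from the expectation.
print(precision_weighted_update(0.0, 10.0, 1.0, 1.0))  # (~0.09, 11.0)
```

Symmetrically, inflating `obs_precision` (overweighted PE) drags the percept towards every noisy input, the regime we have associated with aberrant salience and delusion formation.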

A brain that is receiving noisy signals can become hungry for priors that might make sense of that noise and thus resolve uncertainty (Teufel et al., 2015). Thus, when signals at the lowest levels of sensory input are noisy, the brain may impose precise priors top-down from higher in the hierarchy, weighting perception towards expectation rather than input, in a ‘listening attitude’ as Arieti (1974) put it, and thereby producing hallucinations (Hoffman, 2010). Sensory isolation can engender the same bias towards prior expectations, and hence hallucinations (Corlett et al., 2009a).

Overwhelming unreliability of one’s previous actions will likely change one’s higher-level beliefs about the efficacy of acting, perhaps leading to the conclusion that no behaviour will be effective at reducing uncertainty and that it is therefore best not to act at all, producing negative symptoms (Corlett, 2015).

Other drug models

In 2007, we focused on predictive learning and ketamine. By expanding the model in terms of predictive processing and its role in perception and action, we were able to bring the psychotomimetic effects of other drugs into the explanatory fold. Serotonergic hallucinogens such as LSD induce visual hallucinations (Geyer and Vollenweider, 2008).

They do not, however, induce delusions (Young, 1974). In rats, they enhance glutamatergic responses to sensory stimuli in the locus coeruleus (Rasmussen and Aghajanian, 1986) and frontal cortex (Aghajanian and Marek, 1997), and may actually enhance NMDA signalling (Lambe and Aghajanian, 2006). Excessive AMPA signalling in the absence of NMDA impairment would lead to increased sensory noise in the context of normal priors. This is exactly the context in which we expect hallucinations to arise. The neural correlates of priors and PEs, top-down and bottom-up, have yet to be completely delineated. One theory of the default mode network, a brain circuit engaged when subjects are in a task-free mind-wandering state, is that it reflects PEs to be explained and the process of learned resolutions (Carhart-Harris and Friston, 2010). Rodent data support this idea (Berkes et al., 2011). Serotonergic hallucinogens increase default mode responses in human subjects in a manner that correlates with their psychotomimetic effects (Carhart-Harris et al., 2013). However, behavioural tasks that engage associative learning, perception and belief formation have yet to be examined in this context.

Cannabinoids such as Δ-9-THC also have psychotomimetic effects (D’Souza et al., 2004). The binocular depth inversion illusion (a stereoscopic effect thought to be driven by prior expectations about stimulus curvature) is attenuated by Δ-9-THC (Koethe et al., 2006; Semple et al., 2003). This weakening of top-down influences would suggest that hallucinations should not predominate under Δ-9-THC administration, and this appears to be the case. However, delusion-like ideas do occur (D’Souza et al., 2004).

Amphetamine elevates dopamine levels in the striatum in healthy volunteers and more so in individuals with schizophrenia (Laruelle et al., 2003). A single dose of amphetamine does not induce delusion-like ideas or hallucinations. Rather, elevated mood, grandiose ideas and hyperactivity are more characteristic (Jacobs and Silverstone, 1986). It also increases perceptual acuity of the whole visual field (Fillmore et al., 2005), unlike ketamine, which enhances the salience of discrete and apparently random objects, events and stimuli (Corlett et al., 2007a; Oye et al., 1992). We suggest that this pattern of psychopathology is due to increased precision of both priors and PEs through enhanced dopaminergic (Kegeles et al., 1999; Laruelle et al., 1995) and cholinergic function (Acquas and Fibiger, 1998).

In sum, considering the trade-off between prior experiences and current inputs via their relative precision opens a whole new explanatory scope for the model, both in terms of symptoms other than delusions and interventions other than ketamine (Corlett et al., 2009a).

The future: The emergence of computational psychiatry

It would be remiss not to acknowledge that the PE model of delusions was formulated amid exciting developments in the application of computational models to psychiatric questions, and arrogant to overlook the fact that computational psychiatry, though it has come into the spotlight fairly recently (Corlett and Fletcher, 2014; Friston et al., 2014; Montague et al., 2012), has a long history.

Some of the earliest work in artificial intelligence (AI) was rapidly applied to explore the genesis of psychosis. For example, symbolic models of natural language were trained to implement simple responses to verbal questions; by altering the model parameters (e.g. its input–output mappings), paranoid responses could be elicited (Colby, 1960). Hoffman and Dobscha (1989) went beyond the language-of-thought analogy, implementing an artificial neural network model, a Hopfield network, that could memorise inputs and give appropriate outputs. When the allowable connections in this model were pruned, it produced spurious recall that Hoffman and Dobscha (1989) related to hallucinations and delusions. Before his untimely death, Ralph Hoffman combined Hopfield networks with a modular cognitive architecture to examine story learning and recall. When he increased PE signalling in the network, it began recalling spurious agents in the narratives it produced, inserting itself into those stories in a manner consistent with some delusions (Hoffman et al., 2011). Again, these networks were not hierarchical.
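A minimal Hopfield network makes the pruning result concrete. The sketch below is our own illustration (with arbitrary network size and pruning level), not a reconstruction of Hoffman and Dobscha’s model:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for storing +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous updates: each sweep visits every unit once."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))
W = train_hopfield(patterns)

# Intact network: a degraded cue (8 of 64 bits flipped) settles back
# onto the stored memory.
cue = patterns[0].copy()
cue[:8] *= -1
intact = recall(W, cue)

# Heavily pruned network: randomly deleting most connections can leave
# spurious attractors, i.e. stable states corresponding to no stored memory.
mask = rng.random(W.shape) < 0.2
W_pruned = W * (mask & mask.T)   # symmetric pruning
pruned = recall(W_pruned, cue)
```

The intact network performs pattern completion, the associative analogue of perception under good priors; the pruned network's attractor landscape no longer guarantees that what is ‘recalled’ was ever stored.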

We and others have argued that hierarchical organisation is likely a key organising principle of cortical processing. This principle is likewise captured in ‘deep learning’ (LeCun et al., 2015), state-of-the-art AI that involves hierarchical (or ‘deep’) architectures comprising many Hopfield networks, separated by hidden layers, that learn representations of data without supervision and use reinforcement learning on these representations to guide action selection, beating humans at Atari games (Mnih et al., 2015) and Go (Silver et al., 2016).

In deep learning, each stage in the hierarchy learns to generate or reconstruct the activation patterns in the stage below. One such network, the Deep Boltzmann machine (DBM), utilises both feedforward and feedback processing (Salakhutdinov and Hinton, 2012), which better suits the recurrent processing in brain hierarchies we have described. In a DBM, each hidden layer receives input from the layer below (conveying bottom-up information) and from a layer above that has learned to predict the activity of the layer below. The hidden units learn latent variable representations of the input data. This means that a deep network can synthesise representations of input data, even in the absence of such data (Yuille and Kersten, 2006).
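The point about synthesis in the absence of data can be demonstrated with a tiny restricted Boltzmann machine whose weights we set by hand, a drastic simplification standing in for a trained DBM. Run free, with no input clamped, its alternating bottom-up/top-down Gibbs updates nonetheless generate one of the ‘stored’ patterns:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

# Two 'learned' visible patterns; one hidden unit codes each pattern.
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
W = 6.0 * (2 * patterns - 1)  # hand-set hidden-by-visible weights

def gibbs_step(v):
    """Alternating update: infer hiddens from visibles (bottom-up),
    then regenerate visibles from hiddens (top-down)."""
    h = (sigmoid(W @ v) > rng.random(2)).astype(float)
    return (sigmoid(W.T @ h) > rng.random(6)).astype(float)

# Free-running chain: nothing is ever presented to the network.
v = (rng.random(6) > 0.5).astype(float)
samples = []
for t in range(60):
    v = gibbs_step(v)
    if t >= 50:
        samples.append(v)

# The chain settles onto (a stochastic version of) one stored pattern.
mean_v = np.mean(samples, axis=0)
agreement = max((mean_v * p + (1 - mean_v) * (1 - p)).mean() for p in patterns)
```

In predictive processing terms, the top-down half of `gibbs_step` is pure synthesis: the model generates the sensory states it expects, which is why such architectures are a natural testbed for hallucination.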

Such network behaviour suggests these architectures might implement Helmholtz’s analysis by synthesis and may be particularly helpful in examining the genesis of hallucinations. Indeed, Reichert et al. (2013) have done just that, using a DBM to model the genesis of visual hallucinations in Charles Bonnet Syndrome, a syndrome occurring in association with visual deficits (e.g. macular degeneration) in which illusory percepts and sometimes complex hallucinations occur. As in Yu and Dayan (2002), acetylcholine can serve as a model parameter setting the balance between the feedforward and feedback flow of information in perception. In a version perhaps most relevant to our concerns, Dayan and Hinton describe a Helmholtz machine that uses acetylcholine to trade off priors and PEs and minimise free energy (Dayan et al., 1995; Hinton and Dayan, 1996). This formulation has much in common with the Kalman filter approach to predictive learning, which can embody statistical reliabilities and uncertainties and employs noradrenaline and acetylcholine to do so (Dayan et al., 2000). These neurotransmitters may be opponent (Yu and Dayan, 2005) and have recently been implicated in the genesis of hallucinations (Collerton et al., 2005; Geddes et al., 2016). Kersten et al. (2004) further characterise a generative model as ‘strong’ if samples can be produced from it that consistently resemble the data it learned from. Reichert et al. (2013) were able to read out which particular stimuli were being hallucinated, a significant advance on earlier work with Hopfield networks.

Summary and conclusion

In conclusion, our 2007 model offered a rudimentary framework for thinking about how delusions may emerge, one that linked the experiences, via a cognitive model of associative learning, to neural processes. Here, we have shown how an extension of the model – one that brings in related parameters (precision, reliability, certainty) and a more dynamic view of how PE learning might change as PE signal evolves – offers new breadths of explanation. A key next step is to turn some of these insights into practical benefits for the patients who, along with their families and friends, may suffer greatly with their experiences. We believe that the elucidation of mechanisms by which these experiences arise is a necessary prelude to a comprehensive and precise diagnostic system, as well as to the development of individually targeted interventions.

Footnotes

Declaration of conflicting interests: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Connecticut Mental Health Center (CMHC) and Connecticut State Department of Mental Health and Addiction Services (DMHAS). PRC was funded by an IMHRO/Janssen Rising Star Translational Research Award and CTSA Grant Number UL1 TR000142, R01MH067073 from the National Center for Research Resources (NCRR) and the National Center for Advancing Translational Science (NCATS), components of the National Institutes of Health (NIH), and NIH roadmap for Medical Research. The contents of this work are solely the responsibility of the authors and do not necessarily represent the official view of NIH or the CMHC/DMHAS. PCF is supported by the Wellcome Trust and the Bernard Wolfe Health Neuroscience Fund.

References

  1. Acquas E, Fibiger HC. (1998) Dopaminergic regulation of striatal acetylcholine release: the critical role of acetylcholinesterase inhibition. J Neurochem 70: 1088–1093. [DOI] [PubMed] [Google Scholar]
  2. Adams RA, Stephan KE, Brown HR, et al. (2013) The computational anatomy of psychosis. Front Psychiatry 4: 47. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Aghajanian GK, Marek GJ. (1997) Serotonin induces excitatory postsynaptic potentials in apical dendrites of neocortical pyramidal cells. Neuropharmacology 36: 589–599. [DOI] [PubMed] [Google Scholar]
  4. Anderson BA, Laurent PA, Yantis S. (2011) Value-driven attentional capture. Proc Natl Acad Sci U S A 108: 10367–10371. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Arieti S. (1974) An overview of schizophrenia from a predominantly psychological approach. Am J Psychiatry 131: 241–249. [DOI] [PubMed] [Google Scholar]
  6. Bastos AM, Usrey WM, Adams RA, et al. (2012) Canonical microcircuits for predictive coding. Neuron 76: 695–711. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bell V. (2013) A community of one: social cognition and auditory verbal hallucinations. PLoS Biol 11: e1001723. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Berkes P, Orban G, Lengyel M, et al. (2011) Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science 331: 83–87. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bick PA, Kinsbourne M. (1987) Auditory hallucinations and subvocal speech in schizophrenic patients. Am J Psychiatry 144: 222–225. [DOI] [PubMed] [Google Scholar]
  10. Blakemore SJ, Wolpert DM, Frith CD. (1999) The cerebellum contributes to somatosensory cortical activity during self-produced tactile stimulation. Neuroimage 10: 448–459. [DOI] [PubMed] [Google Scholar]
  11. Bleuler E. (1908) Die Prognose der Dementia praecox (Schizophreniegruppe). Allgem Zeit Psychiat Psychisch Gerich Med 65: 436–464. [Google Scholar]
  12. Carhart-Harris RL, Friston KJ. (2010) The default-mode, ego-functions and free-energy: a neurobiological account of Freudian ideas. Brain 133: 1265–1283. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Carhart-Harris RL, Leech R, Erritzoe D, et al. (2013) Functional connectivity measures after psilocybin inform a novel hypothesis of early psychosis. Schizophr Bull 39: 1343–1351. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Clark A. (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav Brain Sci 36: 181–204. [DOI] [PubMed] [Google Scholar]
  15. Colby KM. (1960) Artificial Paranoia: A Computer Simulation of Paranoid Processes. New York: Pergamon. [Google Scholar]
  16. Collerton D, Perry E, McKeith I. (2005) Why people see things that are not there: a novel Perception and Attention Deficit model for recurrent complex visual hallucinations. Behav Brain Sci 28: 737–757. [DOI] [PubMed] [Google Scholar]
  17. Coltheart M, Davies M. (2000) Pathologies of Belief. Oxford: Blackwell. [Google Scholar]
  18. Coltheart M, Menzies P, Sutton J. (2010) Abductive inference and delusional belief. Cogn Neuropsychiatry 15: 261–287. [DOI] [PubMed] [Google Scholar]
  19. Conant RC, Ashby WR. (1970) Every good regulator of a system must be a model of that system. Int J Systems Sci 1: 89–97. [Google Scholar]
  20. Corlett PR. (2015) Answering some phenomenal challenges to the prediction error model of delusions. World Psychiatry 14: 181–183. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Corlett PR, Aitken MR, Dickinson A, et al. (2004) Prediction error during retrospective revaluation of causal associations in humans: fMRI evidence in favor of an associative model of learning. Neuron 44: 877–888. [DOI] [PubMed] [Google Scholar]
  22. Corlett PR, Cambridge V, Gardner JM, et al. (2013) Ketamine effects on memory reconsolidation favor a learning model of delusions. PloS One 8: e65088. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Corlett PR, Fletcher PC. (2012) The neurobiology of schizotypy: fronto-striatal prediction error signal correlates with delusion-like beliefs in healthy people. Neuropsychologia 50: 3612–3620. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Corlett PR, Fletcher PC. (2014) Computational psychiatry: a Rosetta Stone linking the brain to mental illness. Lancet Psychiatry 1: 399–402. [DOI] [PubMed] [Google Scholar]
  25. Corlett PR, Fletcher PC. (2015) Delusions and prediction error: clarifying the roles of behavioural and brain responses. Cogn Neuropsychiatry 20: 95–105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Corlett PR, Frith CD, Fletcher PC. (2009a) From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology (Berl) 206: 515–530. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Corlett PR, Honey GD, Aitken MR, et al. (2006) Frontal responses during learning predict vulnerability to the psychotogenic effects of ketamine: linking cognition, brain activity, and psychosis. Arch Gen Psychiatry 63: 611–621. [DOI] [PubMed] [Google Scholar]
  28. Corlett PR, Honey GD, Fletcher PC. (2007a) From prediction error to psychosis: ketamine as a pharmacological model of delusions. J Psychopharmacol 21: 238–252. [DOI] [PubMed] [Google Scholar]
  29. Corlett PR, Krystal JH, Taylor JR, et al. (2009b) Why do delusions persist? Front Hum Neurosci 3: 12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Corlett PR, Murray GK, Honey GD, et al. (2007b) Disrupted prediction-error signal in psychosis: evidence for an associative account of delusions. Brain 130: 2387–2400. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Corlett PR, Taylor JR, Wang XJ, et al. (2010) Toward a neurobiology of delusions. Prog Neurobiol 92: 345–369. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Dayan P, Hinton GE, Neal RM, et al. (1995) The Helmholtz machine. Neural Comput 7: 889–904. [DOI] [PubMed] [Google Scholar]
  33. Dayan P, Kakade S, Montague PR. (2000) Learning and selective attention. Nat Neurosci 3: 1218–1223. [DOI] [PubMed] [Google Scholar]
  34. Dickinson A. (2001) The 28th Bartlett Memorial Lecture. Causal learning: an associative analysis. Q J Exp Psychol B 54: 3–25. [DOI] [PubMed] [Google Scholar]
  35. Diederen KM, Schultz W. (2015) Scaling prediction errors to reward variability benefits error-driven learning in humans. J Neurophysiol 114: 1628–1640. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. D’Souza DC, Perry E, MacDougall L, et al. (2004) The psychotomimetic effects of intravenous delta-9-tetrahydrocannabinol in healthy individuals: implications for psychosis. Neuropsychopharmacology 29: 1558–1572. [DOI] [PubMed] [Google Scholar]
  37. Feinberg I. (1978) Efference copy and corollary discharge: implications for thinking and its disorders. Schizophr Bull 4: 636–640. [DOI] [PubMed] [Google Scholar]
  38. Fillmore MT, Rush CR, Abroms BD. (2005) d-Amphetamine-induced enhancement of inhibitory mechanisms involved in visual search. Exp Clin Psychopharmacol 13: 200–208. [DOI] [PubMed] [Google Scholar]
  39. Fineberg SK, Corlett PR. (2016) The doxastic shear pin: delusions as errors of learning and memory. Cogn Neuropsychiatry 21: 73–89. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Fletcher PC, Anderson JM, Shanks DR, et al. (2001) Responses of human frontal cortex to surprising events are predicted by formal associative learning theory. Nat Neurosci 4: 1043–1048. [DOI] [PubMed] [Google Scholar]
  41. Fletcher PC, Frith CD. (2009) Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nat Rev Neurosci 10: 48–58. [DOI] [PubMed] [Google Scholar]
  42. Ford JM. (2016) Studying auditory verbal hallucinations using the RDoC framework. Psychophysiology 53: 298–304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Ford JM, Mathalon DH. (2005) Corollary discharge dysfunction in schizophrenia: can it explain auditory hallucinations? Int J Psychophysiol 58: 179–189. [DOI] [PubMed] [Google Scholar]
  44. Ford JM, Roach BJ, Faustman WO, et al. (2007) Synch before you speak: auditory hallucinations in schizophrenia. Am J Psychiatry 164: 458–466. [DOI] [PubMed] [Google Scholar]
  45. Friston KJ, Stephan KE, Montague PR, et al. (2014) Computational psychiatry: the brain as a phantastic organ. Lancet Psychiatry 1: 148–158. [DOI] [PubMed] [Google Scholar]
  46. Frith C. (2005a) The neural basis of hallucinations and delusions. C R Biol 328: 169–175. [DOI] [PubMed] [Google Scholar]
  47. Frith C. (2005b) The self in action: lessons from delusions of control. Conscious Cogn 14: 752–770. [DOI] [PubMed] [Google Scholar]
  48. Geddes MR, Tie Y, Gabrieli JD, et al. (2016) Altered functional connectivity in lesional peduncular hallucinosis with REM sleep behavior disorder. Cortex 74: 96–106.
  49. Geyer MA, Vollenweider FX. (2008) Serotonin research: contributions to understanding psychoses. Trends Pharmacol Sci 29: 445–453.
  50. Gradin VB, Kumar P, Waiter G, et al. (2011) Expected value and prediction error abnormalities in depression and schizophrenia. Brain 134: 1751–1764.
  51. Gray JA, Feldon J, Rawlins JNP, et al. (1991) The neuropsychology of schizophrenia. Behav Brain Sci 14: 1–84.
  52. Griffiths O, Langdon R, Le Pelley ME, et al. (2014) Delusions and prediction error: re-examining the behavioural evidence for disrupted error signalling in delusion formation. Cogn Neuropsychiatry 19: 439–467.
  53. Hartley D. (1749/1976) Observations on Man, His Frame, His Duty, and His Expectations. New York: Delmar.
  54. Hauser M, Moore JW, de Millas W, et al. (2011) Sense of agency is altered in patients with a putative psychotic prodrome. Schizophr Res 126: 20–27.
  55. Hinton GE, Dayan P. (1996) Varieties of Helmholtz machine. Neural Netw 9: 1385–1403.
  56. Hoffman RE. (2010) Revisiting Arieti’s ‘listening attitude’ and hallucinated voices. Schizophr Bull 36: 440–442.
  57. Hoffman RE, Dobscha SK. (1989) Cortical pruning and the development of schizophrenia: a computer model. Schizophr Bull 15: 477–490.
  58. Hoffman RE, Grasemann U, Gueorguieva R, et al. (2011) Using computational patients to evaluate illness mechanisms in schizophrenia. Biol Psychiatry 69: 997–1005.
  59. Holland PC, Schiffino FL. (2016) Mini-review: prediction errors, attention and associative learning. Neurobiol Learn Mem 131: 207–215.
  60. Honsberger MJ, Taylor JR, Corlett PR. (2015) Memories reactivated under ketamine are subsequently stronger: a potential pre-clinical behavioral model of psychosis. Schizophr Res 164: 227–233.
  61. Horga G, Schatz KC, Abi-Dargham A, et al. (2014) Deficits in predictive coding underlie hallucinations in schizophrenia. J Neurosci 34: 8072–8082.
  62. Jacobs D, Silverstone T. (1986) Dextroamphetamine-induced arousal in human subjects as a model for mania. Psychol Med 16: 323–329.
  63. Keefe RS, Arnold MC, Bayen UJ, et al. (1999) Source monitoring deficits in patients with schizophrenia: a multinomial modelling analysis. Psychol Med 29: 903–914.
  64. Keefe RS, Kraus MS. (2009) Measuring memory-prediction errors and their consequences in youth at risk for schizophrenia. Ann Acad Med Singapore 38: 414–416.
  65. Kegeles LS, Abi-Dargham A, Zea-Ponce Y, et al. (2000) Modulation of amphetamine-induced striatal dopamine release by ketamine in humans: implications for schizophrenia. Biol Psychiatry 48: 627–640.
  66. Kegeles LS, Zea-Ponce Y, Abi-Dargham A, et al. (1999) Stability of [123I]IBZM SPECT measurement of amphetamine-induced striatal dopamine release in humans. Synapse 31: 302–308.
  67. Kersten D, Mamassian P, Yuille A. (2004) Object perception as Bayesian inference. Annu Rev Psychol 55: 271–304.
  68. Kihlstrom JF, Hoyt IP. (1988) Hypnosis and the psychology of delusions. In: Oltmanns TF, Maher BA. (eds) Delusional Beliefs. New York: John Wiley.
  69. Koethe D, Gerth CW, Neatby MA, et al. (2006) Disturbances of visual information processing in early states of psychosis and experimental delta-9-tetrahydrocannabinol altered states of consciousness. Schizophr Res 88: 142–150.
  70. Kot T, Serper M. (2002) Increased susceptibility to auditory conditioning in hallucinating schizophrenic patients: a preliminary investigation. J Nerv Ment Dis 190: 282–288.
  71. Kraus MS, Keefe RS, Krishnan RK. (2009) Memory-prediction errors and their consequences in schizophrenia. Neuropsychol Rev 19: 336–352.
  72. Krystal JH, Karper LP, Seibyl JP, et al. (1994) Subanesthetic effects of the noncompetitive NMDA antagonist, ketamine, in humans. Psychotomimetic, perceptual, cognitive, and neuroendocrine responses. Arch Gen Psychiatry 51: 199–214.
  73. Lambe EK, Aghajanian GK. (2006) Hallucinogen-induced UP states in the brain slice of rat prefrontal cortex: role of glutamate spillover and NR2B-NMDA receptors. Neuropsychopharmacology 31: 1682–1689.
  74. Laruelle M, Abi-Dargham A, van Dyck CH, et al. (1995) SPECT imaging of striatal dopamine release after amphetamine challenge. J Nucl Med 36: 1182–1190.
  75. Laruelle M, Kegeles LS, Abi-Dargham A. (2003) Glutamate, dopamine, and schizophrenia: from pathophysiology to treatment. Ann N Y Acad Sci 1003: 138–158.
  76. Lavin A, Nogueira L, Lapish CC, et al. (2005) Mesocortical dopamine neurons operate in distinct temporal domains using multimodal signaling. J Neurosci 25: 5013–5023.
  77. LeCun Y, Bengio Y, Hinton G. (2015) Deep learning. Nature 521: 436–444.
  78. Mackintosh NJ. (1975) A theory of attention: variations in the associability of stimuli with reinforcement. Psychol Rev 82: 276–298.
  79. Mesulam MM. (1998) From sensation to cognition. Brain 121: 1013–1052.
  80. Miller R. (1976) Schizophrenic psychology, associative learning and the role of forebrain dopamine. Med Hypotheses 2: 203–211.
  81. Mnih V, Kavukcuoglu K, Silver D, et al. (2015) Human-level control through deep reinforcement learning. Nature 518: 529–533.
  82. Montague PR, Dolan RJ, Friston KJ, et al. (2012) Computational psychiatry. Trends Cogn Sci 16: 72–80.
  83. Moore JW, Dickinson A, Fletcher PC. (2011a) Sense of agency, associative learning, and schizotypy. Conscious Cogn 20: 792–800.
  84. Moore JW, Fletcher PC. (2012) Sense of agency in health and disease: a review of cue integration approaches. Conscious Cogn 21: 59–68.
  85. Moore JW, Turner DC, Corlett PR, et al. (2011b) Ketamine administration in healthy volunteers reproduces aberrant agency experiences associated with schizophrenia. Cogn Neuropsychiatry 16: 364–381.
  86. Morgan HL, Turner DC, Corlett PR, et al. (2011) Exploring the impact of ketamine on the experience of illusory body ownership. Biol Psychiatry 69: 35–41.
  87. Murray GK, Corlett PR, Clark L, et al. (2008) Substantia nigra/ventral tegmental reward prediction error disruption in psychosis. Mol Psychiatry 13: 239, 267–276.
  88. Oye I, Paulsen O, Maurset A. (1992) Effects of ketamine on sensory perception: evidence for a role of N-methyl-D-aspartate receptors. J Pharmacol Exp Ther 260: 1209–1213.
  89. Palmer CJ, Seth AK, Hohwy J. (2015) The felt presence of other minds: predictive processing, counterfactual predictions, and mentalising in autism. Conscious Cogn 36: 376–389.
  90. Pavlov IP. (1928) Natural science and the brain. In: Pavlov IP. (ed) Lectures on Conditioned Reflexes. New York, pp. 126–127.
  91. Pearce JM, Hall G. (1980) A model for Pavlovian learning: variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychol Rev 87: 532–552.
  92. Peled A, Pressman A, Geva AB, et al. (2003) Somatosensory evoked potentials during a rubber-hand illusion in schizophrenia. Schizophr Res 64: 157–163.
  93. Peirce CS. (1931–1958) Collected Papers of Charles Sanders Peirce. Cambridge, MA: Harvard University Press.
  94. Pomarol-Clotet E, Honey GD, Murray GK, et al. (2006) Psychological effects of ketamine in healthy volunteers. Phenomenological study. Br J Psychiatry 189: 173–179.
  95. Preuschoff K, Bossaerts P. (2007) Adding prediction risk to the theory of reward learning. Ann N Y Acad Sci 1104: 135–146.
  96. Preuschoff K, Quartz SR, Bossaerts P. (2008) Human insula activation reflects risk prediction errors as well as risk. J Neurosci 28: 2745–2752.
  97. Rasmussen K, Aghajanian GK. (1986) Effect of hallucinogens on spontaneous and sensory-evoked locus coeruleus unit activity in the rat: reversal by selective 5-HT2 antagonists. Brain Res 385: 395–400.
  98. Reichert DP, Series P, Storkey AJ. (2013) Charles Bonnet syndrome: evidence for a generative model in the cortex? PLoS Comput Biol 9: e1003134.
  99. Romaniuk L, Honey GD, King JR, et al. (2010) Midbrain activation during Pavlovian conditioning and delusional symptoms in schizophrenia. Arch Gen Psychiatry 67: 1246–1254.
  100. Salakhutdinov R, Hinton G. (2012) An efficient learning procedure for deep Boltzmann machines. Neural Comput 24: 1967–2006.
  101. Schlagenhauf F, Sterzer P, Schmack K, et al. (2009) Reward feedback alterations in unmedicated schizophrenia patients: relevance for delusions. Biol Psychiatry 65: 1032–1039.
  102. Schultz W, Dickinson A. (2000) Neuronal coding of prediction errors. Annu Rev Neurosci 23: 473–500.
  103. Semple DM, Ramsden F, McIntosh AM. (2003) Reduced binocular depth inversion in regular cannabis users. Pharmacol Biochem Behav 75: 789–793.
  104. Seth AK. (2013) Interoceptive inference, emotion, and the embodied self. Trends Cogn Sci 17: 565–573.
  105. Seth AK, Suzuki K, Critchley HD. (2011) An interoceptive predictive coding model of conscious presence. Front Psychol 2: 395.
  106. Shergill SS, Bays PM, Frith CD, et al. (2003) Two eyes for an eye: the neuroscience of force escalation. Science 301: 187.
  107. Shergill SS, Samson G, Bays PM, et al. (2005) Evidence for sensory prediction deficits in schizophrenia. Am J Psychiatry 162: 2384–2386.
  108. Silver D, Huang A, Maddison CJ, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529: 484–489.
  109. Stompe T, Ortwein-Swoboda G, Ritter K, et al. (2003) Old wine in new bottles? Stability and plasticity of the contents of schizophrenic delusions. Psychopathology 36: 6–12.
  110. Sutton RS, Barto AG. (1998) Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
  111. Tang J, Morgan HL, Liao Y, et al. (2015) Chronic administration of ketamine mimics the perturbed sense of body ownership associated with schizophrenia. Psychopharmacology (Berl) 232: 1515–1526.
  112. Teufel C, Kingdon A, Ingram JN, et al. (2010) Deficits in sensory prediction are related to delusional ideation in healthy individuals. Neuropsychologia 48: 4169–4172.
  113. Teufel C, Subramaniam N, Dobler V, et al. (2015) Shift toward prior knowledge confers a perceptual advantage in early psychosis and psychosis-prone healthy individuals. Proc Natl Acad Sci U S A 112: 13401–13406.
  114. Turner DC, Aitken MR, Shanks DR, et al. (2004) The role of the lateral frontal cortex in causal associative learning: exploring preventative and super-learning. Cereb Cortex 14: 872–880.
  115. Voss M, Moore J, Hauser M, et al. (2010) Altered awareness of action in schizophrenia: a specific deficit in predicting action consequences. Brain 133: 3104–3112.
  116. Waltz JA, Schweitzer JB, Ross TJ, et al. (2010) Abnormal responses to monetary outcomes in cortex, but not in the basal ganglia, in schizophrenia. Neuropsychopharmacology 35: 2427–2439.
  117. Wegner DM, Wheatley T. (1999) Apparent mental causation. Sources of the experience of will. Am Psychol 54: 480–492.
  118. Widrow B, Hoff ME., Jr (1960) Adaptive switching circuits. IRE Wescon Conv Rec 4: 96–104.
  119. Wilkinson S. (2014) Accounting for the phenomenology and varieties of auditory verbal hallucination within a predictive processing framework. Conscious Cogn 30: 142–155.
  120. Young BG. (1974) A phenomenological comparison of LSD and schizophrenic states. Br J Psychiatry 124: 64–74.
  121. Yu AJ, Dayan P. (2002) Acetylcholine in cortical inference. Neural Netw 15: 719–730.
  122. Yu AJ, Dayan P. (2005) Uncertainty, neuromodulation, and attention. Neuron 46: 681–692.
  123. Yuille A, Kersten D. (2006) Vision as Bayesian inference: analysis by synthesis? Trends Cogn Sci 10: 301–308.

Articles from Journal of Psychopharmacology (Oxford, England) are provided here courtesy of SAGE Publications
