Summary
In perceptual decision-making, uncertainties regarding both noisy sensory information and changing environmental regularities must be considered. We aimed to clarify the relationship between these two sources of uncertainty using a combined motion discrimination and audiovisual reversal learning task with Bayesian modeling. As predicted, the influence of learned beliefs regarding audiovisual associations on perceptual decisions was greater under high sensory uncertainty. Critically, this modulatory effect was larger under high than under low environmental uncertainty. Moreover, the degree to which observers relied on learned beliefs when making perceptual decisions depended on their individual tendency to change beliefs. While these findings suggest that the weighting of available sensory information against learned beliefs is modulated by their respective uncertainties, belief learning was not found to rely on sensory uncertainty. Unraveling these interactive effects of sensory and environmental uncertainty in perception might aid the understanding of aberrant perceptual inference in psychopathologies such as schizophrenia.
Subject areas: Biological sciences, Neuroscience, Sensory neuroscience
Graphical abstract

Highlights
- Sensory and environmental uncertainty interactively influence perceptual decision-making
- Observers show individual differences in relying on beliefs in perception
- Learning of beliefs is influenced by perceptual decisions
Introduction
To respond adequately to their sensory environment, biological agents have to make perceptual decisions based on the signals registered by their sensory organs. As signals are inherently noisy, perceptual decision-making has to take the uncertainty of sensory information into account.1 In addition to sensory uncertainty, stemming from noise in the sensory signal itself, there is also uncertainty regarding state changes of the environment over time, which has been termed environmental uncertainty.2 Environmental uncertainty can refer to stochastic outcome probabilities given a cue and a target (=expected uncertainty), possible changes in these probabilities (=unexpected uncertainty), and the frequency of such changes over time (=volatility).3,4 Optimal perceptual decision-making thus requires the consideration and integration of both sensory and environmental uncertainties.3,5,6
In the framework of predictive processing, perceptual decision-making under uncertainty is commonly regarded as a Bayesian inferential process in which prior beliefs about the world (=prior) and sensory information (=likelihood) are combined into a posterior belief (=posterior). The posterior corresponds to the brain’s current best guess regarding the possible causes of the sensory data and is thought to determine perceptual decisions.1,7,8 Priors are derived from an internally stored generative model of the world. Whenever these priors are violated by the incoming sensory data, prediction error (PE) signals are generated, which, as neural learning signals, lead to an updating of the internal model.9
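For a binary choice such as leftward versus rightward motion, this prior-likelihood combination reduces to a one-line application of Bayes' rule. The following sketch is purely illustrative (the function name and probability coding are assumptions, not the paper's model):

```python
def posterior_right(prior_right, likelihood_right):
    """Combine a prior belief with sensory evidence via Bayes' rule
    for a binary variable (rightward vs. leftward motion).

    prior_right:      P(right) before the stimulus is seen
    likelihood_right: normalized evidence for 'right' in (0, 1),
                      i.e., P(data | right) / (P(data | right) + P(data | left))
    """
    num = likelihood_right * prior_right
    den = num + (1.0 - likelihood_right) * (1.0 - prior_right)
    return num / den  # P(right | data)
```

Note that an uninformative likelihood (0.5) leaves the prior unchanged, while a flat prior (0.5) simply returns the likelihood; when both point the same way, the posterior is more confident than either alone.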
For such inference to be successful in the face of uncertainties from different sources, observers need to apply different levels of prior beliefs, which are thought to be organized hierarchically. More basic or low-level priors regarding the causes of sensory data will be influenced by higher-level priors regarding environmental regularities.2,6 A computational model that has been used successfully for the analysis of learning and inference under uncertainty is the Hierarchical Gaussian Filter (HGF), which allows for the trial-by-trial analysis of learning trajectories under uncertainty, as learning rates can flexibly adapt to changes in the environment. Within the HGF, priors regarding the reliability of sensory information are integrated with priors regarding environmental regularities.2,6
Numerous behavioral studies have shown that priors regarding sensory information bias perceptual decisions under sensory uncertainty.10 Under environmental uncertainty, observers have been shown to engage in a Bayes-optimal, dynamical learning process in which the degree to which new information is incorporated depends on estimates of environmental volatility.5,11 Priors regarding these uncertainties are thought to be represented at different levels of the cortical processing hierarchy and to involve different neurotransmitter systems.12,13 What has remained largely unknown, however, is how different sources of uncertainty are integrated in the updating of prior beliefs and their use in perceptual decision-making. In particular, it is unclear how sensory uncertainty affects belief updating under environmental uncertainty on the one hand, and how the weighting of uncertain sensory information is modulated by environmental uncertainty on the other. Finally, it remains to be elucidated how individual differences in belief updating might modulate perceptual inference under sensory uncertainty. The latter point is of particular importance for the understanding of aberrant perceptual inference, which has been related to the psychopathology of mental disorders such as schizophrenia and autism.14,15
To probe the influence of sensory and environmental uncertainty on the learning and use of priors in perceptual decision-making, we created an experimental scenario in which both sources of uncertainty were manipulated independently. We asked how varying levels of environmental uncertainty (here, primarily changes in stochastic outcome probabilities given a cue and a target), as induced by audiovisual learning, would influence the use of priors in the perception of noisy visual information, and how varying levels of sensory uncertainty would influence the learning of environmental regularities.
We hypothesized that the extent to which prior beliefs induced by associative learning would influence perceptual decisions should depend on both the current strength of the audiovisual association (i.e., environmental uncertainty), and on the currently available visual information (i.e., sensory uncertainty) in an interactive way. Moreover, we expected both sources of uncertainty to influence the updating of prior beliefs. Finally, we reasoned that individual variability in the use of learned priors in perceptual decision-making may be related to individual differences in the tendency to change one’s beliefs about the environment.
Results
Twenty healthy participants (10 female) underwent five runs of a behavioral experiment in which both sensory and environmental uncertainty were manipulated using a combined motion discrimination and audiovisual reversal learning task. Sensory uncertainty was manipulated by randomly varying the proportion of dots that moved coherently either to the left or to the right within random dot kinematograms (RDKs), using six levels of motion coherence (0.05, 1.26, 3.15, 7.92, 19.91, and 50%). Environmental uncertainty was manipulated through changing probabilistic associations of auditory cues with leftward and rightward visual motion. Auditory cues were high or low tones that preceded the presentation of the RDK, which served as the visual target stimulus in each trial. Audiovisual cue-target associations and their strengths (90% or 70%) were varied blockwise, with block lengths ranging from 12 to 20 trials. Observers were uninformed about block durations and had to infer the changing cue-target associations over the course of trials (Figure 1A). There was no explicit manipulation of volatility, i.e., the frequency of changes in cue-target associations over the course of the experiment. Participants’ task was to indicate their prediction regarding the motion direction of the upcoming visual target stimulus after presentation of the auditory cue, as well as their perception of the following visual stimulus, by pressing one of two keys. No feedback was provided.
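The trial structure described above can be sketched as a small generator (a simplified illustration under assumed conventions: cue and target are coded 0/1, block counts and the association coding are hypothetical, and this is not the actual experimental code):

```python
import random

COHERENCES = [0.05, 1.26, 3.15, 7.92, 19.91, 50.0]  # % of coherently moving dots

def make_run(n_blocks=8, seed=0):
    """Generate one run of cue-target trials with blockwise reversals."""
    rng = random.Random(seed)
    assoc = rng.choice([0, 1])  # which tone currently predicts rightward motion
    trials = []
    for _ in range(n_blocks):
        contingency = rng.choice([0.9, 0.7])  # high- or low-contingency block
        for _ in range(rng.randint(12, 20)):  # block length: 12-20 trials
            cue = rng.choice([0, 1])          # high or low tone
            predicted = cue if assoc == 1 else 1 - cue
            # target follows the cue-predicted direction with p = contingency
            target = predicted if rng.random() < contingency else 1 - predicted
            trials.append({
                "cue": cue,
                "target": target,
                "coherence": rng.choice(COHERENCES),
                "contingency": contingency,
            })
        assoc = 1 - assoc  # unannounced reversal at the block boundary
    return trials
```

Because reversals are never signaled, an observer can only infer the current association from the trial-by-trial statistics of cues and perceived targets.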
Figure 1.
Schematic illustration of experimental and modelling design
(A) Schematic illustration of experimental design. Perceptual runs 1+2: In each trial, a random dot kinematogram (RDK) of either rightward or leftward motion with one of six different coherence levels (i.e. proportion of signal dots moving coherently, 0.05, 1.26, 3.15, 7.92, 19.91 or 50%) was shown. Learning runs 3–5: Probabilistic audiovisual reversal learning task. In each trial, the RDK was preceded by a tone. A high tone could be associated with rightward and leftward motion with high (bold arrow) and low (thin arrow) probabilities, respectively, or vice versa. The low tone was associated with the other respective motion direction. These associations reversed unpredictably after variable numbers of trials and with varying probabilities. The strength of association varied between high- and low-contingency schemes with either 90/10% or 70/30%, respectively.
(B) Schematic illustration of modeling design. The HGF model consisted of three levels, with the posterior of each level (μ1, μ2, and μ3) forming the prior for the next level. The updating of priors was influenced by the learning rates ω2 at the second level and ω3 at the third level. Combination of the first-level prior (μ̂1, cue-target association) with the cue information (high or low tone) resulted in the conditional probability of the perceived net motion given the cue. Combination with the sensory information led to the posterior probability of the perceived net motion, with the individual sensitivity to sensory information (α1) of each observer influencing the weighting of the sensory evidence. Model inversion was performed from observers’ reports of prediction and perception in each trial.
Conventional analyses
Sensory and environmental uncertainty influence perceptual decisions in an interactive way
In a first step, we sought to confirm that our manipulations of sensory and environmental uncertainty were successful. To this end, we examined the effects of both types of uncertainty on the accuracy of perceptual decisions and hypothesized that accuracy should be high when both types of uncertainty were low. We calculated the proportion of trials in which the net motion of the target stimulus was perceived correctly (= accuracy) across coherence and contingency levels. A two-way repeated measures analysis of variance (rm-ANOVA) with the dependent variable “accuracy” (proportion of correct perceptual choices) and the factors “coherence” (= level of sensory uncertainty) and “contingency” (= level of environmental uncertainty) showed a significant main effect of “coherence” (F(5,209) = 140.10, p = 2 x 10^−16, Bayes Factor [BF] = 2.9 x 10^64) with higher accuracy at increasing levels of motion coherence. Additionally, there was a main effect of “contingency” (F(1,209) = 5.64, p = 0.019, BF = 2.1) with higher perceptual accuracy at higher contingency levels. This indicates that observers were more accurate when more reliable predictive information was available, suggesting that observers used learned priors to make perceptual decisions. There was no significant “coherence” x “contingency” interaction effect (F(5,209) = 0.34, p = 0.89, BF = 0.038). Thus, as expected, participants based correct perceptual decisions on both stimulus and predictive information (Figure 2A).
Figure 2.
Results of conventional analyses
(A) Effect of “coherence” and “contingency” on “accuracy”, indicating a rise in perceptual accuracy with more stimulus information (higher coherence levels) as well as with more predictive cue information (higher contingency levels).
(B) Interactive effect of learned cue-target associations on perception under different levels of sensory and environmental uncertainty. “Congruency” (proportion of trials perceived congruently with prediction) as indicator of the usage of learned cue-target associations being higher in low coherence trials and in high-contingency trials.
(C) Learning, expressed as the proportion of correctly predicted targets (“Prediction”) between different contingency levels (high: red, low: blue).
(D) Change rate (proportion of trials in which participants changed their prediction regarding the upcoming target) in different coherence levels (1–6) and different contingency levels (blue: low, red: high). Change rate was significantly higher in low-contingency trials as well as in low coherence trials. All data are represented as mean +/− standard deviation.
The key aim of our study was to elucidate the effects of sensory and environmental uncertainty on perceptual decisions. In particular, we asked whether the degree to which perceptual decisions were influenced by learned prior beliefs was modulated by sensory and environmental uncertainty, respectively. We therefore computed the proportion of trials perceived congruently with the reported prediction as a function of coherence and contingency levels. We hypothesized (a) that the effect of learned predictions (i.e., participants’ learned beliefs) on perception would be greater at lower levels of motion coherence (i.e., high sensory uncertainty) and (b) that this effect would be modulated by the level of contingency (i.e., environmental uncertainty).
Confirming these hypotheses, a two-way rm-ANOVA with “congruency” (proportion of prediction-congruent perceptual choices) as dependent variable and the factors “coherence” and “contingency” showed a significant main effect of “coherence” (F(5,209) = 4.35, p = 8.6 x 10^−4, BF = 23) with higher congruency at lower coherence levels (Figure 2B). This indicates that participants relied more on learned beliefs when sensory uncertainty was high. Additionally, we observed a main effect of “contingency” (F(1,209) = 27.61, p = 3.7 x 10^−7, BF = 3.4 x 10^4) with significantly higher congruency in high-contingency blocks (mean congruency high contingency: 70.11% ± 0.09%, mean congruency low contingency: 62.94% ± 0.11%), indicating that participants relied more on learned beliefs when environmental uncertainty was low.
Critically, we also observed a significant interaction of “coherence” and “contingency” (F(5,209) = 3.23, p = 7.9 x 10^−3, BF = 4.5), indicating that the effect of coherence on prediction-congruency differed between the high- and low-contingency conditions. As can be seen in Figure 2B, and confirmed by separate post-hoc one-way rm-ANOVAs, prediction-congruency was modulated by coherence level in low- but not in high-contingency blocks (low contingency: F(5,209) = 7.4, p = 6.7 x 10^−6, BF = 3.03 x 10^3; high contingency: F(5,209) = 0.45, p = 0.75, BF = 0.06). Thus, when predictive information was reliable, as in the high-contingency condition, learned beliefs had a strong influence on perception independent of the sensory information available. In contrast, when predictive information was less reliable, as in the low-contingency condition, learned beliefs had a strong influence on perception only in the face of uncertain sensory information, but less so when more sensory information was available. In other words, the weighting of the available sensory information against learned beliefs depended on the certainty of these beliefs (Figure 2B).
Effects of sensory and environmental uncertainty on the learning and updating of beliefs
As participants were not informed about the current association between cues (high or low tones) and targets (left- or rightward motion) and did not receive feedback on their performance, they had to infer this association over the course of trials. To assess how well participants learned contingencies, we first calculated the proportion of correctly predicted targets within learning runs, where a “correct prediction” was defined as a prediction that equaled the target. Across all trials, participants correctly predicted the target in 60.9 ± 0.09% of trials, which was higher than chance level (one-sample t-test: T(19) = 7.03, p = 1.9 x 10^−8, BF = 6.9 x 10^5), showing that participants successfully learned to associate a given cue with the following net-motion direction. Additionally, the proportion of correctly predicted targets differed between contingency levels: it was on average significantly higher in the high-contingency condition (67.62% ± 0.06%) than in the low-contingency condition (54.18% ± 0.08%; paired t-test: T(19) = 6, p = 1.5 x 10^−6, BF = 1.7 x 10^4). As expected, this shows that strong audiovisual cue-target associations (i.e., high contingency levels) facilitated learning (Figure 2C).
Next, we sought to explore the role of sensory and environmental uncertainty in the updating of beliefs regarding cue-target associations. Such learning is a dynamic process that occurs over the whole sequence of experimental trials and is therefore difficult to capture by conventional statistical analyses based on averaging across multiple trials.16 As a first approximation, however, we assessed under which circumstances participants changed their predictions regarding cue-target associations. To this end, we tested how many changes in reported prediction occurred as a function of (a) the coherence level in the trial preceding the change and (b) the current contingency level. We hypothesized that observers would be more likely to change their predictions under conditions of low sensory uncertainty (i.e., after high coherence trials) and high environmental uncertainty (i.e., during low-contingency blocks).
To test this hypothesis, we performed a two-way rm-ANOVA with the dependent variable “change rate” (proportion of trials in which the prediction changed) and the independent factors “coherence” and “contingency”. There were significant main effects of “coherence” (F(5,209) = 7.89, p = 7.9 x 10^−7, BF = 2.3 x 10^4) and “contingency” (F(1,209) = 28.85, p = 2.1 x 10^−7, BF = 8.3 x 10^4), showing that participants changed their predictions more often after trials with low sensory uncertainty and during phases of high environmental uncertainty. Moreover, there was a significant “coherence”-by-“contingency” interaction effect (F(5,209) = 2.82, p = 0.017, BF = 2.3), showing a more pronounced increase in change rate with increasing levels of coherence in the low- vs. the high-contingency condition (Figure 2D). This suggests that observers’ decisions to change their predictions were influenced by both sensory and environmental uncertainty: changes were more likely when sensory uncertainty was low (high coherence trials) and environmental uncertainty was high (low contingency).
Bayesian Modeling
So far, our findings indicate that priors regarding cue-target associations were learned over the course of learning runs and biased observers’ perceptual decisions dynamically and relative to the amount of sensory and environmental uncertainty. Additionally, we found that changes in reported predictions occurred more often after trials of low sensory and high environmental uncertainty (i.e. after high coherence trials and in low-contingency blocks), providing tentative evidence for a role of both sources of uncertainty in belief updating.
As mentioned above, however, these conventional analyses were limited with respect to a more detailed assessment of belief updating. They only allowed comparison of averages across specific trials within the given block structure of the experiment but fell short of delineating the trial-to-trial learning trajectory over time. Yet, capturing the evolution of learning across trials and blocks was required to determine the signal that drove belief updating in the context of our experiment. Was learning based just on the binary categories of either the stimulus information (i.e., leftward vs. rightward net-motion of the RDK) or the participant’s perceptual choices, or was sensory uncertainty also taken into account? Moreover, observers might have updated their beliefs at different rates, as learning has been suggested to show great inter-individual differences.5 We therefore adopted a Bayesian modeling approach to analyze the trajectories of observers’ beliefs over the course of runs and independent of blocks, in order to (a) identify the learning signal that the updating of beliefs was based on, and (b) estimate the influence of individual beliefs regarding environmental uncertainty on perceptual decision-making.
We used a version of the HGF for learning under sensory uncertainty2,6 since it specifically allows us to model learning in a hierarchical fashion in scenarios in which changes in sensory evidence and environmental regularities occur (i.e., under sensory and environmental uncertainty), and for which it has been used successfully in previous studies.17,18,19 The HGF treats each perceptual decision as a Bayesian inferential process. Here, the combination of learned beliefs about cue-target associations (i.e., priors) with the sensory evidence (i.e., likelihood) results in a posterior distribution, which is calculated as a function of the precisions of prior and likelihood. Precision equals the inverse of the variance of a probability distribution and determines the respective impacts of prior and likelihood on the posterior, which in turn determines the perceptual decision. Learning is understood as the updating of priors via PEs, which are elicited when the incoming sensory information violates a prior belief; updating of priors at the different levels of the hierarchy is mediated by these PEs. The HGF has three levels. Level 1 is a sigmoid transformation of level 2, which represents the estimated strength of the cue-target contingency. Uncertainty at level 2 stems from sensory uncertainty within the signal as well as environmental uncertainty regarding the variability of the association between target and cue. Level 3 represents participants’ expectations regarding the variability (i.e., volatility) of the cue-target associations, mathematically formulated as the log-volatility of the environment. Modeling thereby enabled a trial-by-trial analysis of the updating of two types of hierarchically structured priors: a “high-level” prior regarding environmental uncertainty and a “low-level” prior regarding sensory uncertainty.
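The update logic at the lower levels can be illustrated with a deliberately simplified two-level sketch of the binary HGF, in which the volatility level is omitted and its influence is absorbed into a constant ω2 (this follows the generic published HGF update equations, not necessarily the exact implementation used in this study):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hgf2_update(mu2, sigma2, u, omega2=-3.0):
    """One trial of a simplified two-level binary HGF update.

    mu2, sigma2: mean/variance of the 2nd-level tendency belief
    u:           binary learning signal on this trial (0 or 1)
    omega2:      constant log-step size of the 2nd level
    """
    sigma2_hat = sigma2 + math.exp(omega2)            # prediction variance
    mu1_hat = sigmoid(mu2)                            # 1st-level prediction
    delta1 = u - mu1_hat                              # 1st-level prediction error
    pi2 = 1.0 / sigma2_hat + mu1_hat * (1.0 - mu1_hat)  # 2nd-level precision
    sigma2_new = 1.0 / pi2
    mu2_new = mu2 + sigma2_new * delta1               # precision-weighted update
    return mu2_new, sigma2_new

# Beliefs drift toward a stable contingency under repeated evidence:
mu2, sigma2 = 0.0, 1.0
for _ in range(30):
    mu2, sigma2 = hgf2_update(mu2, sigma2, u=1)
```

Note how the step size is not fixed: each update is weighted by the current posterior variance, so beliefs move quickly when uncertain and slowly once the contingency seems established, which is exactly what a fixed-learning-rate model cannot do.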
For further details see the STAR Methods, Bayesian modeling section, and Figure 1 as well as Table 1.
Table 1.
Summary of model parameters and quantities
| | Symbol | Explanation |
|---|---|---|
| Model quantities | u | Cue (high/low tone) |
| | x | Sensory information |
| | y | Perceptual response |
| | y_pred | Prediction response |
| | | Posterior probability of perceived net-motion |
| | μ1 | 1st-level posterior (= prediction of association between tone and motion direction) |
| | μ2 | 2nd-level posterior (tendency of the 1st-level state toward 1) |
| | μ3 | 3rd-level posterior (log-volatility of tendency) |
| | | Conditional probability of motion direction given a tone |
| | μ̂1 | Prior at 1st level |
| | μ̂2 | Prior at 2nd level |
| | μ̂3 | Prior at 3rd level |
| | π2 | Precision of 2nd-level posterior (inverse of variance) |
| | π3 | Precision of 3rd-level posterior (inverse of variance) |
| | π̂1 | Precision of 1st-level prior (inverse of variance) |
| | π̂2 | Precision of 2nd-level prior (inverse of variance) |
| | π̂3 | Precision of 3rd-level prior (inverse of variance) |
| | σ2 | Variance of 2nd-level posterior |
| | σ3 | Variance of 3rd-level posterior |
| | σ̂2 | Variance of 2nd-level prior |
| | σ̂3 | Variance of 3rd-level prior |
| | δ1 | Prediction error at 1st level |
| | δ2 | Prediction error at 2nd level |
| Model parameters | ω2 | Learning rate of 2nd level |
| | ω3 | Learning rate of 3rd level |
| | ε2 | Precision-weighted prediction error at 2nd level |
| | α1 | Individual sensitivity to sensory information |
| | | Precision weighting of prediction error in RW updating |
| | κ | Coupling strength between 2nd and 3rd level |
| | η1 | Constant value of most likely perception if sensory input x = 1 |
| | η0 | Constant value of most likely perception if sensory input x = 0 |

Table of model parameters and quantities of the HGF model; symbols follow standard HGF notation.
Bayesian model comparison points to perceptual choices as primary learning signal
To investigate which aspects of sensory and predictive information drove the learning of priors regarding cue-target contingencies, we first constructed and compared three different HGF models. Information from auditory cues was allowed to inform priors in all three models. However, the models differed with respect to the learning signal used for the updating of priors via PEs. In model 1 (see STAR Methods, Bayesian modeling section, Equation 9), learning was driven by a posterior incorporating the uncertainty of sensory information, i.e., the posterior probability of net-motion in addition to the information provided by the auditory cue, thereby treating sensory uncertainty as the main learning signal. In model 2 (Equation 10), in contrast, learning was driven by the observer’s binary perceptual decision in combination with the cue information. While sensory uncertainty is inherently part of the perceptual decision, it did not serve as the primary learning signal in this model. Finally, in model 3 (Equation 11), learning was driven by the actual net-motion direction of the visual target stimulus, again in combination with the cue information.
To support our choice of an HGF modeling approach, we compared the winning model from the first HGF model comparison with two control models (models 4 and 5). In model 4, priors did not influence perceptual decisions, as these were driven exclusively by the likelihood information, thereby testing whether priors influence perception at all. This was achieved by fixing the prior at the second level to 0.5, not allowing for updates of the prior. In model 5, a one-level Rescorla-Wagner learning approach with a fixed learning rate was used to test whether a non-flexible one-level model would be superior to an HGF model in explaining observers’ behavior.
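The Rescorla-Wagner control model (model 5) updates a single association strength with a fixed learning rate via the delta rule. In sketch form (an illustration with an assumed learning rate, not the fitted model):

```python
def rescorla_wagner(outcomes, alpha=0.2, v0=0.5):
    """Track association strength V with a fixed learning rate alpha.

    outcomes: sequence of binary outcomes (1 = cue-congruent target)
    Returns the trial-by-trial trajectory of V.
    """
    v = v0
    trajectory = []
    for o in outcomes:
        v = v + alpha * (o - v)  # delta rule: V += alpha * prediction error
        trajectory.append(v)
    return trajectory
```

Unlike the HGF, the step size here never adapts, so the model cannot speed up learning after a suspected reversal or slow down once the contingency is well established.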
In the first model comparison, model 2 outperformed models 1 and 3 in explaining observers’ perceptual decisions (exceedance probability 100%, Figure 3A). The second model comparison provided strong evidence for the superiority of model 2 over the two control models 4 and 5 (exceedance probability 81.9%, Figure 3B). These results confirm that cue-target associations were learned and indeed influenced perceptual decisions. However, Bayesian model comparison provided no evidence for a primary influence of sensory uncertainty on learning. Rather, the superiority of model 2 over model 1 suggests that the updating of priors was mainly driven by observers’ binary perceptual decisions in combination with the cue information (Figures 3A and 3B).
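An exceedance probability is the posterior probability that a given model is more frequent in the population than any competitor. It can be approximated by sampling from the Dirichlet posterior over model frequencies, as in random-effects Bayesian model selection. The sketch below is generic, and the Dirichlet counts are made up for illustration, not the study's actual values:

```python
import random

def exceedance_probabilities(alpha, n_samples=20000, seed=0):
    """Monte Carlo estimate of exceedance probabilities from the
    Dirichlet concentration parameters of a random-effects BMS."""
    rng = random.Random(seed)
    wins = [0] * len(alpha)
    for _ in range(n_samples):
        # Sample from Dirichlet(alpha) via normalized Gamma draws
        g = [rng.gammavariate(a, 1.0) for a in alpha]
        total = sum(g)
        r = [x / total for x in g]
        wins[r.index(max(r))] += 1  # which model has the highest frequency?
    return [w / n_samples for w in wins]

# Hypothetical evidence counts favoring the second of three models:
xp = exceedance_probabilities([3.0, 15.0, 3.0])
```

With counts this lopsided, nearly every posterior sample ranks model 2 first, which is how an exceedance probability approaches 100% even though the underlying frequency estimate itself stays well below 1.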
Figure 3.
Results of Bayesian modeling
(A) Bayesian model selection (BMS) of three models incorporating different aspects of prior information into the updating of predictions. Model 2 (“perceptual response model”, i.e., the update followed observers’ perceptual decisions) was the clear winning model, as shown by an exceedance probability of 100%.
(B) BMS control analysis comparing the winning model 2 with model 4 (“target only”) and model 5 (Rescorla-Wagner), showing superiority of model 2 with an exceedance probability of 81.06%.
(C) Correlation between α1 (“alpha 1”, insensitivity to sensory information) and the difference in accuracy between high- and low-contingency blocks, indicating that observers with low sensitivity to sensory information needed more predictive information to perceive accurately (Spearman correlation: rho = −0.8226, p = 4.7 x 10^−6).
(D) Positive correlation between π̂1 (“Pihat”, precision of the prior at the first level) and congruency (Spearman correlation: rho = 0.8286, p = 1.5 x 10^−6).
(E) Positive correlation between ω2 (“Omega 2”, second-level learning rate) and “congruency” (Spearman correlation: rho = 0.6586, p = 0.0021). Correlation plots are shown with 95% confidence intervals.
Assessing the validity of Bayesian model selection and model inversion
To verify the validity of our model selection procedure and the success of model inversion, we extracted posterior model parameters of the winning model and compared them to corresponding results of our conventional analyses. As a first “sanity check”, we assessed the parameter α1, which provides an estimate of the insensitivity to sensory information (i.e., greater values of α1 denote less sensitivity to sensory information). As expected, α1 showed a strong negative correlation with observers’ individual perceptual accuracy, i.e., the proportion of correct choices in the perceptual task (Spearman correlation: rho = −0.78, p = 4.7 x 10^−5, BF = 285) (Figure 3C). Second, we expected that the mean of the model quantity π̂1, which denotes the precision of priors (inverse of the variance) at the first level of the HGF, should correspond to the individual prediction-congruency, i.e., the proportion of trials in which an individual’s perceptual decision was congruent with the indicated prediction. Indeed, individual π̂1 estimates strongly correlated with prediction-congruency between participants (Spearman correlation: rho = 0.83, p = 1.5 x 10^−5, BF = 497) (Figure 3D). Third, we reasoned that differences in environmental uncertainty should be reflected in belief updating. In other words, blocks of low cue-target contingency should be associated with more errors and thus, on average, larger precision-weighted PE estimates than high-contingency blocks. In line with this reasoning, we found that, across participants, PEs were on average significantly larger in low- than in high-contingency blocks (mean PE high contingency: 0.23 ± 0.01, mean PE low contingency: 0.31 ± 0.03, paired t-test: T(119) = −5.05, p = 1.6 x 10^−6, BF = 8.3 x 10^3), again indicating successful model inversion.
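These sanity checks rest on Spearman rank correlations, which can be computed compactly by rank-transforming both variables and then taking the Pearson correlation of the ranks. A minimal sketch, assuming no tied values (full implementations such as scipy.stats.spearmanr also handle ties via average ranks):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for equally long samples without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1  # ranks 1..n
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2  # mean of the ranks 1..n
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = (sum((a - mean) ** 2 for a in rx) *
           sum((b - mean) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks enter the computation, the statistic is insensitive to monotone transformations of either variable, which is useful when parameter estimates such as α1 live on an arbitrary scale.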
Individual differences in belief updating correlate with the use of learned priors in perceptual decisions
Within the HGF, the parameter ω2 denotes the learning rate at the second level of the model and thus, in the context of our experiment, the subject-specific tendency to change beliefs regarding cue-target associations. We hypothesized that individuals with a high second-level learning rate would show a stronger tendency to use learned prior beliefs in perceptual inference and thus a higher proportion of belief-congruent perceptual decisions. We tested the between-subject correlation of ω2 with the average prediction-congruency of perceptual decisions in each participant, i.e., the proportion of trials in which an individual’s perceptual decisions were congruent with the reported prediction. Confirming our hypothesis, we found a significant positive correlation of the second-level learning rate with congruency (Spearman correlation: rho = 0.74, p = 2.1 x 10^−4, BF = 95), suggesting that observers with a strong tendency to change their beliefs regarding cue-target contingencies indeed relied more strongly on learned priors when making perceptual decisions (Figure 3E).
In addition, the use of prior beliefs in perceptual decisions may depend on the observer’s expectation regarding environmental volatility, i.e., the expected probability that cue-target associations change over time. Within the HGF, the individual volatility expectation is reflected by the parameter ω3, which denotes the learning rate at the model’s third level. However, we did not explicitly manipulate volatility in our experiment; that is, any fluctuation in the frequency of changes in cue-target associations was due only to the randomization of block sequences. Accordingly, variance in the parameter estimates for ω3 was extremely small compared to the variance of ω2, as reflected by the standard deviations (SD(ω3) = 4.4 x 10^−4, SD(ω2) = 0.43), rendering ω3 unsuitable for between-subject correlation analyses.
To further validate our modeling approach, we compared the means and standard deviations of ω2 across the three primary HGF models tested. There was a high degree of agreement between models (ω2 in models 1/2/3: −3.7127 ± 0.3428; −3.3035 ± 0.3768; −3.8086 ± 0.4176). Moreover, when comparing the correlation of ω2 with the prediction-congruency of perceptual decisions, we again found strong agreement between the three models (model 1: rho = 0.81, p = 1.0 x 10^−5; model 2: rho = 0.74, p = 2.1 x 10^−4, BF = 95; model 3: rho = 0.46, p = 0.04, BF = 2.5). These results further corroborate the validity of our modeling approach using the HGF.
Model and parameter recovery
As a final validation of our modeling approach, we performed data simulations with the winning HGF model for both model recovery and parameter recovery. Model recovery with simulated data of n = 5000 participants confirmed the superiority of the winning model 2 with an exceedance probability of 100%. Additionally, parameter recovery for model 2 showed clear positive pairwise correlations of simulated with recovered parameters for ζ (rho = 0.95, p = 3.24 x 10−10, BF = 1.2 x 106) and ω₂ (rho = 0.83, p = 6.24 x 10−6, BF = 1.1 x 103), providing additional confirmation of our modeling results. We could not establish a statistically significant correlation between simulated and recovered ω₃ (rho = 0.1681, p = 0.49, BF = 0.77) and have therefore excluded further analyses relating to ω₃ (see Modeling Methods for a more detailed account).
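The logic of parameter recovery can be illustrated with a deliberately simplified toy example: simulate data from known parameter values, re-estimate the parameter from the simulated data, and check that true and recovered values agree. The sketch below uses a Rescorla-Wagner learner and a grid search instead of the full HGF inversion; all names and values are illustrative assumptions, not the study’s actual pipeline.

```python
import random

def rw_trajectory(alpha, outcomes, v0=0.5):
    """Value trajectory under a Rescorla-Wagner rule v <- v + alpha*(o - v)."""
    v, traj = v0, []
    for o in outcomes:
        v += alpha * (o - v)
        traj.append(v)
    return traj

def recover_alpha(traj, outcomes, grid):
    """Return the grid value whose simulated trajectory best matches traj."""
    def mse(a):
        sim = rw_trajectory(a, outcomes)
        return sum((s - t) ** 2 for s, t in zip(sim, traj))
    return min(grid, key=mse)

random.seed(1)
outcomes = [random.random() < 0.8 for _ in range(200)]  # 80% contingency block
grid = [round(0.01 * i, 2) for i in range(1, 100)]

true_alphas = [0.05, 0.2, 0.5, 0.8]
recovered = [recover_alpha(rw_trajectory(a, outcomes), outcomes, grid)
             for a in true_alphas]
print(recovered)  # in this noiseless toy case, each recovered value equals the true one
```

In practice, simulated responses include decision noise, so recovery is assessed via the correlation of true and recovered parameters (as in the rho values reported above) rather than exact agreement.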
Discussion
Here, we investigated the influence of sensory and environmental uncertainty on the learning of priors and their influence on perceptual decision-making, with the aim of paving the way for future studies on the malfunctioning of priors at different hierarchical levels in psychopathology such as schizophrenia. With respect to the use of learned priors in perceptual decision-making, we observed clear effects of both sensory and environmental uncertainty. When environmental uncertainty was low, learned priors had a strong effect on perceptual decisions irrespective of sensory uncertainty. When environmental uncertainty was high, in contrast, the influence of learned priors on perceptual decisions was limited to trials with high sensory uncertainty and absent when sensory uncertainty was minimal. Moreover, computational modeling showed that the use of priors in perceptual decision-making was more pronounced in individuals who updated their beliefs more readily. With respect to the learning of priors, we found, as expected, that cue-target associations were learned more accurately when environmental uncertainty was low. Our conventional analyses additionally indicated that participants changed their beliefs more readily under conditions of high environmental uncertainty, particularly after trials with low sensory uncertainty. In contrast, Bayesian model selection favored a model in which learning was driven by the participants’ perceptual choices without taking sensory uncertainty into account. This apparent discrepancy between conventional analysis and model selection is discussed below.
Influence of sensory and environmental uncertainty on the use of learned priors in perceptual decisions
The key finding from our conventional analyses is that perceptual decision-making is influenced by sensory and environmental uncertainty in an interactive way: learned priors had a generally strong influence on perceptual decisions under conditions of low environmental uncertainty, whereas under high environmental uncertainty this influence was clearly modulated by sensory uncertainty. This finding suggests that the weighting of sensory information in perceptual decisions is modulated by environmental uncertainty. Additionally, our modeling analyses showed that the degree to which perceptual decisions were influenced by learned priors was modulated by individual differences in expectations regarding environmental uncertainty. Observers with a tendency to change their beliefs more readily (as indicated by a high learning rate for cue-target associations, ω₂) relied more strongly on learned priors in perceptual decision-making.
Our findings are in line with previous work showing that, in the visual and other domains, sensory information is weighted according to its uncertainty in perceptual decision-making.20,21,22 Furthermore, these behavioral decisions and their neuronal encoding have been shown to proceed in a Bayes-optimal way.23,24 Accordingly, priors bias perceptual decisions according to their precision (=inverse of variance) and in relation to the precision of the sensory information.25 When a prior is deemed imprecise (less reliable), its impact is down-weighted relative to the sensory information.3 Conversely, when sensory information is highly uncertain, perceptual decisions are more strongly influenced by priors, with the strength of this influence scaling with the level of sensory uncertainty.19,26
That perceptual decisions are additionally influenced by environmental uncertainty is in line with previous observations of longer reaction times, higher error rates, and greater adaptability to new information.11,17 Our finding that observers rely more strongly on sensory information under conditions of high environmental uncertainty is in line with these earlier reports and supports the more general notion that environmental uncertainty induces exploratory behavior in mammals.27 Moreover, previous studies have shown an integration of both sources of uncertainty, with differences in reaction times between more and less predictive trials (i.e., low and high environmental uncertainty, respectively) being modulated by varying levels of sensory uncertainty.17,18 Different sources of uncertainty in different modalities have also been found to be integrated to enable optimal decision-making in multiple-domain (i.e., visuomotor) tasks.28,29
While earlier studies thus pointed to the integration of different sources of uncertainty in optimal planning and execution of behavior, our current results go beyond these previous findings by showing how different sources of uncertainty are integrated within one sensory modality and, importantly, how they affect the use of priors in perceptual decision-making dynamically. Crucially, our finding of an interactive weighting of the available sensory information against learned beliefs according to their respective uncertainties sheds new light onto the complex process of perceptual decision-making under uncertainty. Additionally, our modeling results also provide new insights into the individual differences in how priors are weighted in perceptual decision-making, highlighting the important role of individual expectations regarding environmental uncertainty.
Influence of sensory and environmental uncertainty on the learning of priors
As hypothesized, our conventional analyses showed that cue-target associations were learned more accurately when environmental uncertainty was low. In addition, we found that changes in participants’ beliefs regarding cue-target associations occurred more often under conditions of high environmental uncertainty, particularly when sensory uncertainty in the previous trial was low. In apparent contradiction to this latter observation, model comparison as well as model recovery showed a superiority of model 2, in which priors were updated primarily by information from participants’ binary perceptual decisions, over models in which prior-updating was influenced by the actual sensory information or sensory uncertainty. While divergence between the results of conventional statistical analyses and computational modeling has been reported previously,30 this apparent discrepancy in our findings needs explanation. It should be noted that the conventional analysis targeted (and was limited to) a small subset of trials after which participants decided to change their current belief. Modeling with the HGF, in contrast, took the continuous evolution of belief updating into account, allowing us to analyze the dynamics of the learning trajectory over time. Thus, while our conventional analysis was highly selective in “putting the spotlight” on trials preceding belief changes, Bayesian model comparison tested which of our candidate models provided the best fit to the entire time series of data. The validity of the latter approach was further supported by model recovery, which also showed superiority of model 2. It is therefore likely that, over the course of trials, learning was indeed most strongly driven by the participants’ perceptual choices (as suggested by model selection).
Still, decisions to actually change beliefs might have been momentarily facilitated by trials with low sensory uncertainty (as suggested by the conventional analysis), possibly related to higher choice confidence in these trials. As we did not record confidence ratings, we cannot draw strong conclusions in this regard; exploring the role of perceptual confidence in belief updating will be an intriguing question for future research.
The finding that perceptual choices seem to be the primary signal used for belief updating is in line with earlier studies showing that previous perceptual choices influence perceptual decision-making in the form of choice history biases.31,32,33,34 Additionally, in an earlier associative learning study, Bayesian model comparison indicated that the dynamic updating of priors was best explained by a combination of previous perceptual choices and learned cue-target associations.19 The outcome of the model comparison and model recovery in our current study strongly suggests that previous perceptual choices are a key driving factor in the updating of beliefs, more relevant than the actual stimulus information or the uncertainty regarding this information.
Our additional result of an influence of environmental uncertainty on the updating of priors (as reflected by estimates of precision-weighted PEs) is in line with previous studies showing that learning of priors is driven by “surprise”, i.e., by the violation of learned predictions regarding cue-outcome contingencies, with observers updating their expectations faster in uncertain and volatile environments.11,19,26,35 Earlier work has also shown that, in scenarios of environmental stability, previous choices stabilize perception against sensory noise in upcoming trials.31,36 Similarly, when judging the reliability of visual information, observers show increased reliance on past sensory inputs under conditions of environmental stability.37 The finding from our modeling analysis that precision-weighted PEs are larger under conditions of high environmental uncertainty lends further support to the notion that the weighting of new information in learning depends on the expected stability of the environment.
It is noteworthy that the winning HGF model was superior to a standard Rescorla-Wagner model (model 5), in which learning is also based on PEs.38 While Rescorla-Wagner and related models have been used successfully for modeling reinforcement learning in healthy individuals and patient populations, they may have limitations in the modeling of more complex decision-making processes, especially when uncertainty has to be taken into account.28,29,39,40 The HGF model used here, in contrast, allowed us to model belief updating at different hierarchical levels, thus accounting for both sensory and environmental uncertainty.
To conclude, our results from conventional analyses and computational modeling show interactive effects of sensory and environmental uncertainty on the updating of beliefs and their use in perceptual decision-making. Future studies should use electrophysiological and neuroimaging techniques to shed light on the intriguing question of how the integration of different sources of uncertainty is implemented at the neural level. Moreover, studies using this experimental approach in clinical populations, e.g., people with schizophrenia, will help to understand how alterations in the integration of sensory and environmental uncertainty may contribute to aberrant inferences that can result in delusions and hallucinations.26,41,42,43
Limitations of the study
In this study, we investigated the interactive influence of sensory and environmental uncertainty on the learning and subsequent use of priors in perceptual decision-making. Limitations of the study include the sole reliance on behavioral data, which precludes an analysis of the neuronal underpinnings; future studies should therefore include methods such as fMRI. Additionally, volatility was not manipulated independently, limiting our ability to model the influence of volatility estimates on the learning and use of priors; future studies should implement different volatility scenarios to address this question. Further limitations arise from possible alternative explanations of the behavioral results: differences in accuracy and perceptual decisions could stem from changes in other neurocognitive domains, such as alertness or motivation, over the course of the experiment rather than from an influence of priors. However, we did not observe a deterioration in task performance over time, as measured by accuracy.
STAR★Methods
Key resources table
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
|---|---|---|
| Deposited data | ||
| Raw and analyzed data | This paper | https://doi.org/10.17605/OSF.IO/U52BX; https://osf.io/U52BX/ |
| Custom MATLAB code | This paper | As above |
| Custom R markdown code | This paper | As above |
| Software and algorithms | ||
| MATLAB | https://www.mathworks.com | RRID:SCR_001622 |
| RStudio | https://www.rstudio.com/ | RRID:SCR_000432 |
| lme4 | R package | RRID:SCR_015654 |
| afex | R package | N/A |
| BayesFactor | R package | N/A |
| lmBF | R package (BayesFactor) | N/A |
| TAPAS toolbox | https://www.tnu.ethz.ch/en/software/tapas | N/A |
| SPM toolbox | https://www.fil.ion.ucl.ac.uk/spm/software/spm12 | RRID:SCR_007037 |
Resource availability
Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Merve Fritsch (merve.fritsch@charite.de).
Materials availability
This study did not generate new unique reagents.
Experimental model and subject details
Twenty participants (human subjects) took part in the experiment, which was conducted with informed written consent after approval by the ethics committee of Charité – University of Medicine Berlin. All participants (mean age: 30 years, range: 18–50 years, 10 female) had normal or corrected-to-normal vision and no prior psychiatric or neurological medical history.
Method details
Participants performed an associative learning task (similar to Schmack, Weilnhammer, Heinzle, Stephan and Sterzer (2016) and Weilnhammer, Stuke, Sterzer and Schmack (2018)19,26), which induced changing expectations about visual stimuli. High or low tones were coupled with a subsequently presented random dot kinematogram (RDK) showing random dot motion with leftward or rightward net-motion of coherent dots. The RDK stimulus was adapted from code provided by Hiobeen Han.44 Visual and auditory stimuli were produced using MATLAB 2017b (The MathWorks, RRID:SCR_001622) and Psychophysics Toolbox 3 (RRID:SCR_002881).
Participants completed 638 trials on average, divided into two perceptual and three learning runs with an average duration of 10 minutes for perceptual and 15 minutes for learning runs. Perceptual runs were implemented for two reasons: first, to familiarize participants with the stimuli and the perceptual task; and second, to minimize perceptual learning effects as a confounding factor in the learning runs, given the well-known phenomenon of short-term perceptual learning that can lead to dramatic increases in perceptual performance within experimental sessions.45 Perceptual runs were divided into 8 blocks and learning runs into 12 blocks, each consisting of 12-20 trials, with the number of trials per block varying randomly. There were no breaks between blocks. The RDK was composed of 1000 white dots of 2 pixels (0.08°) diameter on a black screen. Dots moved within a circular aperture of 500 pixels radius (18.52°). A white fixation cross of 15 x 15 pixels (0.58° x 0.58°) was presented at the center of the circle. The population of dots comprised “signal dots”, which moved coherently in one direction, and “random dots”, which changed position randomly from frame to frame; signal dots moved either leftward or rightward at a velocity of 2.6°/s. The proportion of signal dots determined the level of motion coherence and was thereby a measure of motion signal strength. One of six levels of motion coherence (0.05, 1.26, 3.15, 7.92, 19.91, 50%) was chosen randomly for each trial, inducing varying degrees of sensory uncertainty.32 In perceptual runs, every coherence level occurred equally often within a block, whereas in learning runs the high coherence levels (7.92, 19.91, 50%) were shown in twice as many trials to facilitate learning. Stimuli were presented on a 22-inch monitor with a resolution of 1024 x 768 pixels and a frame rate of 60 Hz, at a viewing distance of 70 cm ensured by the use of a headrest.
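The mapping from a coherence level to the numbers of signal and noise dots can be sketched as follows. This is illustrative only; the actual stimulus code was adapted MATLAB/Psychtoolbox code, and the rounding rule is an assumption.

```python
N_DOTS = 1000
COHERENCE_LEVELS = [0.05, 1.26, 3.15, 7.92, 19.91, 50.0]  # percent

def dot_split(coherence_pct, n_dots=N_DOTS):
    """Split the dot population into coherently moving signal dots
    and randomly repositioned noise dots for a given coherence level."""
    n_signal = round(n_dots * coherence_pct / 100.0)
    return n_signal, n_dots - n_signal

for c in COHERENCE_LEVELS:
    sig, noise = dot_split(c)
    print(f"{c:5.2f}% coherence -> {sig:3d} signal / {noise:3d} noise dots")
```

At 50% coherence, half of the 1000 dots (500) carry the motion signal, while at the lowest levels only a handful of dots do, which is what makes the direction judgment highly uncertain.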
Participants were instructed to maintain their gaze on the central fixation cross throughout the trial and judge the net-motion direction (either left- or rightward). In perceptual runs, the RDK stimulus was presented for 1.2 s, followed by the presentation of a response screen for 1.2 s, which consisted of a black screen with two double arrows displayed at 2.05° eccentricity left and right of fixation and turning from white to red after the response. Participants had to report their choice by pressing one of two buttons on a computer keyboard, with either the index finger or the ring finger of the right hand. Participants were instructed to report their choice only when the response screen was present.
In learning runs, auditory stimuli were presented at 70 dB using Creative A40 USB speakers. At the beginning of each trial, a high (900 Hz) or low (600 Hz) tone was presented for 500 ms. Each tone was associated with either a leftward or rightward motion direction with varying strength of cue-target association (=contingency), inducing two levels of environmental uncertainty. In the high-contingency condition, the cue-to-target association (i.e. tone-to-motion-direction association) was either 10% or 90%, meaning that a high tone would be coupled with a leftward motion direction in 90% and a rightward motion direction in 10% of trials (or vice versa), making the association between tone and target strong (0.9/0.1). In the low-contingency condition, the association was weaker, at either 70% or 30% (0.7/0.3). The levels of contingency were randomly assigned to blocks, under the constraint that each level was applied equally often. Importantly, participants did not know when a change in contingency occurred.
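A contingency schedule of this kind might be generated as in the following sketch. Python is used for illustration, and any randomization constraints beyond those stated above (e.g., the exact shuffling scheme) are assumptions.

```python
import random

def make_blocks(n_blocks=12, trials_range=(12, 20), seed=0):
    """Assign contingency levels to blocks (each level equally often) and
    sample, per trial, a tone cue and a target that matches the
    cue-predicted direction with the block's contingency probability."""
    rng = random.Random(seed)
    levels = [0.9, 0.1, 0.7, 0.3]  # high (0.9/0.1) and low (0.7/0.3) contingency
    schedule = levels * (n_blocks // len(levels))
    rng.shuffle(schedule)
    blocks = []
    for p in schedule:
        n_trials = rng.randint(*trials_range)  # 12-20 trials per block
        trials = []
        for _ in range(n_trials):
            cue = rng.randint(0, 1)  # 1 = high tone, 0 = low tone
            # target follows the cue-predicted direction with probability p
            target = cue if rng.random() < p else 1 - cue
            trials.append((cue, target))
        blocks.append({"contingency": p, "trials": trials})
    return blocks

blocks = make_blocks()
counts = {}
for b in blocks:
    counts[b["contingency"]] = counts.get(b["contingency"], 0) + 1
print(counts)  # each contingency level is assigned to the same number of blocks
```

Because participants are not told when the contingency changes, they must infer it from the stream of cue-target pairs, which is what the reversal-learning component of the task exploits.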
Presentation of the auditory cue was followed by a prediction-response screen shown for 1.2 seconds, during which participants indicated their prediction of whether the upcoming visual stimulus would show left- or rightward net-motion, using the same buttons as described above. To differentiate the prediction-response screen from the perception-response screen, single arrows were shown for the prediction response and double arrows for the perceptual response. Presentation and task for the subsequent visual stimulus were the same as described above (Figure 1A).
Quantification and statistical analysis
To assess how sensory and environmental uncertainty affected the learning of cue-target contingencies and, consequently, the use of these learned beliefs in perceptual decisions, behavioral data were analyzed using conventional statistical methods as well as Bayesian modeling.
Conventional analysis
To assess the learning of cue-target associations induced in learning runs, we inferred the participants’ learned beliefs from their prediction-responses. To this end, we first calculated the proportion of correctly predicted targets (i.e. the proportion of trials in which the participant's prediction of the upcoming net-motion was congruent with the shown target). To assess the effect of learned beliefs on perceptual decisions, we calculated the proportion of prediction-congruent trials (i.e. the proportion of trials in which participants' perceptual decisions were congruent with the previously stated prediction of the upcoming net-motion) under varying degrees of sensory and environmental uncertainty (i.e. different coherence and contingency levels).
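The two behavioral measures can be sketched as follows, using hypothetical trial records; the actual analyses were run in R, and the field names are illustrative.

```python
def prediction_accuracy(trials):
    """Proportion of trials where the prediction matched the presented target."""
    return sum(t["prediction"] == t["target"] for t in trials) / len(trials)

def prediction_congruency(trials):
    """Proportion of trials where the perceptual decision matched the prediction."""
    return sum(t["decision"] == t["prediction"] for t in trials) / len(trials)

# Hypothetical trials: 1 = rightward, 0 = leftward.
trials = [
    {"prediction": 1, "target": 1, "decision": 1},
    {"prediction": 1, "target": 0, "decision": 1},
    {"prediction": 0, "target": 0, "decision": 1},
    {"prediction": 0, "target": 0, "decision": 0},
]
print(prediction_accuracy(trials))    # 3/4 = 0.75
print(prediction_congruency(trials))  # 3/4 = 0.75
```

In the actual analyses these proportions were computed separately for each combination of coherence level and contingency level, yielding the uncertainty-dependent congruency effects reported in the Results.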
Conventional analysis as well as posterior parameter evaluation (from Bayesian modeling, see below) was performed using two-sided paired t-tests and one-sample t-tests as well as two-way repeated-measures analysis of variance (rm-ANOVA) in R (summary statistics). Correlation analyses were carried out using Spearman correlation. In addition to traditional frequentist analyses, we calculated Bayes Factors (BF) for t-tests, ANOVAs and correlations. Bayes Factors for main effects and interactions were calculated by estimating full and reduced models and dividing the respective Bayes Factors. A BF > 3 can be interpreted as evidence for the alternative over the null hypothesis.46,47,48
We applied the R function glm with a binomial link function for logistic regression and used the R packages lme4 (RRID:SCR_015654) and afex for linear mixed-effects modeling. Additionally, Bayes Factors were calculated using the R package BayesFactor, specifically the functions ttestBF and lmBF for linear models.
Bayesian modeling
To further assess the influence of sensory and environmental uncertainty on associative learning and the dynamic effect of learned priors on perceptual decision-making, we adopted a Bayesian modeling approach as implemented previously19,26 using the HGF 4.0 toolbox distributed with the TAPAS toolbox (www.translationalneuromodelling.org/tapas/).2,6 The model was inverted based on the two behavioral responses provided by the participants: the prediction of the upcoming net-motion direction and the perceived net-motion direction. The free parameters of the models allowed for inter-individual differences in the sensitivity to noisy stimulus information (ζ) and in how participants learned from new information (learning rates ω₂ and ω₃). Model inversion enabled a trial-by-trial analysis of the updating of two types of hierarchically structured priors: a “high-level” prior that deals with environmental uncertainty (i.e. the inferred cue-target associations, or contingencies, between auditory cues and visual targets) and a “low-level” prior that contributes to the resolution of sensory uncertainty (i.e. the posterior probability of net-motion direction given a previous auditory cue and noisy stimulus information). Importantly, the variance of the priors used for parameter inference in model fitting is distinct from the variance of the estimated model quantities. A schematic overview of the modeling and all model parameters is provided in Figure 1B and Table 1.
At each trial t, the possible perceptual responses y (left- or rightward net-motion) were coded as:
y(t) = 1 (rightward) or y(t) = 0 (leftward) (Equation 1)
Likewise, trial-wise prediction responses (i.e., the expected visual target given the auditory cue) were coded as:
y_pred(t) = 1 (rightward prediction) or y_pred(t) = 0 (leftward prediction) (Equation 2)
The visual input u(t) was defined as the combination of the coherence level (i.e. the proportion of signal dots moving coherently: 0.05, 1.26, 3.15, 7.92, 19.91 or 50%) expressed as a fraction of 100% with the binary direction of the presented net-motion (1 = right, 0 = left). It thereby combines all aspects of the visual stimulus:
(Equation 3)
The learning of the cue-target association was driven by the co-occurrence of the visual target and the pitch of the preceding tone, which was defined as:
(Equation 4)
To represent the ongoing learning of contingencies between auditory cues and visual targets, we used a three-level HGF.6 In the HGF, the first-level prior represents the inferred association between tones and direction of net-motion at trial t. To predict the trial-wise prediction response, we transformed this quantity into the conditional probability of rightward net-motion given the tone:
(Equation 5)
To predict the trial-wise perceptual response, we combined the inferred conditional prior probability of visual targets given the auditory cue with u(t), the sensory information, into the posterior probability.
This weighted integration takes inter-individual differences in the sensitivity to sensory information into account via the sensitivity parameter ζ, which was estimated as a free parameter and increases with greater perceptual uncertainty. The posterior probability was calculated according to Equation 47 and following of Mathys, Daunizeau, Friston and Stephan 2011, where a detailed account can be found.2
(Equation 6)
(Equation 7)
(Equation 8)
We constructed and compared a set of three-level HGF models, denoted in Equations 9, 10, and 11. They differ with regard to the information used for the estimation of the first-level posterior; in other words, the models differ in how the prediction regarding the association between tone and motion direction is formed and updated over the course of trials. In Model 1, learning was driven by the conjunction of the posterior probability of net-motion with the auditory cue:
(Equation 9)
In Model 2, learning was driven by the conjunction of the perceptual response with the auditory cue:
(Equation 10)
In Model 3, learning was driven by the conjunction of the presented target with the auditory cue:
(Equation 11)
As shown above, Models 1-3 differed with respect to the calculation of the first-level posterior, with effects on the model-based prediction of the prediction and perceptual responses. Models 1-3 did not differ with respect to the remaining definition of the three-level HGF. In this learning algorithm, the second-level prior represents the tendency of the first level towards 1. The third-level prior represents the log-volatility of this tendency on the second level. Updating of priors on the second and third levels evolves via prediction errors, with prediction-error updates weighted by the corresponding precision terms. The precision of the second-level prior is updated as follows:
(Equation 12)
The evolution over time of the precision term on the second level is furthermore related to the second-level learning rate ω₂:
(Equation 13)
All update equations are directly derived from Equation 21 and following in Mathys et al. 2011 and can additionally be found in the supplementary material (Equations S1–S11).
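To convey the flavor of these updates, the following sketch implements a simplified binary HGF with the third level held fixed, loosely following the update equations of Mathys et al. 2011. It is a didactic simplification, not the full three-level model used in the study.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hgf2_update(mu2, sigma2, u, omega2):
    """One trial of a simplified binary HGF with the third level fixed.

    mu2, sigma2: posterior mean/variance of the second-level tendency
    u: binary input (0/1); omega2: tonic second-level (log-)volatility.
    """
    mu1_hat = sigmoid(mu2)                       # predicted probability of u = 1
    pi2_hat = 1.0 / (sigma2 + math.exp(omega2))  # precision of the prediction
    pi2 = pi2_hat + mu1_hat * (1.0 - mu1_hat)    # posterior precision
    delta1 = u - mu1_hat                         # first-level prediction error
    mu2_new = mu2 + delta1 / pi2                 # precision-weighted update
    return mu2_new, 1.0 / pi2

mu2, sigma2 = 0.0, 1.0  # start with an indifferent belief, sigmoid(0) = 0.5
for u in [1, 1, 1, 1, 1, 0, 1, 1]:
    mu2, sigma2 = hgf2_update(mu2, sigma2, u, omega2=-4.0)
print(round(sigmoid(mu2), 2))  # belief in "u = 1" has risen above 0.5
```

The key property visible here is the precision weighting: the impact of each prediction error is scaled by the current uncertainty of the belief, so updates are large when the belief is imprecise and shrink as evidence accumulates; a larger (less negative) ω₂ keeps the belief more malleable.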
In addition to Models 1–3, we constructed two control models: in Model 4, the perceptual responses depended only on the likelihood information (i.e., the prior was fixed to 0.5). In Model 5, learning was driven by a classical Rescorla-Wagner learning rule:
(Equation 14)
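The Rescorla-Wagner rule of this control model is the classical delta rule: the associative strength moves toward each outcome by a fixed fraction of the prediction error. A minimal sketch (learning rate and outcome sequence are illustrative):

```python
def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
    """Delta-rule update v <- v + alpha * (outcome - v); returns the final value."""
    v = v0
    for o in outcomes:
        v += alpha * (o - v)
    return v

# Association strength converges toward the outcome rate.
v = rescorla_wagner([1] * 200, alpha=0.1)
print(round(v, 3))  # -> 1.0 (converged to the constant outcome)
```

Unlike the HGF, the learning rate alpha here is constant: it does not adapt to the inferred volatility of the environment, which is one reason such models can struggle when uncertainty itself must be tracked.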
Perceptual responses were modeled using a sigmoid function with the decision-noise parameter z fixed to 1. The sigmoid function defines the probability of observing a decision of y = 1, rather than 0, given the current probability of input u(t) = 1.
(Equation 15)
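Assuming the unit-square sigmoid commonly used in the HGF literature (an assumption, since the exact functional form is not reproduced here), the response model can be sketched as follows; with z = 1 the choice probability simply equals the belief (probability matching):

```python
def unit_sq_sigmoid(m, z=1.0):
    """P(y = 1) given belief m = P(input = 1); z acts as inverse decision noise."""
    return m ** z / (m ** z + (1.0 - m) ** z)

# With z = 1 the response probability equals the belief;
# larger z makes responses more deterministic.
print(round(unit_sq_sigmoid(0.7, z=1.0), 3))  # 0.7
print(round(unit_sq_sigmoid(0.7, z=5.0), 3))  # 0.986
```

Fixing z = 1 thus attributes all variability in responses to the belief itself rather than to an additional free noise parameter.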
Models were optimized using quasi-Newton Broyden-Fletcher-Goldfarb-Shanno minimization (as implemented in the HGF 4.0 toolbox). Parameters were estimated using the following priors:
- ζ: prior mean of log(0.05) and prior variance of 0.1
- ω₂: prior mean of −4 and prior variance of 0.1
- ω₃: prior mean of −9 and prior variance of 0.1
- κ: prior mean of 1 and prior variance of 0 (i.e., fixed)
Acknowledgments
This study was funded by the German Research Foundation (HE 2597/19-1, STE 1430/8-1). P.S. is further supported by the German Ministry for Research and Education (ERA-NET NEURON program; 01EW2007A), and the clinical fellow program of the Berlin Institute of Health. V.W. is a fellow of the clinician scientist program funded by the Charité – Universitätsmedizin Berlin and the Berlin Institute of Health. We would like to thank Hiobeen Han for providing his code for the RDK stimulus, which we adapted for this study.
Author contributions
Conceptualization, A.H., P.S., and V.W.; Methodology, M.F., P.S., and V.W.; Investigation, M.F. and P.T.; Writing – Original Draft, M.F., A.H., P.S., and V.W.; Writing – Review & Editing, M.F., A.H., P.S., P.T., and V.W.
Declaration of interests
The authors declare no competing interests.
Inclusion and diversity
We support inclusive, diverse, and equitable conduct of research.
Published: March 15, 2023
Footnotes
Supplemental information can be found online at https://doi.org/10.1016/j.isci.2023.106412.
Supplemental information
Data and code availability
- All data have been deposited at OSF and are publicly available as of the date of publication. DOIs are listed in the key resources table.
- All original code has been deposited at OSF and is publicly available as of the date of publication. DOIs are listed in the key resources table.
- Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
References
- 1.Friston K. A theory of cortical responses. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2005;360:815–836. doi: 10.1098/rstb.2005.1622. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Mathys C., Daunizeau J., Friston K.J., Stephan K.E. A Bayesian foundation for individual learning under uncertainty. Front. Hum. Neurosci. 2011;5:39. doi: 10.3389/fnhum.2011.00039. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Yu A.J., Dayan P. Uncertainty, neuromodulation, and attention. Neuron. 2005;46:681–692. doi: 10.1016/j.neuron.2005.04.026. [DOI] [PubMed] [Google Scholar]
- 4.Bland A.R., Schaefer A. Different varieties of uncertainty in human decision-making. Front. Neurosci. 2012;6:85. doi: 10.3389/fnins.2012.00085. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Iglesias S., Mathys C., Brodersen K.H., Kasper L., Piccirelli M., den Ouden H.E.M., Stephan K.E. Hierarchical prediction errors in midbrain and basal forebrain during sensory learning. Neuron. 2013;80:519–530. doi: 10.1016/j.neuron.2013.09.009. [DOI] [PubMed] [Google Scholar]
- 6.Mathys C.D., Lomakina E.I., Daunizeau J., Iglesias S., Brodersen K.H., Friston K.J., Stephan K.E. Uncertainty in perception and the Hierarchical Gaussian filter. Front. Hum. Neurosci. 2014;8:825–924. doi: 10.3389/fnhum.2014.00825. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7.Hohwy J. Attention and conscious perception in the hypothesis testing brain. Front. Psychol. 2012;3:96. doi: 10.3389/fpsyg.2012.00096. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Summerfield C., De Lange F.P. Expectation in perceptual decision making: neural and computational mechanisms. Nat. Rev. Neurosci. 2014;15:745–756. doi: 10.1038/nrn3838. [DOI] [PubMed] [Google Scholar]
- 9.O’Reilly J.X., Jbabdi S., Behrens T.E.J. How can a Bayesian approach inform neuroscience? Eur. J. Neurosci. 2012;35:1169–1179. doi: 10.1111/j.1460-9568.2012.08010.x. [DOI] [PubMed] [Google Scholar]
- 10.Seriès P., Seitz A.R. Learning what to expect (in visual perception). Front. Hum. Neurosci. 2013;7:668–714. doi: 10.3389/fnhum.2013.00668. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Behrens T.E.J., Woolrich M.W., Walton M.E., Rushworth M.F.S. Learning the value of information in an uncertain world. Nat. Neurosci. 2007;10:1214–1221. doi: 10.1038/nn1954. [DOI] [PubMed] [Google Scholar]
- 12.Nassar M.R., Wilson R.C., Heasly B., Gold J.I. An approximately Bayesian delta-rule model explains the dynamics of belief updating in a changing environment. J. Neurosci. 2010;30:12366–12378. doi: 10.1523/JNEUROSCI.0822-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Parr T., Friston K.J. Uncertainty, epistemics and active Inference. J. R. Soc. Interface. 2017;14:20170376. doi: 10.1098/rsif.2017.0376. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Sterzer P., Adams R.A., Fletcher P., Frith C., Lawrie S.M., Muckli L., Petrovic P., Uhlhaas P., Voss M., Corlett P.R. The predictive coding account of psychosis. Biol. Psychiatr. 2018;84:634–643. doi: 10.1016/j.biopsych.2018.05.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Pellicano E., Burr D. When the world becomes ‘too real’: a Bayesian explanation of autistic perception. Trends Cognit. Sci. 2012;16:504–510. doi: 10.1016/j.tics.2012.08.009. [DOI] [PubMed] [Google Scholar]
- 16.Wilson R.C., Collins A.G. Ten simple rules for the computational modeling of behavioral data. Elife. 2019;8:e49547. doi: 10.7554/eLife.49547. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lawson R.P., Mathys C., Rees G. Adults with autism overestimate the volatility of the sensory environment. Nat. Neurosci. 2017;20:1293–1299. doi: 10.1038/nn.4615. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Lawson R.P., Bisby J., Nord C.L., Burgess N., Rees G. The computational, pharmacological, and physiological determinants of sensory learning under uncertainty. Curr. Biol. 2021;31:163–172.e4. doi: 10.1016/j.cub.2020.10.043. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Schmack K., Weilnhammer V., Heinzle J., Stephan K.E., Sterzer P. Learning what to see in a changing world. Front. Hum. Neurosci. 2016;10:263–312. doi: 10.3389/fnhum.2016.00263. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Ernst M.O., Banks M.S. Humans integrate visual and haptic information in a. Nature. 2002;415:429–433. doi: 10.1038/415429a. [DOI] [PubMed] [Google Scholar]
- 21.Knill D.C., Saunders J.A. Do humans optimally integrate stereo and texture information for judgments of surface slant? Vis. Res. 2003;43:2539–2558. doi: 10.1016/S0042-6989(03)00458-9. [DOI] [PubMed] [Google Scholar]
- 22.Saunders J.A., Knill D.C. Humans use continuous visual feedback from the hand to control fast reaching movements. Exp. Brain Res. 2003;152:341–352. doi: 10.1007/s00221-003-1525-2. [DOI] [PubMed] [Google Scholar]
- 23.Knill D.C., Pouget A. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci. 2004;27:712–719. doi: 10.1016/j.tins.2004.10.007. [DOI] [PubMed] [Google Scholar]
- 24.Kording K., Wolpert D. Bayesian integration in sensorimotor learning. Nature. 2004;427:1–4. doi: 10.1038/nature02169. [DOI] [PubMed] [Google Scholar]
- 25.Chalk M., Seitz A.R., Seriès P. Rapidly learned stimulus expectations alter perception of motion. J. Vis. 2010;10:2–18. doi: 10.1167/10.8.2. [DOI] [PubMed] [Google Scholar]
- 26.Weilnhammer V.A., Stuke H., Sterzer P., Schmack K. The neural correlates of hierarchical predictions for perceptual decisions. J. Neurosci. 2018;38:5008–5021. doi: 10.1523/JNEUROSCI.2901-17.2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.McClure S.M., Gilzenrat M.S., Cohen J.D. Advances in Neural Information Processing Systems. 2005. An exploration-exploitation model based on norepinepherine and dopamine activity. [Google Scholar]
- 28.Faisal A.A., Wolpert D.M. Near optimal combination of sensory and motor uncertainty in time during a naturalistic perception-action task. J. Neurophysiol. 2009;101:1901–1912. doi: 10.1152/jn.90974.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Tassinari H., Hudson T.E., Landy M.S. Combining priors and noisy visual cues in a rapid pointing task. J. Neurosci. 2006;26:10154–10163. doi: 10.1523/JNEUROSCI.2779-06.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Hauser T.U., Iannaccone R., Ball J., Mathys C., Brandeis D., Walitza S., Brem S. Role of the medial prefrontal cortex in impaired decision making in juvenile attention-deficit/hyperactivity disorder. JAMA Psychiatr. 2014;71:1165–1173. doi: 10.1001/jamapsychiatry.2014.1093. [DOI] [PubMed] [Google Scholar]
- 31.Bosch E., Fritsche M., Ehinger B.V., de Lange F.P. Opposite effects of choice history and stimulus history resolve a paradox of sequential choice bias. bioRxiv. 2020 doi: 10.1101/2020.02.14.948919. Preprint at. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Braun A., Urai A.E., Donner T.H. Adaptive history biases result from confidence-weighted accumulation of past choices. J. Neurosci. 2018;38:2418–2429. doi: 10.1523/JNEUROSCI.2189-17.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Cicchini G.M., Mikellidou K., Burr D. Serial dependencies act directly on perception. J. Vis. 2017;17:6. doi: 10.1167/17.14.6. [DOI] [PubMed] [Google Scholar]
- 34.Fritsche M., Mostert P., de Lange F.P. Opposite effects of recent history on perception and decision. Curr. Biol. 2017;27:590–595. doi: 10.1016/j.cub.2017.01.006. [DOI] [PubMed] [Google Scholar]
- 35.Courville A.C., Daw N.D., Touretzky D.S. Bayesian theories of conditioning in a changing world. Trends Cognit. Sci. 2006;10:294–300. doi: 10.1016/j.tics.2006.05.004. [DOI] [PubMed] [Google Scholar]
- 36.Feigin H., Baror S., Bar M., Zaidel A. Perceptual decisions are biased toward relevant prior choices. Sci. Rep. 2021;11:648. doi: 10.1038/s41598-020-80128-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Beierholm U., Rohe T., Ferrari A., Stegle O., Noppeney U. Using the past to estimate sensory uncertainty. Elife. 2020;9:541722–e54222. doi: 10.7554/ELIFE.54172. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Rescorla R.A., Wagner A.R. A theory of pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement. Clasical Cond. II Curr. Res. Theory. 1972 [Google Scholar]
- 39.Gershman S.J., Niv Y. Learning latent structure: carving nature at its joints. Curr. Opin. Neurobiol. 2010;20:251–256. doi: 10.1016/j.conb.2010.02.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.Murray G.K., Corlett P.R., Clark L., Pessiglione M., Blackwell A.D., Honey G., Jones P.B., Bullmore E.T., Robbins T.W., Fletcher P.C. Substantia nigra/ventral tegmental reward prediction error disruption in psychosis. Mol. Psychiatr. 2008;13:239–276. doi: 10.1038/sj.mp.4002058. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Weilnhammer V., Röd L., Eckert A.L., Stuke H., Heinz A., Sterzer P. Psychotic experiences in schizophrenia and sensitivity to sensory evidence. Schizophr. Bull. 2020;46:927–936. doi: 10.1093/schbul/sbaa003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Jardri R., Duverne S., Litvinova A.S., Denève S. Experimental evidence for circular inference in schizophrenia. Nat. Commun. 2017;8:14218. doi: 10.1038/ncomms14218. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Fletcher P.C., Frith C.D. Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nat. Rev. Neurosci. 2009;10:48–58. doi: 10.1038/nrn2536. [DOI] [PubMed] [Google Scholar]
- 44.Han H.-B., Hwang E., Lee S., Kim M.-S., Choi J.H. Gamma-band activities in mouse frontal and visual cortex induced by coherent dot motion. Sci. Rep. 2017;7:43780. doi: 10.1038/srep43780. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Karni A., Sagi D. The time course of learning a visual skill. Nature. 1993;365:250–252. doi: 10.1038/365250a0. [DOI] [PubMed] [Google Scholar]
- 46.Faulkenberry T.J. Computing Bayes factors to measure evidence from experiments: an extension of the BIC approximation. Biom. Lett. 2018;55:31–43. doi: 10.2478/bile-2018-0003. [DOI] [Google Scholar]
- 47.Wetzels R., Wagenmakers E.-J. A default Bayesian hypothesis test for correlations and partial correlations. Psychon. Bull. Rev. 2012;19:1057–1064. doi: 10.3758/s13423-012-0295-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Wagenmakers E.-J., Marsman M., Jamil T., Ly A., Verhagen J., Love J., Selker R., Gronau Q.F., Šmíra M., Epskamp S., et al. Bayesian inference for psychology. Part I: theoretical advantages and practical ramifications. Psychon. Bull. Rev. 2018;25:35–57. doi: 10.3758/s13423-017-1343-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
Associated Data
Supplementary Materials
Data Availability Statement
- All data have been deposited at OSF and are publicly available as of the date of publication. DOIs are listed in the key resources table.
- All original code has been deposited at OSF and is publicly available as of the date of publication. DOIs are listed in the key resources table.
- Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.