eLife. 2020 Dec 15;9:e54172. doi: 10.7554/eLife.54172

Using the past to estimate sensory uncertainty

Ulrik Beierholm 1,†, Tim Rohe 2,3, Ambra Ferrari 4, Oliver Stegle 5,6,7, Uta Noppeney 4,8
Editors: Tobias Reichenbach9, Andrew J King10
PMCID: PMC7806269  PMID: 33319749

Abstract

To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.

Research organism: Human

Introduction

Perception has been described as a process of statistical inference based on noisy sensory inputs (Knill and Pouget, 2004; Knill and Richards, 1996). Key to this perceptual inference is the estimation and/or representation of sensory uncertainty (as measured by variance, i.e. the inverse of reliability/precision). Most prominently, in multisensory perception, a more reliable or ‘Bayes-optimal’ percept is obtained by integrating sensory signals that come from a common source weighted by their relative reliabilities with less weight assigned to less reliable signals. Likewise, sensory uncertainty shapes observers’ causal inference. It influences whether observers infer that signals come from a common cause and should hence be integrated or else be processed independently (Aller and Noppeney, 2019; Körding et al., 2007; Rohe et al., 2019; Rohe and Noppeney, 2015b; Rohe and Noppeney, 2015a; Rohe and Noppeney, 2016; Wozny et al., 2010; Acerbi et al., 2018). Indeed, accumulating evidence suggests that human observers are close to optimal in many perceptual tasks (though see Acerbi et al., 2014; Drugowitsch et al., 2016; Shen and Ma, 2016; Meijer et al., 2019) and weight signals approximately according to their sensory reliabilities (Alais and Burr, 2004; Ernst and Banks, 2002; Jacobs, 1999; Knill and Pouget, 2004; van Beers et al., 1999; Drugowitsch et al., 2014; Hou et al., 2019).
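
As a concrete illustration of reliability weighting (a minimal sketch of standard inverse-variance cue fusion, not code from the study), the fused estimate weights each cue by its inverse variance, so the less reliable cue contributes less:

```python
import numpy as np

def fuse(x_a, var_a, x_v, var_v):
    """Reliability-weighted (inverse-variance) fusion of two Gaussian cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight
    s_hat = w_a * x_a + (1 - w_a) * x_v           # fused location estimate
    var_hat = 1 / (1 / var_a + 1 / var_v)         # fused variance is smaller than either cue's variance
    return s_hat, var_hat

# Example: a reliable visual cue (variance 1) dominates a noisy auditory cue (variance 9).
print(fuse(x_a=5.0, var_a=9.0, x_v=0.0, var_v=1.0))   # -> (0.5, 0.9)
```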

An unresolved question is how human observers compute their sensory uncertainty. Current theories and experimental approaches generally assume that observers access sensory uncertainty near-instantaneously and independently across briefly (≤200 ms) presented stimuli (Ma and Jazayeri, 2014; Zemel et al., 1998). At the neural level, theories of probabilistic population coding have suggested that sensory uncertainty may be represented instantaneously in the gain of the neuronal population response (Ma et al., 2006; Hou et al., 2019). Yet, in our natural environment, sensory noise often evolves at slow timescales. For instance, visual noise slowly varies when walking through a snow storm. Observers may capitalize on the temporal dynamics of the external world and use the past to inform current estimates of sensory uncertainty. In this alternative account, more reliable estimates of sensory uncertainty would be obtained by combining past estimates with current sensory inputs as predicted by Bayesian learning.

To arbitrate between these two critical hypotheses, we presented observers with audiovisual signals in synchrony but with a small spatial disparity in a sound localization task. Critically, the spatial standard deviation (STD) of the visual signal changed dynamically over time continuously (experiments 1–3) or discontinuously (i.e. with intermittent jumps; experiment 4). First, we investigated whether the influence of the visual signal location on observers’ perceived sound location depended on the noise only of the current visual signal or also of past visual signals. Second, using computational modeling and Bayesian model comparison, we formally assessed whether observers update their visual uncertainty estimates consistent with (i) an instantaneous learner, (ii) an optimal Bayesian learner, or (iii) an exponential learner.

Results

In a spatial localization task, we presented participants with audiovisual signals in a series of four experiments, in which the physical visual noise changed dynamically over time either continuously or discontinuously (Figure 1). Visual (V) signals (clouds of 20 bright dots) were presented every 200 ms for a duration of 32 ms. The cloud’s horizontal STD varied over time at this temporal rate of 5 Hz either continuously (experiments 1–3) or discontinuously with intermittent jumps (experiment 4). The cloud’s location mean was temporally independently resampled from five possible locations (−10°, −5°, 0°, 5°, 10°) on each trial with the inter-trial asynchrony jittered between 1.4 and 2.8 s. In synchrony with the change in the cloud’s mean location, the dots changed their color and a sound was presented (AV signal). The location of the sound was sampled from the two possible locations adjacent to the visual cloud’s mean location (i.e. ±5° AV spatial disparity). Participants localized the sound and indicated their response using five response buttons.

Figure 1. Audiovisual localization paradigm and Bayesian causal inference model for learning visual reliability.

(A) Visual (V) signals (cloud of 20 bright dots) were presented every 200 ms for 32 ms. The cloud’s location mean was temporally independently resampled from five possible locations (−10°, −5°, 0°, 5°, 10°) with an inter-trial asynchrony jittered between 1.4 and 2.8 s. In synchrony with the change in the cloud’s mean location, the dots changed their color and a sound was presented (AV signal) which the participants localized using five response buttons. The location of the sound was sampled from the two possible locations adjacent to the visual cloud’s mean location (i.e. ±5° AV spatial disparity). (B) The generative model for the Bayesian learner explicitly modeled the potential causal structures, that is whether visual (Vi) signals and an auditory (A) signal were generated by one common audiovisual source St, that is C = 1, or by two independent sources SVt and SAt, that is C = 2 (n.b. only the model component for the common source case is shown to illustrate the temporal updating; for the complete generative model, see Figure 1—figure supplement 1). Importantly, the reliability (i.e. 1/variance) of the visual signal at time t (λt) depends on the reliability of the previous visual signal (λt-1) for both model components (i.e. common and independent sources).


Figure 1—figure supplement 1. Generative model for the Bayesian learner.


The Bayesian Causal Inference model explicitly models whether auditory and visual signals are generated by one common (C = 1) or two independent sources (C = 2) (for further details see Körding et al., 2007). We extend this Bayesian Causal Inference model into a Bayesian learning model by making the visual reliability (λV,t, i.e. the inverse of uncertainty or variance) of the current trial dependent on the previous trial.

The small audiovisual disparity enabled an influence of the visual signal location on the perceived sound location as a function of visual noise (Alais and Burr, 2004; Battaglia et al., 2003; Meijer et al., 2019). As a result, observers’ visual uncertainty estimate could be quantified in terms of the relative weight of the auditory signal on the perceived sound location with a greater auditory weight indicating that observers estimated a greater visual uncertainty.

In the first three experiments, we used continuous sequences, where the visual cloud’s STD changed periodically according to a sinusoid (n = 25; period = 30 s), a random walk (RW1; n = 33; period = 120 s) or a smoothed random walk (RW2; n = 19; period = 30 s; Figure 2). In an additional fourth experiment, we inserted abrupt increases or decreases into a sinusoidal evolution of the visual cloud’s STD (n = 18, period = 30 s, Figure 5). We will first describe the results for the three continuous sequences followed by the discontinuous sequence.

Figure 2. Time course of visual noise and relative auditory weights for continuous sequences of visual noise.

The visual noise (i.e. STD of the cloud of dots, right ordinate) and the relative auditory weights (mean across participants ± SEM, left ordinate) are displayed as a function of time. The STD of the visual cloud was manipulated as (A) a sinusoid (period 30 s, N = 25), (B) a random walk (RW1, period 120 s, N = 33) and (C) a smoothed random walk (RW2, period 30 s, N = 19). The overall dynamics, as quantified by the power spectrum, are faster for RW2 than for RW1 (peak in frequency range [0 0.2] Hz: Sinusoid: 0.033 Hz, RW1: 0.025 Hz, RW2: 0.066 Hz). The RW1 and RW2 sequences were mirror-symmetric around the half-time (i.e. the second half was the reversed first half). The visual clouds were re-displayed every 200 ms (i.e. at 5 Hz). The trial onsets, that is audiovisual (AV) signals (color change with sound presentation, black dots), were interspersed with an inter-trial asynchrony jittered between 1.4 and 2.8 s. On each trial, observers localized the sound. The relative auditory weights were computed based on regression models for the sound localization responses separately for each of the 20 temporally adjacent bins that cover the entire period within each participant. The relative auditory weights vary between one (i.e. pure auditory influence on the localization responses) and zero (i.e. pure visual influence). For illustration purposes, the cloud of dots for the lowest (i.e. V signal STD = 2°) and the highest (i.e. V signal STD = 18°) visual variance are shown in (A).


Figure 2—figure supplement 1. Time course of the relative auditory weights for continuous sequences of visual noise when controlling for location of the cloud of dots in the previous trial.


Relative auditory weights (mean across participants ± SEM, left ordinate) and visual noise (i.e. STD of the cloud of dots, right ordinate) are displayed as a function of time as shown in Figure 2 of the main text. To compute the relative auditory weights, the sound localization responses were regressed on the A and V signal locations within bins of 1.5 s (A, B) or 6 s (C) width across sequence repetitions within each participant. To control for a potential effect of past visual locations, the location of the visual cloud of dots in the previous trial was included in this regression model as a covariate (Supplementary file 1-Table 3).

We assigned the sound localization responses and the associated physical visual noise (i.e. the cloud’s STD) to 20 (resp. 15 for experiment 4) temporally adjacent bins covering the entire period of each sequence. Each experiment repeated the same 30 s (Sin, RW2) or 120 s (RW1) period throughout the experiment resulting in ~32 periods for the RW1 and ~130 periods for the Sin and RW2 sequences. The trial and hence sound onsets were jittered with respect to this periodic evolution of the visual cloud’s STD resulting in a greater effective sampling rate than expected for an inter-trial asynchrony of 1.4–2.8 s. In total, we assigned 44–87 trials to each bin (Supplementary file 1-Table 1). We quantified the auditory and visual influence on observers’ perceived auditory location for each bin based on regression models (separately for each of the 20 temporally adjacent bins). For instance, for bin = 1 we computed:

$$R_{A,trial,bin=1} = L_{A,trial,bin=1}\,\beta_{A,bin=1} + L_{V,trial,bin=1}\,\beta_{V,bin=1} + \beta_{const,bin=1} + e_{trial,bin=1}$$

with RA,trial,bin=1 = localization response for trial t and bin 1; LA,trial,bin=1 or LV,trial,bin=1 = ‘true’ auditory or visual location for trial t and bin 1; βA,bin=1 or βV,bin=1 = auditory or visual weight for bin 1; βconst,bin=1 = constant term; etrial,bin=1 = error term. For each bin b, we thus obtained one auditory and one visual weight estimate. The relative auditory weight for a particular bin was computed as wA,bin = βA,bin / (βA,bin + βV,bin).
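
A minimal sketch of this bin-wise analysis (variable names and the toy data are ours, not the authors’ analysis code): ordinary least squares per bin, followed by the relative auditory weight.

```python
import numpy as np

def relative_auditory_weight(resp, loc_a, loc_v):
    """Fit R = L_A*b_A + L_V*b_V + b_const + e for one bin and return w_A = b_A / (b_A + b_V)."""
    X = np.column_stack([loc_a, loc_v, np.ones_like(loc_a)])   # design: auditory location, visual location, constant
    b_a, b_v, _ = np.linalg.lstsq(X, resp, rcond=None)[0]
    return b_a / (b_a + b_v)

# Toy usage: responses dominated by the auditory location yield w_A close to 1.
rng = np.random.default_rng(0)
loc_a = rng.choice([-10., -5., 0., 5., 10.], size=200)
loc_v = loc_a + rng.choice([-5., 5.], size=200)
resp = 0.8 * loc_a + 0.2 * loc_v + rng.normal(0, 2, size=200)
print(relative_auditory_weight(resp, loc_a, loc_v))   # ~0.8
```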

Figure 2 and Figure 3 show the temporal evolution of the STD of the physical visual noise and observers’ relative auditory weight indices wA,bin. If observers estimate sensory uncertainty instantaneously, observers’ relative auditory weight indices should closely track the visual cloud’s STD (Figure 2). By contrast, we observed systematic biases: while the temporal evolution of the physical visual noise was designed to be symmetrical for each time period, we observed a temporal asymmetry for wA in all of the three experiments. For the monotonic sinusoidal sequence, wA was smaller for the 1st half of each period, when visual noise increased, than for the 2nd half, when visual noise decreased over time (Figure 3A). For the non-monotonic RW1 and RW2 sequences, we observed more complex temporal profiles, because the visual noise increased and decreased in each half. wA was larger for increasing visual noise in the 1st as compared to the 2nd half, while wA was smaller for decreasing visual noise in the 1st as compared to the 2nd half (Figure 3B, C). These impressions were confirmed statistically in 2 (1st vs. flipped 2nd half) x 9 (bins) repeated measures ANOVAs (Table 1) showing a significant main effect of the 1st versus flipped 2nd half period for the sinusoidal (F(1, 24)=12.162, p=0.002, partial η2 = 0.336) and the RW1 sequence (F(1, 32)=14.129, p<0.001, partial η2 = 0.306). For the RW2 sequence, we observed a significant interaction (F(4.6, 82.9)=3.385, p=0.010, partial η2 = 0.158), because the visual noise did not change monotonically within each half period. Instead, monotonic increases and decreases in visual noise alternated at nearly double the frequency in RW2 as compared to RW1. The asymmetry in the auditory weights’ time course across the three experiments suggested that the visual noise in the past influenced observers’ current visual uncertainty estimate, resulting in smaller auditory weights for ascending visual noise and greater auditory weights for descending visual noise.

Figure 3. Observers’ relative auditory weights for continuous sequences of visual noise.

Figure 3.

Relative auditory weights wA of the 1st (solid) and the flipped 2nd half (dashed) of a period (binned into 20 bins) plotted as a function of the normalized time in the sinusoidal (red), the RW1 (blue), and the RW2 (green) sequences. Relative auditory weights were computed from auditory localization responses of human observers.

Table 1. Analyses of the temporal asymmetry of the relative auditory weights across the four sequences of visual noise using repeated measures ANOVAs with the factors sequence part (1st vs. flipped 2nd half), bin and jump position (only for the sinusoidal sequence with intermittent jumps).

Sequence | Effect | F | df1 | df2 | p | Partial η2
Sinusoid | Part | 12.162 | 1 | 24 | 0.002 | 0.336
Sinusoid | Bin | 92.007 | 3.108 | 74.584 | <0.001 | 0.793
Sinusoid | Part x Bin | 2.167 | 2.942 | 70.617 | 0.101 | 0.083
RW1 | Part | 14.129 | 1 | 32 | 0.001 | 0.306
RW1 | Bin | 76.055 | 4.911 | 157.151 | <0.001 | 0.704
RW1 | Part x Bin | 1.225 | 4.874 | 155.971 | 0.300 | 0.037
RW2 | Part | 2.884 | 1 | 18 | 0.107 | 0.138
RW2 | Bin | 60.142 | 3.304 | 59.467 | <0.001 | 0.770
RW2 | Part x Bin | 3.385 | 4.603 | 82.849 | 0.010 | 0.158
Sinusoid with intermittent jumps | Jump | 28.306 | 2 | 34 | <0.001 | 0.625
Sinusoid with intermittent jumps | Part | 24.824 | 1 | 17 | <0.001 | 0.594
Sinusoid with intermittent jumps | Bin | 76.476 | 1.873 | 31.839 | <0.001 | 0.818
Sinusoid with intermittent jumps | Jump x Part | 0.300 | 2 | 34 | 0.743 | 0.017
Sinusoid with intermittent jumps | Jump x Bin | 8.383 | 3.309 | 56.247 | <0.001 | 0.330
Sinusoid with intermittent jumps | Part x Bin | 1.641 | 3.248 | 55.222 | 0.187 | 0.088
Sinusoid with intermittent jumps | Jump x Part x Bin | 0.640 | 5.716 | 97.175 | 0.690 | 0.036

Note: The factor bin comprised nine levels in the first three and seven levels in the fourth sequence. In the fourth sequence, the factor Jump comprised three levels. If Mauchly tests indicated significant deviations from sphericity (p<0.05), we report Greenhouse-Geisser corrected degrees of freedom and p values.

To further investigate the influence of past visual noise on observers’ auditory weights, we estimated a regression model in which the relative auditory weights wA for each of the 20 bins were predicted by the visual STD in the current bin and the difference in STD between the current and the previous bin (see Equation 2). Indeed, both the current visual STD (p<0.001 for all three sequences; Sinusoid: t(24)=15.767, Cohen’s d = 3.153; RW1: t(32) = 15.907, Cohen’s d = 2.769; RW2: t(18) = 12.978, Cohen’s d = 2.977, two-sided one-sample t-test against zero) and the difference in STD between the current and the previous bin (i.e. Sinusoid t(24) = −3.687, p=0.001, Cohen’s d = −0.737; RW1 t(32) = −2.593, p=0.014, Cohen’s d = −0.451; RW2 t(18) = -2.395, p=0.028, Cohen’s d = −0.549) significantly predicted observers’ relative auditory weights (for complementary results of nested model comparisons see Appendix 1 and Supplementary file 1-Table 5). Collectively, these results suggest that observers’ visual uncertainty estimates (as indexed by the relative auditory weights wA) depend not only on the current sensory signal, but also on the recent history of the sensory noise. These results were also validated in a control analysis that regressed out and thus accounted for potential influences of the previous visual location on observers’ sound localization, suggesting that the effects of past visual uncertainty cannot be explained by effects of past visual location mean (Appendix 1, Figure 2—figure supplement 1, Supplementary file 1-Tables 2-4).

To characterize how human observers use information from the past to estimate current sensory uncertainty, we compared three computational models that differed in how visual uncertainty is learnt over time (Figure 4): Model 1, the instantaneous learner, estimates visual uncertainty independently for each trial as assumed by current standard models. Model 2, the optimal Bayesian learner, estimates visual uncertainty by updating the prior uncertainty estimate obtained from past visual signals with the uncertainty estimate from the current signal. Model 3, the exponential learner, estimates visual uncertainty by exponentially discounting past uncertainty estimates. All three models account for observers’ uncertainty about whether auditory and visual signals were generated by common or independent sources by explicitly modeling the two potential causal structures (Körding et al., 2007) underlying the audiovisual signals (n.b. only the model component pertaining to the ‘common cause’ case is shown in Figure 1B, for the full model see Figure 1—figure supplement 1). Models were fit individually to observers’ data by sampling from the posterior over parameters for each observer (Table 2).
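
Schematically, the three learners differ only in how the visual variance (or reliability) estimate is carried from trial to trial. The sketch below is our simplified illustration, not the fitted models themselves: the full models additionally perform causal inference over common versus independent sources and are fitted to the localization responses, and the default parameter values below only roughly follow the fitted medians in Table 2.

```python
import numpy as np
from scipy.stats import norm

def instantaneous_update(sample_var):
    """Model 1: the visual variance is re-estimated from the current cloud of dots alone."""
    return sample_var

def exponential_update(sample_var, prev_estimate, gamma=0.25):
    """Model 3: exponential discounting with a fixed weight gamma on the previous estimate."""
    return (1 - gamma) * sample_var + gamma * prev_estimate

def bayesian_update(dots, prior, log_prec_grid, kappa=7.0):
    """Model 2 (schematic grid filter): diffuse the belief over log visual precision
    (random walk with variance 1/kappa), then weight by the likelihood of the observed dot spread."""
    step_sd = np.sqrt(1.0 / kappa)
    transition = norm.pdf(log_prec_grid[:, None], log_prec_grid[None, :], step_sd)
    predicted = transition @ prior
    predicted /= predicted.sum()
    resid = dots - dots.mean()
    loglik = np.array([norm.logpdf(resid, 0.0, np.sqrt(1.0 / np.exp(lp))).sum()
                       for lp in log_prec_grid])
    posterior = predicted * np.exp(loglik - loglik.max())
    return posterior / posterior.sum()
```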

Figure 4. Observed and predicted relative auditory weights for continuous sequences of visual noise.


Relative auditory weights wA of the 1st (solid) and the flipped 2nd half (dashed) of a period (binned into 20 bins) plotted as a function of the normalized time in the sinusoidal (red), the RW1 (blue) and the RW2 (green) sequences. Relative auditory weights were computed from auditory localization responses of human observers (A), Bayesian (B), exponential (C), or instantaneous (D) learning models. For comparison, the standard deviation of the visual signal is shown in (E). Please note that all models were fitted to observers’ auditory localization responses (i.e. not the auditory weight wA). (F) Bayesian model comparison – Random effects analysis: The matrix shows the protected exceedance probability (color coded and indicated by the numbers) for pairwise comparisons of the Instantaneous (Inst), Bayesian (Bayes) and Exponential (Exp) learners separately for each of the four experiments. Across all experiments we observed that the Bayesian or the Exponential learner outperformed the Instantaneous learner (i.e. a protected exceedance probability >0.94) indicating that observers used the past to estimate sensory uncertainty. However, it was not possible to arbitrate reliably between the Exponential and the Bayesian learner across all experiments (protected exceedance probability in bottom row).

Table 2. Model parameters (median), absolute WAIC and relative ΔWAIC values for the three candidate models in the four sequences of visual noise.

Sequence | Model | σA | Pcommon | σ0 | κ or γ | WAIC | ΔWAIC
Sinusoid | Instantaneous learner | 5.56 | 0.63 | 8.95 | – | 81931.2 | 109.9
Sinusoid | Bayesian learner | 5.64 | 0.65 | 9.03 | κ: 7.37 | 81821.3 | 0
Sinusoid | Exponential discounting | 5.62 | 0.64 | 9.02 | γ: 0.23 | 81866.9 | 45.6
RW1 | Instantaneous learner | 6.30 | 0.69 | 8.46 | – | 110051.2 | 89.0
RW1 | Bayesian learner | 6.29 | 0.72 | 8.68 | κ: 8.06 | 109962.2 | 0
RW1 | Exponential discounting | 6.26 | 0.70 | 8.75 | γ: 0.33 | 109929.9 | −32.3
RW2 | Instantaneous learner | 6.36 | 0.72 | 10.79 | – | 62576.4 | 201.3
RW2 | Bayesian learner | 6.49 | 0.78 | 10.9 | κ: 6.7 | 62375.2 | 0
RW2 | Exponential discounting | 6.46 | 0.73 | 11.0 | γ: 0.25 | 62421.5 | 46.3
Sinusoid with intermittent jumps | Instantaneous learner | 6.38 | 0.65 | 8.19 | – | 83891.4 | 94.9
Sinusoid with intermittent jumps | Bayesian learner | 6.45 | 0.68 | 8.26 | κ: 6.13 | 83796.5 | 0
Sinusoid with intermittent jumps | Exponential discounting | 6.43 | 0.67 | 8.20 | γ: 0.24 | 83798.1 | 1.64

Note: WAIC values were computed for each participant and summed across participants. A low WAIC indicates a better model. ΔWAIC is relative to the WAIC of the Bayesian learner.

We compared the three models in a fixed and random effects analysis (Penny et al., 2010; Rigoux et al., 2014) using the Watanabe-Akaike information criterion (WAIC) as appropriate for evaluating model samples (Gelman et al., 2014) (i.e. a low WAIC indicates a better model; a difference greater than 10 is considered very strong evidence for a model). In the fixed-effects analysis (see Table 2 for details), the Bayesian learner was substantially better than the instantaneous learner across all three experiments, but outperformed the exponential learner reliably only in the sinusoidal sequence. Likewise, the random-effects analysis based on hierarchical Bayesian model selection (Penny et al., 2010; Rigoux et al., 2014) showed a protected exceedance probability that was substantially greater for the Bayesian learner (Sin, RW2) or the exponential learner (RW1, RW2) than for the instantaneous learner (Figure 4F). However, the direct comparison between the Bayesian and the exponential learner did not provide consistent results across experiments. As shown in Figure 4B and C, both the Bayesian and the exponential learner accurately reproduced the temporal asymmetry for the auditory weights across all three experiments.
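
For reference, WAIC can be computed from the pointwise log-likelihoods of the posterior samples following Gelman et al. (2014); the sketch below is a generic implementation, not the study’s fitting code:

```python
import numpy as np
from scipy.special import logsumexp

def waic(pointwise_loglik):
    """WAIC from an (n_samples x n_trials) matrix of pointwise log-likelihoods.

    lppd   = sum over trials of the log mean posterior likelihood
    p_waic = sum over trials of the posterior variance of the log-likelihood
    WAIC   = -2 * (lppd - p_waic); lower is better.
    """
    n_samples = pointwise_loglik.shape[0]
    lppd = np.sum(logsumexp(pointwise_loglik, axis=0) - np.log(n_samples))
    p_waic = np.sum(np.var(pointwise_loglik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)
```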

From the optimal Bayesian learner, we inferred observers’ estimated rate of change in visual reliability (i.e. parameter 1/κ). The sinusoidal sequence was estimated to change at a faster pace (median κ = 7.4 across observers, 95% confidence interval (CI) [4.8, 10.8] estimated via bootstrapping) than the RW1 sequence (median κ = 8.1, 95% CI [7.0, 14.9]), but slower than the RW2 sequence (median κ = 6.7, 95% CI [4.4, 11.2]) indicating that the Bayesian learner accurately inferred that visual reliability changed at a different pace across the three continuous sequences (see legend of Figure 2). Likewise, the learning rates 1-γ of the exponential learner accurately reflect the different rates of change across the sequences (Sinusoid: γ = 0.23, 95% CI [0.14, 0.28]; RW1: γ = 0.33, 95% CI [0.21, 0.38]; RW2: γ = 0.25, 95% CI [0.21, 0.29]). Both the Bayesian and the exponential learner thus estimated a smaller rate of change for the RW1 than for the sinusoidal sequence – although caution needs to be applied when interpreting these results given the extensive confidence intervals. Further, the learning rates of the exponential learner imply that, when estimating visual reliability, observers gave visual signals presented 4.1 (Sinusoid), 5.4 (RW1), and 4.3 (RW2) seconds before the current stimulus only 5% of the weight they assigned to the current visual signal.
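
These ~4–5 s figures can be reconstructed (our back-of-the-envelope calculation, not the authors’ exact derivation) by assuming that the exponential learner updates once per trial, so that a signal k trials in the past carries a relative weight of γ^k, and that trials are on average roughly 2 s apart:

```python
import numpy as np

for label, gamma in [("Sinusoid", 0.23), ("RW1", 0.33), ("RW2", 0.25)]:
    k = np.log(0.05) / np.log(gamma)       # trials back at which the relative weight drops to 5%
    print(label, round(k * 2.0, 1), "s")   # assumed ~2 s mean inter-trial asynchrony
# -> Sinusoid 4.1 s, RW1 5.4 s, RW2 4.3 s
```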

To further disambiguate between the Bayesian and the exponential learner, we designed a fourth experimental ‘jump sequence’ that introduced abrupt increases or decreases in physical visual noise at three positions into the sinusoidal sequence (Figure 5A). Using the same analysis approach as for experiments 1–3, we replicated the temporal asymmetry for the auditory weights (Figure 5B). For all three ‘jump positions’, wA was significantly smaller for the 1st half of each period, when visual noise increased, than the 2nd half, when visual noise decreased over time. The 3 (jump positions) x 2 (1st vs. flipped 2nd half) x 7 (bins) repeated measures ANOVA showed a significant main effect of 1st versus flipped 2nd period’s half (F(1,17) = 24.824, p<0.001, partial η2 = 0.594), while this factor was not involved in any higher-order interaction (see Table 1). Further, in a regression model the current visual STD (t(17) = 11.655, p<0.001, Cohen’s d = 2.747) and the difference between current and previous STD (t(17) = −4.768, p<0.001, Cohen’s d = −1.124) significantly predicted the relative auditory weights. Thus, we replicated our finding that the visual noise in the past influenced observers’ current visual uncertainty estimate as indexed by the relative auditory weights wA.

Figure 5. Time course of visual noise and relative auditory weights for sinusoidal sequence with intermittent jumps in visual noise (N = 18).

(A) The visual noise (i.e. STD of the cloud of dots, right ordinate) is displayed as a function of time. Each cycle included one abrupt increase and decrease in visual noise. The sequence of visual clouds was presented every 200 ms (i.e. at 5 Hz) while audiovisual (AV) signals (black dots) were interspersed with an inter-trial asynchrony jittered between 1.4 and 2.8 s. (B, C) Relative auditory weights wA of the 1st (solid) and the flipped 2nd half (dashed) of a period (binned into 15 bins) plotted as a function of the time in the sinusoidal sequence with intermittent inner (light gray), middle (gray), and outer (dark gray) jumps. Relative auditory weights were computed from auditory localization responses of human observers (B) and the Bayesian learning model (C). Please note that all models were fitted to observers’ auditory localization responses (i.e. not the auditory weight wA).


Figure 5—figure supplement 1. Time course of relative auditory weights and visual noise for the sinusoidal sequence with intermittent jumps in visual noise for the exponential and instantaneous learning models.


Relative auditory weights wA,bin (mean across participants) of the 1st (solid) and the flipped 2nd half (dashed) of a period (binned into 15 time bins) plotted as a function of the time in the sinusoidal sequence with intermittent inner (light gray), middle (gray), and outer (dark gray) jumps. Relative auditory weights were computed from auditory localization responses of exponential (A) or instantaneous (B) learning models. For comparison, the standard deviation of the visual signal is shown in (C). Please note that all models were fitted to observers’ auditory localization responses (i.e. not the auditory weight wA).

Figure 5—figure supplement 2. Time course of relative auditory weights and root mean squared error of the computational models before and after the jumps in the sinusoidal sequence with intermittent jumps.


(A) Relative auditory weights wA (mean across participants) shown as a function of time around the up-jumps (left panel) and the down-jumps (right panel) for observers’ behavior, the instantaneous, exponential and Bayesian learner. Relative auditory weights were computed from auditory localization responses for behavioral data and for the predictions of the three computational models in time bins of 200 ms (i.e. 5 Hz rate of the visual clouds). Trials from the three types of up- and down-jumps were pooled to increase the reliability of the wA estimates. Because time bins included only a few trials in some participants, individual wA values that were smaller or larger than three times the scaled median absolute deviation were excluded from the analysis. Note that the up-jumps occurred around the steepest increase in visual noise, so that the Bayesian and exponential learners underestimated visual noise (Figure 5C), leading to smaller wA as compared to the instantaneous learner already before the up-jump. (B) Root mean squared error (RMSE; computed across participants) between wA computed from behavior and the models’ predictions (as shown in A), shown as a function of the time around the up-jumps (left panel) and the down-jumps (right panel). Please note that all models were fitted to observers’ auditory localization responses (i.e. not the auditory weight wA).

Bayesian model comparison using a fixed-effects analysis showed that both the Bayesian learner and the exponential learner substantially outperformed the instantaneous learner (see Table 2). However, consistent with our Bayesian model comparison results for the continuous sequences, the Bayesian learner did not provide a better explanation for observers’ responses than the exponential learner (ΔWAIC = +2, see Table 2, Figure 5C and Figure 5—figure supplement 1A). Likewise, a random-effects analysis based on hierarchical Bayesian model selection showed that the Bayesian and the exponential learners outperformed the instantaneous learner, but again we were not able to adjudicate between the Bayesian and exponential learner (Figure 4F, see also methods and results in Appendix 1, Figure 5—figure supplement 2 and Supplementary file 1-Table 6 for further analyses justifying the choice of continuous learning models in the jump sequence).

In summary, across four experiments that used continuous and discontinuous sequences of visual noise, we have shown that the Bayesian or exponential learners outperform the instantaneous learner. However, across the four experiments we were not able to decide whether observers adapted to changes in visual noise according to a Bayesian or an exponential learner. The key feature that distinguishes between the Bayesian and the exponential learner is that only the Bayesian learner adapts dynamically based on its uncertainty about its visual reliability estimates. As a consequence, the Bayesian learner should adapt faster than the exponential learner to increases in physical visual noise (i.e. spread of the visual cloud) but slower to decreases in visual noise. From the Bayesian learner’s perspective, the faster learning for increases in visual noise emerges because it is unlikely that the visual dots form a widely spread cloud if the true spread of the cloud is small. Conversely, the Bayesian learner will adapt more slowly to decreases in visual variance, because, under the assumption of a visual cloud with a large spread, the visual dots may form a small cloud by chance. Indeed, previous research has shown that observers adapt their variance estimates faster for changes from small to large than for changes from large to small variance (Berniker et al., 2010). However, these results have been shown for learning about a hidden variable such as the prior that defines the spatial distribution from which an object’s location is sampled. In our study, we manipulated the variance of the likelihood, that is the variance of the clouds of dots.
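
This intuition can be quantified with a simple sampling argument (our illustration, not an analysis from the paper): for a 20-dot cloud, observing a sample STD twice the assumed STD is far less probable than observing one half the assumed STD, so an apparently wider cloud provides much stronger evidence for a change in noise than an apparently narrower one.

```python
from scipy.stats import chi2

n = 20                                    # dots per cloud
df = n - 1                                # (n - 1) * s^2 / sigma^2 follows a chi-square distribution with n - 1 df
p_wider = chi2.sf(df * 2.0 ** 2, df)      # P(sample STD >= 2 x assumed STD), roughly 1e-8
p_narrower = chi2.cdf(df * 0.5 ** 2, df)  # P(sample STD <= 0.5 x assumed STD), roughly a few 1e-4
print(p_wider, p_narrower)
```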

Asymmetric differences in adaptation rate between the exponential and the Bayesian learner should thus be amplified if we increase observers’ uncertainty about their visual reliability estimate by reducing the number of dots in the visual cloud from 20 to 5. Based on simulations, we therefore explored whether we could experimentally discriminate between the Bayesian and exponential learner using continuous sinusoidal or discontinuous ‘jump’ sequences with visual clouds of only five dots. For the two sequences, we simulated the sound localization responses of 12 observers based on the Bayesian learner model and fitted the Bayesian and exponential learner models to the responses of each simulated Bayesian observer. Figure 6 shows the auditory weights, indexing the estimated visual reliability across time, that we obtained from the fitted responses of the Bayesian (blue) and the exponential (green) learner. The simulations reveal the characteristic differences in how the Bayesian and the exponential learner adapt their visual uncertainty estimates to increases and decreases in visual noise. As expected, the Bayesian learner adapts its visual uncertainty estimates faster than the exponential learner to increases in visual noise, but slower to decreases in visual noise. Nevertheless, these differences are relatively small, so that the difference in mean log likelihood between the Bayesian and exponential learner is only −1.82 for the sinusoidal sequence and −2.74 for the jump sequence.

Figure 6. Time course of the relative auditory weights, the standard deviation (STD) of the visual cloud and the STD of the visual uncertainty estimates.


(A) Relative auditory weights wA of the 1st (solid) and the flipped 2nd half (dashed) of a period (binned into 15 bins) plotted as a function of the time in the sinusoidal sequence. Relative auditory weights were computed from the predicted auditory localization responses of the Bayesian (blue) or exponential (green) learning models fitted to the simulated localization responses of a Bayesian learner based on visual clouds of 5 dots. (B) Relative auditory weights wA computed as in (A) for the sinusoidal sequence with intermittent jumps. Only the outer-most jump (dark brown in Figure 5B/C and Figure 5—figure supplement 1) is shown. (C, D) STD of the visual cloud of 5 dots (gray) and the STD of observers’ visual uncertainty as estimated by the Bayesian (blue) and exponential (green) learners (that were fitted to the simulated localization responses of a Bayesian learner) as a function of time for the sinusoidal sequence (C) and in the sinusoidal sequence with intermittent jumps (D). Note that only an exemplary time course from 600 to 670 s after the experiment start is shown.

Next, we investigated whether our experiments successfully mimicked situations in which observers benefit from integrating past and current information to estimate their sensory uncertainty. We compared the accuracy of the instantaneous, exponential and Bayesian learner’s visual uncertainty estimates in terms of their mean absolute deviation (in percentage) from the true variance. For Gaussian clouds of 20 dots, the instantaneous learner’s error in the visual uncertainty estimates of 21.7% is reduced to 13.7% and 14.9% for the exponential and Bayesian learners, respectively (with best fitted γ = 0.6, in the sinusoidal sequence). For Gaussian clouds composed of only five dots, the exponential and Bayesian learners even cut down the error by half (i.e. 46.8% instantaneous learner, 29.5% exponential learner, 23.9% Bayesian learner, with best fitted γ = 0.7).
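
A simplified version of this comparison can be run as follows (our sketch: an exponential smoother stands in for the fitted learners, the sinusoid timing is illustrative, and the resulting percentages will not exactly match the model-based numbers above):

```python
import numpy as np

rng = np.random.default_rng(1)
n_dots, gamma = 5, 0.7                      # try n_dots = 20, gamma = 0.6 for the denser clouds
t = np.arange(5000)
true_sd = 10 + 8 * np.sin(2 * np.pi * t / 150)   # 30 s sinusoidal period at the 5 Hz cloud rate (2-18 deg)

inst = np.array([np.var(rng.normal(0, sd, n_dots), ddof=1) for sd in true_sd])  # per-sample variance estimate
smooth = np.empty_like(inst)                # exponentially discounted variance estimate
smooth[0] = inst[0]
for i in range(1, len(inst)):
    smooth[i] = (1 - gamma) * inst[i] + gamma * smooth[i - 1]

err = lambda est: 100 * np.mean(np.abs(est - true_sd ** 2) / true_sd ** 2)   # mean absolute deviation in %
print(f"instantaneous: {err(inst):.1f}%   exponential: {err(smooth):.1f}%")
```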

Collectively, these simulation results suggest that even in situations in which observers benefit from combining past with current sensory inputs to obtain more precise uncertainty estimates, the exponential learner is a good approximation of the Bayesian learner, making it challenging to dissociate the two experimentally based on noisy human behavioral responses.

Discussion

The results from our four experiments challenge classical models of perceptual inference where a perceptual interpretation is obtained using a likelihood that depends solely on the current sensory inputs (Ernst and Banks, 2002). These models implicitly assume that sensory uncertainty (i.e. likelihood variance) is instantaneously and independently accessed from the sensory signals on each trial based on initial calibration of the nervous system (Jacobs and Fine, 1999). Most prominently, in the field of cue combination it is generally assumed that sensory signals are weighted by their uncertainties that are estimated only from the current sensory signals (Alais and Burr, 2004; Ernst and Banks, 2002; Jacobs, 1999) (but see Mikula et al., 2018; Triesch et al., 2002).

By contrast, our results demonstrate that human observers integrate inputs weighted by uncertainties that are estimated jointly from past and current sensory signals. Across the three continuous and the one discontinuous jump sequences, observers’ current visual reliability estimates were influenced by visual inputs that were presented 4–5 s in the past, although their influence amounted to only 5% of that of the current visual signals.

Critically, observers adapted their visual uncertainty estimates flexibly according to the rate of change in the visual noise across the experiments. As predicted by both Bayesian and exponential learning models, observers’ visual reliability estimates relied more strongly on past sensory inputs, when the visual noise changed more slowly across time. While observers did not explicitly notice that each of the four experiments was composed of repetitions of temporally symmetric sequence components, we cannot fully exclude that observers may have implicitly learnt this underlying temporal structure. However, implicit or explicit knowledge of this repetitive sequence structure should have given observers the ability to predict and preempt future changes in visual reliability and therefore attenuated the temporal lag of the visual reliability estimates. Put differently, our experimental choice of repeating the same sequence component over and over again in the experiment cannot explain the influence of past signals on observers’ current reliability estimate, but should have reduced or even abolished it.

Importantly, the key feature that distinguishes the Bayesian from the exponential learner is how the two learners adapt to increases versus decreases in visual noise. Only the Bayesian learner represents and accounts for its uncertainty about its visual reliability estimates. As compared to the exponential learner, it should therefore adapt faster to increases but slower to decreases in visual noise (e.g. see Berniker et al., 2010). Our simulation results show this profile qualitatively, when the learner’s uncertainty about its visual reliability estimate is increased by reducing the number of dots (see Figure 6). But even for visual clouds of five dots, the differences in learning curves between the Bayesian and exponential learner are very small making it difficult to adjudicate between them given noisy observations from real observers. Unsurprisingly, therefore, Bayesian model comparison showed consistently across all four experiments that observers’ localization responses can be explained equally well by an optimal Bayesian and an exponential learner. These results converge with a recent study showing that learning about a hidden variable such as observers’ priors can be accounted for by an exponential averaging model (Norton et al., 2019).

Collectively, our experimental and simulation results suggest that under circumstances where observers substantially benefit from combining past and current sensory inputs for estimating sensory uncertainty, optimal Bayesian learning can be approximated well by simpler heuristic strategies of exponential discounting that update sensory weights with a fixed learning rate irrespective of observers’ uncertainty about their visual reliability estimate (Ma and Jazayeri, 2014; Shen and Ma, 2016). Future research will need to assess whether observers adapt their visual uncertainty estimates similarly if visual noise is manipulated via other methods such as stimulus luminance, duration, or blur.

From the perspective of neural coding, our findings suggest that current theories of probabilistic population coding (Beck et al., 2008; Ma et al., 2006; Hou et al., 2019) may need to be extended to accommodate additional influences of past experiences on neural representations of sensory uncertainties. Alternatively, the brain may compute sensory uncertainty using strategies of temporal sampling (Fiser et al., 2010).

In conclusion, our study demonstrates that human observers do not access sensory uncertainty instantaneously from the current sensory signals alone, but learn sensory uncertainty over time by combining past experiences and current sensory inputs as predicted by an optimal Bayesian learner or approximate strategies of exponential discounting. This influence of past signals on current sensory uncertainty estimates is likely to affect learning not only at slower timescales across trials (i.e. as shown in this study), but also at faster timescales of evidence accumulation within a trial (Drugowitsch et al., 2014). While our research unravels the impact of prior sensory inputs on uncertainty estimation in a cue combination context, we expect that it reveals fundamental principles of how the human brain computes and encodes sensory uncertainty.

Materials and methods

Participants

Seventy-six healthy volunteers participated in the study after giving written informed consent (40 female, mean age 25.3 years, range 18–52 years). All participants were naïve to the purpose of the study. All participants had normal or corrected-to-normal vision and reported normal hearing. The study was approved by the human research review committee of the University of Tuebingen (approval number 432 2007 BO1) and the research review committee of the University of Birmingham (approval number ERN_11–0470P).

Stimuli

The visual spatial stimulus was a Gaussian cloud of twenty bright gray dots (0.56° diameter, vertical STD 1.5°, luminance 106 cd/m2) presented on a dark gray background (luminance 62 cd/m2, i.e. 71% contrast). The auditory spatial cue was a burst of white noise with a 5 ms on/off ramp. To create a virtual auditory spatial cue, the noise was convolved with spatially specific head-related transfer functions (HRTFs). The HRTFs were pseudo-individualized by matching participants’ head width, height, depth, and circumference to the anthropometry of subjects in the CIPIC database (Algazi et al., 2001). HRTFs from the available locations in the database were interpolated to the desired locations of the auditory cue.

Experimental design and procedure

In a spatial ventriloquist paradigm, participants were presented with audiovisual spatial signals. Participants indicated the location of the sound by pressing one of five spatially corresponding buttons and were instructed to ignore the visual signal. Participants did not receive any feedback on their localization response. The visual signal was a cloud of 20 dots sampled from a Gaussian. The visual clouds were re-displayed with variable horizontal STDs (see below) every 200 ms (i.e. at a rate of 5 Hz; Figure 1A). The cloud’s location mean was temporally independently resampled from five possible locations (−10°, −5°, 0°, 5°, 10°) on each trial with the inter-trial asynchrony jittered between 1.4 and 2.8 s in steps of 200 ms. In synchrony with the change in the cloud’s location, the dots changed their color and a concurrent sound was presented. The location of the sound was sampled from ±5° visual angle with respect to the mean of the visual cloud. Observers’ visual uncertainty estimate was quantified in terms of the relative weight of the auditory signal on the perceived sound location. The change in the dot’s color and the emission of the sound occurred in synchrony to enhance audiovisual binding.

Continuous sinusoidal and RW sequences

Critically, to manipulate visual noise over time, the cloud’s STD changed at a rate of 5 Hz according to (i) a sinusoidal sequence, (ii) an RW sequence 1 or (iii) an RW sequence 2 (Figure 2). In all sequences, the horizontal STD of the visual cloud spanned a range from 2 to 18°:

  1. Experiment 1 - Sinusoidal sequence (Sinusoid): A sinusoidal sequence was generated with a period of 30 s. During the ~65 min of the experiment, each participant completed ~130 cycles of the sinusoidal sequence.

  2. Experiment 2 - Random walk sequence 1 (RW1): First, we generated an RW sequence of 60 s duration using a Markov chain with 76 discrete states and transition probabilities of stay (1/3), change to lower (1/3) or upper (1/3) adjacent states (see the sketch after this list for an illustration of this construction). To ensure that the RW sequence segment starts and ends with the same value, this initial 60-s sequence segment was concatenated with its temporally reversed segment resulting in an RW sequence segment of 120 s duration. Each participant was presented with this 120 s RW1 sequence approximately 32 times during the experiment.

  3. Experiment 3 - Random walk sequence 2 (RW2): Likewise, we created a second random-walk sequence of 15 s duration using a Markov chain with only 38 possible states and transition probabilities similar to above. The 15-s sequence was concatenated with its temporally reversed version resulting in a 30-s sequence. The smoothness of this sequence segment was increased by filtering it (without phase shift) with a moving average of 250 ms. Each participant was presented with this sequence segment ~130 times.
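
A sketch of this random-walk construction (our re-implementation; the boundary handling at the extreme states and the mapping from states to STD values are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_walk_sequence(n_states=76, seconds=60, rate=5, sd_range=(2.0, 18.0)):
    """Markov-chain random walk over discrete STD states (stay / step down / step up, each with probability 1/3),
    mirrored in time so that the segment starts and ends on the same value."""
    n = seconds * rate
    states = np.empty(n, dtype=int)
    states[0] = n_states // 2
    for i in range(1, n):
        states[i] = np.clip(states[i - 1] + rng.choice([-1, 0, 1]), 0, n_states - 1)
    sds = np.linspace(sd_range[0], sd_range[1], n_states)[states]   # map states to cloud STDs in degrees
    return np.concatenate([sds, sds[::-1]])                         # mirror: 60 s + reversed 60 s = 120 s

rw1 = random_walk_sequence()   # roughly RW1-like; RW2 used 38 states, 15 s halves, plus a 250 ms moving average
```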

Generally, a session of a sinusoid, RW1, or RW2 sequence included 1676 trials. Because of experimental problems, four sessions included only 1128, 1143, or 1295 trials. Before the experimental trials, participants practiced the auditory localization task in 25 unimodal auditory trials, 25 audiovisual congruent trials with a single dot as visual spatial cue and 75 trials with stimuli as in the main experiment.

Experiment 4 - Sinusoidal sequence with intermittent changes in visual noise (sinusoidal jump sequence)

To dissociate the Bayesian learner from approximate exponential discounting, we designed a sinusoidal sequence (period = 30 s) with intermittent increases/decreases in visual variance (Figure 5). As shown in Figure 5A, we inserted increases by 8° in visual STD at three levels of visual STD: 7.2°, 8.6°, 9.6° STD. Conversely, we inserted decreases by 8° in visual STD at 15.3°, 16.7°, 17.7° STD. We inserted jumps selectively in the period sections of high visual variance to make the jumps less apparent and maximize the chances that observers treated the series as a continuous sequence. As a result, the up-jumps occurred when the increases in visual variance were fastest (i.e. steeper slope), while the down-jumps occurred after sections in which the visual variance was relatively constant (i.e. shallow slope). We factorially combined these 3 (increases) x 3 (decreases) such that each sinewave cycle included exactly one sudden increase and decrease in visual STD (i.e. nine jump types). Otherwise, the experimental paradigm and stimuli were identical to the continuous sinusoidal sequence described above. During the ~80 min of this experiment, each participant completed ~154 cycles of the sinusoidal sequence including 16–18 cycles for each of the nine jump types. This sinusoidal jump sequence was expected to maximize differences in adaptation rate for the Bayesian and exponential learner. If participants continuously update their estimates of the visual reliability, as opposed to using a change point model (Adams and Mackay, 2007; Heilbron and Meyniel, 2019), the exponential learner will weight past and present uncertainty estimates throughout the entire sequence according to the same exponential function. By contrast, the Bayesian learner will take into account its uncertainty about the visual reliability and therefore adapt its visual reliability estimate for jumps from high to low visual variance (resp. low to high visual reliability, see Figure 6) more slowly than the exponential learner (see Appendix 1).

Subject numbers and inclusion criteria

Of the 76 subjects, 30 participated in the sinusoidal and the RW1 sequence session. Eight additional subjects participated only in the RW1 sequence session. Eighteen additional subjects participated in the RW2 sequence session. One participant completed all three continuous sequences. Twenty subjects participated in the sinusoidal sequence with intermittent changes in visual uncertainty. In total, we collected data from 30 participants for the sinusoidal, 38 participants for the RW1, 19 participants for the RW2, and 20 participants for the sinusoidal jump sequence. The sample sizes of 20–38 participants were based on a pilot experiment, which showed individually significant effects of past visual noise on the weighting of audiovisual spatial signals in 6/6 pilot participants. From these samples, we excluded participants if their perceived sound location did not depend on the current visual reliability (i.e. inclusion criterion p<0.05 in the linear regression; please note that this inclusion criterion is orthogonal to the question of whether participants’ visual uncertainty estimate depends on visual signals prior to the current trial). Thus, we excluded five participants of the sinusoidal and RW1 sequence and two participants from the sinusoidal jump sequence. Finally, we analyzed data from 25 participants for the sinusoidal, 33 participants for the RW1, 19 participants for the RW2, and 18 participants for the sinusoidal jump sequence.

Experimental setup

Audiovisual stimuli were presented using Psychtoolbox 3.09 (Brainard, 1997; Kleiner et al., 2007) (http://www.psychtoolbox.org) running under Matlab R2010b (MathWorks) on a Windows machine (Microsoft XP 2002 SP2). Auditory stimuli were presented at ~75 dB SPL using headphones (Sennheiser HD 555). As visual stimuli required a large field of view, they were presented on a 30″ LCD display (Dell UltraSharp 3007WFP). Participants were seated at a desk in front of the screen in a darkened booth, resting their head on an adjustable chin rest. The viewing distance was 27.5 cm. This setup resulted in a visual field of approximately 100°. Participants responded via a standard QWERTY keyboard. Participants used the buttons [i, 9, 0, -, = ] with their right hand for localization responses.

Data analysis

Continuous sinusoidal and RW sequences

At trial onset the visual cloud’s location mean was independently resampled from five possible locations (−10°, −5°, 0°, 5°, 10°). Concurrently, the cloud’s dots changed their color and a sound was presented sampled from ±5° visual angle with respect to the mean of the visual cloud. The inter-trial asynchrony was jittered between 1.4 and 2.8 s in steps of 200 ms. Therefore, across the experiment the trial onsets occurred at different times relative to the period of the changing visual cloud’s STD resulting in a greater effective sampling rate than provided if the inter-trial asynchrony had been fixed.

For each period of the three continuous sinusoidal and RW sequences, we sorted the trials (i.e. trial-specific visual cloud’s STD, visual location, auditory location, and observers’ sound localization responses) into 20 temporally adjacent bins that covered one complete period of the changing visual STD. This resulted in about 1676 trials in total / 20 bins = approximately 80 trials per bin on average in each subject (more specifically: a range of 52–96 (Sin), 52–92 (RW1), or 71–93 (RW2) trials; for details see Supplementary file 1-Table 1).

We quantified the influence of the auditory and visual locations on observers’ perceived auditory location for each bin by estimating a regression model separately for each bin (i.e. one regression model per bin). For instance, for bin = 1 we computed:

$$R_{A,trial,bin=1} = L_{A,trial,bin=1}\,\beta_{A,bin=1} + L_{V,trial,bin=1}\,\beta_{V,bin=1} + \beta_{const,bin=1} + e_{trial,bin=1} \qquad (1)$$

with RA,trial,bin=1 = localization response for trial t and bin 1; LA,trial,bin=1 or LV,trial,bin=1 = ‘true’ auditory or visual location for trial t and bin 1; βA,bin=1 or βV,bin=1 = auditory or visual weight for bin 1; βconst,bin=1 = constant term; etrial,bin=1 = error term for trial t and bin 1. For each bin b, we thus obtained one auditory and one visual weight estimate. The relative auditory weight for a particular bin was computed as wA,bin = βA,bin / (βA,bin + βV,bin) (Figure 2A–C).

By design, the temporal evolution of the physical visual variance (i.e. STD of the visual cloud) is symmetric for each period in the sinusoidal, RW1 and RW2 sequences. In other words, for physical visual noise, the 1st half and the flipped 2nd half within a period are identical (Figure 3E). Given this symmetry constraint, we evaluated the influence of past visual noise on participants’ auditory weight wA,bin by comparing the wA for the bins in the 1st half and the flipped 2nd half in a repeated measures ANOVA. If human observers estimate visual uncertainty by combining prior with current visual uncertainty estimates as expected for a Bayesian learner, wA should differ between the 1st half and the mirror-symmetric flipped 2nd half of the sequence. More specifically, wA should be smaller for the 1st half in which visual variance increased than for the mirror-symmetric time points of the 2nd half in which visual variance decreased. To test this prediction, we entered the subject-specific wA,bin into 2 (1st vs. flipped 2nd half) x 9 (bins, i.e. removing the bins at maximal and minimal visual noise values) repeated measures ANOVAs separately for the sinusoidal, RW1 and RW2 experiments (Table 1). For the sinusoidal sequence, we expected a main effect of ‘half’ because the sequence increased/decreased monotonically within each half period. For the RW1 and RW2 sequences, an influence of prior visual noise might also be reflected in an interaction effect of ‘half x bin’ because these sequences increased/decreased non-monotonically within each half period.
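
Schematically, the test flips the second half of each period and compares it bin-by-bin with the first half in a repeated-measures ANOVA; a minimal sketch using statsmodels (the data-frame layout and the bin indexing, with the extreme bins at positions 0 and 10, are our assumptions):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def half_flip_anova(wa):
    """wa: long-format DataFrame with columns ['subject', 'bin', 'wA'], bins numbered 0-19 per subject.
    Bins 1-9 form the 1st half; bins 11-19, reversed, form the flipped 2nd half (extreme bins 0 and 10 dropped)."""
    first = wa[wa['bin'].between(1, 9)].assign(half='first', pos=lambda d: d['bin'])
    second = wa[wa['bin'].between(11, 19)].assign(half='second', pos=lambda d: 20 - d['bin'])
    longdf = pd.concat([first, second])
    return AnovaRM(longdf, depvar='wA', subject='subject', within=['half', 'pos']).fit()
```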

To further test whether the noise of past visual signals influenced observers’ current visual uncertainty estimate, we employed a regression model in which the relative auditory weights wA,bin were predicted by the visual STD in the current bin and the difference in STD between the current and the previous bin:

$$w_{A,bin} = \sigma_{V,bin}\,\beta_{\sigma_V} + (\sigma_{V,bin} - \sigma_{V,bin-1})\,\beta_{\Delta\sigma_V} + \beta_{const} + e_{bin} \qquad (2)$$

with wA,bin = relative auditory weight in bin b; σV,bin and σV,bin−1 = mean visual STD in the current bin b and the previous bin b−1; βσV and βΔσV = regression weights of the current visual STD and of the change in visual STD; βconst = constant term; ebin = error term. To allow for generalization to the population level, the parameter estimates (βσV, βΔσV) for each participant were entered into two-sided one-sample t-tests at the between-subject random-effects level.
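
A minimal sketch of this two-stage analysis (within-subject regression, then a between-subject t-test on the parameter estimates); the array shapes, the periodic handling of the ‘previous’ bin, and the variable names are our assumptions:

```python
import numpy as np
from scipy.stats import ttest_1samp

def second_level_test(wa, sigma):
    """wa, sigma: (n_subjects x n_bins) arrays of relative auditory weights and mean visual STD per bin."""
    betas = []
    for w, s in zip(wa, sigma):
        d_s = s - np.roll(s, 1)                      # change in visual STD relative to the previous bin (periodic)
        X = np.column_stack([s, d_s, np.ones_like(s)])
        betas.append(np.linalg.lstsq(X, w, rcond=None)[0][:2])   # [beta_sigmaV, beta_deltaSigmaV] per subject
    return ttest_1samp(np.array(betas), 0.0, axis=0)             # two-sided one-sample t-tests across subjects
```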

Sinusoidal sequence with intermittent changes in visual uncertainty

For each period of the sinusoidal sequence with intermittent changes, we sorted the values for the physical visual cloud’s variance (i.e. the cloud’s STD) and sound localization responses into 15 temporally adjacent bins which were positioned to capture the jumps in visual noise. For analysis of these sequences, we recombined the first and second halves of the 3 (increases at low, middle, high) x 3 (decreases at low, middle, high) sinewave cycles into three types of sinewave cycles such that both jumps were at low (=outer jump), middle (=middle jump), or high (=inner jump) visual noise. This recombination makes the simplifying assumption that the jump position of the first half will have negligible effects on participants’ uncertainty estimates of the second half. As a result of this recombination, each bin comprised at least 44–51 trials across participants (Supplementary file 1-Table 1). As for the continuous sequences, we quantified the auditory and visual influence on the perceived auditory location for each bin based on separate regression models for the 15 temporally adjacent bins (see Equation 1). Next, we independently computed the relative auditory weight wA,bin = βA,bin / (βA,bin + βV,bin) for each of the 15 temporally adjacent bins. We statistically evaluated the influence of past visual noise on participants’ auditory weights wA in terms of the difference between the 1st half and the flipped 2nd half using a 2 (1st vs. flipped 2nd half) x 7 (bins) x 3 (jump: inner, middle, outer) repeated measures ANOVA (Table 1).

Computational models (for continuous and discontinuous sequences)

To further characterize whether and how human observers use their uncertainty about previous visual signals to estimate their uncertainty of the current visual signal, we defined and compared three models in which visual reliability (λV) was (1) estimated instantaneously for each trial (i.e. instantaneous learner), was updated via (2) Bayesian learning or (3) exponential discounting (i.e. exponential learner) (Figure 1—figure supplement 1).

In the following, we will first describe the generative model that accounts for the fact that (1) visual uncertainty usually changes slowly across trials (i.e. time-dependent uncertainty changes) and (2) auditory and visual signals can be generated by one common or two independent sources (i.e. causal structure). Using this generative model as a departure point, we then describe how the instantaneous learner, the Bayesian learner and the exponential learner perform inference. Finally, we will explain how we account for participants’ internal noise and predict participants’ responses from each model (i.e. the experimenter’s uncertainty).

Generative model

On each trial t, the subject is presented with an auditory signal At, from a source SA,t, (see Figure 1—figure supplement 1) together with a visual cloud of dots at time t arising from a source, SV,t, drawn from a Normal distribution SV,t ~ N(0, 1/λS) with the spatial reliability (i.e. inverse of the spatial variance): λS=1/σS2. Critically, SA,t and SV,t, can either be two independent sources (C = 2) or one common source (C = 1): SA,t = SV,t = St (Körding et al., 2007).

We assume that the auditory signal is corrupted by noise, so that the internal signal is At ~ N(SA,t,  1/λA). By contrast, the individual visual dots (presented at high visual contrast) are assumed to be uncorrupted by noise, but presented dispersed around the location SV,t according to Vi,t ~ N(Ut,  1/λV,t), where Ut ~ N(SV,t,  1/λV,t). The dispersion of the individual dots,  1/λV,t, is assumed to be identical to the uncertainty about the visual mean, allowing subjects to use the dispersion as an estimate of the uncertainty about the visual mean.

The visual reliability of the visual cloud, λV,t = 1/σV,t², varies slowly at the re-display rate of 5 Hz according to a log RW: log λV,t ~ N(log λV,t−1, 1/κ), with 1/κ being the variability of λV,t in log space. We also use this log RW model to approximate learning in the fourth (jump) sequence (see Behrens et al., 2007).

The generative models of the instantaneous, Bayesian, and exponential learners all account for the causal uncertainty by explicitly modeling the two potential causal structures. Yet, they differ in how they estimate the visual uncertainty on each trial, which we will describe in greater detail below.
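For concreteness, the generative process described above can be sketched in a few lines of MATLAB. The parameter values, the initial reliability, and collapsing the 5 Hz re-display to one draw per trial are simplifying assumptions for illustration, not the experimental stimulus-generation code.

% Sketch: simulate T trials from the generative model (illustrative parameters).
T = 200; n = 20;                 % trials, dots per visual cloud
kappa = 15; sigmaA = 6;          % RW precision (log space), auditory noise STD (deg)
sigmaS = 12; pCommon = 0.7;      % spatial prior STD, prior probability of a common cause

logLambdaV = zeros(T, 1); logLambdaV(1) = log(1/5^2);   % initial visual reliability (assumed)
A = zeros(T, 1); V = zeros(T, n);
for t = 1:T
    if t > 1                                            % log random walk on visual reliability
        logLambdaV(t) = logLambdaV(t-1) + randn/sqrt(kappa);
    end
    sigmaVt = sqrt(1/exp(logLambdaV(t)));
    if rand < pCommon                                   % C = 1: one common source
        S = randn*sigmaS;  SA = S;  SV = S;
    else                                                % C = 2: independent sources
        SA = randn*sigmaS; SV = randn*sigmaS;
    end
    U = SV + randn*sigmaVt;                             % hidden visual mean U_t
    V(t, :) = U + randn(1, n)*sigmaVt;                  % visual dots V_i,t
    A(t)    = SA + randn*sigmaA;                        % internal auditory signal A_t
end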

Observer inference

The instantaneous, Bayesian, and exponential learners invert this (or slightly modified, see below) generative model during perceptual inference to compute the posterior probability of the auditory location, SA,t, given the observed At and Vi,t. The observer selects a response based on the posterior using a subjective utility function which we assume to be the minimization of the squared error (SA,t - Strue)2. For all models, the estimate for the location of the auditory source is obtained by averaging the auditory estimates under the assumption of common and independent sources by their respective posterior probabilities (i.e. model averaging, see Figure 1—figure supplement 1):

S^A,t = S^A,C=1,t P(Ct = 1|At, V1:n,t) + S^A,C=2,t (1 − P(Ct = 1|At, V1:n,t)) (3)

where S^A,C=1,t and S^A,C=2,t depend on the model (see below), and P(C =1|At ,V1:n,t ) is the posterior probability that the audio and visual stimuli originated from the same source according to Bayesian causal inference (Körding et al., 2007).

P(Ct = 1|At, V1:n,t) = [P(At, V1:n,t|Ct = 1) P(Ct = 1)] / [P(At, V1:n,t|Ct = 1) P(Ct = 1) + P(At, V1:n,t|Ct = 2) (1 − P(Ct = 1))] (4)

Finally, for all models, we assume that the observer pushes the button associated with the position closest to S^A,t. In the following, we describe the generative and inference models for the instantaneous, Bayesian, and exponential learners. For the Bayesian learner, we focus selectively on the model component that assumes a common cause, C = 1 (for full derivation including both model components, see Appendix 2).

Model 1: Instantaneous learner

The instantaneous learning model ignores that the visual reliability (i.e. the inverse of visual uncertainty) of the current trial depends on the reliability of the previous trial. Instead, it estimates the visual reliability independently for each trial from the spread of the cloud of visual dots:

P(SA,t, Ut, λV,t | A1:t, V1:n,1:t) = P(SA,t, Ut, λV,t | At, V1:n,t)
= P(C = 1|At, V1:n,t) PC=1(St, Ut, λV,t | At, V1:n,t) + P(C = 2|At, V1:n,t) PC=2(SA,t, Ut, λV,t | At, V1:n,t)
= P(C = 1|At, V1:n,t) P(St) P(At|St) PC=1(Ut|St, λV,t) P(V1:n,t|Ut, λV,t) P(λV,t)/Z1 + (1 − P(C = 1|At, V1:n,t)) P(SA,t) P(At|SA,t)/Z2. (5)

with Z1, Z2 as normalization constants.

Apart from P(C=1|At,Vt), these terms are all normal distributions, while we assume in this model that P(λV,t) is uninformative. Hence, visual reliability is computed from the variance: λ^V,t = 1/(σVt² + σVt²/n), where σVt² = 1/(n−1) Σi=1..n (Vi,t − V̄t)² is the sample variance (and V̄t = 1/n Σi=1..n Vi,t is the sample mean). The causal component estimates are given by:

S^A,C=1,t = (λ^V,t V̄t + λA At) / (λ^V,t + λA + λS) (6)
S^A,C=2,t = λA At / (λA + λS) (7)

These two components are then combined based on the posterior probabilities of common and independent cause models (see Equation 3). This model is functionally equivalent to a Bayesian causal inference model as described in Körding et al., 2007, but with visual reliability computed directly from the sample variance rather than a fixed unknown parameter (which the experimenter estimates during model fitting).
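As an illustration, the instantaneous learner's point estimates on a single trial can be sketched in MATLAB as follows; the dot positions, reliabilities and the causal posterior pC1 (which would come from Equation 4, not shown here) are placeholder values.

% Sketch: instantaneous reliability estimate and Equations 6, 7 and 3 for one trial.
Vdots   = [2.1 4.5 3.2 5.0 2.8];            % visual dot positions on this trial (deg), placeholder
At      = 7.5;                              % internal auditory signal (deg), placeholder
lambdaA = 1/6^2; lambdaS = 1/12^2;          % auditory and spatial-prior reliabilities (assumed)
n       = numel(Vdots);
sampleVar  = var(Vdots);                    % sample variance (1/(n-1) normalisation)
lambdaVhat = 1/(sampleVar + sampleVar/n);   % reliability from the current cloud only

sHatC1 = (lambdaVhat*mean(Vdots) + lambdaA*At) / (lambdaVhat + lambdaA + lambdaS);  % Eq. 6
sHatC2 = lambdaA*At / (lambdaA + lambdaS);                                          % Eq. 7
pC1    = 0.7;                               % placeholder causal posterior P(C=1|At,V)
sHatA  = pC1*sHatC1 + (1 - pC1)*sHatC2;     % model averaging (Eq. 3)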

Model 2: Bayesian learner

The Bayesian learner capitalizes on the slow changes in visual reliability across trials and combines past and current inputs to provide a more reliable estimate of visual reliability and hence auditory location. It computes the posterior probability based on all auditory and visual signals presented until time t (here only shown for C = 1, see Appendix 2).

According to Bayes rule, the joint probability of all variables until time t can be written based on the generative model as:

P(λV,1:t, A1:t, U1:t, V1:n,1:t, S1:t) = P(A1|S1) P(V1:n,1|U1, λV,1) P(U1|S1, λV,1) P(S1) P(λV,1) ∏k=2..t P(Ak|Sk) P(V1:n,k|Uk, λV,k) P(Uk|Sk, λV,k) P(λV,k|λV,k−1) P(Sk) (8)

As above, the visual likelihood is given by the product of individual Normal distributions for each dot i: P(V1:n,t|Ut, λV,t) = ∏i=1..n N(Vi,t|Ut, 1/λV,t), and P(Ut|St, λV,t) = N(Ut|St, 1/λV,t).

The prior P(St) is a Normal distribution N(St|0, 1/λS) and the auditory likelihood P(At|St) is a Normal distribution N(At|St, 1/λA). As described in the generative model, P(λV,k|λV,k−1) is given by log λV,t ~ N(log λV,t−1, 1/κ).

Importantly, only the visual reliability, λV,t, is directly dependent on the previous trial (i.e. P(λV,k, λV,k−1) = P(λV,k|λV,k−1) P(λV,k−1) ≠ P(λV,k) P(λV,k−1)). Because of the Markov property (i.e. λV,t depends only on λV,t−1), the joint distribution for time t can be written as

P(λV,t, λV,t−1, At, Ut, V1:n,t, St) = P(At|St) P(Ut|St, λV,t) P(V1:n,t|Ut, λV,t) P(λV,t|λV,t−1) P(λV,t−1|V1:n,t−1, At−1) P(St). (9)

Hence, the joint posterior probability over location and visual reliability given a stream of auditory and visual inputs can be rewritten as:

P(St, Ut, λV,t | A1:t, V1:n,1:t) = P(St) P(At|St) P(Ut|St, λV,t) P(V1:n,t|Ut, λV,t) ∫ P(λV,t|λV,t−1) P(λV,t−1|V1:n,t−1, At−1) dλV,t−1 / Z. (10)

As this equation cannot be solved analytically, we obtain an approximate solution by factorizing the posterior in terms of the unknown variables (St,Ut,λV,t) according to the method of variational Bayes (Bishop, 2006). In this approximate method (for details see Appendix 2), the posterior is factorized into three terms, each a normal distribution:

P(St, Ut, λV,t | At, V1:n,t) ≈ q(St, Ut, λV,t) = q(St) q(Ut) q(λV,t).

In order to estimate the set of parameters (mean and variance) of q(St), q(Ut) and q(λV,t), the Free Energy is optimized iteratively (thereby minimizing the Kullback–Leibler divergence between the true and approximate distributions) until a convergence criterion is reached (here, the change in each fitted parameter is less than 0.0001 between iterations).

This is done separately for the common cause model component (C = 1) and the independent cause model component (C = 2). The auditory location estimate for the common cause model is based on the approximate posterior over location, q1(St) = N(S^A,C=1,t, σ1,t). The auditory location for the independent cause model is simply computed as S^A,C=2,t = At/(1 + σA²/σ0²), because it is independent of the visual signal.

The marginal model evidence for each model component, P(At, V1:n,t|C = 1) and P(At, V1:n,t|C = 2), is estimated from the optimized Free Energy and used to form the posterior probability P(C = 1|At, V1:n,t), as described above in Equation 4. These values can then be used to compute the predicted responses for a particular participant according to Equation 3.

Model 3: Exponential learner

Finally, the observer may approximate the full Bayesian inference of the Bayesian learner by a simpler heuristic strategy of exponential discounting. In the exponential discounting model, the observer learns the visual reliability by exponentially discounting past visual reliability estimates:

λ^V,t = 1/σVt² · (1 − γ) + λ^V,t−1 · γ (11)

where σVt² = 1/(n−1) Σi=1..n (Vi,t − V̄t)² is the sample variance and V̄t = 1/n Σi=1..n Vi,t is the sample mean.

Similar to the optimal Bayesian learner (above), this observer model uses the past to compute the current reliability, but it does so based on a fixed learning rate 1 - γ. Computation is otherwise performed in accordance with models 1 and 2, Equations 3-4 and 6-7.
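The update in Equation 11 amounts to a short recursive filter; a minimal MATLAB sketch is shown below, where the value of γ, the initialisation from the first cloud, and the use of a matrix V of dot positions (e.g. as produced by the generative-model sketch above) are assumptions for illustration.

% Sketch: exponential discounting of visual reliability (Equation 11) across trials.
gamma = 0.7;                               % discounting factor (learning rate = 1 - gamma), assumed
lambdaHat = NaN;
for t = 1:size(V, 1)                       % V: trials x dots matrix of dot positions
    sampleVar = var(V(t, :));              % sample variance of the current cloud
    if t == 1
        lambdaHat = 1/sampleVar;           % initialise from the first cloud alone (assumption)
    else
        lambdaHat = (1/sampleVar)*(1 - gamma) + lambdaHat*gamma;   % Eq. 11
    end
end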

Assumptions of the computational models: motivation and caveats

Computational models inherently make simplifying assumptions about the generation of the sensory inputs and observers’ inference.

First, we modeled the visual signals (i.e. the cloud’s mean) as sampled from a Gaussian, whereas in the experiment they were sampled from a discrete uniform distribution (i.e. [−10°, −5°, 0°, 5°, 10°]). Gaussian assumptions about stimulus locations have been made nearly exclusively in the recent series of studies focusing on Bayesian Causal Inference in multisensory perception (Körding et al., 2007; Rohe and Noppeney, 2015b; Rohe and Noppeney, 2015a). Because the visual signals were sampled from a wide range of visual angles (i.e. 20°) and are corrupted by physical (i.e. cloud of dots) and internal neural noise, we used the simplifying assumption of a Gaussian spatial prior, consistent with previous research.

Second, we assumed that the auditory signal location is sampled from a Gaussian, while the experiments presented sounds ±5° from the visual location. These Gaussian assumptions about sound location can be justified by the fact that observers are known to be limited in their sound localization ability, particularly when generic HRTFs were used to generate spatial sounds. Moreover, because sounds are presented together with visual signals, it is even harder for observers to obtain an accurate estimate of the sound’s location.

Third, in the experiment we generated the cloud of dots directly from a Gaussian distribution centred on St. By contrast, in the model we introduced a hidden variable Ut that is sampled from a Gaussian centred on St. The visual cloud of dots is then centred on this hidden variable Ut. We introduced this additional hidden variable Ut to account for observers’ additional causal uncertainty in natural environments, in which even signals from a common source may not fully coincide in space. Critically, the dispersion of the cloud of dots is set to be equal to the STD of the distribution from which Ut is sampled, so that the cloud’s STD informs observers about the variance of the hidden variable Ut.

Inference by the experimenter

From the observer’s viewpoint, this completes the inference process. However, from the experimenter’s viewpoint, the internal variable for the auditory stimulus, At, is unknown and not directly under the experimenter’s control. To integrate out this unknown variable, we generated 1000 samples of the internal auditory value for each trial from the generative process At ~ N(SA,t,true, σA²), where SA,t,true was the true location the auditory stimulus came from. For each value of At, we obtained a single estimate S^A,t (as described above). To link these estimates with observers’ button response data, we assumed that subjects push the button associated with the position closest to S^A,t. In this way, we obtained a histogram of responses for each subject and trial which provides the likelihood of the model parameters given a subject’s responses: P(respt|κ,σA,Pcommon,SA,t,true,SV,t,true).
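A minimal MATLAB sketch of this sampling step for a single trial is given below. The button positions, the true location and the helper localizeSound (standing in for the full model estimate of Equation 3) are hypothetical placeholders, not the study's implementation.

% Sketch: integrate out the internal auditory signal A_t for one trial by sampling
% and build a response histogram over buttons.
nSamples = 1000;
buttons  = -10:5:10;                                  % assumed response positions (deg)
SAtrue   = 5;  sigmaA = 6;                            % true auditory location and noise (placeholder)
Vdots    = [2.1 4.5 3.2 5.0 2.8];                     % current visual cloud (placeholder)
localizeSound = @(At, Vd) 0.5*At + 0.5*mean(Vd);      % stand-in for the model estimate S^_A,t
counts = zeros(size(buttons));
for i = 1:nSamples
    At = SAtrue + randn*sigmaA;                       % sample the internal auditory signal
    sHat = localizeSound(At, Vdots);                  % model estimate for this sample
    [~, idx] = min(abs(buttons - sHat));              % button closest to the estimate
    counts(idx) = counts(idx) + 1;
end
respProb = counts/nSamples;                           % approximate P(resp_t | parameters)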

Model estimation and comparison

Parameters for each model (for all models: σA, Pcommon = P(C = 1), σ0, Bayesian learner: κ, exponential learner: γ) were fit for each individual subject by sampling using a symmetric proposal Metropolis-Hastings (MH) algorithm (with At integrated out via sampling, see above). The MH algorithm iteratively draws samples setn from a probability distribution through an acceptance rule: if the likelihood of the new parameter set is larger than that of the previous set, the new set is accepted, otherwise it is accepted with probability L(model|setn)/L(model|setn-1), where L(resp|setn) = ∏t P(respt|κ,σA,Pcommon,SA,t,true,SV,t,true) (for the Bayesian learner). We sampled 4000 steps from four sampling chains with thinning (only using every fourth sample to avoid correlations in samples), giving a total of 4000 samples per subject data set. Convergence was assessed through scale reduction (using criterion R < 1.1 [Gelman et al., 2013]). Using sampling does not just provide a single parameter estimate for a data set (as when fitting maximum likelihood), but can instead be used to assess the uncertainty in estimation for the data set. The model code was implemented in Matlab (Mathworks, MA) and ran on two dual Xeon workstations. Each sample step, per subject data set, took 30 s on a single core (~42 hr per sampling chain).
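The sampling scheme described above can be sketched as follows in MATLAB; the starting values, proposal widths and the placeholder log-likelihood function logLikFun are assumptions for illustration, not the fitting code used here.

% Sketch: symmetric-proposal Metropolis sampler over model parameters.
nSteps = 4000;
theta  = [6, 0.7, 12, 15];                           % e.g. [sigmaA, Pcommon, sigma0, kappa], assumed start
propSD = [0.5, 0.05, 1, 1];                          % proposal STDs (assumed)
logLikFun = @(th) -sum((th - [6 0.7 12 15]).^2);     % placeholder log likelihood for illustration
logL  = logLikFun(theta);
chain = zeros(nSteps, numel(theta));
for i = 1:nSteps
    thetaNew = theta + randn(size(theta)).*propSD;   % symmetric Gaussian proposal
    logLNew  = logLikFun(thetaNew);
    if log(rand) < logLNew - logL                    % accept with probability min(1, L_new/L_old)
        theta = thetaNew; logL = logLNew;
    end
    chain(i, :) = theta;
end
chain = chain(1:4:end, :);                           % thinning: keep every fourth sample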

Quantitative Bayesian model comparison of the three candidate models was based on the Watanabe-Akaike Information Criterion (WAIC) as an approximation to the out of sample expectation (Gelman et al., 2013). At the fixed-effects level, Bayesian model comparison was performed by summing the WAIC over all participants within each experiment. For a random-effects analysis, we transformed the WAIC into log-likelihoods by dividing them by minus 2. We then computed the protected exceedance probability that one model is better than the other model beyond chance using hierarchical Bayesian model selection (Penny et al., 2010; Rigoux et al., 2014).
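For reference, the WAIC can be computed from a matrix of pointwise log likelihoods as sketched below (Gelman et al., 2013); the matrix here is a random placeholder, and in practice a log-sum-exp trick would be used to avoid numerical underflow.

% Sketch: WAIC from pointwise log likelihoods (rows = posterior samples, columns = trials).
logLik = randn(1000, 500) - 3;                 % placeholder for log P(resp_t | theta_s)
lppd   = sum(log(mean(exp(logLik), 1)));       % log pointwise predictive density
pWAIC  = sum(var(logLik, 0, 1));               % effective number of parameters (variance form)
WAIC   = -2*(lppd - pWAIC);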

To qualitatively compare the localization responses given by the participants and the responses predicted by the instantaneous, Bayesian and exponential learner, we computed the auditory weight wA from the predicted responses of the three models exactly as in the analysis for the behavioral data. For illustration, we show and compare the model’s wA from the 1st and the flipped 2nd half of the periods for each of the four experiments (Figure 3, Figure 4, Figure 5B/C and Figure 5—figure supplement 1).

Parameter recovery

To test the validity of the models, we performed parameter recovery and were able to recover the generating values with a bias of all parameters smaller than 10% (for full details of bias and variance across parameters, see Appendix 1 and Supplementary file 1-Table 7).

Simulated localization responses

To further compare the Bayesian and exponential learner and assess whether they can be discriminated experimentally, we simulated the choices of 12 subjects for the continuous sinusoidal and sinusoidal jump sequence using the Bayesian learner model (parameters: σA = 6°, κ = 15, Pcommon = 0.7 and σ0 = 12°). To increase observers’ uncertainty about their visual reliability estimates, we reduced the number of dots in the visual clouds from 20 to 5 dots where we ensured that the mean and variance of the five dots corresponded to the experimentally defined visual mean and variance. We then fitted the Bayesian learner and exponential learner models to each simulated data set (using the BADS toolbox for likelihood maximization [Acerbi and Ma, 2017]). The fitted parameters for the Bayesian model, setBayes were very close to the parameters used to generate observers’ simulated responses (sinusoidal sequence, fitted parameters: σA = 6.11°, κ = 17.5, Pcommon = 0.72 and σ0 = 12.4°; sinusoidal jump sequence, fitted parameters: σA = 6.08°, κ = 17.3, Pcommon = 0.71 and σ0 = 12.2°) – thereby providing a simple version of parameter recovery. The parameters of the exponential model, setExp (fitted to observers’ responses generated from the Bayesian model) were very similar to those of the Bayesian learner (sinusoidal sequence: σA = 5.99°, γ = 0.70, Pcommon = 0.61 and σ0 = 12.0°, sinusoidal jump sequence: σA = 6.06°, γ = 0.70, Pcommon = 0.65 and σ0 = 12.0°). Moreover, the fits to the simulated observers’ responses were very close for the two models (Figure 6), with mean log likelihood difference (log(L(resp|setBayes)) – log(L(resp|setExp))) = 1.82 for the sinusoidal and 2.74 for the sinusoidal jump sequence (implying a slightly better fit for the Bayesian learner). Figure 6C and D show the timecourses of observers’ visual uncertainty (STD) as estimated by the Bayesian and exponential learners.

Acknowledgements

This study was funded by the ERC (ERC-multsens, 309349), the Max Planck Society and the Deutsche Forschungsgemeinschaft (DFG; grant number RO 5587/1–1). We thank Peter Dayan for his valuable contributions and very helpful comments on a previous version of the manuscript.

Appendix 1

Additional methods and results

Influence of the visual location of the previous trial on observers’ sound localization responses

We have performed a control regression analysis to assess the influence of the visual location of the previous trial on observers’ sound localization response. This is important because 200 ms prior to trial onset and sound presentation, observers were presented with a visual cloud whose mean was the same as for the previous trial and the cloud’s standard deviation (STD) varied according to a continuous or discontinuous sequence (see main paper). To quantify the influence of the previous visual location, we expanded our regression model that we used in the main paper by another regressor modeling the visual cloud’s location on the previous trial. For instance, for bin = 1, we computed:

RA,trial,bin=1 = LA,trial,bin=1 ßA,bin=1 + LV,trial,bin=1 ßV,bin=1 + LV,trial−1,bin=1 ßVprevious,bin=1 + ßconst,bin=1 + etrial,bin=1

with RA,trial,bin=1 = localization response for the current trial that is assigned to bin 1; LA,trial,bin=1 or LV,trial,bin=1 = ‘true’ auditory or visual location for the current trial that is assigned to bin 1; LV,trial−1,bin=1 = ‘true’ visual location for the corresponding previous trial (for explanatory purposes, we assign here the bin of the current trial; the previous trial actually falls into a different bin); ßA,bin=1 or ßV,bin=1 quantified the influence of the auditory and visual location of the current trial on the perceived sound location of the current trial for bin 1; ßVprevious,bin=1 quantified the influence of the visual location of the previous trial on the perceived sound location of the current trial for bin 1; ßconst,bin=1 = constant term; etrial,bin=1 = error term. For each bin, we thus obtained an additional visual weight estimate ßVprevious,bin for the previous location.

First, we averaged ßVprevious,bin across bins and entered these participant-specific bin-averaged ßVprevious into two-sided one-sample t-tests at the between-subject random effects level. Results: As shown in Supplementary file 1-Table 2A, this analysis demonstrated that the visual location of the previous trial significantly influenced observers’ perceived sound location on the current trial in the Sinusoidal, RW1 and (marginally) the Sinusoidal jump sequence.

Second, we computed the correlation of ßVprevious,bin with the visual noise in the current trials averaged in a given bin (r(ßVprevious,bin, σVcurrent,bin)) or with the visual noise (i.e. visual cloud’s STD) in the previous trial averaged within a given bin (r(ßVprevious,bin, σVprevious,bin)). The correlations were computed over bins within each participant. We entered the participant-specific Fisher z-transformed correlation coefficients into two-sided one-sample t-tests at the between-subject random-effects level (see Supplementary file 1-Table 2, section B and C). Results: These analyses demonstrated that the influence of the visual location on the previous trial was not correlated with the visual cloud’s STD on the current or previous trial (apart from one significant p-value for the sinusoidal jump sequence, but after Bonferroni correction for the eight additional statistical comparisons this p-value is no longer statistically significant either). This analysis already suggests that the previous visual location is unlikely responsible for the effects of the previous STD on observers’ perceived sound location.

Third and most importantly, this regression model provides us with weights for the auditory (ßA,bin) and visual (ßV,bin) locations of the current trial while regressing out the influence of the previous visual location. We used those auditory (ßA,bin) and visual (ßV,bin) weights as in the main paper to compute bin-specific wA,bin = ßA,bin / (ßA,bin + ßV,bin). Following exactly the same procedures as in the main paper, we then assessed in a repeated-measures ANOVA whether these wA,bin differed between first and second half (see Supplementary file 1-Table 3). Moreover, we repeated a second regression model analysis to assess whether wA,bin was predicted not only by the visual cloud’s STD of the current, but also of the previous bin using the following regression model (i.e. Equation 2 in the main text): wA,bin = σV,bin * ßσV + (σV,bin – σV,bin-1) * ßΔσV + ßconst + ebin with wA,bin = relative auditory weight in bin b; σV,bin = mean visual STD in current bin b or previous bin b-1; ßconst = constant term; ebin = error term. To allow for generalization to the population level, the parameter estimates (ßσV, ßΔσV) for each subject were entered into two-sided one-sample t-tests at the between-subject random-effects level (see Supplementary file 1-Table 4). Results: These control analyses (see Figure 2—figure supplement 1, Supplementary file 1-Table 4) replicate our initial analyses reported in the main manuscript. Collectively, they provide further evidence that the effect of previous visual location on observers’ perceived sound location cannot explain the effect of prior visual reliability that is the key focus of our paper.

Nested model comparison to assess the effect of past visual noise on observers’ auditory weights

To assess the effect of past visual noise on auditory weights (Equation 2 in the main text), we also formally compared two nested linear mixed-effects models to predict observers’ relative auditory weights wA,bin separately for the Sin, RW1, and RW2 sequences. The reduced model included only the STD of the current bin as a fixed effect. The full model included both the STD of the current bin and the difference in STD between the current and the previous bin as fixed effects. Both the reduced and the full model included participants as random effects. After fitting the two models using maximum likelihood estimation, we compared them using log-likelihood ratio tests and the Bayesian Information Criterion as an approximation to the model evidence (see Supplementary file 1-Table 5). The model comparison demonstrated that the full model including the difference in STD provided a better explanation of observers’ relative auditory weights wA,bin across all four sequences. This corroborates that observers estimate sensory uncertainty by combining information from past and current sensory inputs.
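A minimal MATLAB sketch of such a nested mixed-model comparison is given below; the synthetic table, variable names and effect sizes are placeholders, and fitlme/compare (Statistics and Machine Learning Toolbox) are used here only to illustrate the approach.

% Sketch: nested linear mixed-effects comparison with fitlme (placeholder data).
nS = 12; nB = 10;
subjID     = repelem((1:nS)', nB);
sigmaCurr  = repmat(linspace(2, 18, nB)', nS, 1);
deltaSigma = repmat([0; diff(linspace(2, 18, nB)')], nS, 1);
wA         = 0.5 + 0.01*sigmaCurr + 0.05*randn(nS*nB, 1);      % synthetic auditory weights
tbl = table(wA, sigmaCurr, deltaSigma, categorical(subjID), ...
            'VariableNames', {'wA', 'sigmaCurrent', 'deltaSigma', 'subject'});

reduced = fitlme(tbl, 'wA ~ sigmaCurrent + (1|subject)', 'FitMethod', 'ML');
full    = fitlme(tbl, 'wA ~ sigmaCurrent + deltaSigma + (1|subject)', 'FitMethod', 'ML');
results = compare(reduced, full);          % likelihood-ratio test plus AIC/BIC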

Characterization of observer’s behavior and the model predictions before and after the jumps in visual reliability

One critical question in the discontinuous sinusoidal jump sequence is whether observers continue combining past with current inputs to adapt their visual uncertainty estimates. One may argue that observers detect the discontinuity, ‘reset’ their estimation of visual uncertainty after the jump and therefore do not integrate information from before the jump. In that case, Bayesian learning models would not be ideal for modeling observers’ behavior in the jump sequence, which would be better accommodated by a Bayesian change-point detection model (Adams and Mackay, 2007; Heilbron and Meyniel, 2019).

First, we inserted the jumps selectively in the period sections in which the visual variance was greater, so that changes in visual variance were more difficult to detect. This experimental choice minimized the chances that observers ‘reset’ their estimation of visual uncertainty.

Second, we assessed observers’ estimation strategy at a greater temporal resolution before and after the jumps. To estimate the relative auditory weight wA at a greater resolution, we applied separate regression models to individual sampling points of the visual cloud of dots presented every 200 ms (i.e. no binning). Thus, wA was computed at 5 Hz resolution before and after the jumps (i.e. at time points [−1.9:0.2:1.9] s; Figure 5—figure supplement 2). Because the number of trials was very low on individual sampling points, we pooled the trials across the three up- and down-jumps before computing the regression models. Nevertheless, the small number of trials on individual sampling points (range 7–28 trials across participants and bins) rendered the estimation of the relative auditory weights very unreliable. Thus, individual wA values that were smaller or larger than three times the scaled median absolute deviation were excluded from the analyses in Figure 5—figure supplement 2 and Supplementary file 1-Table 6. To assess statistically whether participants adjusted wA after the jump, we computed a paired t test on wA specifically from the time point before versus after the jump (i.e. −0.1 vs. 0.1 s; Supplementary file 1-Table 6).

Third, we assessed the model fit of our learning models before and after the jumps. If the jumps violate the assumptions of the learning models, we would expect that observers’ behavior deviates from the model’s predictions more strongly after the jump. We computed the root mean squared error of the models’ wA (i.e. (wA,behavior – wA,model)2) before and after the change point (Figure 5—figure supplement 2B) and entered those into a paired t test (i.e. −0.1 vs. 0.1 s; Supplementary file 1-Table 6).

Results: Our data showed that participants and models rapidly and significantly adjusted their weights after the jumps. Critically, the model fits did not significantly differ for the time points just before or after the jumps in visual variance (i.e. if anything, they significantly decreased after the jump; Figure 5—figure supplement 2B and Supplementary file 1-Table 6). Collectively, these control analyses suggest that our Bayesian and exponential learning models adequately modeled observers’ visual uncertainty adaptation both before and after change points (Norton et al., 2019).

Parameter recovery

To test the validity of the Bayesian, exponential, and instantaneous models, we performed parameter recovery by assessing the bias and variability of the parameters fitted to simulated data sets with respect to the true parameters used to generate the data.

For each model, we selected four different parameter sets (within a realistic range of values for parameters σΑ = [6:12]°, Pcommon = [0.7:0.9], σ0 = [6:20]°, κ = [5:20], γ = [0.3:0.7]) and generated data sets of simulated observers for the RW2 sequence. We repeated this process six times (with different initial random seeds), creating a total of 24 simulated data sets for each model. We then fitted the Bayesian, exponential, and instantaneous learner models to each simulated data set (using exactly the same fitting procedures as for observers’ data in the experiments) resulting in 24 sets of best fitting parameters for each model.

In order to assess how well the fitting procedure recovers the generating parameters, we compared the fitted parameters to the ‘true’ parameters used to generate the data. Specifically, we assessed the parameter recovery in terms of bias and variability of the fitted parameters as follows: The recovered parameters’ bias was computed as the signed deviation from the true generating value in percentage. As an example, if a data set was generated with a model parameter of 5, but the fitted (i.e. recovered) parameter was 4, we would compute a −20% deviation. As a measure of the variability for the recovered parameters, we calculated the absolute (i.e. unsigned) deviation from the true generating values in percentage. As an example, a fitted value of 4, relative to a generating value of 5 would be a 20% absolute deviation. We report the median (and first and third quartile) across simulated data sets as a robust measure for this bias and variability.
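The summary measures described above reduce to a few lines of MATLAB, sketched here with placeholder matrices of generating and recovered parameter values (rows = simulated data sets, columns = parameters).

% Sketch: signed and absolute percentage deviations of recovered parameters.
generating = [8 0.8 12 10; 6 0.7 15 18];              % placeholder generating values
fitted     = [7.5 0.82 13 11; 6.3 0.68 14 17];        % placeholder recovered values
devPct     = 100*(fitted - generating)./generating;   % signed deviation (bias) in percent
absDevPct  = abs(devPct);                             % unsigned deviation (variability) in percent
biasSummary = prctile(devPct,    [25 50 75]);         % quartiles across data sets, per parameter
varSummary  = prctile(absDevPct, [25 50 75]);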

Appendix 2

This document describes a Variational Bayes approximation to inference on a generative model that allows for two possible ways in which the stimuli were generated (thus allowing subjects to perform causal inference).

Section (1) describes the full generative model for both a single source and two sources; section (2) explains how an optimal observer can perform inference within either sub-model, through a variational Bayes approximation to the posteriors; and section (3) shows how to calculate the model likelihood for either sub-model, as necessary for combining the two sub-models.

Section (4) finally describes how the results for each causal model are combined into a single posterior.

1 Generative model

The model presented here is an extension of the Causal Inference model of Körding et al., 2007, with the reliability of the visual signal assumed to be changing smoothly over trials according to a random walk (RW). In the case where the visual reliability is constant the model approximates the original Causal Inference model.

In this model (figure Appendix 2—figure 1), at each stimulus presentation, t, subjects assume that the visual dots at positions Vi,t and auditory stimulus at At, are generated through either of two causal models (Ct=1 or Ct=2) with fixed prior probabilities:

Ct ~ Bernoulli(pcommon) (1)

If Ct=1, (single source, St, leading to forced fusion)

St ~ 𝒩(St; μ0, σ0²) (2)
At ~ 𝒩(At; St, σA²) (3)
Ut ~ 𝒩(Ut; St, 1/λV,t) (4)
Vi,t ~ 𝒩(Vi,t; Ut, 1/λV,t) (5)
log λV,t ~ 𝒩(log(λV,t); log(λV,t−1), 1/κ) (6)

If Ct=2 (independent sources, SA,t and SV,t )

SA,t ~ 𝒩(SA,t; μ0, σ0²) (7)
At ~ 𝒩(At; SA,t, σA²) (8)
SV,t ~ 𝒩(SV,t; μ0, σ0²) (9)
Ut ~ 𝒩(Ut; SV,t, 1/λV,t) (10)
Vi,t ~ 𝒩(Vi,t; Ut, 1/λV,t) (11)
log λV,t ~ 𝒩(log(λV,t); log(λV,t−1), 1/κ) (12)

The intermediate variable Ut means that the mean of the visual dots is not located at the true source (St or SV,t), but normally distributed around it. Note that for Ct=1 we can explicitly write SA,t=SV,t=St.

For simplicity, we assume that μ0=0, that is, the prior mean is located at the horizontal center. The auditory STD σA, the prior probability of a single cause, Pcommon and the prior STD, σ0 , are fixed individually for each subject (see main text for fitting procedure).

In the following, we will simplify the notation by referring to P(*|Ct=1) by P1(*) and P(*|Ct=2) by P2(*).

Appendix 2—figure 1. Generative model, for one (C = 1) or two sources (C = 2).


2 Posterior

The full posterior over the latent variables in the model up until time t is

P(SV,1:t, SA,1:t, λV,1:t, U1:t, C1:t | A1:t, V1:N,1:t) = P(SV,1, SA,1, λV,1, U1, C1 | A1, V1:N,1) ∏j=2..t P(SV,j, SA,j, λV,j, Uj, Cj | A1:j, V1:N,1:j, λV,j−1) (13)

Recursively we can write

P(SV,t, SA,t, λV,t, Ut, Ct | A1:t, V1:N,1:t) = ∫ P(SV,t, SA,t, λV,t, Ut, Ct | A1:t, V1:N,1:t, λV,t−1) P(SV,t−1, SA,t−1, Ut−1, Ct−1, λV,1:t−1 | A1:t−1, V1:N,1:t−1) dSV,t−1 dSA,t−1 dUt−1 dλV,1:t−1 (14)
P(SV,t, SA,t, λV,t, Ut, Ct | A1:t, V1:N,1:t) = P(Ct) P(SA,t, SV,t) P(At|SA,t) P(V1:N,t|Ut, λV,t) P(Ut|SV,t, λV,t) ∫ P(λV,t|λV,t−1) P(λV,1:t−1|A1:t−1, V1:N,1:t−1) dλV,1:t−1 / Z (15)

where P(SA,t,SV,t) obviously depends on Ct through the generative model (Appendix 2—figure 1).

If we marginalize over the latent Ct:

P(SV,t,SA,t,λV,t,Ut|A1:t,V1:N,1:t)=P(SV,t,SA,t,λV,t,Ut|A1:t,V1:N,1:t,Ct=1)P(Ct=1|A1:t,V1:N,1:t)+P(SV,t,SA,t,λV,t,Ut|A1:t,V1:N,1:t,Ct=2)P(Ct=2|A1:t,V1:N,1:t) (16)

At this point it should be clear that the posterior is a mixture of the forced fusion and independent solutions, with the mixture determined by the posterior probability of either model generating the data:

P(Ct = 1|A1:t, V1:N,1:t) = [P(At, V1:N,t|Ct = 1) P(Ct = 1)] / [P(At, V1:N,t|Ct = 1) P(Ct = 1) + P(At, V1:N,t|Ct = 2) P(Ct = 2)] (17)

To evaluate this, we need to calculate the marginal model evidence, P(At,V1:N,t|Ct), for either model, see the later section.

2.1 Posterior for C = 1

The full posterior over the latent variables in the single source sub-model is

P1(S1:t, λ1:t, U1:t | A1:t, V1:N,1:t) = P(S1, λV,1, U1 | A1, V1:N,1) ∏i=2..t P(Si, λV,i, Ui | Ai, Vi, λV,i−1) (18)

Recursively we can write

P1(St, λV,t, Ut | A1:t, V1:N,1:t) = ∫ P1(St, λV,t, Ut | A1:t, V1:N,1:t, λV,t−1) P1(St−1, λV,1:t−1, Ut−1 | A1:t−1, V1:N,1:t−1) dSt−1 dUt−1 dλV,1:t−1 (19)
P1(St, λV,t, Ut | A1:t, V1:N,1:t) = ∫ P1(St, λV,t, Ut | A1:t, V1:N,1:t, λV,t−1) P(λV,1:t−1 | A1:t−1, V1:N,1:t−1) dλV,1:t−1 (20)
P1(St, λV,t, Ut | A1:t, V1:N,1:t) = P(St) P(At|St) P(Ut|St, λV,t) P(V1:N,t|Ut, λV,t) ∫ P(λV,t|λV,t−1) P(λV,1:t−1 | A1:t−1, V1:N,1:t−1) dλV,1:t−1 / Z (21)

As we will see later it is convenient to use a change of parameters

θt=log(λV,t) (22)

allowing us to rewrite

P1(St, θt, Ut | A1:t, V1:N,1:t) = P(St) P(At|St) P(Ut|St, θt) P(V1:N,t|Ut, θt) ∫ P(θt|θV,t−1) P(θV,1:t−1 | A1:t−1, V1:N,1:t−1) dθV,1:t−1 / Z (23)

where

P(Ut|St,θt)=𝒩(Ut;St,1/exp(θt)) (24)
P(V1:N,t|Ut, θt) = ∏n 𝒩(Ut; Vn,t, 1/exp(θt)) (25)
P(θt|θV,t-1)=𝒩(θt;θV,t-1,1/κ) (26)

We will assume that

P(θV,1:t-1|A1:t-1,V1:N,1:t-1) (27)

can be approximated by a Normal distribution (see below), thus allowing us to write

∫ P(θt|θV,t−1) P(θV,1:t−1 | A1:t−1, V1:N,1:t−1) dθV,1:t−1 = 𝒩(θt | θV,t−1, 1/κt) (28)

where 1/κt=1/κ+1/τθ,t-1 (due to properties of convolution of two Normal distributions).

The log-posterior (to be used for a variational approximation) is now

log P1(St, λV,t, Ut | A1:t, V1:N,1:t) ∝ −λS St²/2 − λA(At − St)²/2 − λV,t(Ut − St)²/2 + log λV,t/2 − λV,t Σi=1..N (Ut − Vi,t)²/2 + (N/2) log λV,t − κ(θt − θV,t−1)²/2 (29)

2.2 Variational Bayes approximation for C = 1

We will now approximate the log-posterior with variational Bayes by factorization:

P1(St, θt, Ut | At, V1:N,t) ≈ q1(St, Ut, θt) = q1(St) q1(Ut) q1(θt) (30)
2.2.1 q1(St)

For q1(St)

log q1(St) ∝ −λS St²/2 − λA(At − St)²/2 − Eθ(exp(θt)) EU((Ut − St)²)/2 (31)

where EY(X) signifies the expectation of X over the distribution of Y: EY(X) = ∫ P(Y) X dY.

Using EU((Ut − St)²) = EU(Ut² + St² − 2UtSt) = EU(Ut²) + St² − 2St EU(Ut) = (St² − 2St EU(Ut) + EU(Ut)²) − EU(Ut)² + EU(Ut²) = (St − EU(Ut))² − EU(Ut)² + EU(Ut²), where the last two terms do not depend on St (and thus can be discarded), we can rewrite the last term:

log q1(St) ∝ −λS St²/2 − λA(At − St)²/2 − Eθ(exp(θt)) (St − EU(Ut))²/2 (32)
2.2.2 q1(Ut)

For q1(Ut)

log q1(Ut) ∝ −Eθ(exp(θt))/2 Σi (Ut − Vi,t)² − Eθ(exp(θt)) ES((Ut − St)²)/2 (33)

Here, we use the same trick

log q1(Ut) ∝ −Eθ(exp(θt))/2 Σi (Ut − Vi,t)² − Eθ(exp(θt)) (Ut − ES(St))²/2 (34)
2.2.3 q1(θ)

For q1(θ)

log q1(θt) = −exp(θt) EU,S((Ut − St)²)/2 − exp(θt) EU(Σi=1..N (Ut − Vi,t)²)/2 + (N + 1) log(exp(θt))/2 − κ(θt − θt−1)²/2 (35)
2.2.4 Simplifying q1(St) and logq1(Ut)

Inspecting logq1(St) and logq1(Ut) we can see that both q1(St) and q1(Ut) are products of Normal distributions, and thus themselves Normal distributed

q1(St) ∝ 𝒩(St; μS,t, 1/τS,t) (36)

and

q1(Ut) ∝ 𝒩(Ut; μU,t, 1/τU,t) (37)

where

μS,t=(λS*0+λAAt+E(expθt)μU,t)/τS,t (38)
τS,t=λS+λA+E(exp(θt)) (39)
μU,t = (Eθ(exp(θt)) Σi=1..N Vi,t + Eθ(exp(θt)) μS,t)/τU,t (40)
τU,t=(N+1)*Eθ(exp(θt)) (41)

Note that ES(St) ≡ ∫ q1(St) St dSt = μS,t and EU(Ut) ≡ ∫ q1(Ut) Ut dUt = μU,t

2.2.5 Simplifying q1(θt)

Regarding q1(θt) we can expand a little using that

EU,S((Ut-St)2)=EU,S((Ut2+St2-2StUt))=EU,S(Ut2)+E(St2)-2E(St)E(Ut)=μU2+1/τU+μS2+1/τS-2μSμU=(μU-μS)2+1/τU+1/τS (42)

(using that E(X2)=μ2+1/τ for a normal distribution 𝒩(X;μ,1/τ)) and

EU(Σi=1..N (Ut − Vi,t)²) = EU(Σi=1..N (Ut² + Vi,t² − 2UtVi,t)) = EU(Σi=1..N Ut² + Σi=1..N Vi,t² − 2 Σi=1..N UtVi,t) = EU(N·Ut² + Σi=1..N Vi,t² − 2Ut Σi=1..N Vi,t) = N·(μU² + 1/τU) + Σi=1..N Vi,t² − 2μU Σi=1..N Vi,t = N/τU + Σi=1..N (Vi,t − μU)² (43)

Which together gives:

EU,S((Ut − St)²) + EU(Σi=1..N (Ut − Vi,t)²) = (μU − μS)² + 1/τS + (N + 1)/τU + Σi=1..N (Vi,t − μU)² (44)

2.3 Approximating q(θ)

We will approximate q1(θ) with a Normal distribution.

To do this, we use a Laplace approximation around the max of q1(θ) (see figure Appendix 2—figure 2): argmax(q1(θt))=μθ,t and with second derivative -τθ,t

This gives

q1(θt) ≈ 𝒩(θt | μθ,t, 1/τθ,t) (45)
2.3.1 First derivative

However, in order to find argmax(q1(θt)), we differentiate logq1(θt) and set equal to 0:

d log q1(θt)/dθt = −exp(θt) EU,S((Ut − St)²)/2 − exp(θt) EU(Σi=1..N (Ut − Vi,t)²)/2 + (N + 1)/2 − κ(θt − θt−1) = 0 (46)
exp(θt) (EU,S((Ut − St)²)/2 + EU(Σi=1..N (Ut − Vi,t)²)/2) = (N + 1)/2 − κ(θt − θt−1) (47)

At this point there is no analytical solution.

2.3.2 Taylor expansion of first derivative

Although we could use a numerical approximation for speed of implementation, we use Taylor expansion. We need to solve for θ

exp(θt) (EU,S((Ut − St)²)/2 + EU(Σi=1..N (Ut − Vi,t)²)/2) − (N + 1)/2 + κ(θt − θt−1) = 0 (48)

For simplicity we refer to SV = EU,S((Ut − St)²) + EU(Σi=1..N (Ut − Vi,t)²)

We can solve this by using the third-order Taylor expansion of the exponential:

exp(θt) ≈ e(θ*) + e(θ*)(θt − θ*) + e(θ*)(θt − θ*)²/2 + e(θ*)(θt − θ*)³/6 = e(θ*)(1 − θ* + θ*²/2 − θ*³/6 + (1 − 2θ*/2 + 3θ*²/6)θt + (1/2 − 3θ*/6)θt² + θt³/6) = e(θ*)(1 − θ* + θ*²/2 − θ*³/6 + (1 − θ* + θ*²/2)θt + (1/2 − θ*/2)θt² + θt³/6) (49)

We can set θ* as θt-1, as we expect that θt will be close to θ*. For big changes in the variance of Vi this can be off; however, this was not a problem in this stimulus set which relied on slow gradual changes.

In order to find argmax(q1(θt)) we therefore have to solve

SV/2*e(θ*)(1-θ*+θ*2/2-θ*3/6+(1-θ*+θ*2/2)θt+(1/2-θ*/2)θt2+θt3/6)-(N+1)/2+κ(θt-θt-1)=0 (50)

which can be rewritten as a third order polynomial

θt3+c1θt2+c2θt+c3=0 (51)

where

c1 = 1/2 − θ*/2
c2 = 1 − θ* + θ*² + κ/(SV/2 · e(θ*))
c3 = 1 − θ* + θ*²/2 + θ*³/6 − (N/2 + 1/2 + κθt−1)/(SV/2 · e(θ*)) (52)

the solution to which, θtoptim, can be numerically found using Matlab’s nthroot function.

As we assume that the log-variance only changes slightly between trials the solution closest to the previous value θt-1 is automatically chosen, argmax(q1(θt))=μθt=θtoptim.

2.3.3 Second derivative

The second derivative is

d2logq1(θt)/dθt2=-expθt*SV/2-κ (53)
Appendix 2—figure 2. Approximation of theta using Laplace approximation.


We evaluate this at argmax(q1(θt)), so we insert θt=μθ,t

Hence we can finally write

q1(θt) ≈ 𝒩(θt | μθ,t, 1/τθ,t) (54)

where

μθ,t=θtoptim (55)
τθ,t=exp(μθ,t)*(SV)/2+κ (56)

and where SV = (μU − μS)² + 1/τS + (N + 1)/τU + Σi=1..N (Vi,t − μU)²

Since q1(θt) is a Normal distribution, q1(λV,t) is a log-normal distribution with μλV,t = E(λV,t) = E(exp(θt)) = exp(μθ,t + 1/(2τθ,t)) (a general property of the log-normal distribution).

2.4 Final algorithm for C = 1

We can now create an iterative algorithm that represents the model posterior for each time step t. The variables σA² = 1/λA, σ0² = 1/λ0 and κ have to be set beforehand, together with the input data A1:t and V1:N,1:t. For time step t:

1. initially set

μθ,t = μθ,t−1, (57)
μS,t = 0, (58)
μU,t = 1/N Σi Vi,t, (59)
τθ,t = 1 (60)

2. set μS,τS,t

μS,t=(λAAt+exp(μθt+1/(2*τθt))μU,t)/τS,t (61)
τS,t=λS+λA+exp(μθt+1/(2*τθt)) (62)

3. set μU,t,τU,t

μU,t=(N/(N+1))Vt¯+(1/(N+1))μS,t (63)
τU,t=(N+1)*exp(μθt+1/(2*τθt)) (64)

where V̄t = 1/N Σi=1..N Vi,t

4. find μθt by solving third order polynomial, Equation 51,

μθ,t=θoptim (65)

then set τθt

τθ,t = κt + exp(μθ,t) · ((μU,t − μS,t)² + 1/τS,t + (N + 1)/τU,t + Σi=1..N (Vi,t − μU,t)²)/2 (66)

where κt=1/(1/κ+1/τθ,t-1)

5. Repeat steps 2–4 until the change in each parameter is small (<0.0001)

This is then repeated for each time step t, providing us with the approximation to the posterior P1(St, θt, Ut | At, V1:N,t) ≈ q1(St, Ut, θt) = q1(St) q1(Ut) q1(θt).
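A compact MATLAB sketch of these updates for a single trial is given below. It is a simplified illustration rather than the implementation used in the paper: in step 4 the maximum of log q1(θ) is found numerically with fminsearch instead of solving the third-order polynomial of Equation 51, and the effective prior precision κt is used as in the step 4 note above.

% Sketch: one trial of the C = 1 variational updates (steps 1-5), simplified.
function [muS, muU, muTheta, tauS, tauU, tauTheta] = ...
        vbStepC1(At, V, muThetaPrev, tauThetaPrev, lambdaA, lambdaS, kappa)
N = numel(V); Vbar = mean(V);
kappaT = 1/(1/kappa + 1/tauThetaPrev);          % effective precision of the theta prior
muTheta = muThetaPrev; tauTheta = 1;            % step 1: initialisation
muS = 0; muU = Vbar; tauS = NaN; tauU = NaN;
for it = 1:100                                  % iterate steps 2-4 until the theta update stabilises
    ElamV = exp(muTheta + 1/(2*tauTheta));      % E(exp(theta)) under q1(theta)
    tauS = lambdaS + lambdaA + ElamV;           % step 2 (Eqs. 61-62)
    muS  = (lambdaA*At + ElamV*muU)/tauS;
    tauU = (N + 1)*ElamV;                       % step 3 (Eqs. 63-64)
    muU  = (N/(N + 1))*Vbar + (1/(N + 1))*muS;
    SV   = (muU - muS)^2 + 1/tauS + (N + 1)/tauU + sum((V - muU).^2);
    % step 4: maximise log q1(theta) numerically (substitute for Eq. 51)
    negLogQ = @(th) exp(th)*SV/2 - (N + 1)*th/2 + kappaT*(th - muThetaPrev)^2/2;
    muThetaNew = fminsearch(negLogQ, muTheta);
    tauTheta   = kappaT + exp(muThetaNew)*SV/2; % curvature at the maximum (Eq. 66)
    if abs(muThetaNew - muTheta) < 1e-4
        muTheta = muThetaNew; break
    end
    muTheta = muThetaNew;
end
end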

2.5 Posterior for C = 2

Due to the independent structure this posterior can be written as

P2(SA,t,SV,t,λV,t,Ut|At,V1:N,t)=P(SA,t|At)P(SV,t,λV,t,Ut|V1:N,t) (67)

where

P2(SA,t|At)=P(SA,t)P(At|SA,t)/Z (68)

which is simple enough given the Normal distribution of both P(SA,t) and P(At|SA,t)

P2(SA,t|At)=𝒩(SA,t;AtσA02/σA2,σA02) (69)

where σA02=1/(1/σA2+1/σ02).

Note that for the subject response the posterior P2(SA,t|At) is all that is needed, but for the calculation of the prior P(λV,t) for subsequent trial t+1 we need to compute the full posterior.

We again use the transformation of parameters

θt=log(λV,t) (70)

Proceeding with just the posterior over SV,t, Ut and θt

PC=2(SV,t, θt, Ut | A1:t, V1:N,1:t) = P(SV,t) P(Ut|SV,t, θt) P(V1:N,t|Ut, θt) ∫ P(θt|θV,t−1) P(θV,1:t−1 | A1:t−1, V1:N,1:t−1) dθV,1:t−1 / Z (71)

where

P(Ut|SV,t,θt)= 𝒩(Ut;SV,t,1/exp(θt)) (72)
P(V1:N,t|Ut, θt) = ∏n 𝒩(Ut; Vn,t, 1/exp(θt)) (73)
P(θt|θt1)= 𝒩(θt;θt1,1/κ) (74)

We will assume that P(θV,1:t-1|A1:t-1,V1:N,1:t-1) can be approximated by a Normal distribution (see q1(θ) below), thus allowing us to use properties of Normal distributions.

Hence,

∫ P(θt|θt−1) P(θV,1:t−1 | A1:t−1, V1:N,1:t−1) dθV,1:t−1 = 𝒩(θt; θt−1, 1/κt) (75)

where 1/κt=1/κ+1/τθ,t-1 (due to the convolution of P(θt|θt-1) with P(θt-1|A1:t-1,V1:N,1:t-1), both Normal distributed).

While any estimate of θt will depend on A1:t−1 and V1:N,1:t−1, for ease of notation we will omit those in the following.

The log-posterior is now

log P2(SV,t, θt, Ut | A1:t, V1:N,1:t) ∝ −λ0 SV,t²/2 − exp(θt)(Ut − SV,t)²/2 + θt/2 − exp(θt) Σi=1..N (Ut − Vi,t)²/2 + Nθt/2 − κ(θt − θV,t−1)²/2 (76)

2.6 Variational Bayes approximation for C = 2

We will now approximate the log-posterior with variational Bayes by factorization: P2(St, θt, Ut | At, V1:N,t) ≈ q2(St, Ut, θt) = q2(St) q2(Ut) q2(θt). This proceeds similarly to the combined (C = 1) model, but with SV,t instead of St, and with no influence from At. For completeness the calculations are included here:

2.6.1 q2(St)

For q2(St)

log q2(SV,t) ∝ −λ0 SV,t²/2 − Eθ(exp(θt)) EU((Ut − SV,t)²)/2 (77)

where EY(X) signifies the expectation of X over the distribution of Y: EY(X) = ∫ P(Y) X dY.

Using EU((Ut − SV,t)²) = EU(Ut² + SV,t² − 2UtSV,t) = EU(Ut²) + SV,t² − 2SV,t EU(Ut) = (SV,t² − 2SV,t EU(Ut) + EU(Ut)²) − EU(Ut)² + EU(Ut²) = (SV,t − EU(Ut))² − EU(Ut)² + EU(Ut²), where the last two terms do not depend on SV,t (and thus can be discarded), we can rewrite the last term:

log q2(SV,t) ∝ −λ0 SV,t²/2 − Eθ(exp(θt)) (SV,t − EU(Ut))²/2 (78)
2.6.2 q2(Ut)

For q2(Ut)

log q2(Ut) ∝ −Eθ(exp(θt))/2 Σi (Ut − Vi,t)² − Eθ(exp(θt)) ES((Ut − SV,t)²)/2 (79)

Here, we use the same trick

log q2(Ut) ∝ −Eθ(exp(θt))/2 Σi (Ut − Vi,t)² − Eθ(exp(θt)) (Ut − ES(SV,t))²/2 (80)
2.6.3 q2(θt)

For q2(θ)

log q2(θt) = −exp(θt) EU,S((Ut − SV,t)²)/2 − exp(θt) EU(Σi=1..N (Ut − Vi,t)²)/2 + (N + 1) log(exp(θt))/2 − κ(θt − θt−1)²/2 (81)
2.6.4 Simplifying q2(St) and logq2(Ut)

Inspecting logq2(SV,t) and logq2(Ut) we can see that both q2(SV,t) and q2(Ut) are products of Normal distributions, and thus themselves Normal distributed

q2(SV,t) ∝ 𝒩(SV,t | μSV,t, 1/τSV,t) (82)

and

q2(Ut) ∝ 𝒩(Ut | μU,t, 1/τU,t) (83)

where

μSV,t=(λ0*0+E(exp(θt))μU,t)/τSV,t (84)
τSV,t=λ0+E(exp(θt)) (85)
μU,t = (Eθ(exp(θt)) Σi=1..N Vi,t + Eθ(exp(θt)) μSV,t)/τU,t (86)
τU,t=(N+1)*Eθ(exp(θt)) (87)

Note that ES(SV,t) ≡ ∫ q2(SV,t) SV,t dSV,t = μSV,t and EU(Ut) ≡ ∫ q2(Ut) Ut dUt = μU,t

We can approximate q2(θ) with a Normal distribution, in exactly the same way as for C = 1. As equations are identical (see above) they will not be repeated here.

2.7 Final algorithm for C = 2

We can now create an iterative algorithm that, for each time step t, represents the variational Bayes approximation of the model posterior over SV,t, Ut and λV,t (or rather θt), P(SV,t, Ut, θt | V1:N,t). The variable κ has to be set beforehand, together with the input data A1:t and V1:N,1:t. For time step t:

1. initially set μθ,t = μθ,t−1, μU,t = 1/N Σi Vi,t, and τθ,t = 1

2. set μSV,τSV,t

μSV,t=(exp(μθt+1/(2*τθ,t))μU,t)/τSV,t (88)
τSV,t=λ0+exp(μθt+1/(2*τθ,t)) (89)

3. set μU,τU,t

μU,t=(NVt¯+μSV,t)/(N+1) (90)
τU,t=(N+1)*exp(μθt+1/(2*τθ,t)) (91)

where V̄t = 1/N Σi=1..N Vi,t

4. find μθ,t by numerically solving polynomial, Equation 51,

μθ,t=θoptim (92)

then set τθ,t

τθ,t = κt + exp(μθ,t) · ((μU,t − μSV,t)² + 1/τSV,t + (N + 1)/τU,t + Σi=1..N (Vi,t − μU,t)²)/2 (93)

where κt=1/(1/κ+1/τθ,t-1)

5. Repeat steps 2–4 until convergence, that is, until the change in each parameter is small (<0.0001)

This is then repeated for each time step t, providing us with the approximation to the posterior P2(St, θt, Ut | At, V1:N,t) ≈ q2(St, Ut, θt) = q2(St) q2(Ut) q2(θt).

See Appendix 2—figure 3 below for an example of the learned inference of the visual variance σV,t² = 1/λV,t (obtained from q2(θt)), compared with a simple instantaneous learner model that assumes σV,t² = 1/N Σi (Vi,t − V̄t)².

Appendix 2—figure 3. Comparing variational Bayes approximation with a numerical discretised grid approximation.


Top row: Example visual stimuli over eight subsequent trials. Middle row: The distribution of estimated sample variance, with no learning over trials. Bottom row: The distribution of σV,t for the Bayesian model that incorporates the learning across trials. Red line is the numerical comparison when using a discretised grid to estimate variance, as opposed to the variational Bayes (green line).

3 Marginal model evidence

Recall that the posterior is a mixture of the forced fusion and independent solutions, with the mixture determined by the posterior probability of either model generating the data:

P(Ct = 1|At, V1:N,t) = [P(At, V1:N,t|Ct = 1) P(Ct = 1)] / [P(At, V1:N,t|Ct = 1) P(Ct = 1) + P(At, V1:N,t|Ct = 2) P(Ct = 2)] (94)

To evaluate this, we need to calculate the marginal model evidence, P(At,V1:N,t|Ct), for either model.

One way to do so is by a sampling approximation, but here we utilise the variational results we have already found.

3.1 Model likelihood for C = 2, two sources SV,t,SA,t

We need to evaluate the model likelihood for both C=1 and C=2. The case for C=2 is slightly simpler, hence we start with this:

P(At, V1:N,t|Ct = 2) = P(At|Ct = 2) P(V1:N,t|Ct = 2) = ∫ P(At|SA,t, Ct = 2) P(SA,t|Ct = 2) dSA,t · ∫ P(V1:N,t|Ut, λt, Ct = 2) P(Ut, λt, SV,t|Ct = 2) dUt dλt dSV,t (95)

The first integral is easy as it is just the integral of the product of two Normal distributions.

∫ P2(At|SA,t) P2(SA,t) dSA,t = 1/√(2π(σA² + 1/τ0)) exp(−(At − μ0)²/(2(σA² + 1/τ0))) (96)

It is however more convenient to operate in log-space

log P2(At) = −log(2π(σA² + 1/τ0))/2 − (At − μ0)²/(2(σA² + 1/τ0)) (97)

The second integral we approximate through the Free Energy that we already maximise iteratively in the variational Bayes algorithm.

log P(V1:N,t|Ct = 2) = log ∫ P(V1:N,t|Ut, θt, SV,t, Ct = 2) P(Ut, θt, SV,t|Ct = 2) dUt dθt dSV,t ≈ L2(q) = ∫ q2(Ut, θt, SV,t) log[P2(V1:N,t|Ut, θt, Ct) P2(Ut|SV,t, θt, Ct) P2(SV,t|Ct) P2(θt|Ct) / q2(Ut, θt, SV,t)] dUt dθt dSV,t (98)

(this approximation becomes exact if the variational approximation is exact, that is, if the Kullback-Leibler divergence between the posterior P2(Ut, θt, SV,t|V1:N) and the approximation q2(Ut, θt, SV,t) becomes zero.)

This can be interpreted as taking the expectation with regard to the posterior approximation, and due to the properties of the logarithm this can be separated into a sum of expectations:

L2(q) = E(log P2(V1:N,t|Ut, θt)) + E(log P2(Ut|SV,t, θt)) + E(log P2(SV,t)) + E(log P2(θt)) − E(log q2(Ut)) − E(log q2(θt)) − E(log q2(SV,t)) (99)

where (due to Equation 43)

E(log P2(V1:N,t|Ut, θt)) = E(log ∏i P2(Vi,t|Ut, θt)) = E(log ∏i √(exp(θt)/(2π)) exp(−(Vi,t − Ut)² exp(θt)/2)) = E(Σi log√(exp(θt)/(2π)) − (Vi,t − Ut)² exp(θt)/2) = N/2 (Eθt(θt) − log(2π)) − EUt(Σi (Vi,t − Ut)²) E(exp(θt))/2 = N/2 (μθ − log(2π)) − (N/τU,t + Σi (Vi,t − μU)²) exp(μθ,t + 1/(2τθ,t))/2 (100)

and (since E(X2)=μX2+σX2)

E(log P2(Ut|SV,t, θt)) = E(log 𝒩(Ut; SV,t, 1/exp(θt))) = E(log √(exp(θt)/(2π)) exp(−(Ut − SV,t)² exp(θt)/2)) = (μθ,t − log(2π))/2 − [(μU − μSV,t)² + 1/τU,t + 1/τSV,t] exp(μθ,t + 1/(2τθ,t))/2 (101)

and

E(log P2(SV,t)) = E(log 𝒩(SV,t; μ0, σ0²)) = E(log 1/√(σ0² 2π) exp(−(SV,t − μ0)²/(2σ0²))) = −log(σ0² 2π)/2 − [(μSV,t − μ0)² + 1/τSV,t]/(2σ0²) (102)

and (due to Equation 75)

E(log P2(θt)) = E(log 𝒩(θt; θt−1, 1/κ)) = E(log √(κ/(2π)) exp(−(θt − μθ,t−1)² κ/2)) = (log κ − log(2π))/2 − [(μθ,t − μθ,t−1)² + 1/τθ,t] κ/2 (103)

and

E(log q2(Ut)) = E(log 𝒩(Ut; μU,t, 1/τU,t)) = E(log(√(τU,t/(2π)) exp(−(Ut − μU,t)² τU,t/2))) = log√(τU,t/(2π)) − E((Ut − μU,t)² τU,t/2) = (log τU,t − log(2π))/2 − (E(Ut²) + μU,t² − 2μU,t E(Ut)) τU,t/2 = (log τU,t − log(2π))/2 − (μU,t² + 1/τU,t + μU,t² − 2μU,t μU,t) τU,t/2 = (log τU,t − log(2π) − 1)/2 (104)

and

E(log q2(SV,t)) = E(log 𝒩(SV,t; μSV,t, 1/τSV,t)) = E(log √(τSV,t/(2π)) exp(−(SV,t − μSV,t)² τSV,t/2)) = (log τSV,t − log(2π) − 1)/2 (105)

and

E(log q2(θt)) = E(log 𝒩(θt; μθ,t, 1/τθ,t)) = (log τθ,t − log(2π) − 1)/2 (106)

In total we now have

log P2(At, V1:N,t|Ct) ≈ log P2(At) + L2 = −(log(2π) + log(σA² + 1/τ0))/2 − (At − μ0)²/(2(σA² + 1/τ0)) + (μθ,t − log(2π)) N/2 − [N/τU,t + Σi=1..N (Vi,t − μU)²] exp(μθ,t + 1/(2τθ,t))/2 + (μθ,t − log(2π))/2 − [(μU − μSV,t)² + 1/τU,t + 1/τSV,t] exp(μθ,t + 1/(2τθ,t))/2 − log(σ0² 2π)/2 − [(μSV,t − μ0)² + 1/τSV,t]/(2σ0²) + (log κ − log(2π))/2 − [(μθ,t − μθ,t−1)² + 1/τθ,t] κ/2 − (log τU,t − log(2π) − 1)/2 − (log τSV,t − log(2π) − 1)/2 − (log τθ,t − log(2π) − 1)/2 (107)

Although lengthy, this is trivial and fast to compute numerically (e.g. in Matlab). Note that all estimates come from the variational Bayes approximation q2(St, Ut, θt).

3.2 Model likelihood for C = 1, one source St=SV,t=SA,t

We now need to do the same for the one source model.

P(At, V1:N,t|Ct = 1) = P1(At, V1:N,t) = ∫ P1(At|St, Ct = 1) P1(V1:N,t|Ut, λt, Ct = 1) P1(Ut, λt, St|Ct = 1) dUt dλt dSt (108)

Note that for simplicity in notation we will use P1 to indicate the probability within the model given Ct=1

We again approximate through the Free Energy that we already maximised iteratively in the variational Bayes algorithm.

log P1(At, V1:N,t) = log ∫ P1(V1:N,t|Ut, θt, St) P1(At|St) P1(Ut, θt, St) dUt dθt dSt ≈ LCt=1(q1) = ∫ q1(Ut, θt, St) log[P1(V1:N,t|Ut, θt) P1(At|St) P1(Ut|St, θt) P1(St) P1(θt) / q1(Ut, θt, St)] dUt dθt dSt (109)

(this approximation becomes exact if the variational approximation is exact, i.e. if the Kullback-Leibler divergence between the posterior P1(Ut, θt, St|At, V1:N,t) and the approximation q1(Ut, θt, St) becomes zero.)

This can be interpreted as taking the expectation with regard to the posterior approximation, and due to the properties of the logarithm this can be separated into a sum of expectations:

L1(q)=E(logP1(At|St))+E(logP1(V1:N,t|Ut,λt))+E(logP1(Ut))+E(logP1(θt))+E(logP1(SV,t))-E(logq1(Ut))-E(logq1(θt))-E(logq1(SV,t)) (110)

where (since E(X2)=μX2+σX2)

E(log P1(At|St)) = E(log 1/√(2πσA²) exp(−(At − St)²/(2σA²))) = −log(σA² 2π)/2 − [(At − μS)² + 1/τS,t]/(2σA²) (111)
E(log P1(V1:N,t|Ut, θt)) = E(log ∏i P1(Vi,t|Ut, θt)) = E(log ∏i √(exp(θt)/(2π)) exp(−(Vi,t − Ut)² exp(θt)/2)) = E(Σi log√(exp(θt)/(2π)) − (Vi,t − Ut)² exp(θt)/2) = N/2 (Eθt(θt) − log(2π)) − E(exp(θt))/2 · EUt(Σi (Vi,t − Ut)²) = N/2 (μθ,t − log(2π)) − exp(μθ,t + 1/(2τθ,t))/2 · (N/τU,t + Σi (Vi,t − μU)²) (112)

and

E(log P1(Ut|St, θt)) = E(log 𝒩(Ut; St, 1/exp(θt))) = E(log √(exp(θt)/(2π)) exp(−(Ut − St)² exp(θt)/2)) = (μθ,t − log(2π))/2 − [(μU − μS,t)² + 1/τU,t + 1/τS,t] exp(μθ,t + 1/(2τθ,t))/2 (113)

and

E(log P1(St)) = E(log 𝒩(St; μ0, σ0²)) = E(log 1/√(σ0² 2π) exp(−(St − μ0)²/(2σ0²))) = −log(σ0² 2π)/2 − [(μS,t − μ0)² + 1/τS,t]/(2σ0²) (114)

and (due to Equation 75)

E(log P1(θt)) = E(log 𝒩(θt; θt−1, 1/κ)) = E(log √(κ/(2π)) exp(−(θt − μθ,t−1)² κ/2)) = (log κ − log(2π))/2 − [(μθ,t − μθ,t−1)² + 1/τθ,t] κ/2 (115)

and

E(log q1(Ut)) = E(log 𝒩(Ut; μU,t, 1/τU,t)) = E(log √(τU,t/(2π)) exp(−(Ut − μU,t)² τU,t/2)) = log√(τU,t/(2π)) − E((Ut − μU,t)² τU,t/2) = (log τU,t − log(2π))/2 − (E(Ut²) + μU,t² − 2μU,t E(Ut)) τU,t/2 = (log τU,t − log(2π) − 1)/2 (116)

and

E(log q1(St)) = E(log 𝒩(St; μS,t, 1/τS,t)) = E(log √(τS,t/(2π)) exp(−(St − μS,t)² τS,t/2)) = (log τS,t − log(2π) − 1)/2 (117)

and

E(log q1(θt)) = E(log 𝒩(θt; μθ,t, 1/τθ,t)) = (log τθ,t − log(2π) − 1)/2 (118)

In total we now have

log P1(At, V1:N,t) ≈ L1(q1) = −log(σA² 2π)/2 − [(At − μS)² + 1/τS,t]/(2σA²) + N/2 (μθ,t − log(2π)) − (N/τU,t + Σi=1..N (Vi,t − μU)²) exp(μθ,t + 1/(2τθ,t))/2 + (μθ,t − log(2π))/2 − [(μU − μS,t)² + 1/τU,t + 1/τS,t] exp(μθ,t + 1/(2τθ,t))/2 − log(σ0² 2π)/2 − [(μS,t − μ0)² + 1/τS,t]/(2σ0²) + (log κ − log(2π))/2 − [(μθ,t − μθ,t−1)² + 1/τθ,t] κ/2 − (log τU,t − log(2π) − 1)/2 − (log τS,t − log(2π) − 1)/2 − (log τθ,t − log(2π) − 1)/2 (119)

(where all first and second order moments (μ,τ) have been derived from q1).

This is identical to the result from C=2 except for the first line.

In total this provides us with an approximation to the model evidence for each model, P(At,V1:N,t|Ct=1) and P(At,V1:N,t|Ct=2), which can be used to calculate the posterior probability of either model given data, P(Ct|A1:t,V1:N,1:t).

4 Putting it all together

For either sub-model, the factorization (due to assumptions and variational Bayes approximation) allows us to write out the equations for the variable for subject choice:

P(St|A1:t, V1:N,1:t) = P(St|A1:t, V1:N,1:t, Ct = 1) P(Ct = 1|A1:t, V1:N,1:t) + P(SA,t|A1:t, V1:N,1:t, Ct = 2) P(Ct = 2|A1:t, V1:N,1:t) ≈ qC=1,t(St) P(Ct = 1|A1:t, V1:N,1:t) + P(SA,t|A1:t, Ct = 2) P(Ct = 2|A1:t, V1:N,1:t) (120)

This is now a mixture of two Gaussian distributions (due to the Variational Bayes approximation), with mixture weights given by the model evidence (partly approximated by the Free Energy).

We will assume subjects report the mean of the distribution that is,

St^=S^C=1,tP(Ct=1|A1:t,V1:N,1:t)+S^C=2,tP(Ct=2|A1:t,V1:N,1:t) (121)

where S^C=1,t=μS,t for C=1 and S^C=2,t=μS,A,t for C=2.

We also need a prior over the visual log-reliability for the following trial

P(θt|A1:t,V1:N,1:t)=P(θt|A1:t,V1:N,1:t,C=1)P(C=1|A1:t,V1:N,1:t)+P(θt|A1:t,V1:N,1:t,C=2)P(C=2|A1:t,V1:N,1:t) (122)

While this is a mixture of two Gaussians, we need the prior to be a single Gaussian in order for our approximation scheme above to work. We will approximate this mixture with a single Gaussian (essentially fitting a Gaussian to the mixture of two Gaussians).

P(θt|A1:t, V1:N,1:t) ≈ 𝒩(θt | μθ,t, 1/τθ,t) (123)

where (due to the first- and second-order moments of mixture distributions)

μθ,t=μθ,t,C=1P(Ct=1|A1:t,V1:N,1:t)+μθ,t,C=2P(Ct=2|A1:t,V1:N,1:t) (124)
1/τθ,t=(1/τθ,t,C=1+μθ,t,C=12)P(Ct=1|A1:t,V1:N,1:t)+(1/τθ,t,C=2+μθ,t,C=22)P(Ct=2|A1:t,V1:N,1:t)-μθ,t2 (125)

While fitting a Gaussian to the sum of two Gaussians could be a very inexact approximation, in practice the two individual distributions are close enough for this not to be a problem (as any contribution from At to the posterior of θt is very small).
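The moment matching of Equations 124-125 amounts to the following minimal MATLAB sketch; the numerical values are placeholders.

% Sketch: collapse the two-component mixture over theta into one Gaussian by moment matching.
pC1 = 0.8; pC2 = 1 - pC1;                       % posterior causal probabilities (placeholder)
mu1 = 0.4; tau1 = 20;                           % q(theta | C = 1): mean and precision (placeholder)
mu2 = 0.5; tau2 = 25;                           % q(theta | C = 2): mean and precision (placeholder)
muTheta  = pC1*mu1 + pC2*mu2;                                        % Eq. 124
varTheta = pC1*(1/tau1 + mu1^2) + pC2*(1/tau2 + mu2^2) - muTheta^2;  % Eq. 125
tauTheta = 1/varTheta;                          % precision of the collapsed Gaussian prior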

In conclusion, subjects report St^ (through a button response, see Equation 121) and they propagate the posterior P(θt|A1:t,V1:N,1:t) (see Equation 123) as prior for the next trial.

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Ulrik Beierholm, Email: ulrik.beierholm@durham.ac.uk.

Tobias Reichenbach, Imperial College London, United Kingdom.

Andrew J King, University of Oxford, United Kingdom.

Funding Information

This paper was supported by the following grants:

  • European Research Council ERC-multsens,309349 to Uta Noppeney.

  • Max Planck Society to Tim Rohe, Uta Noppeney.

  • Deutsche Forschungsgemeinschaft DFG RO 5587/1-1 to Tim Rohe.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Resources, Software, Formal analysis, Investigation, Visualization, Methodology, Writing - review and editing.

Conceptualization, Resources, Data curation, Formal analysis, Funding acquisition, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing.

Data curation, Project administration.

Conceptualization, Methodology.

Conceptualization, Formal analysis, Supervision, Funding acquisition, Investigation, Methodology, Writing - review and editing.

Ethics

Human subjects: All volunteers participated in the study after giving written informed consent. The study was approved by the human research review committee of the University of Tuebingen (approval number 432 2007 BO1) and the research review committee of the University of Birmingham (approval number ERN_15-1458AP1).

Additional files

Supplementary file 1. Seven tables showing results of additional analyses.
elife-54172-supp1.docx (54.1KB, docx)
Transparent reporting form

Data availability

The human behavioral raw data and computational model predictions as well as the code for computational modelling and analyses scripts are available in an OSF repository: https://osf.io/gt4jb/.

The following dataset was generated:

Beierholm U, Rohe T, Noppeney U. 2020. Using the past to estimate sensory uncertainty. Open Science Framework.

References

  1. Acerbi L, Vijayakumar S, Wolpert DM. On the origins of suboptimality in human probabilistic inference. PLOS Computational Biology. 2014;10:e1003661. doi: 10.1371/journal.pcbi.1003661.
  2. Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLOS Computational Biology. 2018;14:e1006110. doi: 10.1371/journal.pcbi.1006110.
  3. Acerbi L, Ma WJ. Practical Bayesian optimization for model fitting with Bayesian adaptive direct search. Advances in Neural Information Processing Systems; 2017. pp. 1836–1846.
  4. Adams RP, MacKay DJ. Bayesian online changepoint detection. arXiv. 2007. https://arxiv.org/abs/0710.3742
  5. Alais D, Burr D. The ventriloquist effect results from near-optimal bimodal integration. Current Biology. 2004;14:257–262. doi: 10.1016/j.cub.2004.01.029.
  6. Algazi VR, Duda RO, Thompson DM, Avendano C. The CIPIC HRTF database. IEEE Workshop on the Applications of Signal Processing to Audio and Acoustics; 2001.
  7. Aller M, Noppeney U. To integrate or not to integrate: temporal dynamics of hierarchical Bayesian causal inference. PLOS Biology. 2019;17:e3000210. doi: 10.1371/journal.pbio.3000210.
  8. Battaglia PW, Jacobs RA, Aslin RN. Bayesian integration of visual and auditory signals for spatial localization. Journal of the Optical Society of America A. 2003;20:1391–1397. doi: 10.1364/JOSAA.20.001391.
  9. Beck JM, Ma WJ, Kiani R, Hanks T, Churchland AK, Roitman J, Shadlen MN, Latham PE, Pouget A. Probabilistic population codes for Bayesian decision making. Neuron. 2008;60:1142–1152. doi: 10.1016/j.neuron.2008.09.021.
  10. Behrens TE, Woolrich MW, Walton ME, Rushworth MF. Learning the value of information in an uncertain world. Nature Neuroscience. 2007;10:1214–1221. doi: 10.1038/nn1954.
  11. Berniker M, Voss M, Kording K. Learning priors for Bayesian computations in the nervous system. PLOS ONE. 2010;5:e12686. doi: 10.1371/journal.pone.0012686.
  12. Bishop CM. Pattern Recognition and Machine Learning. New York: Springer; 2006.
  13. Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10:433–436. doi: 10.1163/156856897X00357.
  14. Drugowitsch J, DeAngelis GC, Klier EM, Angelaki DE, Pouget A. Optimal multisensory decision-making in a reaction-time task. eLife. 2014;3:e03005. doi: 10.7554/eLife.03005.
  15. Drugowitsch J, Wyart V, Devauchelle AD, Koechlin E. Computational precision of mental inference as critical source of human choice suboptimality. Neuron. 2016;92:1398–1411. doi: 10.1016/j.neuron.2016.11.005.
  16. Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–433. doi: 10.1038/415429a.
  17. Fiser J, Berkes P, Orbán G, Lengyel M. Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences. 2010;14:119–130. doi: 10.1016/j.tics.2010.01.003.
  18. Gelman A, Stern HS, Carlin JB, Dunson DB, Vehtari A, Rubin DB. Bayesian Data Analysis. Chapman and Hall/CRC; 2013.
  19. Gelman A, Hwang J, Vehtari A. Understanding predictive information criteria for Bayesian models. Statistics and Computing. 2014;24:997–1016. doi: 10.1007/s11222-013-9416-2.
  20. Heilbron M, Meyniel F. Confidence resets reveal hierarchical adaptive learning in humans. PLOS Computational Biology. 2019;15:e1006972. doi: 10.1371/journal.pcbi.1006972.
  21. Hou H, Zheng Q, Zhao Y, Pouget A, Gu Y. Neural correlates of optimal multisensory decision making under time-varying reliabilities with an invariant linear probabilistic population code. Neuron. 2019;104:1010–1021. doi: 10.1016/j.neuron.2019.08.038.
  22. Jacobs RA. Optimal integration of texture and motion cues to depth. Vision Research. 1999;39:3621–3629. doi: 10.1016/S0042-6989(99)00088-7.
  23. Jacobs RA, Fine I. Experience-dependent integration of texture and motion cues to depth. Vision Research. 1999;39:4062–4075. doi: 10.1016/S0042-6989(99)00120-0.
  24. Kleiner M, Brainard D, Pelli D, Ingling A, Murray R, Broussard C. What’s new in Psychtoolbox-3. Perception. 2007;36:1–16. doi: 10.1068/v070821.
  25. Knill DC, Pouget A. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends in Neurosciences. 2004;27:712–719. doi: 10.1016/j.tins.2004.10.007.
  26. Knill DC, Richards W. Perception as Bayesian Inference. Cambridge University Press; 1996.
  27. Körding KP, Beierholm U, Ma WJ, Quartz S, Tenenbaum JB, Shams L. Causal inference in multisensory perception. PLOS ONE. 2007;2:e943. doi: 10.1371/journal.pone.0000943.
  28. Ma WJ, Beck JM, Latham PE, Pouget A. Bayesian inference with probabilistic population codes. Nature Neuroscience. 2006;9:1432–1438. doi: 10.1038/nn1790.
  29. Ma WJ, Jazayeri M. Neural coding of uncertainty and probability. Annual Review of Neuroscience. 2014;37:205–220. doi: 10.1146/annurev-neuro-071013-014017.
  30. Meijer D, Veselič S, Calafiore C, Noppeney U. Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation. Cortex. 2019;119:74–88. doi: 10.1016/j.cortex.2019.03.026.
  31. Mikula L, Gaveau V, Pisella L, Khan AZ, Blohm G. Learned rather than online relative weighting of visual-proprioceptive sensory cues. Journal of Neurophysiology. 2018;119:1981–1992. doi: 10.1152/jn.00338.2017.
  32. Norton EH, Acerbi L, Ma WJ, Landy MS. Human online adaptation to changes in prior probability. PLOS Computational Biology. 2019;15:e1006681. doi: 10.1371/journal.pcbi.1006681.
  33. Penny WD, Stephan KE, Daunizeau J, Rosa MJ, Friston KJ, Schofield TM, Leff AP. Comparing families of dynamic causal models. PLOS Computational Biology. 2010;6:e1000709. doi: 10.1371/journal.pcbi.1000709.
  34. Rigoux L, Stephan KE, Friston KJ, Daunizeau J. Bayesian model selection for group studies - revisited. NeuroImage. 2014;84:971–985. doi: 10.1016/j.neuroimage.2013.08.065.
  35. Rohe T, Ehlis AC, Noppeney U. The neural dynamics of hierarchical Bayesian causal inference in multisensory perception. Nature Communications. 2019;10:1907. doi: 10.1038/s41467-019-09664-2.
  36. Rohe T, Noppeney U. Cortical hierarchies perform Bayesian causal inference in multisensory perception. PLOS Biology. 2015a;13:e1002073. doi: 10.1371/journal.pbio.1002073.
  37. Rohe T, Noppeney U. Sensory reliability shapes perceptual inference via two mechanisms. Journal of Vision. 2015b;15:22. doi: 10.1167/15.5.22.
  38. Rohe T, Noppeney U. Distinct computational principles govern multisensory integration in primary sensory and association cortices. Current Biology. 2016;26:509–514. doi: 10.1016/j.cub.2015.12.056.
  39. Shen S, Ma WJ. A detailed comparison of optimality and simplicity in perceptual decision making. Psychological Review. 2016;123:452–480. doi: 10.1037/rev0000028.
  40. Triesch J, Ballard DH, Jacobs RA. Fast temporal dynamics of visual cue integration. Perception. 2002;31:421–434. doi: 10.1068/p3314.
  41. van Beers RJ, Sittig AC, Gon JJ. Integration of proprioceptive and visual position-information: an experimentally supported model. Journal of Neurophysiology. 1999;81:1355–1364. doi: 10.1152/jn.1999.81.3.1355.
  42. Wozny DR, Beierholm UR, Shams L. Probability matching as a computational strategy used in perception. PLOS Computational Biology. 2010;6:e1000871. doi: 10.1371/journal.pcbi.1000871.
  43. Zemel RS, Dayan P, Pouget A. Probabilistic interpretation of population codes. Neural Computation. 1998;10:403–430. doi: 10.1162/089976698300017818.

Decision letter

Editor: Tobias Reichenbach
Reviewed by: Tobias Reichenbach, Luigi Acerbi

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

Our perception is notoriously inaccurate, and estimating the uncertainty of a percept is an important task of our brain. The present paper shows for the first time that our brain uses past history in the estimation of the uncertainty of a current percept. It opens up further research questions including those related to the role of learning in perception.

Decision letter after peer review:

Thank you for submitting your article "Using the past to estimate sensory uncertainty" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Tobias Reichenbach as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Andrew King as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Luigi Acerbi (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

Beierholm et al. present a well-executed psychophysical study in which participants judged the location of a conflicting visual-auditory stimulus under varying degrees of visual noise. Unlike other studies in this field, the visual noise was varied slowly, enabling participants to take advantage of this fact by incorporating estimates of uncertainty from the recent past into their current cue-weighting strategy. The authors then compare the data to three computational models: a simple learner that does not use past information, a Bayesian learner, and an exponential learner that accounts for past information at a fixed learning rate. The conclusion of the paper is that subjects' estimate of visual variability is influenced by the past history of visual noise.

While we find these results intriguing, and the experiment and analysis thorough, we have several major comments that we would like the authors to address.

Essential revisions:

1) The current analysis does not consider the effect that past visual locations, in addition to the uncertainty in the visual signal, may have on the estimate of the current location. In particular, if the location at t-1 is the same or close to the location at time t, it might be that past estimates of location get integrated (in fact, this becomes another causal inference problem). This effect might be further related to the question investigated here, because, if present, it might be modulated by the visual noise in the previous trial. Have you investigated whether this effect is present, and if so, how did you account for it in your analysis?

2) The authors currently refer to a preprint of this manuscript on bioRxiv for a full derivation of both model components for the Bayesian learner. Instead, please provide the full model derivation in the supplementary information in a self-contained form. Please also make the code for the modelling and the analysis as well as the data publicly available.

3) In the sinusoidal condition the bins had a duration of only 1.5 s, but the trials were 1.4 to 2.8 s apart. It therefore appears as if there were either 1 or 0 (or very rarely 2) responses in each bin. How did you handle the zero-response bins? And how can weights – presumed to vary smoothly between 0 and 1 – be reliably estimated from a single behavioural response?

4) The computational model makes certain assumptions that appear to differ from the experiment. We would like the authors to comment on these discrepancies. First, the computational model assumes that the auditory signal follows a normal distribution around a particular mean. However, in the experiment, the location of the sound was either +5 degrees or -5 degrees away from the mean of the visual signal. Second, regarding the computational model, the authors write that "the dispersion of the individual dots is assumed to be identical to the uncertainty about the visual mean, allowing subjects to use the dispersions as an estimate of the uncertainty about the visual mean". But in the experiment there is no notion of an uncertainty (noise) in the visual mean. Third, the authors write that all probabilities, except for one, are Gaussian. As for the first point raised above, in the experiment this only seems true for the distribution of the dots around the mean, but not for the other distributions. In particular, the mean of the visual signal is sampled from a discrete uniform distribution that encompasses only five different locations. Fourth, each dot location V_{i,t} is drawn from a normal distribution with mean U_t, but U_t is drawn from another distribution with mean S_{V,t} – are the variances of these two distributions the same? Wouldn't U_t simply be the location (-10, -5, etc.) on that trial, and wouldn't this mean instead that the dot positions are doubly stochastic? If so, why? The actual dispersion (not to mention the observers' estimates thereof) would be very noisy if dot locations were simply resampled at 5 Hz from a fixed distribution for a given trial. Doesn't resampling the SD at 5 Hz just complicate the modeling even more than it already is? Please also explain the purpose of the log random walk.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your article "Using the past to estimate sensory uncertainty" for consideration by eLife. Your revised article has been reviewed by three peer reviewers, including Tobias Reichenbach as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Andrew King as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Luigi Acerbi (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

The authors have addressed our previous comments well in their extensively revised version of the manuscript. We only have a few remaining queries.

Revisions:

The difference in STD between the current and previous bin predicts the auditory weight; but would that be expected given the autocorrelation of the STD sequence? Our intuition is that the null hypothesis (no impact of the previous bin) would only be valid if the STDs were (temporally) conditionally independent. In other words, if only the current STD affected the weight, you might still see an apparent influence of the previous STD simply because the STD of bin N is highly correlated with the STD of bin N-1 (for the sinusoids at least).

This logic may only apply if the regression was on the absolute STDs, not the difference between current and previous STD, which is what the authors did. So perhaps it's not an issue. But if it is, we think one could perform a nested model comparison to test whether adding the previous time bin significantly improves the fit enough to justify the extra parameter. (It could also be that the current analysis is effectively doing this.)

Alternatively, one could perform this analysis separately for the first half vs. second half and see whether you observe a change in the regression coefficient for the δ-STD. If the authors' interpretation is correct, the coefficient should systematically change (sign flip?) when STD is increasing vs decreasing, whereas if the autocorrelation were driving its significance, it should not depend on increasing vs decreasing.

eLife. 2020 Dec 15;9:e54172. doi: 10.7554/eLife.54172.sa2

Author response


Essential revisions:

1) The current analysis does not consider the effect that past visual locations, in addition to the uncertainty in the visual signal, may have on the estimate of the current location. In particular, if the location at t-1 is the same or close to the location at time t, it might be that past estimates of location get integrated (in fact, this becomes another causal inference problem). This effect might be further related to the question investigated here, because, if present, it might be modulated by the visual noise in the previous trial. Have you investigated whether this effect is present, and if so, how did you account for it in your analysis?

We thank the reviewer for this suggestion. To quantify the influence of the previous visual location, we expanded our regression model by another regressor modelling the visual cloud’s location on the previous trial. For instance, for bin = 1 we computed:

R_{A,trial,bin=1} = L_{A,trial,bin=1} * β_{A,bin=1} + L_{V,trial,bin=1} * β_{V,bin=1} + L_{V,trial-1,bin=1} * β_{Vprevious,bin=1} + β_{const,bin=1} + e_{trial,bin=1}

where R_{A,trial,bin=1} is the localization response for a current trial assigned to bin 1; L_{A,trial,bin=1} and L_{V,trial,bin=1} are the 'true' auditory and visual locations for the current trial assigned to bin 1; L_{V,trial-1,bin=1} is the 'true' visual location of the corresponding previous trial (for explanatory purposes, we assign it the bin of the current trial; the previous trial actually falls into a different bin); β_{A,bin=1} and β_{V,bin=1} are the auditory and visual weights for bin 1; β_{Vprevious,bin=1} quantifies the influence of the visual location of the previous trial on the perceived sound location of the current trial for bin 1; β_{const,bin=1} is the constant term; and e_{trial,bin=1} is the error term.
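For illustration, a minimal sketch of such a per-bin regression, assuming trials have already been assigned to time bins. This is not the authors' code; the array names (resp, loc_A, loc_V, loc_V_prev, bin_idx) are hypothetical placeholders.

```python
import numpy as np

def fit_bin_weights(resp, loc_A, loc_V, loc_V_prev, bin_idx, n_bins):
    """Per-bin OLS: each row holds [beta_A, beta_V, beta_Vprevious, beta_const]."""
    betas = np.full((n_bins, 4), np.nan)
    for b in range(n_bins):
        sel = bin_idx == b                          # trials assigned to bin b
        if sel.sum() == 0:                          # skip empty bins
            continue
        X = np.column_stack([loc_A[sel], loc_V[sel], loc_V_prev[sel],
                             np.ones(sel.sum())])   # design matrix with constant
        betas[b], *_ = np.linalg.lstsq(X, resp[sel], rcond=None)
    return betas
```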

This analysis indeed reveals that the location of the visual cloud on the previous trial influences observers' perceived sound location (Supplementary file 1—table 2). Surprisingly, however, it has a repellent effect, i.e. observers' perceived sound location shifts away from the true visual location. Importantly, having regressed out the influence of the previous visual location on observers' perceived sound location, we repeated our main analyses, i.e. the repeated-measures ANOVA assessing whether w_{A,bin} differed between the bins in the first vs. second half (see Supplementary file 1—table 3). Moreover, we repeated the regression model analysis to assess whether w_{A,bin} was predicted not only by the cloud's STD in the current bin, but also by that in the previous bin (Supplementary file 1—table 4). Both analyses replicated our initial findings.

In addition, we demonstrated that the regression weight quantifying the influence of the previous visual location did not correlate with the visual noise in the current trial, r(β_{Vprevious,bin}, σ_{Vcurrent,bin}), or in the previous trial, r(β_{Vprevious,bin}, σ_{Vprevious,bin}) (see Supplementary file 1—table 2).

We have now included the additional methods in Appendix 1, report these results in Supplementary file 1—tables 2–4 and Figure 2—figure supplement 1, and refer to the control analyses in the main text.

2) The authors currently refer to a preprint of this manuscript on bioRxiv for a full derivation of both model components for the Bayesian learner. Instead, please provide the full model derivation in the supplementary information in a self-contained form. Please also make the code for the modelling and the analysis as well as the data publicly available.

We have now added the full model derivation to Appendix 2. Further, we uploaded the code for the modelling and the analysis scripts, along with the behavioral data and model predictions, to an OSF repository: https://osf.io/gt4jb/.

We refer to this website in the main text.

3) In the sinusoidal condition the bins had a duration of only 1.5 s, but the trials were 1.4 to 2.8 s apart. It therefore appears as if there were either 1 or 0 (or very rarely 2) responses in each bin. How did you handle the zero-response bins? And how can weights – presumed to vary smoothly between 0 and 1 – be reliably estimated from a single behavioural response?

We are sorry for this confusion; the reviewer is absolutely right. The bins of the four sequences had durations (Sin, RW1 = 1.5 s; RW2 = 6 s; Sin Jump = 2 s) that were partly shorter than the ITI of 1.4–2.8 s, so that within a single presentation of a sequence each bin contained only 0, 1 or, rarely, 2 responses. However, each sequence was repeated many times over the course of the experiment (Sin, RW1, Sin Jump: ~130 cycles; RW2: ~32 cycles). Because of the jittered trial onset asynchrony, trials sampled different bins across these repetitions of the same sequence. As a result, each time bin was informed by 44–87 trials (see Supplementary file 1—table 1), so the auditory weights could be estimated quite reliably.
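A minimal sketch of this pooling logic, assuming a hypothetical array of trial onset times within each sequence cycle (onset_in_cycle, in seconds) collected over all repetitions of a sequence; it is not the authors' code, only an illustration of how jittered onsets fill the time bins across cycles.

```python
import numpy as np

def assign_to_bins(onset_in_cycle, bin_width, cycle_duration):
    """Map each trial to a time bin within the sequence cycle (0-based index)."""
    n_bins = int(np.round(cycle_duration / bin_width))
    bin_idx = np.minimum((onset_in_cycle // bin_width).astype(int), n_bins - 1)
    return bin_idx, n_bins

# E.g. for a sinusoidal sequence with 1.5 s bins, pooling trials from ~130
# cycles yields tens of responses per bin, enough to estimate per-bin weights.
```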

We have now described the experimental design and analysis strategy in greater detail in the Results and Materials and methods sections. We have also introduced a new notation for the equations and parameters for clarification. Moreover, we have included the corresponding table in Supplementary file 1.

4) The computational model makes certain assumptions that appear to differ from the experiment. We would like the authors to comment on these discrepancies.

Thank you for giving us the opportunity to motivate the assumptions of our model. We have now clarified the models’ assumptions in the Materials and method section.

First, the computational model assumes that the auditory signal follows a normal distribution around a particular mean. However, in the experiment, the location of the sound was either +5 degrees or -5 degrees away from the mean of the visual signal.

Indeed, the reviewer is absolutely right that the sounds are presented ±5° from the visual location. However, observers are known to be limited in their sound localization ability, particularly when sounds do not come from natural sound sources but are generated with generic head-related transfer functions, as in our study. Given observers' substantial spatial uncertainty when localizing sounds, we consider the model's assumption of a normal distribution for the sound location to be justified.

Second, regarding the computational model, the authors write that "the dispersion of the individual dots is assumed to be identical to the uncertainty about the visual mean, allowing subjects to use the dispersions as an estimate of the uncertainty about the visual mean". But in the experiment there is no notion of an uncertainty (noise) in the visual mean.

We have introduced the additional hidden variable U_t to account for the fact that, even when auditory and visual signals come from a common source, they do not necessarily fully coincide in space in our natural environment. This introduces additional spatial uncertainty, so that observers cannot fully rely on the visual cloud of dots to locate the sound even in the common-source situation. Critically, as quoted by the reviewer, because the dispersion of the dots and the uncertainty about the mean were set to be equal, observers could estimate this visual uncertainty from the spread of the dots.

Third, the authors write that all probabilities, except for one, are Gaussian. As for the first point raised above, in the experiment, this only seems true for the distribution of the dots around the mean, but not for the other distributions. In particular, the mean of the visual signal is sampled from a discrete uniform distribution that encompasses only five different locations.

Again, given the uncertainty about the visual location, this seems a justifiable assumption. In fact, this assumption has been made by a growing number of studies that fitted the Bayesian Causal Inference model to observers' localization responses, even though in all of those previous studies (Körding et al., 2007; Rohe and Noppeney, 2015b; Rohe and Noppeney, 2015a) the means of the visual and auditory signals were sampled from a discrete uniform distribution.

Fourth, each dot location V_{i,t} is drawn from a normal distribution with mean U_t, but U_t is drawn from another distribution with mean S_{V,t} – are the variances of these two distributions the same?

Yes – as explained in our response to point 2; otherwise, they would not be informative.

Wouldn't U_t simply be the location (-10, -5, etc.) on that trial, and wouldn't this mean instead that the dot positions are doubly stochastic? If so, why?

No, U_t is not identical to the location on that trial but is modelled as a sample from a Gaussian centred on this location; in this sense, the model does make the inference doubly stochastic. Importantly, however, the two standard deviations are the same, so one is informative about the other.
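As an illustration of this doubly stochastic assumption, a minimal sketch follows (not the authors' implementation; the number of dots and the parameter values are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_visual_cloud(S_V, sigma_t, n_dots=20, rng=rng):
    """Hidden mean U_t ~ N(S_V, sigma_t); dots V_{i,t} ~ N(U_t, sigma_t).
    The same sigma_t governs both levels, so the spread of the dots is
    informative about the uncertainty of the visual mean."""
    U_t = rng.normal(S_V, sigma_t)                 # hidden visual mean
    dots = rng.normal(U_t, sigma_t, size=n_dots)   # dot locations on this trial
    return U_t, dots

U_t, dots = sample_visual_cloud(S_V=-5.0, sigma_t=8.0)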

The actual dispersion (not to mention the observers' estimates thereof) would be very noisy if dot locations were simply resampled at 5 Hz from a fixed distribution for a given trial. Doesn't resampling the SD at 5 Hz just complicate the modeling even more than it already is?

It is true that the SD was resampled at 5 Hz in order to provide the observer with the impression of a continuous stimulus. For the modelling, however, we focused selectively on the SD of the visual cloud at the trial onset times.

Please also explain the purpose of the log random walk.

We performed a random walk on the logarithm of the visual reliability λ_{V,t} as a convenience for the modelling. Previous research in the reward-learning domain has compared a log random walk with a change-point model and found very similar results (Behrens et al., 2007). Moreover, as even the exponential learner provided a reasonable fit to the data, we suspect that this type of assumption is unlikely to have a relevant effect.
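A minimal sketch of such a log random walk on the visual reliability; the starting SD and walk volatility below are illustrative values, not the experimental settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_random_walk_sd(n_trials, sd_0=8.0, sigma_walk=0.1, rng=rng):
    """Log random walk on reliability lambda_{V,t} = 1 / sd_t**2:
    log lambda_{V,t} = log lambda_{V,t-1} + eta_t, with eta_t ~ N(0, sigma_walk**2)."""
    steps = np.concatenate([[0.0], rng.normal(0.0, sigma_walk, n_trials - 1)])
    log_lam = np.log(1.0 / sd_0**2) + np.cumsum(steps)
    sd_t = 1.0 / np.sqrt(np.exp(log_lam))   # cloud SD implied on each trial
    return sd_t
```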

As mentioned above, we now include a critical discussion of these modelling assumptions in the Materials and methods section, where we address the aforementioned points.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Revisions:

The difference in STD between the current and previous bin predicts the auditory weight; but would that be expected given the autocorrelation of the STD sequence? Our intuition is that the null hypothesis (no impact of the previous bin) would only be valid if the STDs were (temporally) conditionally independent. In other words, if only the current STD affected the weight, you might still see an apparent influence of the previous STD simply because the STD of bin N is highly correlated with the STD of bin N-1 (for the sinusoids at least).

This logic may only apply if the regression was on the absolute STDs, not the difference between current and previous STD, which is what the authors did. So perhaps it's not an issue. But if it is, we think one could perform a nested model comparison to test whether adding the previous time bin significantly improves the fit enough to justify the extra parameter. (It could also be that the current analysis is effectively doing this.)

Alternatively, one could perform this analysis separately for the first half vs. second half and see whether you observe a change in the regression coefficient for the δ-STD. If the authors' interpretation is correct, the coefficient should systematically change (sign flip?) when STD is increasing vs decreasing, whereas if the autocorrelation were driving its significance, it should not depend on increasing vs decreasing.

Thanks for raising this point. Indeed, as the reviewer notes, the difference in STD between the current and previous bin is only weakly correlated with the current STD (r ~ 0.3 across the sequences). Further, because we entered the difference in STD and the STD into the same regression model, each parameter estimate reflects only the unique variance that cannot be explained by any other regressor in the model. Hence, testing the significance of one parameter estimate is equivalent to comparing two nested models that do or do not include this regressor. In the two-stage summary-statistics approach, however, this relationship is less transparent.

Following the reviewer's suggestion, we have now implemented a nested model comparison. We fitted two linear mixed-effects models to the relative auditory weights: a full model including the current STD and the difference in STD versus a reduced model including only the current STD. The model comparison showed greater model evidence for the full model.
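For concreteness, a minimal sketch of such a nested comparison using linear mixed-effects models in Python/statsmodels. This is not the authors' code: the column names (wA, std_current, delta_std, subject) are hypothetical, and a likelihood-ratio test is shown here as one simple criterion, whereas the manuscript reports model evidence.

```python
import statsmodels.formula.api as smf
from scipy import stats

def nested_comparison(df):
    """Full vs reduced mixed model for the auditory weight, random intercept per subject.
    Fitted with ML (reml=False) so log-likelihoods are comparable across fixed effects."""
    full = smf.mixedlm("wA ~ std_current + delta_std", df,
                       groups=df["subject"]).fit(reml=False)
    reduced = smf.mixedlm("wA ~ std_current", df,
                          groups=df["subject"]).fit(reml=False)
    lr = 2 * (full.llf - reduced.llf)   # likelihood-ratio statistic
    p = stats.chi2.sf(lr, df=1)         # one extra fixed-effect parameter
    return full, reduced, lr, p
```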

We now mention this control analysis in the Results section, report it in a supplementary table (Supplementary file 1—table 5) and describe the analysis in more detail in Appendix 1.
