PLOS Computational Biology. 2022 Jan 13;18(1):e1009738. doi: 10.1371/journal.pcbi.1009738

An initial ‘snapshot’ of sensory information biases the likelihood and speed of subsequent changes of mind

William Turner 1,*, Daniel Feuerriegel 1, Robert Hester 1, Stefan Bode 1
Editor: Woo-Young Ahn
PMCID: PMC8757993  PMID: 35025889

Abstract

We often need to rapidly change our mind about perceptual decisions in order to account for new information and correct mistakes. One fundamental, unresolved question is whether information processed prior to a decision being made (‘pre-decisional information’) has any influence on the likelihood and speed with which that decision is reversed. We investigated this using a luminance discrimination task in which participants indicated which of two flickering greyscale squares was brightest. Following an initial decision, the stimuli briefly remained on screen, and participants could change their response. Using psychophysical reverse correlation, we examined how moment-to-moment fluctuations in stimulus luminance affected participants’ decisions. This revealed that the strength of even the very earliest (pre-decisional) evidence was associated with the likelihood and speed of later changes of mind. To account for this effect, we propose an extended diffusion model in which an initial ‘snapshot’ of sensory information biases ongoing evidence accumulation.

Author summary

To avoid harm in an ever-changing world we need to be able to rapidly change our minds about our decisions. For example, imagine being unable to overrule a decision to run across a street when you realise a speeding car is approaching. In this study, we examined the information processing dynamics which underlie perceptual judgements and changes of mind. By reverse correlating participants’ decisions with the moment-to-moment sensory evidence they received, we show that the very earliest information, processed prior to an initial decision being made, can have a lasting influence over the speed and likelihood of subsequent changes of mind. To account for this, we develop a model of perceptual decisions in which initial sensory evidence exerts a lasting bias over later evidence processing. When fit to participants’ behavioural responses alone, this model predicted their observed information usage patterns. This suggests that an initial ‘snapshot’ of sensory information may influence the ongoing dynamics of the perceptual decision process, thus influencing the speed and likelihood of decision reversals.

Introduction

The ability to rapidly revise decisions in the face of new information is critical for avoiding harm in an ever-changing world. For example, imagine being unable to overrule a decision to run across a street when you realise that a speeding car is approaching. In situations such as this, even small delays in the time it takes to change your mind can have serious consequences. Given this, it is important to understand the cognitive processes which underlie rapid decision reversals, and the factors which influence the likelihood and speed with which they occur.

There is extensive evidence that perceptual decisions are made via the noisy accumulation of sensory information over time [1,2]. Once a certain amount of evidence has been accumulated in favor of one choice over another, an initial decision is made. However, following this the decision process does not immediately halt. Instead, sensory evidence continues to be accumulated, and, if enough subsequent evidence is accumulated in favor of the initially-unchosen response, a change of mind occurs [3].

Relative to the point at which an initial decision is made, two broad time periods can be defined: 1) a pre-decisional period, in which sensory evidence is being accumulated but a decision is yet to be reached, and 2) a post-decisional period, in which a decision has been made but the evidence accumulation process is continuing to unfold. When considering these two time periods, one fundamental question which arises is whether information processed in the pre-decisional period (‘pre-decisional evidence’) has any influence on the likelihood and speed with which a change of mind subsequently occurs. Intuitively, it is appealing to think that pre-decisional evidence will affect subsequent change-of-mind behaviour. For example, if a decision is made on the basis of strong sensory evidence then it is reasonable to assume that this decision will be less likely to be overruled, or, if it is overruled, that this will take longer to occur. However, it is also possible that change-of-mind decisions are based solely on information processed after an initial decision has been made. Indeed, the most prominent model of perceptual changes of mind, the extended diffusion decision model [3], makes just this prediction. In particular, this model proposes that (controlling for decision accuracy) the decision process is always in exactly the same state at the time of the initial decision, meaning that only post-decisional evidence influences change-of-mind speed and likelihood. It is therefore important to test between these two possibilities, to better understand how we rapidly change our minds about perceptual decisions.

Previous studies that investigated how people make discrimination judgements about dynamic stimuli (e.g., random dot motion stimuli or flickering dot arrays) have shown that, in trials where people change their mind, the sensory evidence provided by the stimulus initially favors one decision. However, just prior to the behavioral response being enacted, the evidence switches to favoring the alternative option, driving the change of mind [3–5]. It has been argued that, when controlling for initial response accuracy, the likelihood of a change of mind occurring depends on how strongly the initial decision was supported by the pre-decisional evidence [4]. However, in the study which purported to show this, sensory and motoric delays were not accounted for. Because it takes time for the brain to process incoming sensory information and output a motor response, decisions are actually made on the basis of information which was presented some hundreds of milliseconds earlier in time, and behavioural responses lag behind decisions by ~80 ms [3]. Without accounting for these delays, it is unclear whether the reported association was driven by truly pre-decisional evidence, or rather by evidence which was presented prior to a behavioural response being made, but after the point at which it could have influenced the initial decision. Beyond the question of whether pre-decisional information affects change-of-mind likelihood, the associated question of whether this information influences more detailed response characteristics, such as change-of-mind speed, has also not been explored. Understanding the influence of pre-decisional information on the speed with which decisions are overruled is important because the timing of decisions often offers a rich source of information about the underlying decision process [6].

In previous studies of change-of-mind behaviour, random dot motion tasks have most often been employed [3,5,7,8]. Examining information usage in these tasks requires motion energy filtering [9], which smears information across time and prohibits the examination of early information usage, due to a 50–150 ms lag in filter build-up. To sidestep these issues, we employed a dynamic luminance discrimination task in which information usage can be examined independently on a frame-by-frame basis, without filtering. To allow for fine-grained estimation of participants’ sensory information usage patterns (their ‘psychophysical kernels’), we employed a small-N design in which each participant (n = 4) completed a very large number of trials [10]. Participants indicated which of two flickering grey squares was the brightest by pressing one of two response keys. After an initial judgement, the stimuli remained on screen for a brief period (1.5 seconds), and participants were free to change their response. Critically, with each screen refresh (i.e. every 13.33 ms), a random luminance value was added to the mean luminance value of each square. Using psychophysical reverse correlation [11,12], we then retrospectively examined the impact that this residual evidence had on participants’ decisions, on a frame-by-frame basis. If the luminance fluctuations at each frame are averaged across all trials, they will cancel to zero, because they are randomly distributed with a mean of zero. However, if the fluctuations systematically affect participants’ decisions, then averaging across trials with shared decision outcomes (e.g., averaging over correct responses) will reveal when and how the fluctuations influenced participants’ decisions across the pre-decisional and post-decisional time periods [13]. Below, we first describe the psychophysical kernels that this reverse correlation analysis revealed. Then, we account for these with a variant of the extended diffusion decision model.
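To make this logic concrete, the following minimal sketch (illustrative Python, not the analysis code used in this study; the toy decision rule and all values are invented) shows how conditioning zero-mean frame noise on choice outcomes recovers a kernel:

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_frames = 20000, 60
    residuals = rng.normal(0, 1, size=(n_trials, n_frames))  # zero-mean luminance residuals

    # Toy observer: chooses 'right' whenever the summed early evidence is positive
    choices = residuals[:, :20].sum(axis=1) > 0

    print(residuals.mean(axis=0).round(3))    # unconditioned average: ~0 at every frame
    kernel = residuals[choices].mean(axis=0)  # conditioning on choice reveals the kernel
    print(kernel.round(3))                    # clearly positive for frames 0-19, ~0 afterwards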

Results

Participants changed their mind on 23.91% of trials (18.53% corrected errors and 5.38% spoilt responses), with an average initial response accuracy of 66.95% (80.10% of final responses were correct). To investigate whether pre-decisional evidence affected change-of-mind likelihood, we sorted trials into four possible types: correct and error responses that were not followed by a change of mind, and correct and error responses that were followed by a change of mind (termed ‘spoilt correct’ responses and ‘corrected error’ responses, respectively). We then time-locked the residual evidence to: 1) stimulus presentation, 2) the initial response, and 3) the change-of-mind response, and averaged across trials within each response type (Fig 1A–1C). This revealed clear patterns in the luminance residuals for change-of-mind trials. Evidence in early time windows favoured the initial response, whilst evidence in later time windows favoured the final response, replicating previous findings [3–5]. This can be most clearly observed in Fig 1C, where the psychophysical kernels for ‘corrected error’ and ‘spoilt correct’ trials slowly shift from favouring one choice outcome to the other over the course of ~1 second. Interestingly, both kernels remain significantly different from zero even after the change of mind was enacted. This is likely because trials containing more than one change of mind were excluded from our analyses: additional evidence in favour of the revised response may have been needed to prevent a secondary change of mind from occurring, and trials in which this happened do not appear in these plots.

Fig 1. Psychophysical reverse correlation results.

Panels A and D show the residual stimulus fluctuations time-locked to stimulus onset. Panel A shows correct responses (green), error responses (red), corrected errors (blue) and spoilt correct responses (orange). Panel D shows fast (light blue) and slow (dark blue) corrected errors, and fast (light orange) and slow (dark orange) spoilt responses. Panels B and E show the same data, time-locked to the initial response (the time of the initial response is indicated by the grey dashed lines). Panels C and F show the data from trials containing a change of mind, time-locked to the change-of-mind response (the time of the change-of-mind response is indicated by the grey dashed lines). In all panels, positive values on the y-axis indicate more evidence supporting the correct response. For display purposes, a moving average smoothing function with a span of 3 frames was applied, and a median split for change-of-mind response time was used in panels D–F. However, statistical analyses were conducted on the unsmoothed, continuous data. For illustrative purposes, t-tests (alpha level of .01) were also conducted to compare levels of residual evidence between specific trial types, at each frame. Residual evidence values were pooled across participants for these tests. Orange dots denote timepoints at which there was a significant difference between correct and spoilt correct trials (Panels A–B), or where there was a significant difference between fast and slow spoilt trials (Panels D–F). Blue dots denote timepoints at which there was a significant difference between error and corrected error trials (Panels A–B), or where there was a significant difference between fast and slow corrected error trials (Panels D–F). Finally, black dots (Panel C) denote significant differences between corrected errors and spoilt correct responses, and grey dots (all panels) denote time points at which there was a significant difference for an alpha level of .05.

Strikingly, the reverse correlation analyses also revealed that the strength of evidence favoring the initial response, for even the very first frame of evidence presented, differed depending on whether or not participants ultimately changed their mind (Fig 1A). To formally examine this effect, we fit a generalized linear mixed effects model to predict the probability of a change of mind occurring. This revealed a significant interaction between initial decision accuracy and the strength of the first frame of evidence on the probability of a change of mind occurring (likelihood ratio test, χ²(1) = 42.08, p < 8.75 × 10⁻¹¹, see Table A and Fig A in S1 Text). This indicates that changes of mind were less likely to occur when the very first frame of evidence strongly supported participants’ initial decisions. Note that here we have restricted our analyses to just the initial frame of evidence, as a pre-decisional evidence source. However, associations between stimulus evidence strength and decision behaviour were by no means restricted to the first frame (see below for additional analyses regarding the effects of subsequent evidence). For illustrative purposes, we also plot the results of frame-by-frame t-tests comparing differences in residual evidence throughout the trial (see Fig 1).

Having found that the strength of the very first frame of evidence was associated with change-of-mind likelihood, we then investigated whether the strength of this information was associated with the speed with which changes of mind occurred. Plotting the residual stimulus fluctuations for fast and slow corrected errors and fast and slow spoilt responses revealed that changes of mind occurred more slowly after the initial response when the first frame of evidence strongly supported the initial decision (Fig 1D–1F). Note that at around −500 ms in Fig 1F it may appear as if there is actually stronger evidence in favour of the initial response for fast change-of-mind trials than for slow change-of-mind trials. However, this is simply due to a shift in the relative timing of the kernels, given the underlying differences in change-of-mind response time (see Fig 2F for a model-based recreation of this). To formally test the effect of initial evidence strength, we fit a linear mixed effects model to predict the speed with which changes of mind occurred. This revealed a significant interaction between initial decision accuracy and evidence strength on change-of-mind speed (likelihood ratio test, χ²(1) = 4.27, p = .039, see S1 Text). This indicates that changes of mind were slower when the very first frame of evidence strongly supported participants’ initial decisions.

Fig 2. Model-derived psychophysical kernels.

Panels A-F show the model predicted psychophysical kernels. To create these kernels we simulated decision variable trajectories in 100,000 experimental trials. We then took the within-trial noise in the model–used to simulate the moment-to-moment fluctuations in stimulus luminance–and reverse correlated this with the predicted response outcome on each trial. All plotting conventions are the same as in Fig 1.

Overall, these analyses demonstrate that the likelihood and speed with which participants changed their mind was already associated with the strength of even the very first frame of sensory evidence they saw (i.e. a pre-decisional source of information). These effects were consistent on the individual level and were symmetric across the two stimuli (reported in S2 Text).

In addition to the main analyses reported above, we also examined the overall effect of pre-response evidence on the speed and likelihood of changes of mind. In particular, we fit a generalized linear mixed effects model to predict the probability of a change of mind occurring from the average pre-response evidence. Note that when calculating pre-response residual evidence, we excluded the initial frame. This allowed us to investigate the impact that subsequent sensory evidence had on change-of-mind behaviour, to get a general sense of the relative importance of the ‘primacy effect’ we report above. This analysis revealed a significant interaction between initial decision accuracy and pre-response evidence strength (likelihood ratio test, χ²(1) = 192.36, p < 2.20 × 10⁻¹⁶; see Fig A in S4 Text). This indicates that when pre-response evidence (excluding the initial frame) strongly supported participants’ initial decisions, changes of mind were less likely to occur. To formally test the effect of pre-response evidence on change-of-mind latency, we fit a linear mixed effects model to predict the speed with which changes of mind occurred. This revealed a significant interaction between initial decision accuracy and subsequent pre-response evidence strength (likelihood ratio test, χ²(1) = 82.36, p < 2.20 × 10⁻¹⁶; see Fig A in S4 Text).

Importantly, these analyses show that the initial frame of evidence is neither the sole, nor necessarily the best, predictor of change-of-mind behaviour. Indeed, under the current experimental conditions, mean evidence between 200–400 ms after stimulus onset was a better predictor of change-of-mind likelihood than mean evidence between 0–200 ms (non-nested GLMM comparison: AIC 14961 vs. AIC 15039; marginal R²/conditional R² 0.370/0.372 vs. 0.364/0.367; see S6 Text for model parameter estimates). Considering the patterns revealed by the psychophysical reverse correlation analysis, as well as the results of these additional analyses, it is clear that changes of mind arise from stereotyped changes in the sensory evidence across many timepoints. Most frames of evidence can contribute to these decisions in some way, with late-arriving evidence potentially having a greater impact than early evidence. However, as we demonstrate below, the fact that the very earliest frame of evidence can influence change-of-mind behaviour at all, regardless of the magnitude of this effect, is theoretically interesting and reveals important details about the structure of the processes underlying change-of-mind decisions.

Computational modelling

Having found that the strength of even the very first frame of evidence was associated with the likelihood and speed of subsequent changes of mind, we then sought to provide a mechanistic account of this effect within a computational framework. To this end, we employed the most prominent model of change-of-mind behaviour, the extended diffusion decision model (extended DDM; [3]). In this model, decisions are made by noisily accumulating the relative evidence between the two choice options (i.e. the difference in luminance between the two squares) to a threshold level. Once an initial decision has been made, sensory evidence continues to be accumulated. If enough evidence is accumulated against the initial decision, such that a separate change-of-mind threshold is crossed, a change of mind occurs.

The original extended DDM predicts an insensitivity to pre-decisional evidence

In the original version of the extended DDM [3], the drift rate (i.e. the rate at which sensory evidence is accumulated over time) is assumed to be constant across trials of matched stimulus difficulty. As such, the only source of variability within the decision process is within-trial noise (i.e. moment-to-moment fluctuations in the decision process). Critically, this leads to the prediction that pre-decisional evidence will have no influence on change-of-mind behaviour, in direct contrast to the current findings (see Fig 3D and 3H). This is because, when controlling for initial decision accuracy, the decision process in the extended DDM is always in the same state at the time of the initial decision (i.e. there is always the same amount of accumulated evidence). As such, change-of-mind decisions depend entirely on the quality of post-decisional evidence. However, in the fields of cognitive and mathematical psychology it is typical to assume that the drift rate varies from trial to trial [14]. This enables the DDM to account for the relative timing of correct and error responses, and, as we will demonstrate below, under this assumption the extended DDM no longer predicts an insensitivity to pre-decisional information (see Fig 3).

Fig 3. Actual and predicted stimulus-locked psychophysical kernels.

Panels A and E depict the psychophysical kernels calculated from the data. To create panels B–D and F–H, we simulated 100,000 trials from three variants of the extended DDM. The first variant (panels B and F) contained both external and internal sources of across-trial drift rate variability (same as in Fig 2). For the second variant (panels C and G), across-trial drift rate variability had exactly the same distribution, but was de-coupled from the stimulus (i.e. purely internal). Finally, for the third model variant (panels D and H), we simulated a version of the extended DDM in which drift rate was constant across trials. After simulating each model, we took the stimulus-driven within-trial noise–used to simulate the moment-to-moment fluctuations in stimulus luminance–and reverse correlated this with the response outcome on each trial. All plotting conventions are the same as in Fig 1. See Fig B in S3 Text for a plot of this data for the first frame only (i.e. at x = 0). We note the slight apparent differences between the kernels in the first frame of Fig 3D and 3H, the largest being between fast and slow spoilt correct responses. These are not indicative of a true effect and arise due to the relative rarity of these trials (e.g., the rarity of spoilt correct responses). Across repeated simulations of the model, these differences are not consistent and will average to zero. Importantly, these differences highlight two points to be mindful of when conducting and interpreting psychophysical reverse correlation analyses. Firstly, they demonstrate the importance of aiming for maximum practicable statistical power, given the inherent noisiness of this technique. Secondly, they highlight the importance of not placing strong weight on single effects, but rather interpreting collective patterns of differences which are consistent across individuals.

A novel variant of the extended DDM

To account for the current findings, we fit a variant of the extended DDM which included trial-to-trial variability in drift rate. In particular, we assumed that there are both external (stimulus-driven) and internal (endogenous) sources of across-trial drift rate variability. We assumed that the external variability component is a function of the residual evidence in the first frame of each trial, whose influence linearly decreases over time (i.e. a decaying ‘snapshot’ of initial evidence). This means that when initial sensory evidence strongly supports one decision over another, the drift rate is temporarily biased in favor of that decision. This is consistent with recent evidence suggesting that across-trial variability in drift rate is partly stimulus driven [14]. To model the effect of trial-wise differences in internal states (e.g., attention, motivation, and arousal), we included an internal drift-rate variability component, which was combined additively with the externally driven component. Finally, we assumed that moment-to-moment variability in the decision process (i.e. within-trial variability) was driven both by moment-to-moment variability in the stimulus and by endogenous variability. This allowed us to reverse correlate the simulated moment-to-moment variability in the stimulus with the model-predicted behaviour, to construct model-based psychophysical kernels, which could then be compared to the observed psychophysical kernels.

We fit our variant of the extended DDM to the response proportions and response time quantiles for initial and change-of-mind decisions simultaneously (see Fig A in S3 Text). Strikingly, after being fit to just the behavioural responses, the model was able to predict the observed patterns in the psychophysical kernels (e.g., weaker initial evidence in favour of the initial response on change of mind trials; Fig 2). Importantly, when trial-to-trial drift rate variability was de-coupled from the first frame of evidence, but was otherwise identically distributed, the model was no longer able to capture the observed patterns in the psychophysical kernels (Fig 3C and 3G). In this case, the predicted pattern for the first frame was in the opposite direction to the observed results (i.e. stronger initial evidence in favor of the initial choice on change of mind trials). Finally, when there was no across-trial variability in drift rate the model predicted that there would be no differences in the psychophysical kernels for the first frame of evidence (Fig 3D and 3H). By comparing these predictions, it is clear that stimulus-driven across-trial drift rate variability is the key feature within our variant of the extended DDM that allows it to capture the patterns in the psychophysical kernels. In simple terms, this indicates that the patterns we observed in the psychophysical kernels can be explained by an initial ‘snapshot’ of evidence exerting a slowly decaying bias on ongoing evidence accumulation.
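For illustration, the drift-rate constructions of the three variants compared above can be sketched as follows (made-up parameter values, with the within-trial decay term omitted for brevity; see the Materials and methods for the full update equations):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    mu, s, eta = 1.0, 0.5, 0.3         # illustrative values, not the fitted estimates
    first_frame = rng.normal(0, 1, n)  # residual evidence in the first frame of each trial

    # Variant 1: across-trial drift variability partly coupled to the first frame
    drift_coupled = mu + s * first_frame + eta * rng.normal(0, 1, n)

    # Variant 2: identically distributed variability, de-coupled from the stimulus
    drift_decoupled = mu + s * rng.permutation(first_frame) + eta * rng.normal(0, 1, n)

    # Variant 3: constant drift rate across trials
    drift_constant = np.full(n, mu)

Only the first construction ties the drift rate to the evidence the simulated observer actually saw, which is why only this variant can reproduce the observed first-frame differences in the kernels.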

Discussion

We investigated whether information processed prior to a perceptual decision being made (‘pre-decisional information’) influences the likelihood and speed of subsequent changes of mind. Participants made comparative luminance judgements between two flickering grey squares and indicated their decision with a button press. Following an initial decision, the stimuli remained on screen for a brief period and participants could change their mind. Using psychophysical reverse correlation, we examined the effects of moment-to-moment random fluctuations in luminance on participants’ decisions. We found that the likelihood and speed with which participants changed their mind was reliably associated with the strength of the very first frame of evidence they saw. To account for this observation, we developed a variant of the extended diffusion decision model (extended DDM), in which across-trial variability in drift rate is partially driven by the first frame of sensory evidence. Fitted to just the behavioural responses, this model was able to predict the observed patterns in the psychophysical kernels. Broadly put, this suggests that an ‘initial snapshot’ of sensory evidence exerts a slowly decaying bias on decision evidence accumulation, thus influencing later self-corrective behaviour.

The current study extends a nascent line of research examining the time-course of information processing underlying perceptual changes of mind [3–5]. One previous study has attempted to address the question of whether pre-decisional information affects change-of-mind behaviour [4]. However, because sensory and motoric delays were not accounted for, it was unclear whether the reported effect–a negative association between pre-decisional evidence strength and change-of-mind likelihood–was driven by evidence which was truly pre-decisional. Other studies investigating perceptual changes of mind have typically employed random dot motion tasks, making it impossible to examine the frame-by-frame processing of early sensory information in these experiments due to the 50–150 ms lag induced by motion energy filtering. By employing a luminance discrimination task, we were able to circumvent this issue and present clear evidence that change-of-mind behaviour is affected by even the very earliest (pre-decisional) information one receives.

Importantly, it may be the case that the ‘primacy effect’ we observed is only detectable in the context of low-level visual tasks which require minimal temporal integration. Similarly, the fact that the timing of stimulus onset was predictable in the present study may have led participants to attend more to early information. If future work were to identify differential information usage for changes of mind, depending on the nature of evidence integration and/or the predictability of stimulus timing, this would in itself advance our understanding of decision dynamics. It would also help to clarify the degree to which change-of-mind behaviour is influenced by initial sensory evidence under different environmental conditions or decision scenarios–something which is of relevance when considering real-world decisions which rely on either rapid or more sustained integration of sensory information (e.g., gauging the color of a traffic light in clear conditions compared to judging the speed of approaching traffic in heavy rain). Nevertheless, given the dominant view that relatively general computational processes underlie simple perceptual decisions and changes of mind across different perceptual tasks [3,15,16], the effect we demonstrate also has general implications. In particular, it demonstrates that the pre- and post-decisional dynamics are not distinct, and do in fact interact, counter to dominant perspectives [3].

A novel variant of the extended DDM was able to recreate the patterns we observed in the psychophysical kernels, after being fit to just the behavioural response data [3]. In this model, trial-to-trial variability in drift rate is partly driven by a linearly decreasing function of the first frame of sensory evidence one receives (the fitted slope parameter causes this to decay over the course of ~1–1.5 seconds). This is consistent with the emerging view that a combination of both internal and external drift rate variability components is necessary to explain behaviour on perceptual decision tasks [14]. Comparing the predicted psychophysical kernels from our model to an otherwise identical model in which drift-rate variability is identically distributed, but de-coupled from the stimulus, it is clear that a coupling between the first frame of sensory evidence and drift-rate variability is the critical assumption for capturing the observed patterns in the psychophysical kernels.

We chose the extended DDM as a modelling framework because it is the most prominent model of change-of-mind behaviour, and because its parameters map clearly onto different aspects of the decision-making process and are thus readily interpretable. However, other, more complex–and arguably more biologically plausible–models have been proposed to explain perceptual changes of mind [17,18]. Previous work has shown that under certain parameterisations of these models a ‘primacy effect’ occurs, whereby early sensory information is weighted more heavily than later sensory information [13,19]. At face value, this is similar to the bias exerted by the initial frame of evidence in the extended DDM variant in this study. Given this, it is possible that the initial evidence bias we have implemented in the current model is in fact mimicking the primacy effect displayed by these more complex models. Future studies could examine the degree to which these models mimic one another, to uncover potential neural mechanisms through which initial sensory information might bias ongoing evidence accumulation.

One limitation of the current modelling approach is that, while our variant of the extended DDM was able to predict the weighting of the earliest frames of evidence (i.e. the experimental effect of interest), it did not perfectly predict all observed patterns in the psychophysical kernels. For example, it predicted sharper peaks in the kernels around the time at which changes of mind occurred (compare Fig 1C and 1F to Fig 2C and 2F). This discrepancy may simply be due to the fact that the model was not directly fit to the psychophysical kernels. Alternatively, this may be a general limitation of the modelling framework or of the specific simplifying assumptions we adopted. For example, we assumed that the distributions of non-decision times for initial and change-of-mind responses were identical and normally distributed. However, recent work has questioned whether alternative distributions of non-decision times might better explain behavioural data [20]. Ultimately, our modelling approach is constrained by the knowledge and data that are currently available to us, and it is possible that future work may identify some complex alternative mixture of effects/assumptions within an evidence accumulation framework which offers a different explanation for the psychophysical kernels we observed. As such, we are not claiming to have necessarily discovered the ‘one true model’ for the current data; instead, we simply aim to provide a coherent account of the novel effects we have observed, with the goal of fostering further exploration in this area.

We chose to keep the means of the two stimuli fixed, and did not systematically vary their overall luminance. This was to maximise the number of useable trials in the psychophysical reverse correlation analyses. Because the dynamics of the decision process change if the stimulus values are changed, even if just their absolute values are manipulated while their difference is held constant [e.g., 21,22], each additional stimulus condition would have effectively halved the number of useable trials. However, one concern which stems from this decision is that participants may have adopted a ‘detection’ rather than ‘discrimination’ strategy. That is, participants may have initially focused on just one stimulus and made an initial ‘detection’ decision as to whether this was the brighter/darker stimulus. Then subsequently, they may have evaluated this via a more comparative judgement process. However, two factors decrease the likelihood of this possibility: firstly, the high degree of variability within the stimuli, relative to their mean difference, and secondly, the fact that both stimulus distributions were truncated at 1 SD from the mean to avoid extremely bright or dark stimulus values. Moreover, since participants’ initial response times were well fit by a model which assumed a bounded evidence accumulation process, a hybrid decision process does not offer the most parsimonious account of our findings. Nevertheless, future research is ultimately needed to fully examine the possibility of participants adopting a hybrid decision process in certain tasks or contexts.

We also chose to record responses in a binary rather than continuous fashion. This allowed us to precisely measure the onset times of change-of-mind responses, adding further constraint on the computational models. However, continuous response measures, such as movement trajectories, have been employed in a number of past studies [3,5,7,8] and have the advantage of allowing for changes of mind to be observed in a potentially more graded fashion. Moreover, physical effort costs are likely to differ for discrete and continuous responses, influencing the computations underlying change-of-mind decisions [7,8]. One question which arises in light of these considerations is whether the use of discrete response measures may have led to slightly different decision-making behaviour. For example, when continuous response measures are employed, participants may adopt different strategies because movements can be initiated before a strict decision has been finalized, and because they may be weighing up effort costs which evolve over time [7,8,23]. However, the general pattern of changes of mind we observed in this study, and in another recent study which also employed a binary response measure [22], mirrors that observed in studies employing continuous response measures [3,5,7,8]. In particular, across all these studies changes of mind were more common following errors and were driven by stereotyped shifts in stimulus evidence. These commonalities suggest a common underlying process, in line with the dominant view that a general process explains changes of mind across perceptual decision tasks [3]. Ultimately, however, further research is needed to properly uncover the degree to which decision-making processes differ across binary and continuous responses.

To conclude, we have shown that pre-decisional evidence does influence the likelihood and speed of perceptual change-of-mind decisions. In particular, we have shown that the strength of even the very first frame of evidence one receives is associated with the speed and likelihood of later decision reversals. Moreover, we have shown that this finding can be accounted for by an extended diffusion decision model, in which initial sensory evidence exerts a slowly decaying bias on the decision process. This suggests that an initial ‘snapshot’ of sensory evidence biases subsequent sensory evidence accumulation, thus influencing later self-corrective behaviour.

Materials and methods

Ethics statement

The experimental procedure was approved by the University of Melbourne ethics committee (ID 1749951.1). All participants gave informed written consent prior to the beginning of the experiment.

Participants

Five people gave written informed consent and participated in the experiment. Participants had normal or corrected-to-normal vision and were aged 22–35 years (M = 26.4, SD = 5.32, 2 female). Two participants were authors on this study (WT and DF). The others had pre-existing relationships with the authors but were naïve as to the purpose of the experiment. Each participant completed 5 sessions of the experiment (except WT, who completed 4.5 sessions due to a technical fault). Each participant was remunerated $15 AUD per session (except WT, who did not receive payment). One participant was excluded from the final sample as they failed to respond in 19.44% of trials (the remaining participants failed to respond in only 1.9–3.1% of trials). The final sample was aged 22–28 years (M = 24.25).

Materials

All stimuli were presented on a Sony Trinitron Multiscan G420 CRT monitor (resolution 1280 × 1024 pixels; frame rate 75 Hz). Responses were recorded using a Tesoro Tizona numpad (polling rate 1000 Hz). The task was coded in MATLAB 2015b using functions from the Psychophysics Toolbox Version 3.0.14 [24,25]. Whilst performing the experiment, participants were seated in a dark room with their chin on a chinrest ~65 cm from the screen.

Procedure

In each experimental session, participants performed 1000 trials of a luminance discrimination task (Fig 4). On each trial of the task they indicated which of two flickering greyscale squares (70 × 70 pixels; ~2.18 × 2.18 degrees of visual angle) was the brightest. The squares were presented side-by-side with 70 pixels separating them horizontally. Participants made their responses with their left and right index fingers on the 1 (left response) and 3 (right response) keys of the numpad. Participants had 800 ms from stimulus onset to make an initial response. From the time of the initial response, the stimuli remained on screen for a fixed duration of 1.5 s. During this time participants were free to change their response. Participants were told to try to be as accurate as possible in their initial responses and to correct any errors they felt they had made. At the end of each trial, feedback corresponding to the final response participants had made (“correct”, “error” or “too slow”) was presented for 300 ms. A red fixation dot was presented for 500 ms before stimulus presentation. Self-paced breaks were provided every 100 trials.

Fig 4. Schematic of the trial structure in the luminance judgement task.

Each trial began with the presentation of a red fixation dot for 500 ms. The flickering stimuli were then presented, and participants were given up to 800 ms to respond. From the time of the initial response, the post-decision period began, lasting for a fixed duration of 1.5 s. Feedback was then presented for 300 ms in the form of ‘correct’ or ‘error’. If participants failed to respond within 800 ms of the stimuli being presented, the post-decision period was skipped and ‘too slow’ was presented for 300 ms.

The mean RGB values for the brighter and darker squares were 142 and 130, respectively. On each frame, independent greyscale values for the two stimuli were drawn from separate Gaussian distributions centered around their respective mean values. The standard deviation of both distributions was 55, and the distributions were truncated at 1 standard deviation.
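The following sketch illustrates this generation scheme (illustrative Python rather than the MATLAB task script; here truncation is implemented by redrawing out-of-range values):

    import numpy as np

    def frame_values(mean, sd=55, n_frames=60, rng=None):
        """Draw truncated-Gaussian greyscale values, one per screen refresh."""
        rng = rng or np.random.default_rng()
        values = rng.normal(mean, sd, n_frames)
        out = np.abs(values - mean) > sd  # values beyond 1 SD of the mean
        while out.any():                  # redraw until all values fall within 1 SD
            values[out] = rng.normal(mean, sd, out.sum())
            out = np.abs(values - mean) > sd
        return values

    bright = frame_values(142)            # brighter square (mean RGB value 142)
    dark = frame_values(130)              # darker square (mean RGB value 130)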

Psychophysical reverse correlation analysis

With each screen refresh (i.e. every 13.33 ms), a random luminance value was added to the mean luminance of each square. This enabled the use of psychophysical reverse correlation [11,12] to reveal participants’ ‘psychophysical kernels’ (their information usage patterns) across time. The logic behind this analysis is as follows: if the residual luminance fluctuations at each frame are averaged across all trials, they will cancel to zero, because they are randomly distributed. However, if the fluctuations systematically affect participants’ decisions, then averaging across trials with shared decision outcomes will reveal how, and when, the fluctuations influenced participants’ decisions.

To calculate participants’ psychophysical kernels, the frame-by-frame luminance values for the darker stimulus were subtracted from those of the brighter stimulus. The across-trial mean difference (i.e. the difference in mean luminance between the two squares) was then subtracted, and the residual luminance fluctuations were normalized between -1 and 1. These fluctuations were sorted into trials which shared a response outcome (e.g., purely correct) or response characteristic (e.g., fast corrected errors). When sorting the trials by change of mind speed we calculated median change-of-mind response times (relative to the time of the initial response) for each participant, within each testing session. After sorting by trial type, the fluctuations were pooled across participants and averaged (Fig 1). To obtain the results in S2 Text, fluctuations for each trial type of interest were averaged within each participant. For the response-locked and change-of-mind-locked kernels, time points where there were fewer than 100 trials were excluded, to avoid noisy estimates.
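Schematically, the kernel computation can be summarised as follows (illustrative Python with hypothetical array names; the normalisation step shown is one possible reading of the procedure described above):

    import numpy as np

    def kernels_by_outcome(bright, dark, outcomes):
        """bright, dark: (n_trials, n_frames) frame-by-frame luminance values;
        outcomes: length n_trials array of trial-type labels."""
        diff = bright - dark                          # brighter minus darker stimulus
        residual = diff - diff.mean()                 # subtract the across-trial mean difference
        residual = residual / np.abs(residual).max()  # normalise to the range [-1, 1]
        return {label: residual[outcomes == label].mean(axis=0)
                for label in np.unique(outcomes)}     # one kernel per trial type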

Statistical analyses

Trials in which participants failed to respond (~2.6% of trials on average per session), or in which they changed their mind more than once (~3.2% of trials), were excluded from all analyses. Trials in which the change-of-mind response time was less than 50 ms were also excluded (~0.0004% of trials). We also screened for trials in which the initial response time was less than 150 ms (no trials were rejected). Linear mixed effects models were used to analyse the data via the lme4 package [version 1.1, 26] in R (version 3.5). A generalized linear mixed effects model was used to predict changes of mind, with main effects for initial decision accuracy and the first frame of sensory evidence, as well as their interaction. A linear mixed effects model was used to predict the time at which a change of mind occurred relative to the initial response (i.e. change-of-mind speed), with the same main effects and interaction. Likelihood ratio tests were used to formally examine the effects of interest (i.e. the interaction between initial evidence and initial accuracy). The distribution of response times for changes of mind was more normally distributed than typical initial RT distributions; we therefore analysed these responses with a linear mixed effects model. In all models, a random intercept for participant was included. Code to reproduce all of these analyses is available at https://osf.io/a6u4n/.
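For illustration, the change-of-mind speed model has a rough Python analogue in statsmodels (the reported analyses were run with lme4 in R; the column names below are hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    # assumed columns: com_rt, accuracy, first_frame, participant
    trials = pd.read_csv("trials.csv")

    # Change-of-mind speed ~ initial accuracy x first-frame evidence,
    # with a random intercept for participant
    fit = smf.mixedlm("com_rt ~ accuracy * first_frame", data=trials,
                      groups="participant").fit()
    print(fit.summary())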

Computational modelling

We fit a variant of the extended DDM [3] to the response proportions and response time quantiles (0.1, 0.3, 0.5, 0.7, 0.9) of both the initial responses and the change-of-mind responses simultaneously. We simulated a discrete approximation of the extended DDM (500,000 trials per iteration) and used the fminsearch algorithm (MATLAB 2016a) to minimize the root mean squared error between the simulated data and the actual data. Code for this model is available at https://osf.io/a6u4n/.

In our variant of the extended DDM, sensory evidence is noisily accumulated between two decision boundaries (0 and B). The average starting point of the accumulation process is half-way between the decision thresholds (i.e. B/2). From trial to trial, the starting point varies uniformly around this average with a range of Sz. Once an initial decision threshold is reached, the evidence accumulation process continues to unfold. Note that, as in the original version of the extended DDM, there was a time limit parameter timeOut, which specified the proportion of the post-decisional period for which participants processed additional information. If, during this period, enough evidence is accumulated against the initial decision such that a change-of-mind threshold–at a distance of BCoM away from the initial decision threshold–is crossed, then a change of mind occurs.

With each timestep, the decision variable is updated as follows:

ΔDV = drift × stepsize + noise × √stepsize

where drift denotes the drift rate at a given time point, noise denotes the within-trial noise at a given time point, and stepsize denotes the magnitude of the simulated timesteps within the model (0.001 s).

The drift rate at a given timepoint was determined as follows:

drift = mu + externalVar(t) + internalVar

where mu denotes the mean drift rate, externalVar(t) is the externally driven drift rate variability component at a given time point (this varies across time, see below), and internalVar is the internally driven across-trial drift rate variability component (which is constant across time). The externally driven across-trial drift rate variability component was determined as follows:

externalVar(t) = slope × t + s × firstFrame

where slope specifies the slope of a linearly decreasing function across t (time within a trial), and s is a scaling parameter which weights the internal representation of the first frame of sensory evidence (firstFrame; see below for details on how sensory evidence is specified in this model). The internally driven across-trial drift rate variability parameter (internalVar) is a normally distributed random variable with a mean of zero and a standard deviation of eta.

At each time-step the trial-specific drift rate is affected by within-trial noise. This can be thought of as a noisy representation of the stimulus flicker, which is determined as follows:

noise = stimulusNoise + N(0, 0.1)

where stimulusNoise is a normally distributed random variable with a mean of zero and a standard deviation of theta. Like the luminance fluctuations in the real experiment, the stimulus noise in the model was truncated at 1 standard deviation. Conceptually, the stimulusNoise parameter models the frame-by-frame fluctuations in stimulus evidence which influence participants’ decisions (see Fig 1).

Endogenous within-trial noise was modelled as a normally distributed random variable with a mean of zero and a standard deviation of 0.1. The standard deviation was fixed to 0.1 to act as a scaling parameter. Conceptually, this noise term accounts for within-trial sources of noise which were not stimulus driven (e.g., variability in moment to moment neural firing).

When determining the response times for initial responses and change-of-mind responses, we made the simplifying assumption that the non-decision time tnd and non-decision time variability tndVar components were the same for the two types of responses.
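Putting these components together, a single trial of the model can be simulated roughly as follows (an illustrative sketch with made-up parameter values rather than the fitted estimates; parameter names follow the equations above, non-decision time variability is omitted for brevity, and setting couple_to_stimulus to False yields the de-coupled variant from Fig 3C and 3G):

    import numpy as np

    def simulate_trial(mu=1.0, B=2.0, Sz=0.5, B_com=1.0, eta=0.3, theta=1.0,
                       s=0.5, slope=-1.0, t_nd=0.3, time_out=0.8,
                       step=0.001, couple_to_stimulus=True, rng=None):
        """Return (choice, initial RT, change-of-mind latency or None), or None if no response."""
        rng = rng or np.random.default_rng()
        n_steps = int((0.8 + 1.5) / step)               # 800 ms response window + 1.5 s post-decision
        stim = np.clip(rng.normal(0, theta, n_steps), -theta, theta)  # truncated stimulus noise
        first_frame = stim[0] if couple_to_stimulus else rng.normal(0, theta)
        internal = rng.normal(0, eta)                   # internal across-trial drift variability
        dv = B / 2 + rng.uniform(-Sz / 2, Sz / 2)       # variable starting point
        choice, t_dec, com_rt = None, None, None
        for i in range(n_steps):
            t = i * step
            external = slope * t + s * first_frame      # decaying 'snapshot' bias (externalVar)
            noise = stim[i] + rng.normal(0, 0.1)        # stimulus-driven plus endogenous noise
            dv += (mu + external + internal) * step + noise * np.sqrt(step)
            if choice is None:
                if dv >= B or dv <= 0:                  # initial decision threshold crossed
                    choice, t_dec = int(dv >= B), t
                    com_bound = B - B_com if choice == 1 else B_com
                elif t > 0.8:
                    return None                         # initial response deadline missed
            elif t - t_dec > time_out * 1.5:
                break                                   # post-decisional time limit (timeOut) reached
            elif (choice == 1 and dv <= com_bound) or (choice == 0 and dv >= com_bound):
                com_rt = t - t_dec                      # change of mind occurs
                break
        if choice is None:
            return None
        return choice, t_dec + t_nd, com_rt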

Model-based psychophysical reverse correlation analysis

To understand the effects of pre-decisional evidence on participants’ behaviour, it was important to derive model-predicted kernels, which could be compared with the participants’ actual psychophysical kernels. By comparing these kernels, it was possible to test whether our new model variant was using ‘sensory’ evidence in the same way as the participants.

To construct model-predicted psychophysical kernels, we simulated 100,000 experimental trials from each model of interest. Critically, each model contained simulated stimulus noise (see section on stimulusNoise above), representing the moment-to-moment fluctuations in stimulus evidence that participants saw. After simulating, we sorted the trials into the four possible response outcome types, based on the model-predicted response. We then took the simulated stimulus noise for each trial and time-locked this to stimulus onset, the initial response, and the change of mind. Finally, we averaged this noise across trials with shared decision outcomes, yielding the model-based psychophysical kernel estimates.

Supporting information

S1 Text. Marginal effects plots and parameter estimates from the mixed-effects models.

(PDF)

S2 Text. Individual-level figures and analyses.

(PDF)

S3 Text. Model predictions and parameters.

(PDF)

S4 Text. Auxiliary analyses of the overall effect of pre-response evidence.

(PDF)

S5 Text. Auxiliary analysis of trials in which the signal favoured the incorrect response.

(PDF)

S6 Text. Parameter estimates for 1–200 ms and 200–400 ms mean evidence models.

(PDF)

Acknowledgments

We thank Milan Andrejević for helpful discussions.

Data Availability

All data and analysis/modelling code for this paper are available at https://osf.io/a6u4n/.

Funding Statement

This work was supported by an Australian Research Council (ARC) Discovery Project Grant [DP160103353] to S.B. and R.H. (https://www.arc.gov.au/grants/discovery-program/discovery-projects) and an Australian Government Research Training Program (RTP) Scholarship to W.T (https://www.education.gov.au/research-training-program). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009738.r001

Decision Letter 0

Samuel J Gershman, Woo-Young Ahn

8 May 2021

Dear Mr. Turner,

Thank you very much for submitting your manuscript "An initial ‘snapshot’ of sensory information biases the likelihood and speed of subsequent changes of mind" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by two independent reviewers. We had a hard time finding reviewers and apologize for the delay in making the initial decision. Both reviewers think this is an interesting study, which might constitute a novel contribution to the field. However, reviewer 1 has some concerns that need to be addressed. In light of the reviews (below this email), we would like to invite the resubmission of a significantly revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Woo-Young Ahn

Associate Editor

PLOS Computational Biology

Samuel Gershman

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Reviewer #1: In this paper, the authors explore how the quantity of evidence accumulated before making a choice continues to impact the likelihood of changing one’s mind later on. Participants were presented with two stimuli fluctuating in luminosity around two set mean values. On each trial, they had to decide which of the two stimuli was the brightest. After the initial choice, the stimuli continued to be displayed, leaving participants the possibility of changing their mind. Reverse correlating the stimulus fluctuations with the choices revealed that early evidence had a long-lasting impact on choice, as well as on the likelihood and speed of changes of mind. The authors used an extension of the Drift Diffusion Model to explain how early evidence can be used as a snapshot to which further evidence is compared and used to decide to change one’s mind.

The study is well conducted and certainly provides interesting new evidence on reversals of decisions. The paper is well written and easy to read, despite the complexity of the modelling and the statistics used. The main effect reported in the paper, namely the extension of the primacy bias beyond choice to the timing and likelihood of changes of mind, is novel and would certainly constitute a significant contribution to the field. I also found the modelling approach appropriate and interesting, allowing the behavioural data to be captured in a comprehensive manner.

My main concern, however, is the generalizability of the main finding of a long-lasting effect of an early snapshot of evidence on changes of mind. Although primacy effects have been reported in the literature, such an effect has not been observed in previous studies investigating changes of mind. Considering the novelty of the effect, it seems necessary to further test its robustness and explore alternative hypotheses. In particular, although reverse correlation is a powerful tool for understanding the dynamics of the decision process, it is also notoriously hard to interpret and can lead to biases in interpretation. As the whole paper is centred on reverse correlation to prove the presence of a primacy effect on changes of mind, I find that additional analyses and/or experiments would be necessary to justify its existence. I list below additional comments and concerns.

- Test of the primacy effect: As I explained above, I believe further tests should be performed to understand the relative effect of the initial snapshot of evidence, compared to future evidence, on changes of mind. In particular, I think different GLMs including alternative predictors, such as the overall pre-decisional signal strength or the evidence immediately preceding the response, should be tested to establish the best model predicting the occurrence of changes of mind.

- Generalizability of the primacy effect to different paradigms: One important question raised by these findings is how likely such a primacy effect is to be observed for other types of decisions and stimuli. The authors do not discuss whether the effect is specific to the luminance task they used. Indeed, luminance could be seen as a stimulus feature that by definition requires less temporal integration than motion perception, for instance, explaining why early momentary evidence might have such a strong influence on choice and changes of mind. I would suggest testing whether such a pattern of results would be observed with RDK stimuli or another stimulus discrimination task. This is important for understanding the scope of the findings. A paragraph on this should also be added to the discussion.

- Another concern is the fact that the authors used constant means for the low- and high-luminance stimuli. It is therefore possible for participants to treat the task not as a discrimination task but as a detection task, learning to detect the presence or absence of the high-luminance stimulus on one side of the screen only. Indeed, the primacy effect could be explained by an early “detection strategy”, whereby a high-luminance signal at the onset of the trial in one of the stimuli would lead participants to believe they had detected the high-luminance stimulus, while later accumulation of evidence would switch to a comparison of the evidence from the two stimuli. To prevent participants from using such a strategy, it would have been interesting to vary the overall mean luminance of the two stimuli across the design, while keeping the difference in luminance between them constant. That would have made it possible to determine whether the primacy effect on changes of mind is merely a by-product of the task design, with participants optimizing their choice process for that design. I would suggest that such an experiment be added to the design and that this hypothesis be discussed in the paper too.

- One important potential limitation of reverse correlation analysis is that it neglects the effect of the signal mean on choice, as it focuses only on fluctuations above and below the stimulus mean. However, the mean difference between the two signals certainly has an effect on choice. I would suggest that the authors provide a supplementary figure showing the unaltered values of the signal fluctuations in each of the luminance stimuli. That would give a better picture of the actual signal feeding into the decision and changes of mind. In particular, it would make it possible to determine whether the two stimuli have a symmetrical effect on choice and changes of mind.

- Following the same line of reasoning, I would suggest that the authors provide a further analysis to determine the proportion of trials where, by chance, the overall signal favoured the incorrect response. Is it possible that these trials drive the reverse correlation effect and certain types of changes of mind? Such trials may be difficult to analyse and classify as correct or incorrect, and should probably be removed from further analysis or analysed separately.

- Another question that arises from the present findings and modelling is what exactly the reported primacy effect means in terms of the weight given to each sample of information. In particular, while the reverse correlation result can be interpreted as early evidence having a stronger effect on changes of mind, and therefore being weighted more in the evidence accumulation process, the converse interpretation is also plausible: namely, that signal fluctuations early in the stimulus presentation are weighted less, and therefore need to be stronger to have an impact on changes of mind, which is why they appear to deviate from the mean in the reverse correlation time series. Can the authors comment on or discuss this?

- Reverse correlation methods: It is unclear from the methods how the analysis dealt with the absence of data points in the reverse correlation time series. Presumably, in the stimulus-locked graph, later time points are less populated? Similarly, in the response-locked time series, early time points are less populated. The authors should report how they dealt with the changing number of trials contributing to the reverse correlation time-courses, and how this could have affected their results (one standard approach is sketched below).
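
One standard way of handling such missing samples, whether or not it is what was done here, is to pad each trial's trace to a common length with NaNs, average with a NaN-aware mean, and report the number of contributing trials per time point. A minimal sketch, assuming response-locked traces:

    import numpy as np

    def kernel_with_counts(traces):
        # traces: list of 1-D arrays (one per trial), each aligned so that
        # the response falls on the final sample. Trials of unequal length
        # are right-aligned and padded with NaNs, so early time points are
        # simply averaged over fewer trials.
        max_len = max(len(t) for t in traces)
        padded = np.full((len(traces), max_len), np.nan)
        for i, t in enumerate(traces):
            padded[i, -len(t):] = t
        kernel = np.nanmean(padded, axis=0)          # NaN-aware average
        counts = np.sum(~np.isnan(padded), axis=0)   # trials per time point
        return kernel, counts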

- In Figure 3D&H, the authors report in the main text no differences in the psychophysical kernels for the first frame of evidence. However, a clear effect is visible in the figure. Can the authors explain?

Reviewer #2: In this experiment, the authors examine changes of mind in a simple perceptual decision. They find that early evidence (even the very first frame) influences both the likelihood of a subsequent change of mind and the speed at which that change of mind occurs. The authors propose a variant of the extended diffusion decision model in which initial sensory information exerts a slowly decaying bias on subsequent evidence accumulation.

This is a very interesting paper on an important topic. The novel approach to investigating the influence of initial information, and the data this approach yields, are a valuable contribution to the field. The paper also provides some important updates to an influential model of perceptual decision-making and changes of mind. A few comments:

- One key difference that might warrant at least a bit of discussion: the current approach uses a discrete binary response (a keypress), whereas a number of the studies cited in the introduction use a continuous response such as cursor movement, joystick movement, or hand movement. In those studies, changes of mind were often defined as a change in movement path. This may not be a major issue in interpreting the present data, but it may be worth addressing somewhere in the manuscript; it is possible that decision-making processes differ when continuous movements (which more easily allow for the initiation of movement before an initial decision is really finalized; e.g., Gallivan & Chapman, 2014) are involved.

- It might be helpful to have a bit more discussion of the other data (e.g., the change-of-mind-locked data), beyond the first frame of evidence, in addition to the figures that are presented. For example, is there anything worth discussing in Fig 1C and 1F?

- t-tests are conducted in Figure 1, but more detail could be provided as to how these are calculated. Are they based on participants' mean scores for residual evidence at each time point (as in the sketch below)?
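
If the tests are indeed based on participant means, the computation would presumably resemble the following sketch, in which each participant contributes one mean residual-evidence value per time point and a one-sample t-test against zero is run at each point; the data layout here is an assumption.

    import numpy as np
    from scipy import stats

    def pointwise_ttests(residuals):
        # residuals: (n_participants, n_timepoints) array, where each entry
        # is one participant's mean residual evidence at one time point.
        # Each time point is tested against zero across participants.
        t_vals, p_vals = stats.ttest_1samp(residuals, popmean=0.0, axis=0)
        return t_vals, p_vals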

- Lines 194-195 have a typo: "As such, change-of-mind decisions depend entirely on the quality of post-decisional."

- typo, line 383: "were excluded from all analyse."

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Lucie Charles

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009738.r003

Decision Letter 1

Samuel J Gershman, Woo-Young Ahn

17 Sep 2021

Dear Mr. Turner,

Thank you very much for submitting your manuscript "An initial ‘snapshot’ of sensory information biases the likelihood and speed of subsequent changes of mind" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic and think the authors have addressed their comments on an earlier version. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations. Please check the comments by the two reviewers, especially Reviewer #1 who requested an additional analysis (R1C2). 

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Woo-Young Ahn

Associate Editor

PLOS Computational Biology

Samuel Gershman

Deputy Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]

Reviewer's Responses to Questions

Comments to the Authors:

Reviewer #1: The authors have made a very good effort to address my initial set of comments, and most of the issues I raised are now resolved. However, I still believe there are some points regarding the primacy effect that need to be addressed.

I appreciate that the authors are not claiming that only the first frame of evidence influenced the occurrence of changes of mind (fortunately). However, as this seems to be the main point they are focusing on, and the one that justifies the novelty of the paper (as clearly emphasized in the title and abstract), I still find that some analyses are missing to understand the relevance of this effect for changes of mind. I agree with the authors that it is novel and worth reporting; however, I think it is equally important not to be misleading in the interpretation of the results, and to report clearly the relative importance of this primacy effect compared with later evidence in the accumulation process leading to changes of mind.

I detail below how I think the authors should address this.

R1C2

I appreciate that the authors have run an additional analysis confirming that the mean pre-response evidence predicted the probability and latency of changes of mind.

1/ I think this is an important result that should be moved from the S4 appendix to the main text.

2/ I think an additional analysis should be run to compare the predictive power of the initial frame of evidence and the rest of the pre-response window. According to figure S7, it looks like the mean of the pre-response window is a better predictor of the occurrence of changes of mind than the initial snapshot singled out by the authors. However, this should be tested and quantified. As the mean of the whole pre-response window is not comparable to the evidence presented in only one frame, I would suggest using a different approach than computing the mean pre-response evidence.

One possibility would be to run the same GLM analysis comparing the predictive power of the initial snapshot with that of another sample 100 or 200 ms later, correcting for multiple comparisons. Another possibility would be to compute the mean over the 0-200 ms time window and compare it to the mean over the 200-400 ms window, for instance (a sketch of this second option follows below).
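
In code, the second option might look like the sketch below, which assumes a trials-by-frames matrix of signed stimulus-locked evidence and a 60 Hz frame rate; the window boundaries and variable names are illustrative.

    import numpy as np
    import statsmodels.api as sm

    def window_mean_glm(evidence, com, frame_ms=16.7):
        # evidence: (n_trials, n_frames) signed stimulus-locked evidence
        # com: (n_trials,) 1 if a change of mind occurred, else 0
        f200 = int(round(200 / frame_ms))            # frames in 0-200 ms
        f400 = int(round(400 / frame_ms))            # frames up to 400 ms
        early = evidence[:, :f200].mean(axis=1)      # 0-200 ms window mean
        late = evidence[:, f200:f400].mean(axis=1)   # 200-400 ms window mean
        X = sm.add_constant(np.column_stack([early, late]))
        fit = sm.Logit(com, X).fit(disp=0)
        return fit.params, fit.pvalues               # compare the two windows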

R1C3

I thank the authors for their detailed reply and the additional results presented. One thing that might be worth mentioning in the discussion of the generalisability of the effect is the issue of temporal expectancy. As, in this design, the stimulus always appeared at a predictable time following the start of the trial, could it be that participants learned to focus their attention on the moment when the initial frame of the stimulus would appear?

R1C6

Thanks for analysing the proportion of trials in which the signal favoured the incorrect response. I agree the proportion is small enough to have a negligible effect, but it is still worth reporting in the supplementary material.

R1C7

Thanks for reporting the interesting finding of the model with the down-weighting of the initial evidence. I think that strengthens the point made in the paper, and it is well highlighted in the updated part of the discussion.

Reviewer #2: The authors have done an excellent job addressing the comments made on the initial submission, and I think this paper is a strong contribution to the field. I only have a couple of minor comments:

1. The figures included in this version are quite blurry - hopefully this is something the authors can fix in the final submission.

2. Line 285 typo: "relative rarity of these trial" should be "trials".

3. For the added paragraph about continuous vs. discrete responses in the discussion, it might also be worth noting that continuous responses involve more physical effort, which may be a contributing factor to the likelihood and nature of changes of mind (several studies showing this are already cited in that paragraph).

4. Regarding the discussion on temporal integration and the RDK tasks, it may be worth discussing or speculating on real-world examples where changes-of-mind may occur that do (or do not) involve temporal integration. In other words, regarding R1's concerns about generalizability, perhaps the initial snapshot of evidence is only useful in a subset of cases. Is it likely that many real-world behaviors involving changes of mind are covered by these cases?

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009738.r005

Decision Letter 2

Samuel J Gershman, Woo-Young Ahn

9 Dec 2021

Dear Mr. Turner,

We are pleased to inform you that your manuscript 'An initial ‘snapshot’ of sensory information biases the likelihood and speed of subsequent changes of mind' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Woo-Young Ahn

Associate Editor

PLOS Computational Biology

Samuel Gershman

Deputy Editor

PLOS Computational Biology

***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Reviewer #1: I thank the authors for their careful consideration of my comments and their detailed reply.

I think it is fine to leave Figure S7 in the supplementary material, considering that the result of the corresponding analysis is now mentioned in the main text.

I appreciate that the authors have now performed the analysis of the rest of the pre-response time-window and thank them for their interesting comment regarding their interpretation of the finding.

One could consider that the finding that the 200-400 ms time window is a better predictor of changes of mind than the early snapshot of evidence weakens the main point of the paper. However, I think the fact that this is now explicitly stated and discussed in the main text means an attentive reader cannot be misled into misinterpreting the effect of early evidence on changes of mind. And I fully agree with the authors that the fact that the first frame of evidence is predictive of later changes of mind is a theoretically interesting finding and worth reporting.

I leave it to the editor to review this last point, but it seems to me that the paper makes an interesting and thought-provoking contribution to the field, and I therefore recommend publication.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Lucie Charles

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009738.r006

Acceptance letter

Samuel J Gershman, Woo-Young Ahn

17 Dec 2021

PCOMPBIOL-D-21-00441R2

An initial ‘snapshot’ of sensory information biases the likelihood and speed of subsequent changes of mind

Dear Dr Turner,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Anita Estes

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Text. Marginal effects plots and parameter estimates from the mixed-effects models.

    (PDF)

    S2 Text. Individual-level figures and analyses.

    (PDF)

    S3 Text. Model predictions and parameters.

    (PDF)

    S4 Text. Auxiliary analyses of the overall effect of pre-response evidence.

    (PDF)

    S5 Text. Auxiliary analysis of trials in which the signal favoured the incorrect response.

    (PDF)

    S6 Text. Parameter estimates for 1–200 ms and 200–400 ms mean evidence models.

    (PDF)

    Attachment

    Submitted filename: Turner_Response_to_Reviewers.docx

    Attachment

    Submitted filename: Turner_Response_To_Reviewers.docx

    Data Availability Statement

    All data and analysis/modelling code for this paper are available at https://osf.io/a6u4n/.

