PLOS Computational Biology. 2021 Jul 15;17(7):e1009096. doi: 10.1371/journal.pcbi.1009096

Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection

Gustav Markkula 1,*, Zeynep Uludağ 2,¤, Richard McGilchrist Wilkie 2, Jac Billington 2
Editor: Marieke Karlijn van Vugt
PMCID: PMC8282001  PMID: 34264935

Abstract

Evidence accumulation models provide a dominant account of human decision-making, and have been particularly successful at explaining behavioral and neural data in laboratory paradigms using abstract, stationary stimuli. It has been proposed, but with limited in-depth investigation so far, that similar decision-making mechanisms are involved in tasks of a more embodied nature, such as movement and locomotion, by directly accumulating externally measurable sensory quantities of which the precise, typically continuously time-varying, magnitudes are important for successful behavior. Here, we leverage collision threat detection as a task which is ecologically relevant in this sense, but which can also be rigorously observed and modelled in a laboratory setting. Conventionally, it is assumed that humans are limited in this task by a perceptual threshold on the optical expansion rate–the visual looming–of the obstacle. Using concurrent recordings of EEG and behavioral responses, we disprove this conventional assumption, and instead provide strong evidence that humans detect collision threats by accumulating the continuously time-varying visual looming signal. Generalizing existing accumulator model assumptions from stationary to time-varying sensory evidence, we show that our model accounts for previously unexplained empirical observations and full distributions of detection response. We replicate a pre-response centroparietal positivity (CPP) in scalp potentials, which has previously been found to correlate with accumulated decision evidence. In contrast with these existing findings, we show that our model is capable of predicting the onset of the CPP signature rather than its buildup, suggesting that neural evidence accumulation is implemented differently, possibly in distinct brain regions, in collision detection compared to previously studied paradigms.

Author summary

Evidence accumulation models of decision-making propose that humans accumulate noisy sensory evidence over time up to a decision threshold. We demonstrate that this type of model can describe human behavior well not only in abstract, semi-static laboratory tasks, but also in a task that is relevant to human movement in the real world. Specifically, we show that a model directly accumulating the continuously time-varying visual looming (optical expansion) of an approaching obstacle explains full probability distributions of when humans can detect this collision threat. Using electroencephalography, we find indications that this type of evidence is accumulated differently in the brain compared to evidence accumulation in previously studied, more abstract tasks. Our experimental paradigm, model, and findings open the way for wider application of this type of decision-making model to laboratory and real-world tasks with ecologically relevant, time-varying sensory evidence, and for further studies into how such decisions are implemented neurally. There are also societal implications: In applied safety research and traffic accident litigation it is conventionally assumed that human collision detection is limited by a fixed perceptual threshold, an assumption that our results show to be highly inaccurate.

Introduction

Human decision-making is a long-standing research topic, spanning disciplines such as psychology, neuroscience, economics, and human factors [1–5]. In recent decades, evidence accumulation models (also known as drift diffusion or sequential sampling models) have emerged as one dominant account, positing that decisions are made once noisy evidence has been integrated over time up to a decision threshold [6–11]. These models have been successful at explaining distributions of behavioral choices and response times across numerous laboratory paradigms, e.g., where participants make categorization decisions about ambiguous stimuli, or choose between options with different subjective or objective value [6–11]. There is also strong neurophysiological support for the idea that the brain indeed implements something akin to evidence accumulation in these types of tasks [5, 10–12]. Notably, there is mounting support for the idea that signatures of neural evidence accumulation can be observed using human electroencephalography (EEG), in the form of a centroparietal positivity (CPP) that builds up during deliberation and peaks when the overt response is made [13–19]. However, computational modelling of evidence accumulation decision-making has so far focused on laboratory paradigms using stimuli that (i) have stationary or only intermittently and/or noisily changing saliency over time [7, 18, 20–24], and (ii) are abstract in nature, typically not mapping directly to any real-world task.

It is currently an open question whether decision-making is well-described by evidence accumulation models in less cognitive and more embodied task contexts, relating to human sensorimotor control, movement, and locomotion in the real world. The nervous system performs myriad choices of motor actions to perform tasks such as keeping the body upright [25], balancing a stick [26], intercepting a ball [27] or avoiding collisions with other cars while driving [28]. Do evidence accumulation mechanisms play a role also in these contexts? [29] One challenge in answering this question lies in the nature of the sensory evidence being used: A hallmark feature of real-world sensorimotor behaviors is that they depend on continuously time-varying sensory stimuli, such as joint angles, sight point rotations or optical expansion rates, of which the exact, externally measurable values are important for successfully shaping the behavior [25–29]. This is in stark contrast with most existing evidence accumulation modeling work, which has emphasized stationary evidence (in part because this enables computationally efficient model-fitting [30]), with the rate of evidence in the model typically fitted as a free parameter per experimental condition, without a mechanistic link to the properties of the external stimulus. We and others have begun exploring accumulation models of which the input evidence instead scales directly with external sensory data, in tasks such as stick-balancing [31], visual and vestibular judgment of self-motion [32–34], longitudinal and lateral control in car driving [29, 35–38], and road-crossing decisions [39, 40], but these studies have so far not performed model testing and selection at the same level of detail as is typical in the broader evidence accumulation model literature.

Here, we aim to close this gap by developing and studying a paradigm where participants detect onset of visually looming (optically expanding) collision threats. We chose this task because it is an ecologically relevant task with time-varying sensory evidence, and accumulation of visual looming has also been suggested–but not conclusively proven–in several of the mentioned previous studies [36–38], yet this task nevertheless permits collection of large numbers of repetitions in a controlled laboratory environment, enabling detailed model fits of full per-participant probability distributions of response. There are also some specific predictions to test: Conventionally, it is assumed that humans can detect collision threats once the rate of optical expansion of the obstacle’s projection onto the observer’s retina exceeds a looming detection threshold (LDT) [41–43]. This LDT assumption has been adopted in basic perceptual psychology research into collision avoidance and target interception [44, 45], time-to-contact estimation research [46–48], sports science [49, 50] and applied research in the road traffic safety domain [51–56]. Notably, the LDT assumption is also used in traffic accident litigation, to answer questions about whether an appropriately attentive driver should have been able to avoid a crash [57, 58]. However, some of the early literature on the LDT reported that the kinematics of the collision course (i.e., the movement trajectories of the observer and collision object) could seemingly affect the value of the threshold itself [41, 42, 52]. We have previously proposed that such kinematics-dependencies in human collision detection ability could be understood if, instead of an LDT, collision threat detection were determined by evidence accumulation of the visual looming signal [35], a hypothesis which we test here.
We also complement our behavioral observations with concurrent EEG recordings, to investigate whether the previously reported CPP signature could be observed in our paradigm, and if so whether the nature of this neural signature aligned with the predictions of our time-varying evidence accumulation model.

Results

We conducted an experiment where we simultaneously recorded participants’ overt looming detection responses and concurrent EEG. Rather than opting for the type of abstract stimuli conventionally used in neuroscientific research on collision perception [59, 60], to emphasize the connection to real-world collision threat detection we instead chose to create a laboratory version of the driving test track experiment by Lamble et al. [52]. In their experiment, participants followed a lead vehicle at either 20 m or 40 m distance, and pressed their car’s brake pedal as soon as they saw the lead vehicle come closer, which it did by a 0.7 m/s2 deceleration (not accompanied by a brake light signal). In our laboratory implementation, illustrated in Fig 1A and described in detail in Materials and methods, participants were instructed to fixate a location on a screen, where an image of the back of a car appeared at an appropriate optical size for either 20 m or 40 m viewing distance, with minor horizontal and vertical perturbations over time. The car image either remained the same size for 7 s before disappearing (catch trials; 16.7% of the total number of trials) or began, after a random delay in the 1.5–3.5 s range, to optically expand, recreating the looming trajectory of a decelerating lead vehicle (i.e., accelerating toward the observer). Participants were instructed to press a key as soon as they saw the car “coming closer” (i.e., growing on the screen). Lamble et al. also included non-foveal detection conditions, which we omitted. Instead, we extended the design by including a 0.35 m/s2 lead vehicle deceleration, for a total of four kinematical trajectories, with distinct profiles of visual looming, as shown in Fig 1A.
We denote the projected optical angle of the lead vehicle stimulus on the participant’s retina θ, and its optical expansion rate θ˙=dθ/dt, increasing nonlinearly with time both because of the vehicle acceleration and because the visual angle of an object is (approximately) proportional to the inverse of its distance from the observer. Note also that in each looming condition there was a direct relationship between response time (the horizontal axis in Fig 1A and 1B) and optical expansion rate at response (the vertical axis in Fig 1A and 1C). To align with the existing literature on looming detection, our basic inferential testing on the behavioral data focused on θ˙ at response, whereas to align with the literature on evidence accumulation modeling, our model-fitting was instead focused on distributions of response time.
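The looming geometry described above can be made concrete with a short numerical sketch. For a lead vehicle of width W at gap distance d, the optical angle is θ = 2·arctan(W/2d), and differentiating gives θ˙ = −W·ḋ/(d² + W²/4). The code below is illustrative only: the vehicle width of 1.85 m is an assumed value, not one stated in the text, and the constant-acceleration gap closure is a simplification of the experiment's stimulus trajectories.

```python
import numpy as np

def looming_trajectory(d0, accel, width=1.85, t_max=7.0, dt=0.01):
    """Optical angle theta and expansion rate theta_dot for a lead
    vehicle whose gap to the observer closes with constant relative
    acceleration, starting from distance d0 at zero relative speed.

    d0    : initial gap distance (m), e.g. 20 or 40
    accel : closing acceleration (m/s^2), e.g. the 0.7 or 0.35
            lead-vehicle decelerations
    width : assumed vehicle width (m); illustrative value
    """
    t = np.arange(0.0, t_max, dt)
    d = d0 - 0.5 * accel * t ** 2            # gap distance over time
    keep = d > width                          # stop well before contact
    t, d = t[keep], d[keep]
    theta = 2.0 * np.arctan(width / (2.0 * d))                # optical angle (rad)
    d_dot = -accel * t                                        # closing speed
    theta_dot = -width * d_dot / (d ** 2 + width ** 2 / 4.0)  # chain rule
    return t, theta, theta_dot
```

As in Fig 1A, θ˙ grows nonlinearly with time, and at any given time it is smaller for the 40 m initial distance than for the 20 m one.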

Fig 1. Overview of paradigm, model, and behavioral results.


(A) In each trial, participants fixated a target, at which an image of the back of a car appeared, and after a variable time delay began to optically expand following one of four different looming trajectories (solid lines; colors per corresponding kinematical conditions as indicated in the boxed legend). The dotted line shows the across-experiment average optical expansion rate θ˙mean at which participants reported detection of this visual looming. The shaded regions all have the same area, to illustrate why evidence accumulation predicts detection at lower θ˙ in conditions where this quantity increases more slowly. (B) Histograms show the participants’ detection response times across the entire experiment for the fastest and slowest of the four looming conditions, overlaid with the corresponding predictions from per-participant maximum likelihood fits of the variable-gain accumulator model (model AV; thick solid lines). This model posits that the visual looming evidence shown in panel A is integrated over time together with normally distributed noise, up to a fixed threshold at which detection occurs (example time histories of noisy integration of the looming input shown as thin solid lines, with circle symbols at the fixed decision threshold). The triangle symbols indicate the response times that would be predicted by a conventional looming detection threshold model, with θ˙mean as threshold. (C) Optical expansion rates at which the participants (histograms) and fitted accumulator model (lines) reported detection, in the four different looming conditions. The dotted lines again show θ˙mean, and the black crosses indicate the detection thresholds reported in [52] for the same kinematical conditions.

Overt responses refute the fixed looming detection threshold assumption

After exclusion of a small minority of trials for early (0.6%) and missing (0.2%) detection responses, and a larger number of trials for electrooculographic indications of eye blinks (15.9%; see Materials and methods for details), the final data set included 22 participants, with an average of 182 trials per participant (an average of 46 trials per looming condition). Fig 1C shows that our data replicated the kinematics-dependency reported by Lamble et al. [52], with detection occurring at lower average optical expansion rates θ˙ for the larger initial distance (F(1, 3698) = 1255.48; p < .0001). Also in an absolute sense, our θ˙ values at detection were similar to those observed in the test track experiment (black crosses in Fig 1C), but slightly lower, potentially due to the reduced noise in the laboratory environment and the use of a finger key press instead of a foot pedal to report response.

As initially suggested in [35], detection at lower θ˙ values for larger initial distances is predicted by a looming accumulation account, because looming grows more slowly from larger distances, and because accumulation (i.e., integration) of a small θ˙ over a long time is equivalent to accumulation of a large θ˙ over a short time; see the shaded areas in Fig 1A. Similarly, for lower deceleration magnitudes, where looming develops even more slowly, the looming accumulation account also predicts detection at further decreased θ˙ values. This was the motivation for the inclusion of the 0.35 m/s2 deceleration condition, and the observed θ˙ values at detection were indeed further reduced for this lower magnitude of deceleration (Fig 1C; F(1, 3698) = 810.26; p < .0001).

These behavioral findings strongly reinforce the idea that looming detection occurs at magnitudes of optical expansion rate that are dependent on the kinematics of the collision course, in contrast with the conventional LDT assumption of a situation-independent threshold for detection. The triangle symbols in Fig 1B indicate the response times that would be predicted by a situation-independent looming threshold fixed at the average θ˙ at detection observed across this experiment. These LDT predictions are too early in fast looming conditions, and too late in slow looming conditions, which is precisely the qualitative pattern of errors that one would expect to see if participants’ responses were instead determined by evidence accumulation of optical expansion rate.

From a methodological point of view it is worth noting that our behavioral analyses also identified statistically significant effects of experimental block (slightly increased looming sensitivity in later blocks) and the 1.5–3.5 s pre-looming wait time (slightly increased looming sensitivity with increased pre-looming wait time). These effects were substantially smaller than the effects of looming condition and between-participant differences (see Table B in S1 Appendix), and were therefore not separated out in the subsequent model fitting described below.

A visual looming accumulator model accounts for full detection distributions

As illustrated in Fig 1B, the looming accumulation hypothesis can be computationally formalized as a single-boundary accumulator (or drift diffusion) model, with its rate of evidence accumulation (sometimes referred to as “drift rate”) at each point in time determined by the momentary optical expansion rate, multiplied by some gain, and where overt detection response occurs once an evidence threshold is reached. Noise in the evidence accumulation process (e.g., due to noisy sensory input, interference from other brain activity, or both) gives rise to variability, i.e., probability distributions of response time. It may be noted that our model, like previous evidence accumulation models of detection of intermittent, subtle changes in abstract stimuli [14, 16], effectively implements Page’s cumulative sum (‘CUSUM’) technique for change detection [61].
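The core of this accumulator can be sketched as a discretized stochastic integration, dE = K·θ˙(t)·dt + σ·dW, with a detection response at the first crossing of the threshold E = 1 plus a fixed non-decision time. This is a minimal sketch of the model-A idea; the parameter values in the usage below are illustrative, not the fitted estimates from the paper.

```python
import numpy as np

def simulate_detection(theta_dot, dt, K, sigma, t_nd, n_trials, rng):
    """Monte Carlo simulation of a looming accumulator: evidence E
    follows dE = K * theta_dot(t) * dt + sigma * dW, and a detection
    response is emitted at the first crossing of the threshold E = 1,
    plus a fixed non-decision time t_nd.

    Returns an array of response times; NaN marks trials with no
    threshold crossing within the simulated interval."""
    n_steps = len(theta_dot)
    drift = K * theta_dot * dt                       # deterministic increment
    noise = sigma * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    E = np.cumsum(drift[None, :] + noise, axis=1)    # accumulated evidence
    crossed = E >= 1.0
    first = np.argmax(crossed, axis=1)               # index of first crossing
    rt = first * dt + t_nd
    rt[~crossed.any(axis=1)] = np.nan                # no response in trial
    return rt
```

Running this with a faster- and a slower-growing θ˙ input reproduces the qualitative behavioral pattern: later responses, at lower momentary θ˙, in the slow condition.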

The conventional LDT assumption is completely deterministic, and as such does not make predictions about probability distributions. However, one might consider a stochastic threshold model, positing that looming detection occurs once a noisy optical expansion rate signal first exceeds a fixed threshold. In fact, such a model would also predict the qualitative findings reported above, since in conditions where looming develops slowly, there would be more time for large noise values to occur by chance, thus eventually exceeding the threshold even if the sensory signal itself is still sub-threshold. We therefore tested also this type of model, and compared it to the accumulator model.
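A stochastic threshold model of this kind can be sketched in the same style as the accumulator above: the looming signal plus independent per-sample noise is compared against a fixed threshold, with detection at the first exceedance. This is an illustrative sketch of the model-T idea, with made-up parameter values.

```python
import numpy as np

def threshold_model_rt(theta_dot, dt, thresh, noise_sd, t_nd, n_trials, rng):
    """Stochastic looming threshold model: detection occurs at the
    first sample where the noisy looming signal theta_dot + noise
    exceeds a fixed threshold. Returns response times; NaN marks
    trials with no threshold exceedance."""
    noisy = theta_dot[None, :] + noise_sd * rng.standard_normal(
        (n_trials, len(theta_dot)))
    crossed = noisy > thresh
    first = np.argmax(crossed, axis=1)      # index of first exceedance
    rt = first * dt + t_nd
    rt[~crossed.any(axis=1)] = np.nan
    return rt
```

For a signal held just below threshold, this model still produces occasional detections purely by chance, and the longer the signal stays sub-threshold, the more opportunities for such chance crossings, which is the qualitative effect described in the text.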

Due to the time-varying drift rate in our accumulator model, there is no closed-form expression for its response time distribution [62]; we estimated these distributions numerically instead. All considered models were fitted per participant, both using maximum likelihood estimation (MLE) and approximate Bayesian computation (ABC), to the per-participant response time distributions in each looming condition. The emphasis in this paper is on the MLE results, whereas the ABC results, mostly reported in the Supporting information, provide additional confirmation of the key conclusions, a more complete view of model parameter estimates, and allowed us to follow up on auxiliary questions which would have been computationally prohibitive under the MLE approach.
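When no closed-form response time distribution exists, one common numerical route is to estimate the density from a large set of model-simulated response times and evaluate the observed data against it. The sketch below illustrates this general simulation-based likelihood idea with a simple histogram estimator; the exact numerical estimator used in the paper may differ.

```python
import numpy as np

def sim_log_likelihood(observed_rt, simulated_rt, bins):
    """Approximate log-likelihood of observed response times under a
    model, using a histogram of model-simulated response times as a
    numerical estimate of the RT density. A small floor avoids
    log(0) for observations falling in empty bins."""
    density, edges = np.histogram(simulated_rt, bins=bins, density=True)
    idx = np.clip(np.searchsorted(edges, observed_rt) - 1, 0, len(density) - 1)
    return np.sum(np.log(np.maximum(density[idx], 1e-12)))
```

Maximizing this quantity over model parameters (e.g., with a grid search or a general-purpose optimizer), separately per participant, gives the MLE fits referred to in the text.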

Fig 2A shows detection response time distributions, visualized as cross-participant averages (using the “Vincentizing” method [63]) of per-participant data and MLE model fits. The threshold model (model T) does indeed capture the qualitative effect of looming condition, but is unable to accurately reproduce the location and shape of the response time distributions. Similarly to the patterns shown in Fig 1A for the deterministic threshold model, model T has a tendency to predict responses that are substantially too late in slow looming conditions. The accumulator model (model A) does not have this problem, and achieves a noticeably better fit despite having the same number of free parameters as model T.
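The Vincentizing idea is to average per-participant quantiles at matched probabilities, rather than pooling raw response times across participants; a minimal sketch:

```python
import numpy as np

def vincentize(rt_per_participant, probs):
    """Vincentized group CDF: compute response-time quantiles at the
    same probabilities for each participant, then average the
    quantiles across participants. Returns the probabilities and the
    averaged quantiles (the x-values of the group CDF)."""
    q = np.array([np.quantile(rts, probs) for rts in rt_per_participant])
    return np.asarray(probs), q.mean(axis=0)
```

This preserves the shape of the typical individual distribution, which simple pooling can smear out when participants differ in overall speed.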

Fig 2. Model comparisons.


(A) Averaged (“Vincentized”) per-participant cumulative density functions (CDFs) of looming detection response time, for human participants and the maximum-likelihood-fitted threshold model (T), accumulator model (A), and variable-gain accumulator model (AV), in the four looming conditions. (B) Per-participant differences in Akaike Information Criterion (AIC) for the T → A and A → AV model comparisons. Negative ΔAIC values indicate preference for the latter model in the comparison.

When modelling perceptual decision-making in paradigms with stationary stimulus saliency, it has often been found that assuming between-trial variability of the stationary rate of evidence accumulation is needed to closely reproduce human response time distributions [10]. Analogously, we investigated an extended version of our basic accumulator model, where the input gain applied to the optical expansion rate was not constant per participant, but instead drawn at random per trial from a normal distribution, of which the standard deviation thus becomes an additional free model parameter. This extended model (model AV) produced distributions that more closely matched those of the human detection responses. Fig 2B shows the relative goodness of fit of the three models, in terms of differences in Akaike Information Criterion (AIC). These results indicate a very strong preference for the accumulator model over the threshold model for all participants but one, with an average ΔAIC of -43.2 (a difference of more than 14 suggests “very strong support” for the preferred model [64]). This analysis also indicated that for most participants, the additional model complexity introduced by the input gain variability (model AV) was warranted given the improvements in model fit (average ΔAIC = -29.1). Per-participant fits for models T, A, and AV are shown in Fig A in S1 Appendix. Model AV had four free parameters: the non-decision time TND, the accumulator noise intensity σ, and mean and standard deviation K and σK of the looming input gain; estimated values for these parameters across participants are shown in Fig B in S1 Appendix.
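The AIC comparison follows the standard definition, AIC = 2k − 2 ln L, with a model1 → model2 difference ΔAIC = AIC(model2) − AIC(model1), so that negative values favor the latter model; a minimal sketch:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def delta_aic(ll_1, k_1, ll_2, k_2):
    """AIC difference for a model1 -> model2 comparison; negative
    values indicate preference for model2."""
    return aic(ll_2, k_2) - aic(ll_1, k_1)
```

Applied per participant (e.g., a 3-parameter model T fit versus a 3-parameter model A fit, or model A versus the 4-parameter model AV), these differences can then be averaged or summed across participants as in Fig 2B.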

The ABC analysis also favored the accumulator model over the threshold model. The geometric mean of the per-participant Bayes factor (an estimate of the expected Bayes factor for hypothetical additional participants [65, 66]) in favor of the accumulator model was 3.0–7.3 (“substantial evidence” [67]), depending on the choice of the ABC distance threshold hyperparameter ϵRT, with the highest Bayes factors for the more stringent ϵRT (see Fig E in S1 Appendix). The ABC comparison of accumulator models with and without variable gain was inconclusive (geometrical mean Bayes factor 1.4–1.5 in favor of model A), possibly because of the relatively broad priors we used, which may have excessively penalized model complexity [68].
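In its simplest rejection-sampling form, ABC draws parameters from the prior, simulates data from the model, and accepts draws whose distance to the observed data falls below a tolerance. The toy sketch below uses the mean absolute difference between deciles as the distance; the summary statistics, distance, and tolerance ϵRT used in the paper will differ.

```python
import numpy as np

def abc_rejection(observed, simulate, sample_prior, eps, n_draws, rng):
    """Rejection-sampling ABC: keep prior draws whose simulated data
    lie within tolerance eps of the observed data, here measured as
    the mean absolute difference between deciles (illustrative).

    simulate(theta, rng) -> simulated data for parameters theta
    sample_prior(rng)    -> one draw from the prior"""
    probs = np.linspace(0.1, 0.9, 9)
    obs_q = np.quantile(observed, probs)
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior(rng)
        sim_q = np.quantile(simulate(theta, rng), probs)
        if np.mean(np.abs(sim_q - obs_q)) < eps:
            accepted.append(theta)
    return accepted
```

The accepted draws approximate the posterior; with equal prior model probabilities, the ratio of acceptance rates for two models gives a simple estimate of the Bayes factor between them.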

Using both MLE and ABC methods we also investigated other model variants, incorporating gating of the looming input (requiring it to exceed a minimum threshold before contributing to the accumulation) or evidence leakage (as a form of short-term memory decay), as well as more complex models incorporating different combinations of these various model assumptions. As further described in S1 Appendix, none of these alternative models were found preferable over the variable-gain accumulator model (model AV).

As an additional test of this best-performing model, we also examined its predictions in response to variations in pre-looming wait time. As mentioned above, the model-fitting was blind to this experimental manipulation. However, as shown in Fig C in S1 Appendix, model AV nonetheless predicted the observed pattern of increased looming sensitivity with increased pre-looming wait times, with approximately correct magnitudes.

Looming accumulation explains onsets of pre-response scalp potentials

Fig 3 illustrates the main EEG findings. The response-locked scalp maps in Fig 3A show a positivity at the overt response, in line with the CPP observed by many others [13–19]. Fig 3B and 3C show stimulus-locked and response-locked ERPs, per condition, averaged over five electrodes centered on Pz. The original paper on the CPP centered its analysis on the CPz electrode location [13], but many subsequent reports have shown more parietally located CPPs, consistent with what we observe here [14–16, 18, 69]. Again in line with previous observations, Fig 3C shows that this positive wave builds up before the overt response and peaks at the response itself, including a characteristic separation at this peak, with higher CPP amplitudes for the more salient looming conditions [13, 14, 16, 18]; see the dots along the bottom of Fig 3C.

Fig 3. EEG results.


(A) Grand average scalp potentials shortly before, at, and after overt detection response. (B) Event-related potentials (ERPs) relative to the looming stimulus onset, averaged over five electrodes centered at Pz (marked in A), in the four different looming conditions. (C) The same four ERPs as in (B), but instead response-locked, i.e., relative to the time of overt detection response. The dots along the bottom show where an ANOVA indicated a statistically significant main effect of looming condition on these ERPs. (D) As panel C, but without high-pass filtering and ocular artefact removal. (E) Response-locked average accumulated evidence E in each looming condition, for the best-fitting variable-gain accumulator model AV. Note that the model traces converge at the decision threshold E = 1; the exact location of this point in the plot depends on how much of the non-decision time in the model is assumed to be due to sensory and motor delays, respectively. (F) Estimated onsets of the pre-response centroparietal positivity (CPP) relative to the overt detection response, in the four looming conditions. This includes the (17 out of 22) participants for which the CPP onsets could be reliably estimated. The black markers indicate condition means. The intervals plotted in black and gray at the bottom of the panel show, respectively, the maximum observed difference between condition means, and the upper edge of a 95% confidence interval for this difference. (G) Averaged (“Vincentized”) per-participant cumulative density functions (CDFs) of CPP onset time relative to the looming stimulus onset, for the participants and the maximum-likelihood-fitted variable-gain accumulator model. (H) The top panel shows per-participant AIC differences for the T → A and A → AV model comparisons, when fitted to the CPP onset data. The bottom panel shows the total cross-participant sums of AIC differences, with 95% confidence intervals.

However, the CPP we observe here differs in at least two respects from previous observations. First, the build-up of the CPP has previously been reported to be of similar duration as the overt response times, typically only 100–200 ms shorter [13–18]. This is consistent with the idea that the CPP reflects evidence accumulation that starts shortly after stimulus presentation. In contrast, average response times per looming condition in our experiment ranged from 1.1 to 2.4 s, yet it is clear from Fig 3C that the average CPP build-up duration was shorter than 0.5 s for all conditions. Second, since in previous studies CPP increase over ERP baseline has been obvious soon after stimulus onset, conditions with slower responses (typically due to less salient stimuli) have produced CPP profiles with build-up commencing earlier in time before the overt response [13–15, 17]. In contrast, in our response-locked ERP data, there is no obvious separation between conditions in when the CPP build-up commences.

Fig 3D shows that the CPP signal is subtly affected by EEG pre-processing choices (most notably, the 0.1 Hz high-pass filter we used causes a slight suppression to negative voltage before the CPP build-up; see further Fig F in S1 Appendix), but the general aspect of a late, condition-independent CPP onset remains. Meanwhile, as can be seen in Fig 3E, the onset of evidence build-up in our behavioral model was both early and condition-dependent, as expected since the model directly accumulates the looming input.

Previous studies have indicated that purely behavioral fits of evidence accumulation models can yield model evidence profiles that align qualitatively with the corresponding CPP [14, 15, 1719], but as described above this was not the case here. One possible reason for this could be that our models were flexible enough to achieve good behavioral fits for a range of parameterizations, with a range of widely different evidence build-up profiles, and that our purely behavioral fits were therefore not enough to observe an alignment between model evidence and recorded CPP signatures. Therefore, using ABC, we investigated whether fitting the models simultaneously to the behavioral and neural data could identify a behaviorally well-fitting model which did not exhibit early separation of accumulated model evidence, thus aligning better with our CPP observations, but no such model was identified; see Fig J in S1 Appendix.

A different possible explanation for our late, rapid, and seemingly condition-independent CPP could be that (i) the looming accumulation process indicated by the behavioral modeling results is not directly reflected in the CPP, and (ii) the observed CPP instead reflects a second stage of the response decision process, which begins only once the looming accumulation process reaches threshold. To further investigate this hypothesis, we estimated full distributions of CPP onset times from the EEG data, by averaging ERPs over trials with similar response times, to increase signal-to-noise ratio, and identifying, in each averaged ERP, the time of the last upward crossing of 30% of its value at the overt response. We found that this allowed us to reliably estimate CPP onsets for 17 out of the 22 participants. We then investigated our hypothesis using this dataset in two ways: Firstly, as illustrated in Fig 3F, we analyzed the distributions of CPP onset time relative to overt response. In line with our hypothesis that the process reflected in our CPP has a duration that is independent of looming condition, we found that the largest difference in the average CPP onset between two conditions was 22 ms (interval marked in black at bottom of Fig 3F) and that the upper edge of a 95% confidence interval for this difference was 79 ms (interval marked in gray). In other words, any effect of looming condition on CPP onset relative to detection response was small in magnitude (and not statistically significant in our dataset; F(3, 377) = 1.94; p = .133). Secondly, we refitted, using MLE, the threshold and accumulator models to the estimated CPP onsets instead of to the overt responses. We found that the accumulator models could account well for the distributions of CPP onset, and reliably better than the threshold model; see Fig 3G and 3H (and see Fig I in S1 Appendix for more detailed results for models T and A).
Because the CPP onset dataset being fitted to was reduced compared to the behavioral dataset, a reduced statistical power of the model comparisons should be expected, and the obtained per-participant ΔAIC values (Fig 3H, top) were indeed smaller than for the corresponding behavioral model comparisons. However, for the overall test of whether to prefer A over T across our entire experiment, the total difference in AIC was still very large (-81.4 and -31.0 for the T → A and A → AV comparisons; see Fig 3H, bottom, also showing 95% confidence intervals for these differences). The results illustrated in Fig 3F–3H were robust to variations in EEG pre-processing and our CPP onset estimation method; see further Materials and methods, and Figs F, H, and I in S1 Appendix.
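The onset-estimation step described above can be sketched as follows, operating on one response-locked averaged ERP. The 30% criterion follows the text; the binning of trials by response time and the per-participant reliability checks are omitted from this sketch.

```python
import numpy as np

def cpp_onset(times, erp):
    """Estimate CPP onset from a response-locked averaged ERP: take
    the ERP amplitude at the overt response (t = 0) and return the
    time of the final upward crossing of 30% of that amplitude
    before the response. Returns None if the ERP never drops below
    the criterion (no reliable onset estimate)."""
    i_resp = int(np.argmin(np.abs(times)))    # sample closest to t = 0
    level = 0.3 * erp[i_resp]                 # 30% of amplitude at response
    below = np.flatnonzero(erp[:i_resp] < level)
    if below.size == 0:
        return None
    return times[below[-1] + 1]               # first sample after last sub-criterion one
```

On a synthetic ERP ramping up over the final 0.4 s before the response, this recovers an onset at the time the ramp passes 30% of its peak.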

Discussion

The results presented here support four main conclusions. First, that human collision threat detection occurs at optical expansion rates that are highly dependent on the kinematics of the collision scenario. We replicate the previously observed effect of initial obstacle distance [41, 42, 52], and additionally demonstrate an effect of obstacle acceleration, predicted by our looming accumulation hypothesis. Taken together with the mentioned previous literature, our findings strongly refute the LDT assumption, i.e., the assumption of a single, kinematics-independent threshold for looming detection. As illustrated in Fig 1A, this conventional assumption can yield estimated detection times that are incorrect by several seconds. We therefore caution against further use of the LDT assumption, not least in the applied context of road traffic safety, where it has influenced research, recommendations, and legal proceedings [51–58].

Second, we show that not only the qualitative patterns of kinematics-dependency, but also full probability distributions of collision threat detection can be explained by an evidence accumulation model, assuming that the optical expansion rate information is integrated over time, with noise, up to a threshold at which detection is reported. We thus provide a computational account of how humans perform collision threat detection, which explains both average tendency and precise patterns of variability in performance. From an applied perspective, the accumulator models proposed here can be considered an alternative to the LDT assumption. It should be noted, however, that the focus here was on investigating the human ability to detect collision threats in a controlled laboratory experiment, rather than on providing and validating a model for applied use. The alignment with the test track findings by Lamble et al. [52] is encouraging, but further real-world validation, ideally covering a more diversified set of kinematical scenarios, would be advisable. For example, while an evidence leakage assumption was not required to account for the data in our experiment, such an assumption might become important in even slower looming conditions, where a model without any memory decay could be overly prone to purely noise-driven detection responses. (See the Supporting information for further discussion of the various alternative model variants tested here.) Another interesting question for future work is whether these improved models of collision threat detection (which as mentioned above effectively implement a change detection algorithm which is optimal under certain assumptions [61]) can support improved models of collision avoidance response.
In the road traffic context, some existing models of collision avoidance response suggest that detection and response are separate and sequential steps [57, 58, 70], whereas other accounts suggest that defensive responses are instead driven directly by kinematical urgency, without a clear role for a first, separate step of detection [58, 71, 72].

Third, we provide strong support for the idea that established evidence accumulation models of decision-making can be extended beyond typical laboratory paradigms with static or intermittently changing abstract stimuli, to tasks with ecologically relevant, continuously time-varying sensory evidence, directly using the externally measurable stimulus as an input to the evidence accumulation. We and others have reported that evidence accumulation models show promise for modelling decisions in real-world tasks, e.g., when to apply brakes in response to a developing collision threat [36–38], or on whether and when to cross a road with oncoming traffic [39, 40]. However, in these contexts it has not been possible to fit full response time probability distributions per participant, a minimum expectation in evidence accumulation modeling of more typical, abstract laboratory tasks. Drugowitsch et al. [32, 33] provided compelling support for evidence accumulation decision-making in their visual-vestibular heading discrimination paradigm, but did not emphasize detailed fits of response distributions. Ratcliff and Strayer achieved this level of model-fitting stringency in a driving setting, but did so by using a paradigm of speeded response to discrete stimuli, thus abstracting away from the continuously time-varying nature of the real-world driving task [73, 74]. Our looming detection paradigm was instead chosen to enable similarly rigorous model analyses of an ecologically relevant, continuously time-varying stimulus. It is notable that, among our alternative models, the accumulator model with between-trial variability in input gain (model AV) performed best for a majority of participants. For static input evidence (i.e., θ˙ = constant in our case), this input gain variability reduces to between-trial variability in a static accumulation rate, a very common assumption in past modeling work, with much empirical support [10].
Our results thus demonstrate how this model assumption for static evidence paradigms can be usefully generalized to paradigms with time-varying evidence. Our paradigm and models may provide useful starting points for further research into decision making with continuously time-varying evidence, both in other sensory detection tasks (cf., e.g., [75, 76]) and in sensorimotor control tasks of a basic or applied nature [29, 31, 36–40].

Fourth, in contrast with previous studies on the CPP signature, we show that in our paradigm the late onset of the CPP, rather than a build-up rate present from early on after stimulus presentation, can be explained by evidence accumulation. The existing literature features several paradigms that are similar to ours in tasking participants with detecting low-saliency changes in sustained stimuli [13, 14, 16, 18], for example in the form of gradual changes in visual contrast [13], following time courses not dissimilar to the looming trajectories studied here (Fig 1A). These studies and others have provided converging evidence for the notion that the CPP source (i.e., the neural circuits giving rise to the CPP signature; see [77]) is involved in (or connected to) an early, sustained, and saliency-dependent accumulation of evidence for the decision to respond [13–18]. Our behavioral modeling results support this type of evidence accumulation account of looming detection, yet the CPP in our paradigm is late, rapid, and without a clear effect of stimulus saliency on its duration. We did not hypothesize in advance that our CPP results would differ from previous findings in this way. For this reason, and because the CPP onset analyses we performed here were simple and exploratory in nature, we are unable to draw any firm conclusions about the underlying reasons for the nature of our CPP signatures. However, one seemingly plausible hypothesis would be that a key factor is our use of an ecologically relevant stimulus, specifically visual looming, known to be processed in phylogenetically old subcortical brain structures [78]. Aligning with findings in non-human species [79–82], functional magnetic resonance imaging in humans has implicated structures such as the superior colliculus and the medial pulvinar nucleus of the thalamus in processing of visual looming [60].
These structures play important roles in attentional orienting [83], and have cortical projections circumventing early visual areas, for example to the middle temporal (MT) visual area, known to be involved in processing of motion cues [84]. In a general sense, this difference in connectivity may play a role in why our ERP results stand out. More specifically, both our behavioral and CPP results can be understood if it is assumed that the pathways for looming processing include neural circuits implementing evidence accumulation detection of collision threats, from which only the decision outcome (threat detected or not) is communicated onward to the CPP source, which then carries out a rapid, second-stage evidence accumulation, implementing the higher-level, modality-general decision of mapping stimulus to response in the task at hand [13]. This tentative ‘two-accumulator’ hypothesis would explain why the onset distributions of the late and rapid CPP in our data can be well accounted for by a looming accumulation model. In the Supporting information we provide a computational formulation of this hypothesis and illustrate how it might also explain the at-response CPP separation between looming conditions (Fig 3C and Fig K in S1 Appendix). The two-accumulator hypothesis is interesting not least in light of findings that the CPP correlates with subjectively reported experience of the perceptual decision being formed [69]. From this perspective, the late CPP signatures in our data suggest the empirically testable hypothesis that visual looming evidence accumulation (before CPP onset) occurs with near-zero subjective awareness or confidence. This would align well with conventional notions of an early perceptual limitation on looming detectability, while recasting the limitation as an evidence accumulation decision process instead of a perceptual threshold.

Materials and methods

Ethics statement and open software/data

All procedures were approved by the School of Psychology Research Ethics Committee, University of Leeds, reference number PSC-484. The primary research data for this study, as well as the software code implementing the experimental paradigm, data analyses, and computational models, are available here: https://doi.org/10.17605/OSF.IO/KU3H4.

Experimental design

The objective of the experiment was to observe participants’ detection of a visually looming object, to determine to what extent this detection was influenced by the kinematical details of the object’s approach, and to test whether traces of the response process could be observed in participants’ scalp potentials.

The basic task was a computer-simulated replication and extension of the foveal looming conditions of the test track experiment in [52]. The paradigm was implemented in MathWorks MATLAB using PsychToolbox v3.0.14 [85, 86]. The stimulus was a photographic image of the back of a 1.85 m wide and 1.43 m high passenger car (used with permission from Volvo Car Corporation), as shown in Fig 1A. This image was displayed over a dark gray background on a 24 inch (0.53 m × 0.30 m) 60 Hz TFT screen at 1920 × 1080 pixels resolution. The original image was at higher resolution than shown on screen, and was scaled to appropriate size and displayed with antialiasing using the OpenGL trilinear filtering provided by PsychToolbox.

A central fixation target (a red dot, diameter 6 pixels, 0.095 degrees visual angle) was displayed throughout each experimental block. In each trial, initially only this fixation target was shown for 3 s; then the stimulus image appeared centrally on the screen, accompanied by an auditory tone, displayed at a size corresponding to an initial distance of either 20 or 40 m (subtending 5.30 and 2.65 degrees horizontal visual angle, respectively). Some trials were catch trials without any looming, on which the stimulus remained at the same size for 7 s before disappearing. In non-catch trials, the stimulus remained at the same size during an initial pre-looming wait time of 1.5, 2, 2.5, 3, or 3.5 s, whereafter the size of the stimulus was gradually increased to reproduce the looming visual input from a car decelerating at either 0.35 or 0.7 m/s², as shown in Fig 1A. The participants were instructed to keep their eyes on the fixation target and to refrain from blinking while the car was shown and until their response, which they gave by pressing the space bar on a computer keyboard with their right hand “as soon as you see the car coming closer, in other words when it is growing on the screen”. Trials terminated once participants either (a) made a correct looming detection response, after which the stimulus continued looming for another 0.5 s before disappearing, to avoid this visual transient interfering with the EEG measurements at the time of response, (b) made no response before the looming stimulus had reached a trial expiry threshold of 0.03 rad/s (about ten times the threshold typically stated in the literature), or (c) made an incorrect, early detection response before the onset of visual looming; in this last case a distinct auditory tone was played to inform the participant of their incorrect detection response.

The participants viewed the stimulus screen at a distance of 1.00 m, meaning that each screen pixel subtended a visual angle of 0.95 arcmin (0.016 degrees), lower than the 1.6 arcmin threshold reported in [87] for maximum Vernier acuity with antialiased stimuli. To further reduce the risk of pixel effects, and to mimic the conditions of the replicated test track experiment [52], the stimulus was displayed with small horizontal and vertical oscillatory perturbation throughout, generated by moving the simulated viewport as if the participant themselves were sitting in a car, with perturbation spectra based on measurements from real driving; see Table A in S1 Appendix for details.
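As a check on these numbers, the pixel and stimulus geometry can be reproduced with a few lines (a Python sketch; all dimensions taken from the text above):

```python
import math

# Pixel size: 0.53 m wide screen at 1920 px, viewed from 1.00 m.
pixel_m = 0.53 / 1920                                   # metres per pixel
pixel_arcmin = math.degrees(math.atan(pixel_m / 1.00)) * 60
print(f"one pixel subtends {pixel_arcmin:.2f} arcmin")  # ~0.95 arcmin

# Initial stimulus size: theta = 2*arctan(W / (2*D)) for the 1.85 m wide car.
W = 1.85
for D in (20.0, 40.0):
    theta_deg = math.degrees(2 * math.atan(W / (2 * D)))
    print(f"initial distance {D:.0f} m: {theta_deg:.2f} deg")  # ~5.30 and ~2.65 deg
```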

Procedure

Participants provided written informed consent before taking part in the experiment, which was carried out in a dark room with the participant sitting in front of the stimulus display, supported by a chin rest. In a first demonstration block of four trials, the experimenter demonstrated the task, including the auditory tone given upon incorrect, early responses, as well as a feedback screen that was shown after each block. This feedback screen listed average response times and frequency of correct responses for all blocks so far, and encouraged participants to rest if their response times were increasing. The participants then decided themselves when to start the next block. The participants first completed a practice block of 12 trials: two of each of the four looming conditions (2 initial distances × 2 acceleration levels), and four catch trials. Then followed the five experimental blocks, each with a total of 48 trials: eight catch trials and ten repetitions of each of the four looming conditions (two repetitions for each of the five pre-looming wait times), making for a total of 5 × 40 = 200 looming trials per participant, 50 per looming condition. Trial order was fully randomized per block and participant.

Participants

The target initial sample size was twenty-five participants, to provide a comfortable margin over the total number of trials collected in previous studies reporting on the CPP [13, 14]. Twenty-six right-handed participants were recruited from a local pool of participants, all with normal or corrected-to-normal vision, and with no history of psychiatric diagnosis, severe brain injury, motor diseases or any skin conditions. EEG recording was incomplete for one participant, and data from the remaining twenty-five participants, of ages between 20 and 46 years (mean 26.5), 12 male and 13 female, were retained for further analysis.

Data acquisition and preprocessing

Behavioral responses were recorded at the 60 Hz refresh rate of the display screen. Out of the 25 × 40 = 1000 catch trials, there were 61 (6.1%) with false detection responses. The catch trials were not further considered in the analyses or modeling. Out of the 25 × 200 = 5000 trials with looming stimuli, participants responded before looming onset in 32 trials (0.6%), and non-responses were observed in 8 trials (0.2%); these early and non-responses were included in the behavioral model fits, but were excluded from all EEG analyses.

EEG data were recorded at 1024 Hz, using a 64-electrode 10–20 international cap Biosemi system. Electro-oculogram (EOG) electrodes were placed above and below the left eye and at the outer canthus of each eye. EEG preprocessing was done using EEGLAB v14.1.1 [88], first resampling to 512 Hz, then using the PREP pipeline EEGLAB plugin v0.55.3 [89] for robust re-referencing to average channel and interpolation of noisy channels. PREP interpolated any of the five channels analyzed in the pre-response positivity analyses here (Pz and the surrounding channels CPz, POz, P1, P2) for only three participants (one of whom was later excluded due to ocular artefacts; see below), and in each case only one of the five channels was interpolated. Then, bandpass filtering was done using EEGLAB’s sinc FIR filter with a Kaiser window, with pass/stopband ripple of 0.001, low-pass filtering with 45 Hz cut-off and 5 Hz transition bandwidth, and high-pass filtering with 0.1 Hz cut-off and 1 Hz transition bandwidth. Following [13], trials with pronounced ocular artefacts were rejected when the vertical EOG difference exceeded 100 μV, excluding 796 trials (15.9%). 395 of these trials came from three specific participants, who therefore failed to reach a minimum of 30 trials in each looming condition. These three participants were excluded from further analysis. Further ocular artefacts were identified and removed from the EEG data per participant, using EEGLAB’s independent component analysis (ICA) functionality. The final dataset included 22 participants with a total of 4013 looming trials, i.e., an average of 46 trials per participant and looming condition.
For event-related potential (ERP) analysis, the EEG data of looming trials were divided into epochs from 1 s before to 8 s after the looming onset in each trial (sufficient to include all responses in all conditions given the 0.03 rad/s trial expiry threshold mentioned above), and the EEG data for each epoch were baseline-corrected, using the average of the last 200 ms before the looming onset as baseline. Then, the five channels centered on Pz mentioned above were averaged to yield the final signal used in the CPP analyses (per-condition averages across participants shown in Fig 3B and 3C; per-condition averages per participant shown in Fig G of S1 Appendix). The analysis illustrated in Fig 3C was also rerun after disabling the 0.1 Hz EEG high-pass filter and the ICA ocular artefact removal, confirming that neither of these two preprocessing steps substantially altered the obtained CPP signatures; see Fig 3D and Fig F in S1 Appendix.
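The epoching and baseline-correction step described above can be sketched as follows (a simplified numpy illustration, not the actual EEGLAB pipeline; the array layout and the example channel indices are assumptions):

```python
import numpy as np

FS = 512                   # Hz, sampling rate after resampling
PRE_S, POST_S = 1.0, 8.0   # epoch extent around looming onset, s
BASELINE_S = 0.2           # last 200 ms before looming onset

def epoch_and_baseline(eeg, onset_sample):
    """Cut one epoch around a looming onset and baseline-correct it.

    eeg: (n_channels, n_samples) continuous data.
    Returns a (n_channels, n_epoch_samples) baseline-corrected epoch.
    """
    start = onset_sample - int(PRE_S * FS)
    stop = onset_sample + int(POST_S * FS)
    epoch = eeg[:, start:stop].astype(float)
    baseline = eeg[:, onset_sample - int(BASELINE_S * FS):onset_sample]
    return epoch - baseline.mean(axis=1, keepdims=True)

def cpp_signal(epoch, channel_idx):
    """Average the five centroparietal channels (Pz, CPz, POz, P1, P2)."""
    return epoch[channel_idx].mean(axis=0)
```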

For the analyses which focused specifically on the onset of the CPP, we increased the signal-to-noise ratio by sorting trials on response time per participant and looming condition, separating the sorted trials into groups of five trials, and taking the average response-locked ERP within each such group. We then identified the CPP onset for each averaged trial as the last sample where the averaged response-locked ERP was less than 30% of its value at the overt response. We excluded averaged trials where this did not occur within 1 s before the response, or where the ERP at response was less than +2 μV, resulting in a total exclusion of 284 (37.4%) out of the 760 averaged trials. 134 of these exclusions were due to five participants, who distinguished themselves from the rest either by having no clear ERP peak at response (Cohen’s d < 0.3 when comparing ERP between 0.5 s before response and at response) or by the at-response peak occurring at close to zero voltage; see Fig G in S1 Appendix. These five participants were excluded from the CPP onset analyses, leaving a data set of 17 participants and 445 averaged trials (an average of 26 per participant). Since our CPP onset estimation method was novel, and not originally planned for, we also conducted sensitivity analyses. As illustrated in Figs F and H of S1 Appendix, these analyses showed that the obtained CPP onset estimates were robust to variations in the parameters of our method, but also that our method was not suitable for use on non-high-pass-filtered ERP data, which would have been desirable since the high-pass filtering we used subtly altered the CPP signal (cf. Fig 3C and 3D). Future work should improve on these onset estimation methods, for example along the lines of the approach in [90].
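The onset estimation step can be sketched as follows (a schematic Python reimplementation of the procedure described above, not the actual analysis code; time-vector conventions are assumptions):

```python
import numpy as np

def cpp_onset(avg_erp, t, frac=0.30, min_peak_uv=2.0, max_lookback_s=1.0):
    """Estimate CPP onset from a response-locked averaged ERP.

    avg_erp: ERP (uV) averaged over a group of ~5 RT-sorted trials;
    t: time vector (s) with t = 0 at the overt response.
    Returns the onset time (s, negative = before response), or None if
    the averaged trial is excluded by the criteria described in the text.
    """
    i_resp = int(np.argmin(np.abs(t)))     # sample at the overt response
    v_resp = avg_erp[i_resp]
    if v_resp < min_peak_uv:               # no clear positivity at response
        return None
    below = np.flatnonzero(avg_erp[:i_resp] < frac * v_resp)
    if below.size == 0 or t[below[-1]] < -max_lookback_s:
        return None                        # onset not within 1 s of response
    return float(t[below[-1]])             # last sample below the 30% level
```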

Statistical analysis

To test for effects of the kinematical looming conditions on the optical expansion rate at behavioral detection response, we carried out a repeated-measures ANOVA, implemented using MATLAB’s anovan function with participant as a random factor, independent variables initial car distance (2 levels) × acceleration magnitude (2 levels) × experimental block (5 levels) × pre-looming wait time (5 levels), limited to first-order interactions only, and log(θ˙) as the dependent variable; see Table B in S1 Appendix for full results. To study separation between response-locked ERPs for different looming conditions (Fig 3C and 3D), we performed ANOVAs at 20 ms intervals, with the response-locked ERP as the dependent variable, participant as a random factor, and looming condition (4 levels) as independent variable. We also performed this type of ANOVA to test for an effect of looming condition on CPP onset relative to response (Fig 3F). The 95% confidence intervals in Fig 3F and 3H were obtained by 100,000 sample bootstraps from the empirical data in question.
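The bootstrap step can be illustrated as follows (a Python sketch; the statistic shown is the largest pairwise difference in condition means, and the percentile-CI construction is an assumption about implementation details):

```python
import numpy as np

def bootstrap_max_mean_diff_ci(samples_by_condition, n_boot=100_000,
                               alpha=0.05, seed=0):
    """Percentile bootstrap CI for the largest between-condition mean difference.

    samples_by_condition: list of 1-D arrays (e.g. CPP onsets per looming
    condition).  Returns (lower, upper) bounds of a (1 - alpha) CI.
    """
    rng = np.random.default_rng(seed)
    stat = np.empty(n_boot)
    for b in range(n_boot):
        # Resample each condition with replacement, then take max - min of means.
        means = [rng.choice(x, size=x.size, replace=True).mean()
                 for x in samples_by_condition]
        stat[b] = max(means) - min(means)
    return (float(np.quantile(stat, alpha / 2)),
            float(np.quantile(stat, 1 - alpha / 2)))
```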

Computational models

The optical expansion rate accumulator models investigated here can be described by the following discrete update equation for the accumulated evidence E at time step i:

E(i) = max(0, E(i−1) + K˜θ˙(i)Δt + σν(i)√Δt), (1)

where K˜ is an accumulation gain parameter either drawn at random per trial from a normal distribution N(K, σK²) with σK as a free parameter (model AV), or kept constant per participant (σK = 0; model A), Δt is the discrete simulation time step length, and the final term is a discrete implementation of a Wiener process with noise intensity σ, with ν(i) drawn at random per time step from a standard normal distribution N(0,1). Note that we included a reflecting lower boundary at zero (the max function), as often done for evidence accumulators with a single decision boundary [7, 20, 91, 92]. The optical expansion rate θ˙(i) is the time derivative of the optical size of the collision obstacle (here, the lead vehicle):

θ(i) = 2 arctan(W/(2D(i))), θ˙(i) = WV(i)/(D²(i) + W²/4), (2)

where W and D(i) are the width of and momentary distance to the collision obstacle, and V(i) is the momentary speed at which it is coming closer [28]. The accumulator model makes a detection decision once E(i) ≥ 1 (E is in arbitrary units, so one of the decision threshold, K, or σ can be fixed without loss of generality), overtly responding a non-decision time TND later. A fraction αND = 0.3 of TND was assumed to occur before the evidence accumulation, based on [29, p. 189]; this value did not affect the behavioral model fits or comparisons, only model evidence visualizations like the one in Fig 3E. For a description of the other accumulator model variants that were also tested, see the Supporting information.
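To make Eqs 1 and 2 concrete, one trial of model A can be simulated as below (a Python sketch; the trial kinematics, with closing speed V(t) = a·t after looming onset so that D(t) = D0 − a·t²/2, are our reading of the conditions described under Experimental design, and the parameter values in the usage example are illustrative only):

```python
import numpy as np

W = 1.85     # collision obstacle (car) width, m
DT = 0.02    # simulation time step, s (as used for all models here)

def looming_rate(D, V):
    """Eq 2: optical expansion rate theta_dot = W*V / (D^2 + W^2/4)."""
    return W * V / (D ** 2 + W ** 2 / 4)

def simulate_trial(D0, a, K, sigma, wait=2.0, t_max=10.0, rng=None):
    """Simulate one trial of accumulator model A (Eq 1).

    Assumed kinematics: stimulus static during the pre-looming wait, then
    closing speed V = a*t and distance D = D0 - a*t^2/2, with t the time
    since looming onset.  Returns the threshold-crossing time relative to
    looming onset (negative = early response), or None if no detection.
    """
    rng = rng or np.random.default_rng()
    E = 0.0
    for i in range(int((wait + t_max) / DT)):
        t = i * DT - wait                    # trial simulated from its start
        tl = max(t, 0.0)                     # time since looming onset
        D = max(D0 - 0.5 * a * tl ** 2, 0.1)
        theta_dot = looming_rate(D, a * tl)  # zero during the wait
        dE = K * theta_dot * DT + sigma * rng.standard_normal() * np.sqrt(DT)
        E = max(0.0, E + dE)                 # Eq 1, reflecting boundary at 0
        if E >= 1.0:
            return t                         # detection decision
    return None
```

With illustrative values such as K = 100 and σ = 0.02, the D0 = 20 m, a = 0.7 m/s² condition yields detections a couple of seconds after looming onset, since the accumulated looming ∫θ˙dt equals the change in optical size θ since onset.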

The stochastic threshold model investigated here (model T) can be written in a similar form:

E(i) = θ˙(i)/θ˙d + σν(i), (3)

again with overt detection response a time TND after E(i) ≥ 1; i.e., θ˙d is the looming detection threshold parameter. Note that the estimated value of σ for this model will depend on the discrete simulation time step length; we used Δt = 0.02 s across all models. The models were simulated from the start of each trial, i.e., the pre-looming wait time was also simulated, and the accumulator models were initialised at E = 0 for each trial.
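For contrast, a single time step of model T (Eq 3) can be sketched in the same style (again a Python illustration with made-up parameter values, not the fitting code):

```python
import numpy as np

def threshold_model_step(theta_dot, theta_dot_d, sigma, rng=None):
    """Eq 3: E(i) = theta_dot(i) / theta_dot_d + sigma * nu(i).

    Detection is triggered (with overt response a non-decision time later)
    at the first time step where E >= 1; note that sigma is therefore tied
    to the 0.02 s step length used for all models.
    """
    rng = rng or np.random.default_rng()
    E = theta_dot / theta_dot_d + sigma * rng.standard_normal()
    return E >= 1.0
```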

Model fitting by approximate Bayesian computation

Our main goal with ABC was parameter estimation rather than model selection, so we used relatively broad, non-informative priors. To make the obtained Bayes Factors reasonably meaningful, and for increased computational efficiency, we identified approximate limits for the model priors by means of a first ABC fit to the information available from the previous test track experiment [52]; see the Supporting information for details. We then fitted each model variant (T, A, AV, etc.) to each participant separately. To implement ABC rejection sampling [93, 94], for each parameterization sampled at random from the prior distribution, we generated a simulated data set of the exact same nature and size as the data obtained from one human participant (same number of repetitions, pre-looming wait times, etc.), and calculated and stored a set of twenty summary statistics: the response time quantiles {0.1, 0.3, 0.5, 0.7, 0.9} for each of the four looming conditions separately. We then compared the simulated RT quantiles to each human participant’s data, for each participant retaining only parameter samples where all twenty absolute differences between simulated and observed RT quantiles were below a rejection threshold ϵRT. These retained samples provide an approximation of the posterior parameter distribution for the participant [93, 94]. There are several more advanced versions of ABC than this basic rejection sampling algorithm, but this method was preferred here because it allowed us to obtain individual per-participant fits without extra simulations of the model (which is the computationally costly step), and because it made computationally feasible the investigations illustrated in Fig E of S1 Appendix, showing that the ABC model comparisons were robust to the choice of the hyperparameter ϵRT.
In the Supporting information we also describe how we used ABC to jointly fit both our behavioral and neural data, finding that excessive model flexibility was not the reason for the mismatch between our CPP observations and our behavioral models’ evidence traces.
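The rejection-sampling scheme described above can be sketched generically as follows (a Python sketch; the `simulate` and `sample_prior` callables are placeholders for the actual model simulation and priors):

```python
import numpy as np

QUANTILES = (0.1, 0.3, 0.5, 0.7, 0.9)

def summary_stats(rts_by_condition):
    """Twenty summary statistics: five RT quantiles per looming condition."""
    return np.concatenate([np.quantile(rts, QUANTILES)
                           for rts in rts_by_condition])

def abc_rejection(observed, simulate, sample_prior, eps_rt, n_samples=10_000):
    """Basic ABC rejection sampling.

    simulate(params) must return simulated RTs per condition, matching the
    observed dataset in size; sample_prior() returns one parameter draw.
    Keeps draws where all 20 |simulated - observed| quantile differences
    are below eps_rt; the retained draws approximate the posterior.
    """
    obs = summary_stats(observed)
    accepted = []
    for _ in range(n_samples):
        params = sample_prior()
        sim = summary_stats(simulate(params))
        if np.all(np.abs(sim - obs) < eps_rt):
            accepted.append(params)
    return accepted
```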

Model fitting by maximum likelihood estimation

Maximum likelihood estimation of model parameters was carried out using exhaustive grid search over the model parameter ranges identified by the abovementioned initial ABC fits to the previous test track experiment, with each parameter’s range uniformly divided into 20 searched grid values. For each parameterization in this grid, as in the ABC fits, a replica of the real experiment was simulated, but upscaled by a factor of 20 to 1000 trials per looming condition, yielding a numerical response time distribution per condition, estimated at a bin size of 0.25 s. For these fits, all models were also extended by assuming a probability PC = 0.01 (our conclusions were robust to variations in this value) per trial of ‘contaminant’ responses poorly described by our model, e.g., due to temporary lapses in participant attention [95], modeled as a uniform distribution across the time range from looming onset to trial expiry. (Whereas our ABC fitting method is robust to such contaminant responses, they can disrupt the MLE fits if they fall in low-probability response time bins, which may occasionally be numerically estimated to zero probability by the contaminant-free model.) Likelihoods were then estimated from the resulting numerical probability distributions, per participant and model parameterization. This model fitting method was computationally feasible for models with up to four free parameters (T, A, AV, AG, AL); based on the results from the ABC fits we would not expect to see substantial further improvements with the more complex models.
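The binned-likelihood computation with the contaminant mixture can be sketched as follows (a simplified single-condition Python illustration, not the actual fitting code):

```python
import numpy as np

BIN_S = 0.25    # response time histogram bin size, s
P_C = 0.01      # assumed per-trial 'contaminant' response probability

def log_likelihood(observed_rts, simulated_rts, t_expiry):
    """Log-likelihood of one condition's observed RTs under the model.

    simulated_rts: a large simulated sample (e.g. 1000 trials); the model
    RT density is estimated in 0.25 s bins and mixed with a uniform
    contaminant distribution over [0, t_expiry], so that observations in
    otherwise zero-probability bins do not yield -inf log-likelihoods.
    """
    edges = np.arange(0.0, t_expiry + BIN_S, BIN_S)
    hist, _ = np.histogram(simulated_rts, bins=edges)
    dens = hist / (hist.sum() * BIN_S)                 # model density per bin
    idx = np.clip(np.digitize(observed_rts, edges) - 1, 0, len(dens) - 1)
    p = (1 - P_C) * dens[idx] + P_C / t_expiry         # contaminant mixture
    return float(np.sum(np.log(p)))
```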

Supporting information

S1 Appendix. Additional method details, analyses, and results.

(PDF)

Data Availability

The primary research data for this study, as well as the software code implementing the experimental paradigm, data analyses, and computational models, are available at: https://doi.org/10.17605/OSF.IO/KU3H4.

Funding Statement

GM was supported by the Wellcome Trust / University of Leeds Institutional Strategic Support Fund (https://wellcome.org/), grant 204825/Z/16/Z, and the UK Engineering and Physical Sciences Research Council (https://epsrc.ukri.org/), grant EP/S005056/1. JB was supported by the Leverhulme Trust (https://www.leverhulme.ac.uk/), grant RF-2019-343\10. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. James W. The Principles of Psychology. Henry Holt and Company; 1890. [Google Scholar]
  • 2. Kahneman D, Tversky A. Prospect Theory: An Analysis of Decision under Risk. Econometrica. 1979;47(2):263–291. doi: 10.2307/1914185 [DOI] [Google Scholar]
  • 3. Libet B, Gleason CA, Wright EW, Pearl DK. Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential). Brain. 1983;106(3):623–642. doi: 10.1093/brain/106.3.623 [DOI] [PubMed] [Google Scholar]
  • 4. Rasmussen J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics. 1983;SMC-13(3):257–266. doi: 10.1109/TSMC.1983.6313160 [DOI] [Google Scholar]
  • 5. Gold JI, Shadlen MN. The neural basis of decision making. Annual review of neuroscience. 2007;30. doi: 10.1146/annurev.neuro.29.051605.113038 [DOI] [PubMed] [Google Scholar]
  • 6. Busemeyer JR, Townsend JT. Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review. 1993;100(3):432–459. doi: 10.1037/0033-295X.100.3.432 [DOI] [PubMed] [Google Scholar]
  • 7. Usher M, McClelland JL. The time course of perceptual choice: the leaky, competing accumulator model. Psychological Review. 2001;108(3):550–592. doi: 10.1037/0033-295X.108.3.550 [DOI] [PubMed] [Google Scholar]
  • 8. Ratcliff R, Smith PL. A comparison of sequential sampling models for two-choice reaction time. Psychological review. 2004;111(2):333. doi: 10.1037/0033-295X.111.2.333 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Brown SD, Heathcote A. The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology. 2008;57(3):153–178. doi: 10.1016/j.cogpsych.2007.12.002 [DOI] [PubMed] [Google Scholar]
  • 10. Ratcliff R, Smith PL, Brown SD, McKoon G. Diffusion Decision Model: Current Issues and History. Trends in Cognitive Sciences. 2016;20(4):260–281. doi: 10.1016/j.tics.2016.01.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Busemeyer JR, Gluth S, Rieskamp J, Turner BM. Cognitive and Neural Bases of Multi-Attribute, Multi-Alternative, Value-based Decisions. Trends in Cognitive Sciences. 2019;23(3):251–263. doi: 10.1016/j.tics.2018.12.003 [DOI] [PubMed] [Google Scholar]
  • 12. O’Connell RG, Shadlen MN, Wong-Lin K, Kelly SP. Bridging Neural and Computational Viewpoints on Perceptual Decision-Making. Trends in Neurosciences. 2018;41(11):838–852. doi: 10.1016/j.tins.2018.06.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. O’Connell RG, Dockree PM, Kelly SP. A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nature Neuroscience. 2012;15(12):1729–1735. doi: 10.1038/nn.3248 [DOI] [PubMed] [Google Scholar]
  • 14. Kelly SP, O’Connell RG. Internal and External Influences on the Rate of Sensory Evidence Accumulation in the Human Brain. Journal of Neuroscience. 2013;33(50):19434–19441. doi: 10.1523/JNEUROSCI.3355-13.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Twomey DM, Murphy PR, Kelly SP, O’Connell RG. The classic P300 encodes a build-to-threshold decision variable. European Journal of Neuroscience. 2015;42(1):1636–1643. doi: 10.1111/ejn.12936 [DOI] [PubMed] [Google Scholar]
  • 16. Boubenec Y, Lawlor J, Górska U, Shamma S, Englitz B. Detecting changes in dynamic and complex acoustic environments. eLife. 2017;6:e24910. doi: 10.7554/eLife.24910 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Afacan-Seref K, Steinemann NA, Blangero A, Kelly SP. Dynamic Interplay of Value and Sensory Information in High-Speed Decision Making. Current Biology. 2018;28(5):795–802.e6. doi: 10.1016/j.cub.2018.01.071 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Kohl C, Spieser L, Forster B, Bestmann S, Yarrow K. Centroparietal activity mirrors the decision variable when tracking biased and time-varying sensory evidence. Cognitive Psychology. 2020;122:101321. doi: 10.1016/j.cogpsych.2020.101321 [DOI] [PubMed] [Google Scholar]
  • 19. van Vugt MK, Beulen MA, Taatgen NA. Relation between centro-parietal positivity and diffusion model parameters in both perceptual and memory-based decision making. Brain research. 2019;1715:1–12. doi: 10.1016/j.brainres.2019.03.008 [DOI] [PubMed] [Google Scholar]
  • 20. Diederich A. Intersensory facilitation of reaction time: Evaluation of counter and diffusion coactivation models. Journal of Mathematical Psychology. 1995;39(2):197–215. doi: 10.1006/jmps.1995.1020 [DOI] [Google Scholar]
  • 21. Tsetsos K, Usher M, McClelland JL. Testing Multi-Alternative Decision Models with Non-Stationary Evidence. Frontiers in Neuroscience. 2011;5. doi: 10.3389/fnins.2011.00063 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Holmes WR, Trueblood JS, Heathcote A. A new framework for modeling decisions about changing information: The Piecewise Linear Ballistic Accumulator model. Cognitive Psychology. 2016;85:1–29. doi: 10.1016/j.cogpsych.2015.11.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Maier SU, Raja Beharelle A, Polanía R, Ruff CC, Hare TA. Dissociable mechanisms govern when and how strongly reward attributes affect decisions. Nature Human Behaviour. 2020;4(9):949–963. doi: 10.1038/s41562-020-0893-y [DOI] [PubMed] [Google Scholar]
  • 24. Shinn M, Ehrlich DB, Lee D, Murray JD, Seo H. Confluence of Timing and Reward Biases in Perceptual Decision-Making Dynamics. Journal of Neuroscience. 2020;40(38):7326–7342. doi: 10.1523/JNEUROSCI.0544-20.2020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Asai Y, Tasaka Y, Nomura K, Nomura T, Casadio M, Morasso P. A Model of Postural Control in Quiet Standing: Robust Compensation of Delay-Induced Instability Using Intermittent Activation of Feedback Control. PLoS ONE. 2009;4(7):e6169. doi: 10.1371/journal.pone.0006169 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Milton JG. Intermittent Motor Control: The “drift-and-act” Hypothesis. In: Richardson MJ, Riley MA, Shockley K, editors. Progress in Motor Control. Advances in Experimental Medicine and Biology. New York, NY: Springer; 2013. p. 169–193. [DOI] [PubMed] [Google Scholar]
  • 27. McBeath MK, Shaffer DM, Kaiser MK. How Baseball Outfielders Determine Where to Run to Catch Fly Balls. Science. 1995;268(5210):569–573. [DOI] [PubMed] [Google Scholar]
  • 28. Lee DN. A theory of visual control of braking based on information about time-to-collision. Perception. 1976;5(4):437–459. doi: 10.1068/p050437 [DOI] [PubMed] [Google Scholar]
  • 29. Markkula G, Boer E, Romano R, Merat N. Sustained sensorimotor control as intermittent decisions about prediction errors: computational framework and application to ground vehicle steering. Biological Cybernetics. 2018;112(3):181–207. doi: 10.1007/s00422-017-0743-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. Wiecki TV, Sofer I, Frank MJ. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python. Frontiers in Neuroinformatics. 2013;7. doi: 10.3389/fninf.2013.00014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Zgonnikov A, Markkula G. Evidence Accumulation Account of Human Operators’ Decisions in Intermittent Control During Inverted Pendulum Balancing. In: 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE; 2018. p. 716–721.
  • 32. Drugowitsch J, DeAngelis GC, Klier EM, Angelaki DE, Pouget A. Optimal Multisensory Decision-Making in a Reaction-Time Task. eLife. 2014;3:e03005. doi: 10.7554/eLife.03005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Drugowitsch J, DeAngelis GC, Angelaki DE, Pouget A. Tuning the Speed-Accuracy Trade-off to Maximize Reward Rate in Multisensory Decision-Making. eLife. 2015;4:e06678. doi: 10.7554/eLife.06678 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Nesti A, De Winkel K, Bülthoff HH. Accumulation of inertial sensory information in the perception of whole body yaw rotation. PloS one. 2017;12(1):e0170497. doi: 10.1371/journal.pone.0170497 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Markkula G. Modeling driver control behavior in both routine and near-accident driving. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2014;58(1):879–883. doi: 10.1177/1541931214581185 [DOI] [Google Scholar]
  • 36. Xue Q, Markkula G, Yan X, Merat N. Using perceptual cues for brake response to a lead vehicle: Comparing threshold and accumulator models of visual looming. Accident Analysis & Prevention. 2018;118:114–124. doi: 10.1016/j.aap.2018.06.006 [DOI] [PubMed] [Google Scholar]
  • 37. Boda CN, Lehtonen E, Dozza M. A Computational Driver Model to Predict Driver Control at Unsignalised Intersections. IEEE Access. 2020;8:104619–104631. doi: 10.1109/ACCESS.2020.2999851 [DOI] [Google Scholar]
  • 38. Svärd M, Markkula G, Bärgman J, Victor T. Computational modeling of driver pre-crash brake response, with and without off-road glances: Parameterization using real-world crashes and near-crashes. PsyArXiv; 2020. Available from: https://osf.io/6nkgv. [DOI] [PubMed]
  • 39. Giles OT, Markkula G, Pekkanen J, Yokota N, Matsunaga N, Merat N, et al. At the Zebra Crossing: Modelling Complex Decision Processes with Variable-Drift Diffusion Models. In: Goel A, Seifert C, Freksa C, editors. Proceedings of the 41st Annual Conference of the Cognitive Science Society. Montréal, Canada; 2019. p. 366–372. Available from: https://cogsci.mindmodeling.org/2019/papers/0083/.
  • 40. Zgonnikov A, Abbink D, Markkula G. Should I stay or should I go? Evidence accumulation drives decision making in human drivers. PsyArXiv; 2020. doi: 10.31234/osf.io/p8dxn [DOI]
  • 41. Todosiev EP. The action-point model of the driver-vehicle system [PhD]. Ohio State University; 1963.
  • 42. Harvey LO, Michon JA. Detectability of relative motion as a function of exposure duration, angular separation, and background. Journal of Experimental Psychology. 1974;103(2):317–325. doi: 10.1037/h0036802 [DOI] [Google Scholar]
  • 43. Regan D, Beverley KI. Looming detectors in the human visual pathway. Vision Research. 1978;18(4):415–421. doi: 10.1016/0042-6989(78)90051-2 [DOI] [PubMed] [Google Scholar]
  • 44. Regan D, Gray R. Visually guided collision avoidance and collision achievement. Trends in Cognitive Sciences. 2000;4(3):99–107. doi: 10.1016/S1364-6613(99)01442-4 [DOI] [PubMed] [Google Scholar]
  • 45. Gómez J, López-Moliner J. Synergies between optical and physical variables in intercepting parabolic targets. Frontiers in Behavioral Neuroscience. 2013;7:46. doi: 10.3389/fnbeh.2013.00046 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Cavallo V, Laurent M. Visual Information and Skill Level in Time-To-Collision Estimation. Perception. 1988;17(5):623–632. doi: 10.1068/p170623 [DOI] [PubMed] [Google Scholar]
  • 47. Gray R, Regan D. Accuracy of estimating time to collision using binocular and monocular information. Vision Research. 1998;38(4):499–512. doi: 10.1016/S0042-6989(97)00230-7 [DOI] [PubMed] [Google Scholar]
  • 48. Hosking SG, Crassini B. The influence of optic expansion rates when judging the relative time to contact of familiar objects. Journal of Vision. 2011;11(6):20–20. doi: 10.1167/11.6.20 [DOI] [PubMed] [Google Scholar]
  • 49. Bahill AT, Karnavas WJ. The Perceptual Illusion of Baseball’s Rising Fastball and Breaking Curveball. Journal of Experimental Psychology: Human Perception and Performance. 1993;19(1):3–14. [Google Scholar]
  • 50. Gray R. Behavior of college baseball players in a virtual batting task. Journal of Experimental Psychology: Human Perception and Performance. 2002;28(5):1131–1148. [DOI] [PubMed] [Google Scholar]
  • 51. Hoffmann ER, Mortimer RG. Drivers’ estimates of time to collision. Accident Analysis & Prevention. 1994;26(4):511–520. doi: 10.1016/0001-4575(94)90042-6 [DOI] [PubMed] [Google Scholar]
  • 52. Lamble D, Laakso M, Summala H. Detection thresholds in car following situations and peripheral vision: implications for positioning of visually demanding in-car displays. Ergonomics. 1999;42(6):807–815. doi: 10.1080/001401399185306 [DOI] [Google Scholar]
  • 53. Schmidt S, Färber B. Pedestrians at the kerb—Recognising the action intentions of humans. Transportation Research Part F: Traffic Psychology and Behaviour. 2009;12(4):300–310. doi: 10.1016/j.trf.2009.02.003 [DOI] [Google Scholar]
  • 54. Wann JP, Poulter DR, Purcell C. Reduced Sensitivity to Visual Looming Inflates the Risk Posed by Speeding Vehicles When Children Try to Cross the Road. Psychological Science. 2011;22(4):429–434. doi: 10.1177/0956797611400917 [DOI] [PubMed] [Google Scholar]
  • 55. Seppelt BD, Lee JD. Modeling Driver Response to Imperfect Vehicle Control Automation. Procedia Manufacturing. 2015;3:2621–2628. doi: 10.1016/j.promfg.2015.07.605 [DOI] [Google Scholar]
  • 56. Morando A, Victor T, Dozza M. Drivers anticipate lead-vehicle conflicts during automated longitudinal control: Sensory cues capture driver attention and promote appropriate and timely responses. Accident Analysis & Prevention. 2016;97:206–219. doi: 10.1016/j.aap.2016.08.025 [DOI] [PubMed] [Google Scholar]
  • 57. Maddox ME, Kiefer A. Looming Threshold Limits and Their Use in Forensic Practice. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2012;56(1):700–704. doi: 10.1177/1071181312561146 [DOI] [Google Scholar]
  • 58. Green M. Roadway human factors: From science to application. 2nd ed. Tucson, Arizona: Lawyers & Judges Publishing Company; 2018. [Google Scholar]
  • 59. Field DT, Wann JP. Perceiving Time to Collision Activates the Sensorimotor Cortex. Current Biology. 2005;15(5):453–458. doi: 10.1016/j.cub.2004.12.081 [DOI] [PubMed] [Google Scholar]
  • 60. Billington J, Wilkie RM, Field DT, Wann JP. Neural processing of imminent collision in humans. Proceedings of the Royal Society B: Biological Sciences. 2011;278(1711):1476–1481. doi: 10.1098/rspb.2010.1895 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61. Page ES. Continuous Inspection Schemes. Biometrika. 1954;41(1/2):100–115. doi: 10.2307/2333009 [DOI] [Google Scholar]
  • 62. Broderick T, Wong-Lin KF, Holmes P. Closed-Form Approximations of First-Passage Distributions for a Stochastic Decision-Making Model. Applied Mathematics Research eXpress. 2010; p. abp008. doi: 10.1093/amrx/abp008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63. Ratcliff R. Group reaction time distributions and an analysis of distribution statistics. Psychological Bulletin. 1979;86(3):446–461. doi: 10.1037/0033-2909.86.3.446 [DOI] [PubMed] [Google Scholar]
  • 64. Murtaugh PA. In defense of P values. Ecology. 2014;95(3):611–617. doi: 10.1890/13-0590.1 [DOI] [PubMed] [Google Scholar]
  • 65. Stephan KE, Penny WD. Chapter 43: Dynamic Causal Models and Bayesian selection. In: Friston K, Ashburner J, Kiebel S, Nichols T, Penny W, editors. Statistical Parametric Mapping: The Analysis of Functional Brain Images. London: Academic Press; 2007. p. 577–585. Available from: http://www.sciencedirect.com/science/article/pii/B9780123725608500437. [Google Scholar]
  • 66. Klaassen F, Zedelius CM, Veling H, Aarts H, Hoijtink H. All for one or some for all? Evaluating informative hypotheses using multiple N = 1 studies. Behavior Research Methods. 2018;50(6):2276–2291. doi: 10.3758/s13428-017-0992-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 67. Kass RE, Raftery AE. Bayes Factors. Journal of the American Statistical Association. 1995;90(430):773–795. doi: 10.1080/01621459.1995.10476572 [DOI] [Google Scholar]
  • 68. Trotta R. Bayes in the sky: Bayesian inference and model selection in cosmology. Contemporary Physics. 2008;49(2):71–104. doi: 10.1080/00107510802066753 [DOI] [Google Scholar]
  • 69. Tagliabue CF, Veniero D, Benwell CSY, Cecere R, Savazzi S, Thut G. The EEG signature of sensory evidence accumulation during decision formation closely tracks subjective perceptual experience. Scientific Reports. 2019;9(1):4949. doi: 10.1038/s41598-019-41024-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70. Muttart JW, Messerschmidt WF, Gillen LG. Relationship Between Relative Velocity Detection and Driver Response Times in Vehicle Following Situations. SAE Technical Paper 2005-01-0427. SAE International; 2005. [Google Scholar]
  • 71. Fajen BR. Perceptual learning and the visual control of braking. Perception & Psychophysics. 2008;70(6):1117–1129. doi: 10.3758/PP.70.6.1117 [DOI] [PubMed] [Google Scholar]
  • 72. Markkula G, Engström J, Lodin J, Bärgman J, Victor T. A Farewell to Brake Reaction Times? Kinematics-Dependent Brake Response in Naturalistic Rear-End Emergencies. Accident Analysis & Prevention. 2016;95:209–226. doi: 10.1016/j.aap.2016.07.007 [DOI] [PubMed] [Google Scholar]
  • 73. Ratcliff R, Strayer D. Modeling simple driving tasks with a one-boundary diffusion model. Psychonomic Bulletin & Review. 2014;21(3):577–589. doi: 10.3758/s13423-013-0541-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74. Ratcliff R. Modeling one-choice and two-choice driving tasks. Attention, Perception, & Psychophysics. 2015;77(6):2134–2144. doi: 10.3758/s13414-015-0911-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75. Johnson CA, Leibowitz HW. Velocity-time reciprocity in the perception of motion: Foveal and peripheral determinations. Vision Research. 1976;16(2):177–180. doi: 10.1016/0042-6989(76)90095-X [DOI] [PubMed] [Google Scholar]
  • 76. Soyka F, Bülthoff HH, Barnett-Cowan M. Temporal processing of self-motion: modeling reaction times for rotations and translations. Experimental Brain Research. 2013;228(1):51–62. doi: 10.1007/s00221-013-3536-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Brosnan MB, Sabaroedin K, Silk T, Genc S, Newman DP, Loughnane GM, et al. Evidence accumulation during perceptual decisions in humans varies as a function of dorsal frontoparietal organization. Nature Human Behaviour. 2020;4(8):844–855. doi: 10.1038/s41562-020-0863-4 [DOI] [PubMed] [Google Scholar]
  • 78. Cisek P. Resynthesizing behavior through phylogenetic refinement. Attention, Perception, & Psychophysics. 2019;81(7):2265–2287. doi: 10.3758/s13414-019-01760-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79. Northmore DPM, Levine ES, Schneider GE. Behavior evoked by electrical stimulation of the hamster superior colliculus. Experimental Brain Research. 1988;73(3):595–605. doi: 10.1007/BF00406619 [DOI] [PubMed] [Google Scholar]
  • 80. Sun H, Frost BJ. Computation of different optical variables of looming objects in pigeon nucleus rotundus neurons. Nature Neuroscience. 1998;1(4):296–303. doi: 10.1038/1110 [DOI] [PubMed] [Google Scholar]
  • 81. Wu LQ, Niu YQ, Yang J, Wang SR. Tectal neurons signal impending collision of looming objects in the pigeon. European Journal of Neuroscience. 2005;22(9):2325–2331. doi: 10.1111/j.1460-9568.2005.04397.x [DOI] [PubMed] [Google Scholar]
  • 82. Cléry JC, Schaeffer DJ, Hori Y, Gilbert KM, Hayrynen LK, Gati JS, et al. Looming and receding visual networks in awake marmosets investigated with fMRI. NeuroImage. 2020;215:116815. doi: 10.1016/j.neuroimage.2020.116815 [DOI] [PubMed] [Google Scholar]
  • 83. Kastner S, Pinsk MA. Visual attention as a multilevel selection process. Cognitive, Affective, & Behavioral Neuroscience. 2004;4(4):483–500. doi: 10.3758/CABN.4.4.483 [DOI] [PubMed] [Google Scholar]
  • 84. Kaas JH, Lyon DC. Pulvinar contributions to the dorsal and ventral streams of visual processing in primates. Brain Research Reviews. 2007;55(2):285–296. doi: 10.1016/j.brainresrev.2007.02.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85. Brainard DH. The Psychophysics Toolbox. Spatial Vision. 1997;10(4):433–436. doi: 10.1163/156856897X00357 [DOI] [PubMed] [Google Scholar]
  • 86. Kleiner M, Brainard D, Pelli D. What’s new in Psychtoolbox-3? Perception. 2007;36(1 supplement). [Google Scholar]
  • 87. Lloyd CJ, Winterbottom MD, Gaska JP, Williams LA. Effects of display pixel pitch and antialiasing on threshold Vernier acuity. In: Proceedings of the 2015 IMAGE Society Annual Conference. Dayton, OH; 2015.
  • 88. Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods. 2004;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009 [DOI] [PubMed] [Google Scholar]
  • 89. Bigdely-Shamlo N, Mullen T, Kothe C, Su KM, Robbins KA. The PREP pipeline: standardized preprocessing for large-scale EEG analysis. Frontiers in Neuroinformatics. 2015;9. doi: 10.3389/fninf.2015.00016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90. Nunez MD, Vandekerckhove J, Srinivasan R. How attention influences perceptual decision making: Single-trial EEG correlates of drift-diffusion model parameters. Journal of Mathematical Psychology. 2017;76:117–130. doi: 10.1016/j.jmp.2016.03.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 91. Ratcliff R, Hasegawa YT, Hasegawa RP, Smith PL, Segraves MA. Dual Diffusion Model for Single-Cell Recording Data From the Superior Colliculus in a Brightness-Discrimination Task. Journal of Neurophysiology. 2007;97(2):1756–1774. doi: 10.1152/jn.00393.2006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 92. Purcell BA, Heitz RP, Cohen JY, Schall JD, Logan GD, Palmeri TJ. Neurally constrained modeling of perceptual decision making. Psychological Review. 2010;117(4):1113–1143. doi: 10.1037/a0020311 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 93. Beaumont MA. Approximate Bayesian Computation in Evolution and Ecology. Annual Review of Ecology, Evolution, and Systematics. 2010;41(1):379–406. doi: 10.1146/annurev-ecolsys-102209-144621 [DOI] [Google Scholar]
  • 94. Turner BM, Van Zandt T. A tutorial on approximate Bayesian computation. Journal of Mathematical Psychology. 2012;56(2):69–85. doi: 10.1016/j.jmp.2012.02.005 [DOI] [Google Scholar]
  • 95. Ratcliff R, Tuerlinckx F. Estimating parameters of the diffusion model: Approaches to dealing with contaminant reaction times and parameter variability. Psychonomic Bulletin & Review. 2002;9(3):438–481. doi: 10.3758/bf03196302 [DOI] [PMC free article] [PubMed] [Google Scholar]
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009096.r001

Decision Letter 0

Wolfgang Einhäuser, Marieke Karlijn van Vugt

19 Dec 2020

Dear Prof. Markkula,

Thank you very much for submitting your manuscript "Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

All reviewers indicate they find this an interesting paper, with an interesting application of accumulator models to a real-life situation. They also all have some concerns. I agree with these concerns, which in the main boil down to the following issues that need to be addressed:

1) to what extent is this really a change detection problem rather than a two-alternative forced choice decision, and how does this impact the modeling? I would like to see, if possible, the single-threshold model Reviewer 1 suggests

2) exactly how do the situational variables map onto the model parameters, and could other mappings also be possible, as Reviewer 2 is suggesting?

3) is it possible to estimate the CPP onset with certainty, as Reviewer 1 asks? And could the CPP become more "normal" when a different high-pass filter is used, as Reviewer 3 suggests?

4) a final, more theoretical problem, related to issue 1), is how does the accumulator model "know" when to start?

I invite you to submit a revised version of the manuscript in which these issues, as well as the issues raised by the reviewers are addressed. And for now, happy holidays!

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Marieke Karlijn van Vugt, PhD

Associate Editor

PLOS Computational Biology

Wolfgang Einhäuser

Deputy Editor

PLOS Computational Biology

***********************


Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The authors examine a model of collision threat detection by humans. They contrast an evidence accumulation model of this capability against a simple threshold model without evidence accumulation. The evidence accumulation style of model wins hands down. That model comparison seems quite compelling. They also examine the centroparietal positivity (CPP) computed from EEG recordings of the participants during task performance. In their work, the CPP does not appear to reflect an evidence accumulation process, but rather something closer to decision readout perhaps. This is in contrast to the results of Kelly & O'Connell and colleagues, who established a relationship between evidence accumulation and the CPP. The stimuli here (looming vehicles), however, might not be processed in quite the same way as less naturalistic stimuli. Still, there was a CPP, and the absence of correlation with evidence accumulation is noteworthy.

I found the paper to be interesting, novel, and of obvious practical importance. I have a few larger and more minor points of concern.

Major issues:

1) The experimental situation isn’t just different from standard perceptual tasks like dot motion discrimination in terms of the nature of the stimulus, but also in terms of the types of choices. This task does not involve two choices. It involves only one, and the question is when to choose it. This is more like a change-detection task than the two-choice type of task usually addressed by the diffusion model, isn't it? How does a change-detection model compare to the diffusion model? I believe change-detection algorithms such as the CUSUM test are accumulators too, but without a lower threshold. This is not necessarily a bad thing, but it just makes the mapping to the decision literature a little less straightforward. Isn't this really a change-detection problem? I think the authors ought to consider that.
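The one-sided change-detection idea raised here can be sketched in a few lines; this is an illustrative CUSUM detector with made-up parameters (`drift_ref`, `slack`, `threshold` are hypothetical, not values from the paper or the review):

```python
import numpy as np

def cusum_onesided(x, drift_ref=0.0, slack=0.5, threshold=5.0):
    """One-sided CUSUM change detector: accumulate deviations above a
    reference level minus a slack term, clipping the statistic at zero
    from below (i.e., no lower decision threshold). Returns the index
    of the first upper-threshold crossing, or None if none occurs."""
    s = 0.0
    for i, xi in enumerate(x):
        # positive deviations build up; the clip at zero discards
        # accumulated evidence against a change
        s = max(0.0, s + (xi - drift_ref) - slack)
        if s >= threshold:
            return i
    return None

# Toy signal: zero-mean noise, then a mean shift at sample 200
rng = np.random.default_rng(0)
pre = rng.normal(0.0, 1.0, 200)
post = rng.normal(1.5, 1.0, 200)
onset = cusum_onesided(np.concatenate([pre, post]))
```

As the reviewer notes, this is still an accumulator; the only structural difference from a two-boundary diffusion is the absence of a lower bound on the decision statistic.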

2) Further, there are applications of single-boundary diffusion models to, among other things, interval timing (Simen et al., 2016). The nice thing about a single boundary model is that it has a very simple, closed-form expression for the first-passage time (or decision time) distribution, which is the inverse Gaussian distribution. This is much easier to compute than the first-passage time distribution of a two-boundary model. Much faster than approximate Bayesian computation too. It might be worth fitting to the data.
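For reference, the closed-form first-passage density mentioned here can be written down directly; the drift and boundary values below are illustrative only:

```python
import numpy as np

def inverse_gaussian_fpt_pdf(t, v, a, sigma=1.0):
    """First-passage-time density of a single-boundary diffusion with
    drift v > 0, boundary a > 0, and noise sigma: the inverse Gaussian
    (Wald) distribution, with mean a / v and shape (a / sigma)**2."""
    t = np.asarray(t, dtype=float)
    return (a / (sigma * np.sqrt(2.0 * np.pi * t ** 3))
            * np.exp(-((a - v * t) ** 2) / (2.0 * sigma ** 2 * t)))

# Numerical check on an illustrative parameter set (v = 1, a = 2):
t = np.linspace(1e-3, 20.0, 20000)
dt = t[1] - t[0]
pdf = inverse_gaussian_fpt_pdf(t, v=1.0, a=2.0)
total_mass = np.sum(pdf) * dt      # should be close to 1
mean_fpt = np.sum(t * pdf) * dt    # should be close to a / v = 2
```

Because the density is available in closed form, likelihood evaluation is essentially free compared to simulating a two-boundary model or running approximate Bayesian computation.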

3) It's not clear to me how the authors embedded the assumption of a declining threshold within trials into their model fits. What was the shape and equation for that function? On p. 13, the text suggests there was not in fact a declining threshold. It was fixed at 1. So how does that square with the description in Fig. 1? The text should describe how the declining threshold was modeled, or why it wasn't, given Fig. 1.

4) Need a table of the model parameters and their fitted values and any other info needed (hyperparameters?). I think one should always display that in publications that use this type of model.

5) I'm concerned about excessive reliance on the *onset* of CPP activity as the main dependent variable in the EEG section of the paper. Maybe that’s not the best feature of the CPP to examine, particularly when it would seem that onset-estimates might be sensitive to the way in which they are computed (?). It seems from the figure that buildup rate and maximum amplitude clearly distinguish the conditions. It would help to know more details about the behavior (average RT) to compare to the CPP data.

Minor points:

1) Fig. 1, B: the time axis isn't labeled under the response time histograms. Please label

2) p. 4: Define \dot{\theta}. It is used before it's defined. Please define first.

3) p. 5: The statement about "bias" is confusing, I think. Drift is indeed related to evidence accumulation in typical applications of the diffusion model, but "bias" is usually considered a different form of shifting the evidence accumulation process up or down at the start of a trial. It could occasionally be said that the drift is biased, but it sounds odd to me to treat bias and drift as the same thing. I think users of the diffusion model would agree.
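The distinction can be made concrete with a toy two-boundary diffusion simulation; `simulate_ddm` and all parameter values here are hypothetical illustrations, not the paper's model:

```python
import numpy as np

def simulate_ddm(v, z, a=1.0, sigma=1.0, dt=1e-3, n_trials=1000, seed=0):
    """Euler simulation of a two-boundary diffusion with drift v and
    starting point z in (0, a); returns the fraction of trials that
    terminate at the upper boundary."""
    rng = np.random.default_rng(seed)
    upper = 0
    for _ in range(n_trials):
        x = z
        while 0.0 < x < a:
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        upper += x >= a
    return upper / n_trials

# Starting-point "bias" shifts the process up at trial onset (here with
# zero drift, analytic P(upper) = z / a = 0.7); a biased drift instead
# changes the rate of accumulation (here analytic P(upper) ≈ 0.73).
p_start_bias = simulate_ddm(v=0.0, z=0.7)
p_drift = simulate_ddm(v=1.0, z=0.5)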

4) p. 6: Wow, I did not expect to see evidence in favor of auditory over combined auditory and visual

5) p. 7: Please define "onset" here. How was it computed?

6) I find the word “stringent” to be somewhat unusual. I notice that it sounds unfamiliar to me from the perceptual decision literature. Maybe “powerful” would actually be a better word? The model is powerful because the tests of its predictions are powerful in the statistical sense of power to discriminate between hypotheses.

7) p. 9: "we show that in our paradigm the onset of the CPP, rather than its build-up profile, can be explained by evidence accumulation". It didn't seem to me that the authors investigated the build-up profile of the CPP that closely. Fig. 3C also seems to show something like differences in the CPP buildup profile, to my eye.

8) p. 10: I like the ending of the Discussion. It seems that this paper calls for a controlled comparison between say, dot motion discrimination, and looming automobile detection. The authors ought to consider doing that comparison in the future. It would really inform the discussion around the CPP and what it reflects.

Reviewer #2: This paper describes a nice proof of concept for an early stage evidence accumulation process that depends on optical expansion and later stage decision process that is measured by CPP. My one major caution for the conclusions in the paper is that they result from highly flexible models (so goodness of fit is not necessarily an indicator of identifying a useful model of the data) and that the evidence of the distinct stages is still dependent on the assumptions that went into the analysis. I include a few minor comments below.

I would appreciate a bit more clarification on the variables in the study. I suggest making explicit the relationship between response time and average optical expansion rate. Also, clarify why that was the independent variable rather than response time.

Deceleration speed and initial distance were crossed, but there was no indication that their interaction was tested statistically. Why was that test not included?

Figure 1: Although there is a lot already crammed in, there still needs to be axis labels for each subfigure.

To what extent is the estimation procedure for the CPP onset novel? There was not a clear justification for this approach in the paper. Was this a planned analysis? How does the uncertainty in the estimates from this procedure influence the downstream analyses (i.e., the accumulator model fits).

It may not be appropriate for this journal, but I suggest adding more detail on the cautionary note about applying the LDT to real-world situations. Is the way the LDT is used conservative enough that the error does not matter? How would the authors suggest improving the approach?

Pg 10, ln 341: It is not clear how the two stage accumulation process implies that visual looming accumulation occurs without awareness. Please clarify.

Pg 11, ln 382: How much data was removed? Do the timeouts vary across levels of the independent variable?

Pg 31, ln 450: Issue with the less than sign.

Reviewer #3: In this paper, Markkula et al use a combination of computational modeling and human electrophysiology to examine the mechanisms of looming decisions, in a naturalistic task of detecting when a car in front has begun to decelerate. The results show quite convincingly that there is a role for accumulation of optical expansion rate over time in these decisions, where previous research – and policy - has assumed that there is a fixed threshold set on momentary optical expansion rate. They also find that during this task, a centro-parietal ERP signal (CPP) associated with evidence accumulation is strongly present, but, interestingly, has a much more short-lived temporal extent and seems to begin rising a fixed time before the response is made regardless of the strength of the evidence on which the decision is based. The conclusion is thus that evidence accumulation is involved in the looming decision but that in this context, the CPP reflects a second stage of processing after accumulation has reached threshold. I find this an excellent paper, clearly written and with thorough and innovative methods, on a very interesting question that has clear real-world relevance. My comments mostly seek clarifications on how the models were structured and whether the most important alternative accumulation models have been ruled out, none of which harm the central conclusion that these decisions do involve some form of accumulation over time and not simply a fixed looming threshold.

My first question may simply be out of ignorance of the looming field. But it struck me that what most decision makers might do in this situation – because it seems the more natural thing we do when driving a car – is set a threshold on proximity (a translation of size into an estimate of number of metres away), rather than a rate of optical expansion, because if the car is far enough in front, a very steep rate of optical expansion might not tend to call for braking just yet. I wondered more generally why the “evidence” in this scenario isn’t proximity rather than optical expansion rate, and if the decision is re-cast as the former, does that obviate the accumulation since distance is the integral of speed? If my comment is off the mark, it might nevertheless serve to highlight what a more general journal audience might think when looking at the situation.

The most puzzling aspect of the results for me was that the best quantitative fit to the behavioural data did not seem to require either leakage or a threshold set on the evidence. Either of these mechanisms would be able to explain a short-lived evidence accumulation process as suggested by the CPP, and it also seems that either of them is required to fully account for how decisions can be made under such gradual and temporally-uncertain conditions. That is, the winning model has to assume that evidence accumulation begins some fixed time after the onset of the target regardless of the strength of that target, which almost assumes the brain precisely knows the timing of onset in a way that it couldn’t possibly, given the timing jitter. A threshold on the evidence would provide a very simple mechanism for knowing when to kick off the accumulation process, rather like Purcell and Schall’s gated accumulator model, and leakage would obviate the need for the accumulator to be kicked off at all – it would just be continuous. For both of these mechanisms, the decision process could be modelled from the beginning of the stimulus and allow naturally for the targets appearing at variable times, and potentially also any effects of that timing (which were not discussed in the paper – were later targets detected differently than earlier ones? See Boubenec et al 2017). But was either model implemented in this way? I also wondered about the impact of constraining evidence to be positive (“E(i) ≥ 0”), which crops up in the supplemental information (top p4) but not in the main paper and should be more fully explained (ideally, in the mathematical equation, e.g. by using a half-wave rectification operator if that is what is going on). If evidence samples are not permitted to be negative, then the accumulation of noise alone would build up and eventually cross threshold, because positive values are added to the total but negative ones ignored. 
Might this contribute to the simulated timecourse of accumulation being longer than it would otherwise be? And there is no provision for the inevitable noisiness of the crossing of a sensory threshold, for instance in a nondecision time variability parameter or by implementing the threshold not on the pure physical signal but the noisy evidence representation itself, with noise sigma and all. My overall point is that given the ability of threshold and leak models to provide a fuller explanation of performance of the task starting from stimulus outset, as well as a short-lived CPP, the details and justification for the specific ways these models were implemented are very important to lay out fully, with perhaps a discussion of alternative ways (e.g. NOT rectifying E(i)).
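The rectification concern above can be made concrete with a toy simulation (the function name and all parameter values here are illustrative choices, not taken from the manuscript's model): if negative evidence samples are discarded before accumulation, pure zero-mean noise drifts upward and will eventually cross any finite threshold.

```python
import random

def time_to_threshold(threshold=5.0, sigma=1.0, dt=0.01, max_steps=100_000, seed=1):
    """Accumulate half-wave-rectified zero-mean Gaussian noise until the
    running total crosses `threshold`.  Because negative samples are
    discarded (E(i) >= 0) while positive ones are summed, the total has a
    positive mean drift and crosses any finite threshold eventually.
    Returns the first-crossing time in seconds, or None if never reached."""
    rng = random.Random(seed)
    total = 0.0
    for step in range(1, max_steps + 1):
        sample = rng.gauss(0.0, sigma)
        total += max(sample, 0.0) * dt  # rectification: negative evidence ignored
        if total >= threshold:
            return step * dt
    return None

print(time_to_threshold())  # noise alone reaches the threshold in finite time
```

With these (arbitrary) settings the expected drift per step is sigma/sqrt(2*pi) * dt, so even with no signal present the accumulator crosses the threshold after roughly threshold / (0.4 * sigma * dt) steps.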

Also related to the remarkably short-lived CPP, I wondered what possible impact the fairly strong high pass filter (0.1 Hz cutoff) may have had. This would have the effect of reducing slow shifts in the signals, of the kind that the model predicts but are missing in the real CPPs. It is therefore important to explore the impact of removing the high pass filter.

More minor:

Drugowitsch et al (2014; eLife) examined decisions about time-varying evidence and might be worth a citation given the authors’ framing in terms of breaking away from stationary evidence

line 83 is written as if it is assumed the reader is already familiar with Lamble et al. I also suggest finishing the intro or starting the results with a brief intro to the physics of the situation for the uninitiated, e.g. define theta!

I suggest briefly stating in the main part of the results, perhaps in the figure 1 legend, that misses and false alarms were so few as to be not worth showing, because otherwise readers may wonder if the behavioural data are incompletely shown, until they reach the methods, where it is stated in an odd place, in the EEG preprocessing section. This is the same place we learn what proportion of trials were catch trials, which is a detail I believe should be in the results where the task is described as it can be integral to a subject’s responding strategy. I may have missed it, but I did not see a statement of the proportion of catch trials that resulted in false alarms, and whether those were included in modelling?

In Fig 3C: I suggest adding a note to the legend to explain that the splaying-out of the model simulation’s traces across conditions before the response time is a result of post-decision accumulation and that the time of coalescence some ~300 ms before the response marks the point of threshold crossing in the model. Otherwise I think it may be confusing for some readers that the model has a constant threshold yet the simulated traces reach different levels at response time, until they have sifted through the methods.

Line 227: the “last time ... exceeded?” Is this a typo – should it be the last time 30% was NOT exceeded?

The term “pre-decision” is used several times to refer to, e.g. , the response-locked CPP, but I think this will be quite confusing for most, especially when most of the CPP seems to rise after the timepoint the model indicates as the threshold crossing. Perhaps stick to “response-locked,” “pre-response” or “pre-commitment”

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Patrick A. Simen

Reviewer #2: Yes: Joseph W. Houpt

Reviewer #3: Yes: Simon Kelly

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009096.r003

Decision Letter 1

Wolfgang Einhäuser, Marieke Karlijn van Vugt

26 Mar 2021

Dear Prof. Markkula,

Thank you very much for submitting your manuscript "Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

Thank you very much for your thoughtful revisions. Two reviewers have now accepted your manuscript. Reviewer 3 still has one concern about the filtering of the ERPs, and I concur. I encourage you to submit a revision that addresses this concern. Many thanks in advance!

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Marieke Karlijn van Vugt, PhD

Associate Editor

PLOS Computational Biology

Wolfgang Einhäuser

Deputy Editor

PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately:

[LINK]


Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: I am pretty satisfied with the changes the authors made. First, I want to acknowledge that they were correct in their responses about three mistakes I made in my original review. And then I have one more concern prompted by another reviewer's comments, and one more suggestion about highlighting the optimality properties of CUSUM, and perhaps their own model.

First mistake: I understand now that the evidence/drift is non-constant over time, so there are no closed-form response time distributions.

Second mistake: I'm not sure how I managed to mistake A and AV as standing for Audio and Audio-Visual -- I must've been reviewing another paper with audio and visual conditions at the same time and confused them! Maybe using "VA" instead of "AV" would've prevented that misreading on my part. That's a trivial change that could reduce the chance of other readers making the same silly mistake. It also has the same order as the corresponding words in "variable-gain accumulator".

Third mistake: Regarding my misinterpretation about whether there were declining thresholds in the evidence accumulator model -- I had to reconstruct why I thought that. It was because of the caption to Fig. 1, parts B and C, in which different final θ̇ values at the time of responding are noted -- these final values decrease for slower looming stimuli. Of course, the time-integral of θ̇ need not be lower when there are lower final θ̇ values. So there's no problem -- I was just confused again. However, since I did make that mistake of interpretation, maybe other readers would too, and it might be worth emphasizing in the caption that the top panel of 1B shows that the integrals of the θ̇ values in the bottom panel do indeed reach the same fixed threshold at decision time.

New concern, but not a serious one: I think it's interesting, on my second consideration of this manuscript and of the other reviews, that conceptually, integrating the time-derivative of radial image size over time, without leak, from the start of the trial up to now, is equivalent at every moment to computing the difference between the current radial size and the initial radial size, by the fundamental theorem of calculus. That's only important in the following sense: I can either physically compute a radial velocity, and also compute a running integral of that velocity, or I can take the current size and subtract off my memory of the initial size at the start of the trial. Right? It might seem simpler to some readers to just directly subtract two quantities rather than differentiate, and then undo the differentiation by integrating; and why would the brain use calculus operations when it could just use addition and subtraction?

Personally, I suspect we DO need to compute time derivatives and integrals for all sorts of other reasons, so it's arguably parsimonious for the brain to use differentiation and integration operations. More importantly, the fatal flaw of the pure addition/subtraction approach is that you need to remember an initial radial size, potentially for quite a long time. The velocity computation, in contrast, can be done with a much shorter-duration memory requirement, as in an Euler-method approximation of the derivative -- for that, I just need to remember the radial size from a little while ago in order to subtract it from the current size. And the integration part is really just addition anyway. This might be worth mentioning, if it isn't already mentioned (I couldn't find it).
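The equivalence described above can be checked numerically. The sketch below uses an illustrative approaching-object geometry (radius, distance, and speed values are arbitrary, not the paper's stimulus parameters): the running Euler-method integral of the optical expansion rate θ̇ recovers θ(t) − θ(0) at every sample, per the fundamental theorem of calculus.

```python
import math

# Optical angle theta(t) subtended by an object of radius r approaching at
# constant speed v from initial distance d0: theta = 2 * atan(r / d(t)).
r, d0, v, dt = 0.9, 60.0, 15.0, 0.001
ts = [i * dt for i in range(int(3.0 / dt))]
theta = [2 * math.atan(r / (d0 - v * t)) for t in ts]

# Euler approximation of theta_dot (short-memory: only the previous sample
# is needed), then its running integral.  The integral should equal
# theta(t) - theta(0) at every time point, up to floating-point rounding.
theta_dot = [(theta[i + 1] - theta[i]) / dt for i in range(len(theta) - 1)]
integral = 0.0
max_err = 0.0
for i, td in enumerate(theta_dot):
    integral += td * dt
    max_err = max(max_err, abs(integral - (theta[i + 1] - theta[0])))

print(max_err)  # ~0: integrating the expansion rate == subtracting sizes
```

The differentiate-then-integrate route needs only a short memory (the previous sample), whereas the direct subtraction requires storing the initial size for the whole trial, which is the reviewer's point.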

Regarding the reference to Page's CUSUM procedure for change point detection: it would be interesting to analyze more formally to what extent this model implements CUSUM, as has been done for two-boundary drift-diffusion and the sequential probability ratio test. That's not necessary for this paper. However, as I read the description of CUSUM, something, often the likelihood of the data sample (I think?), is always subtracted off the next data sample before it is added to the running sum. This sounds like a leak term, but I guess it could also just be approximately implemented with a constant inhibition of the accumulator.

Finally, when mentioning CUSUM, I think it would be well worth highlighting the optimality property of CUSUM for certain kinds of scenarios, in terms of minimizing the detection delay of a maximally costly change, while also minimizing the cost of false alarms. Sounds perfect for a case in which the maximally costly change is a deadly collision from failure to detect a vehicle's rapid approach. So there's a reasonable chance that the model here is essentially optimal for maximizing the chance of survival, without also leading to slamming on the brakes every five feet.
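For readers unfamiliar with it, Page's one-sided CUSUM can be sketched in a few lines (the reference value k and decision interval h below are illustrative choices, not parameters from the manuscript):

```python
import random

def cusum_alarm(samples, k=0.5, h=8.0):
    """One-sided CUSUM (Page, 1954): subtract a reference value k from each
    sample, accumulate, and clip the statistic at zero.  The subtraction of
    k is the 'leak-like' term referred to above; the clip at zero keeps the
    statistic from drifting negative before the change.  Returns the index
    of the first sample at which the statistic exceeds h, or None."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - k)
        if s > h:
            return i
    return None

rng = random.Random(7)
pre = [rng.gauss(0.0, 1.0) for _ in range(200)]   # pre-change: mean 0
post = [rng.gauss(1.0, 1.0) for _ in range(200)]  # change at index 200: mean 1
alarm = cusum_alarm(pre + post)
print(alarm)  # index of first alarm (expected shortly after the change at 200)
```

Raising h lengthens the average run to a false alarm while adding roughly h/(mean shift − k) samples of detection delay, which is exactly the trade-off between missed collisions and needless braking discussed above.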

As far as the CPP onsets are concerned, I think the new wording in the paper handles that issue, but I would defer to Reviewer 3 on that.

Reviewer #2: The revised article reads well and is generally quite clear. The authors have done well addressing my concerns regarding the earlier submission. I apologize for the independent/dependent mix-up and now I am going to have to admit making the mistake to my intro psych class.

Reviewer #3: The authors have provided clear and thoughtful replies to my comments and have made excellent amendments to the manuscript.

On the model specification, it is now clearer how and why it was implemented, and with these clarifications I am satisfied that there aren't obvious alternatives that should have been visited.

On the high-pass filter, however, while I do agree that the waveforms are similar to look at when it is removed, I do think it noteworthy that the first half of the timeframe is less flat, and may contain a subtle buildup aspect. A 0.1 Hz low cutoff tends to be about as high as you can go without distorting regular, rapidly-fluctuating ERPs, so it stands to reason that it might begin to subtly distort signals as slow as those involved in the current task. I suspect that if you were to apply a similar 0.1 Hz filter on the simulated waveforms from the model, it may appreciably alter the waveforms in a way that makes the buildup more local to the response - not enough to bring it into alignment with the CPP, perhaps, but in principle, it doesn't seem that the comparison can be fully fair when the CPP is subjected to a transformation that attenuates gradual aspects when the simulated waveforms are not.
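The attenuation anticipated here is easy to demonstrate with a first-order RC high-pass at 0.1 Hz applied to an idealized 2 s linear buildup (a stand-in for a slow CPP-like ramp; this simple filter is an assumption for illustration, not the filter actually used in the EEG preprocessing):

```python
import math

def highpass(x, dt, fc):
    """Discrete first-order RC high-pass filter with cutoff fc in Hz."""
    rc = 1.0 / (2.0 * math.pi * fc)
    a = rc / (rc + dt)
    y = [0.0]
    for i in range(1, len(x)):
        y.append(a * (y[-1] + x[i] - x[i - 1]))
    return y

dt = 0.002                      # 500 Hz sampling
n = int(2.0 / dt)               # a 2 s linear buildup
ramp = [i / (n - 1) for i in range(n)]
filtered = highpass(ramp, dt, fc=0.1)

print(ramp[-1], filtered[-1])   # the filtered ramp peaks well below the true ramp
```

With a 0.1 Hz cutoff the filter's time constant (~1.6 s) is comparable to the ramp duration, so a substantial fraction of the slow buildup is removed, consistent with the suggestion that the unfiltered comparison is the fairer one.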

I therefore suggest replacing the CPP waveforms and onset estimates in the main figure with the unfiltered ones, or at the very least, computing and showing the unfiltered CPP onset estimates similar to main figure 3D, in supplementary Fig S6A.

Minor:

The abstract says "the model explains CPP onset rather than buildup" but it will not be clear what that means without getting further into the paper - consider whether a tweak here can help to convey that result more clearly, e.g. "The model's estimated decision times align with CPP onset?"

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: None

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Patrick Simen

Reviewer #2: Yes: Joseph Houpt

Reviewer #3: Yes: Simon Kelly

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009096.r005

Decision Letter 2

Wolfgang Einhäuser, Marieke Karlijn van Vugt

19 May 2021

Dear Prof. Markkula,

We are pleased to inform you that your manuscript 'Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Marieke Karlijn van Vugt, PhD

Associate Editor

PLOS Computational Biology

Wolfgang Einhäuser

Deputy Editor

PLOS Computational Biology

***********************************************************

Congratulations! Your paper has been accepted. Based on my reading, you have satisfactorily addressed the comments of all reviewers.

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1009096.r006

Acceptance letter

Wolfgang Einhäuser, Marieke Karlijn van Vugt

22 Jun 2021

PCOMPBIOL-D-20-02020R2

Accumulation of continuously time-varying sensory evidence constrains neural and behavioral responses in human collision threat detection

Dear Dr Markkula,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Olena Szabo

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Additional method details, analyses, and results.

    (PDF)

    Attachment

    Submitted filename: Response to reviewers - v1.1.pdf

    Attachment

    Submitted filename: Response to reviewers - Rev2.pdf

    Data Availability Statement

    The primary research data for this study, as well as the software code implementing the experimental paradigm, data analyses, and computational models, are available at: https://doi.org/10.17605/OSF.IO/KU3H4.

