The Journal of Neuroscience. 2016 Jan 6;36(1):1–3. doi: 10.1523/JNEUROSCI.3690-15.2016

Combining Computational Models of Cognition and Neural Data to Learn about Mixed Task Strategies

Gilles de Hollander
PMCID: PMC6601790  PMID: 26740643

In perceptual decision-making tasks, participants are usually assumed to apply a single cognitive strategy throughout the task. Variability in observed behavior (e.g., in reaction times) is then explained as variability within that one cognitive process. For example, most theories of perceptual decision-making assume that variability in reaction times results from variability in the amount of information the stimulus provides, the efficiency of information processing, the amount of response caution, and the speed of the motor response (Gold and Shadlen, 2007; Brown and Heathcote, 2008; Ratcliff and McKoon, 2008; Forstmann et al., 2015).

Such theories assume that during every trial of a perceptual decision-making task, stimulus information is processed and used to guide a choice. This assumption could be challenged by hypothesizing that, on a subset of trials, participants are guessing instead of using the stimulus information provided. Such a guessing strategy seems plausible in speeded decision-making, where a response has to be given very quickly. In a recent publication in The Journal of Neuroscience, Noorbaloochi and colleagues (2015) suggest that, under such circumstances, participants are likely to use mixed task strategies.

Interestingly, such an account of speeded decision-making, in which participants under speed stress sometimes resort to guessing, has been tested before, with conflicting results (Dutilh et al., 2011; van Maanen, 2015). Related work has suggested that behavioral patterns in decision-making under severe speed stress sometimes cannot be explained by classical decision-making models, pointing to an urgency signal rather than an additional guess process (Hawkins et al., 2015a).

In the study by Noorbaloochi et al. (2015), participants judged whether a rectangle was shifted to the left or to the right of a reference location and received monetary compensation for every correct answer. Additionally, before every stimulus, a cue indicated whether an additional reward would be given for a correct response in a specific direction and, if so, in which direction. Participants were rewarded only when they correctly indicated the direction of the shift, but the reward was substantially larger when that direction was congruent with the cue. This biased responses toward the alternative with the higher payoff (see also Diederich and Busemeyer, 2006; Mulder et al., 2012).

Computational models of cognition offer a way to investigate such choice behavior precisely. They make concrete, quantitative predictions for behavioral data and map raw behavioral measurements onto meaningful latent cognitive variables (Lewandowsky and Farrell, 2010; Forstmann et al., 2011). The most prominent such models in perceptual decision-making are sequential sampling models (Gold and Shadlen, 2007; Brown and Heathcote, 2008; Ratcliff and McKoon, 2008). These models explain differences in choice proportions and in the shape of reaction time (RT) distributions by differences in model parameters, each of which captures some aspect of decision-making, such as the efficiency of information processing or response caution. In sequential sampling models, every choice option in an experiment is represented by an evidence accumulator. During a trial, all accumulators race against each other, and the first one to cross its response threshold determines the choice on that trial. The accumulators can have different rates to account for the quality of information the stimulus provides. For example, on a very easy trial, the average rate of the accumulator corresponding to the correct response will be much higher than the rate of the accumulator corresponding to the incorrect response, so the former is more likely to win the race, resulting in a correct response. The accumulators can also have different starting points, accounting for the amount of evidence the decision maker deems sufficient to choose a particular option.
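To make this race architecture concrete, the following is a minimal sketch of one LBA trial (Brown and Heathcote, 2008). All parameter values are purely illustrative; they are not the estimates reported by Noorbaloochi et al. (2015).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lba_trial(drifts, b=1.0, A=0.5, s=0.25, t0=0.2):
    """One trial of a linear ballistic accumulator race.

    drifts : mean drift rate per accumulator (one per response option)
    b      : response threshold
    A      : upper bound of the uniform start-point distribution
    s      : SD of the across-trial drift-rate distribution
    t0     : non-decision time (stimulus encoding + motor response)
    """
    k = rng.uniform(0, A, size=len(drifts))   # random start points
    v = rng.normal(drifts, s)                 # trial-specific rates
    v = np.where(v > 0, v, np.nan)            # negative rates never finish
    finish = (b - k) / v                      # time for each racer to hit b
    winner = np.nanargmin(finish)             # first to threshold wins
    return winner, t0 + np.nanmin(finish)

# On an easy trial, the accumulator for the correct response (index 0)
# has the higher mean rate, so it usually wins the race.
results = [simulate_lba_trial([1.2, 0.8]) for _ in range(5000)]
print("P(correct):", np.mean([choice == 0 for choice, _ in results]))
```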

In their study, Noorbaloochi et al. (2015) observed that participants were more likely to choose the cued option, especially when they responded quickly. They used a sequential sampling model [the linear ballistic accumulator (LBA) model; Brown and Heathcote, 2008] to explain this pattern of behavior. Originally, the authors considered two hypotheses. The first was that on biased trials, the starting point of the accumulator for the higher-payoff option was raised and that of the other accumulator was lowered. The second was that on biased trials, the rate of the accumulator for the higher-payoff option was increased and the rate of the other was decreased. Both hypotheses can be implemented in an LBA model and yield specific predictions about the shape of the correct and incorrect reaction time distributions, as well as the proportion of correct responses, in different conditions.
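The two hypotheses amount to different parameter settings of such a race. Here is a hedged sketch with invented values, where index 0 is the cued, higher-payoff option; the exact parameterization used by the authors may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_biased_trial(drifts, A_per_acc, b=1.0, s=0.25, t0=0.2):
    """LBA trial in which each accumulator has its own start-point
    range, so both bias hypotheses can be expressed. Illustrative only."""
    k = rng.uniform(0, np.asarray(A_per_acc))  # per-accumulator start points
    v = rng.normal(drifts, s)
    v = np.where(v > 0, v, np.nan)
    finish = (b - k) / v
    return np.nanargmin(finish), t0 + np.nanmin(finish)

# Hypothesis 1: start-point bias -- the cued accumulator tends to start
# closer to threshold; drift rates are unaffected.
startpoint_bias = dict(drifts=[1.0, 1.0], A_per_acc=[0.7, 0.3])

# Hypothesis 2: drift-rate bias -- the cued accumulator accumulates
# evidence faster; start points are unaffected.
driftrate_bias = dict(drifts=[1.2, 0.8], A_per_acc=[0.5, 0.5])

for name, params in [("start-point bias", startpoint_bias),
                     ("drift-rate bias", driftrate_bias)]:
    trials = [simulate_biased_trial(**params) for _ in range(5000)]
    print(name, "P(cued response):", np.mean([c == 0 for c, _ in trials]))
```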

Importantly, it turned out that neither of the two models could fully explain the observed pattern of behavior. Specifically, they could not account for the occurrence of “fast errors”: incorrect responses with a shorter mean RT than the correct responses.

In addition to behavioral data, electroencephalography (EEG) data were collected throughout the experiment. Such neural data can give additional insight into the cognitive strategy participants use when behavioral data alone fall short (Marr, 1982; Forstmann et al., 2011). Here, the analysis focused on the lateralized readiness potential (LRP), a well-established event-related brain potential measured over motor cortex contralateral to the responding hand, which is thought to be related to decision-making processes (Gratton et al., 1988; Hawkins et al., 2015a). Analysis of the acquired EEG data suggested that on biased trials, but not on unbiased trials, the LRP shifted toward the cued direction shortly after stimulus presentation, well before the large surge of activity that indicated the choice.
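For readers unfamiliar with the measure, the LRP is conventionally derived with a double subtraction that isolates hand-specific motor preparation. A minimal sketch follows; the channel names and array shapes are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def lrp(c3, c4, response_hand):
    """Lateralized readiness potential via the standard double subtraction.

    c3, c4        : (n_trials, n_samples) epochs at left/right motor sites
    response_hand : (n_trials,) array of 'left'/'right' response labels
    """
    right = response_hand == 'right'
    # Contralateral-minus-ipsilateral activity, averaged over both hands,
    # so hand-specific motor preparation keeps a consistent sign.
    return 0.5 * ((c3[right] - c4[right]).mean(axis=0) +
                  (c4[~right] - c3[~right]).mean(axis=0))
```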

Noorbaloochi et al. (2015) interpreted this early component in the LRP as evidence that an additional process was taking place on a subset of the biased trials. In addition to the original two hypotheses of a shift in starting points or in drift rates, a third hypothesis of mixed strategies was formalized using an extended LBA model. In this extended model, a third “fast guess” accumulator races with the two original accumulators, but starts earlier and is completely independent of stimulus information. This extended fast-guess LBA model successfully predicted the observed fast errors, as well as the other patterns in the behavioral data. Importantly, the Akaike information criterion (AIC) for this model was lower than that of the other two models for all participants. The AIC is a measure of the goodness-of-fit of a model that penalizes the number of parameters used. The lower AICs for the fast-guess LBA thus provide evidence that its better fit is not just the result of overfitting due to a larger number of parameters (Vandekerckhove et al., 2015).
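A sketch of the two pieces involved is given below: a race extended with a stimulus-independent guess accumulator, and the AIC used to compare fitted models (AIC = 2k − 2 ln L). The head start, guess rate, and the simplification that a winning guess always emits the cued response are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def fast_guess_trial(drifts, cued=0, head_start=0.1, guess_rate=2.0,
                     b=1.0, A=0.5, s=0.25, t0=0.2):
    """Race between two stimulus-driven accumulators and a third 'fast
    guess' accumulator that ignores the stimulus and starts early."""
    k = rng.uniform(0, A, size=3)
    v = rng.normal(np.append(drifts, guess_rate), s)
    v = np.where(v > 0, v, np.nan)
    finish = (b - k) / v
    finish[2] = max(finish[2] - head_start, 0.0)  # guess started earlier
    winner = np.nanargmin(finish)
    choice = cued if winner == 2 else winner      # winning guess -> cued response
    return choice, t0 + np.nanmin(finish)

def aic(n_params, max_log_likelihood):
    """Akaike information criterion: AIC = 2k - 2*ln(L); lower is better.
    The 2k term penalizes models for each extra free parameter."""
    return 2 * n_params - 2 * max_log_likelihood

# Example: simulate biased trials with illustrative parameters.
choices, rts = zip(*(fast_guess_trial([1.2, 0.8]) for _ in range(5000)))
```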

Although the results and interpretation by Noorbaloochi et al. (2015) are compelling, there is another plausible interpretation of the data that has not been discussed or tested. This alternative account proposes that participants use an urgency signal to reduce their response thresholds during the course of a single trial. A recent reanalysis of several large datasets suggests that both humans and nonhuman primates sometimes apply such a strategy and that the resulting behavioral patterns cannot be described by standard sequential sampling models (Hawkins et al., 2015a). Hawkins et al. (2015a) suggest that such strategies might arise from extensive training on the task, as well as from very short response deadlines. Both extensive training and heavy speed stress were present in the experiment by Noorbaloochi et al. (2015): participants performed >10,000 trials and were paid additional money for faster responses during the training sessions. Furthermore, the RT distributions presented in the paper lack the long tail of slow responses that is so typical of RT distributions (Luce, 1986). Indeed, such Gaussian-like RT distributions are a hallmark of an “incoming bounds” strategy, in which the response thresholds are lowered as a function of elapsed decision time (Hawkins et al., 2015a,b).
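The competing account is easy to sketch: a noisy accumulator whose bound shrinks over time truncates the slow tail of the RT distribution. The simulation below contrasts a fixed bound with a linearly collapsing one; all values are invented for illustration, and a linear collapse is only one of several parametric forms considered by Hawkins et al. (2015a).

```python
import numpy as np

rng = np.random.default_rng(3)

def first_passage(drift=1.0, b0=1.0, collapse=0.0, dt=0.001, max_t=3.0):
    """Decision time of a noisy accumulator between symmetric bounds
    that shrink at `collapse` units per second (0 = fixed bound)."""
    n = int(max_t / dt)
    x = np.cumsum(drift * dt + rng.normal(0.0, np.sqrt(dt), n))  # drift + diffusion
    t = dt * np.arange(1, n + 1)
    bounds = np.maximum(b0 - collapse * t, 0.05)  # collapsing bound, floored
    hits = np.nonzero(np.abs(x) >= bounds)[0]
    return t[hits[0]] if hits.size else max_t

fixed = [first_passage(collapse=0.0) for _ in range(2000)]
shrink = [first_passage(collapse=0.6) for _ in range(2000)]
print("95th percentile RT, fixed bound:      %.2f s" % np.percentile(fixed, 95))
print("95th percentile RT, collapsing bound: %.2f s" % np.percentile(shrink, 95))
```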

Interestingly, the standard LBA models in the study by Noorbaloochi et al. (2015) were unable to account for fast errors, which is at odds with earlier applications of the model (Brown and Heathcote, 2008). These misfits might arise because the model can only account for the lack of a tail in the RT distributions by sacrificing its fit to the fast errors. It would be interesting to see how a sequential sampling model with (nonlinear) incoming bounds would perform compared with the fast-guess LBA.

The EEG data might help distinguish between the two alternative models. To do this, however, the link between the neural data and the cognitive model should be made tighter, using quantitative methods (Turner et al., 2013). For example, if individual differences in the proportion of fast guesses, as estimated by the fast-guess LBA, can be predicted by individual differences in the early component of the LRP, this would strengthen the interpretation that this signal is related to a fast-guess process. Conversely, if the rate of the incoming bounds can be predicted by some component of the LRP, this would strengthen the incoming-bounds interpretation.
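The simplest quantitative version of such a link is an across-participant analysis; Turner et al. (2013) describe a fuller joint Bayesian treatment. A sketch on synthetic data follows; both variables are invented placeholders, not quantities from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic per-participant quantities, purely for illustration: the
# fast-guess proportion estimated by the fast-guess LBA and the
# amplitude of the early LRP component on biased trials.
n_participants = 20
guess_prop = rng.uniform(0.02, 0.25, n_participants)
early_lrp = 5.0 * guess_prop + rng.normal(0.0, 0.2, n_participants)

# If the early LRP component reflects a fast-guess process, these two
# individual-difference measures should correlate across participants.
r, p = stats.pearsonr(guess_prop, early_lrp)
print(f"across-participant correlation: r = {r:.2f}, p = {p:.3f}")
```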

The authors ought to be applauded for publicly sharing the data and modeling code of the paper. Sharing data allows other researchers to fit these data with different models and compare their performance. Data sharing thus helps cognitive neuroscientists distinguish more quickly between competing theories of speeded decision-making implemented in different models, and ultimately furthers our understanding of how we make everyday decisions (Munafò et al., 2014).

Footnotes

Editor's Note: These short, critical reviews of recent papers in the Journal, written exclusively by graduate students or postdoctoral fellows, are intended to summarize the important findings of the paper and provide additional insight and commentary. For more information on the format and purpose of the Journal Club, please see http://www.jneurosci.org/misc/ifa_features.shtml.

I thank Laura Fontanesi, Birte Forstmann, Guy Hawkins, and Leendert van Maanen for insightful discussions.

The author declares no competing financial interests.

References

  1. Brown SD, Heathcote A. The simplest complete model of choice response time: linear ballistic accumulation. Cogn Psychol. 2008;57:153–178. doi: 10.1016/j.cogpsych.2007.12.002.
  2. Diederich A, Busemeyer JR. Modeling the effects of payoff on response bias in a perceptual discrimination task: bound-change, drift-rate-change, or two-stage-processing hypothesis. Percept Psychophys. 2006;68:194–207. doi: 10.3758/BF03193669.
  3. Dutilh G, Wagenmakers EJ, Visser I, van der Maas HL. A phase transition model for the speed-accuracy trade-off in response time experiments. Cogn Sci. 2011;35:211–250. doi: 10.1111/j.1551-6709.2010.01147.x.
  4. Forstmann BU, Wagenmakers EJ, Eichele T, Brown S, Serences JT. Reciprocal relations between cognitive neuroscience and formal cognitive models: opposites attract? Trends Cogn Sci. 2011;15:272–279. doi: 10.1016/j.tics.2011.04.002.
  5. Forstmann BU, Ratcliff R, Wagenmakers EJ. Sequential sampling models in cognitive neuroscience: advantages, applications, and extensions. Annu Rev Psychol. 2015. doi: 10.1146/annurev-psych-122414-033645. Advance online publication.
  6. Gold JI, Shadlen MN. The neural basis of decision making. Annu Rev Neurosci. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038.
  7. Gratton G, Coles MG, Sirevaag EJ, Eriksen CW, Donchin E. Pre- and poststimulus activation of response channels: a psychophysiological analysis. J Exp Psychol Hum Percept Perform. 1988;14:331–344. doi: 10.1037/0096-1523.14.3.331.
  8. Hawkins GE, Forstmann BU, Wagenmakers EJ, Ratcliff R, Brown SD. Revisiting the evidence for collapsing boundaries and urgency signals in perceptual decision-making. J Neurosci. 2015a;35:2476–2484. doi: 10.1523/JNEUROSCI.2410-14.2015.
  9. Hawkins GE, Wagenmakers EJ, Ratcliff R, Brown SD. Discriminating evidence accumulation from urgency signals in speeded decision making. J Neurophysiol. 2015b;114:40–47. doi: 10.1152/jn.00088.2015.
  10. Lewandowsky S, Farrell S. Computational modeling in cognition: principles and practice. Thousand Oaks, CA: Sage; 2010.
  11. Luce RD. Response times. Oxford: Oxford UP; 1986.
  12. Marr D. Vision: a computational investigation into the human representation and processing of visual information. San Francisco: Freeman; 1982.
  13. Mulder MJ, Wagenmakers EJ, Ratcliff R, Boekel W, Forstmann BU. Bias in the brain: a diffusion model analysis of prior probability and potential payoff. J Neurosci. 2012;32:2335–2343. doi: 10.1523/JNEUROSCI.4156-11.2012.
  14. Munafò M, Noble S, Browne WJ, Brunner D, Button K, Ferreira J, Holmans P, Langbehn D, Lewis G, Lindquist M, Tilling K, Wagenmakers EJ, Blumenstein R. Scientific rigor and the art of motorcycle maintenance. Nat Biotechnol. 2014;32:871–873. doi: 10.1038/nbt.3004.
  15. Noorbaloochi S, Sharon D, McClelland JL. Payoff information biases a fast guess process in perceptual decision making under deadline pressure: evidence from behavior, evoked potentials, and quantitative model comparison. J Neurosci. 2015;35:10989–11011. doi: 10.1523/JNEUROSCI.0017-15.2015.
  16. Ratcliff R, McKoon G. The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput. 2008;20:873–922. doi: 10.1162/neco.2008.12-06-420.
  17. Turner BM, Forstmann BU, Wagenmakers EJ, Brown SD, Sederberg PB, Steyvers M. A Bayesian framework for simultaneously modeling neural and behavioral data. Neuroimage. 2013;72:193–206. doi: 10.1016/j.neuroimage.2013.01.048.
  18. van Maanen L. Is there evidence for a mixture of processes in speed-accuracy trade-off behavior? Top Cogn Sci. 2015. doi: 10.1111/tops.12182. In press.
  19. Vandekerckhove J, Matzke D, Wagenmakers EJ. Model comparison and the principle of parsimony. In: Busemeyer JR, Wang Z, Townsend JT, Eidels A, editors. The Oxford handbook of computational and mathematical psychology. Oxford: Oxford UP; 2015. pp. 300–319.
