eLife. 2020 Sep 29;9:e53664. doi: 10.7554/eLife.53664

A circuit mechanism for decision-making biases and NMDA receptor hypofunction

Sean Edward Cavanagh 1,†, Norman H Lam 2,†, John D Murray 3,‡, Laurence Tudor Hunt 1,4,5,6,‡, Steven Wayne Kennerley 1,‡
Editors: Tobias H Donner7, Michael J Frank8
PMCID: PMC7524553  PMID: 32988455

Abstract

Decision-making biases can be features of normal behaviour, or deficits underlying neuropsychiatric symptoms. We used behavioural psychophysics, spiking-circuit modelling and pharmacological manipulations to explore decision-making biases during evidence integration. Monkeys showed a pro-variance bias (PVB): a preference to choose options with more variable evidence. The PVB was also present in a spiking circuit model, revealing a potential neural mechanism for this behaviour. To model possible effects of NMDA receptor (NMDA-R) antagonism on this behaviour, we simulated NMDA-R hypofunction onto either excitatory or inhibitory neurons in the model. These predictions were then tested experimentally using the NMDA-R antagonist ketamine, a pharmacological model of schizophrenia. Ketamine increased subjects’ PVB, consistent with lowered cortical excitation/inhibition balance from NMDA-R hypofunction predominantly onto excitatory neurons. These results provide a circuit-level mechanism that bridges across explanatory scales, from the synaptic to the behavioural, in neuropsychiatric disorders where decision-making biases are prominent.

Research organism: Rhesus macaque

Introduction

A major challenge in computational psychiatry is to relate changes that occur at the synaptic level to the cognitive computations that underlie neuropsychiatric symptoms (Wang and Krystal, 2014; Huys et al., 2016). For example, one line of research has implicated N-methyl-D-aspartate receptor (NMDA-R) hypofunction in the pathophysiology of schizophrenia (Nakazawa et al., 2012; Kehrer et al., 2008; Olney and Farber, 1995). Some of the strongest evidence in support of this hypothesis comes from the observation that subanaesthetic doses (~0.1–0.5 mg/kg) of the NMDA-R antagonist ketamine produce psychotomimetic effects in humans, particularly on cognition (Krystal et al., 1994; Umbricht et al., 2000; Malhotra et al., 1996; see Frohlich and Van Horn, 2014 for review). But how do we link our understanding of the pharmacological actions of ketamine to its effects on cognition? One strategy to bridge across these different scales is to consider behaviour at the intermediate level of the cortical microcircuit.

Circuit models present a promising avenue to address the challenges of neuropsychiatric research due to their biophysically detailed mechanisms. By perturbing the circuit model at the synaptic level, specific behavioural and neural predictions can be made. For example, NMDA-Rs have long been argued to play a central role in temporally extended cognitive processes such as working memory, supported by studies of the effects of NMDA-R antagonism in prefrontal cortical microcircuits (Wang et al., 2013). Using a cortical circuit model, a precise pattern of working memory deficits can be predicted from hypofunction of NMDA-Rs, which elicits changes in excitation-inhibition balance (E/I ratio) (Murray et al., 2014). The predicted behavioural changes are consistent with those observed in healthy volunteers administered ketamine (Murray et al., 2014), and also in patients with schizophrenia (Starc et al., 2017). Yet it remains unclear whether this approach generalises to explain the behavioural consequences of NMDA-R antagonism in other temporally extended cognitive processes.

A closely related cognitive process to working memory is evidence accumulation – the decision process whereby multiple samples of information are combined over time to form a categorical choice (Gold and Shadlen, 2007). Recent research has advanced our understanding of how such evidence accumulation decisions are made in the healthy brain. Of particular relevance to psychiatric research, it has been possible to disentangle systematic biases in decision-making and reveal the mechanisms through which they occur. For instance, when choosing between two series of bars with distinct heights, people have a preference to choose the option where evidence is more broadly distributed across samples (Tsetsos et al., 2016; Tsetsos et al., 2012). Although this ‘pro-variance bias’ may appear irrational, and would not be captured by many normative decision-making models, it becomes the optimal strategy when the accumulation process is contaminated by noise (Tsetsos et al., 2016). To date, these behaviours have been well characterised using algorithmic-level descriptions of decision formation; to understand how such decision biases might be affected by NMDA-R hypofunction, however, a mechanistic explanation is needed.

As with working memory, an influential technique used to investigate evidence accumulation at the mechanistic level has been biophysically grounded computational modelling of cortical circuits (Wang, 2002; Wong and Wang, 2006; Murray et al., 2017). Through strong recurrent connections between similarly tuned pyramidal neurons, and NMDA-R mediated synaptic transmission, these circuits can facilitate the integration of evidence across long timescales. Crucially, these neural circuit models bridge synaptic and behavioural levels of understanding, by predicting both choices and their underlying neural activity. These predictions reproduce key experimental phenomena, mirroring the behavioural and neurophysiological data recorded from macaque monkeys performing evidence accumulation tasks (Wang, 2002; Wong et al., 2007). Whether neural circuit models can provide a mechanistic implementation of the pro-variance bias, and other systematic biases associated with evidence accumulation, is currently unknown. Moreover, while NMDA-R antagonists have been tested during various decision-making tasks (Shen et al., 2010; Evans et al., 2012), the role of the NMDA-R in shaping the temporal process of evidence accumulation has not been characterised experimentally.

Here, we used a psychophysical behavioural task in macaque monkeys, in combination with spiking cortical circuit modelling and pharmacological manipulations, to gain new insights into decision-making biases in both health and disease. We trained two subjects to perform a challenging decision-making task requiring the combination of multiple samples of information with distinct magnitudes. Replicating observations from humans, monkeys showed a pro-variance bias. The pro-variance bias was also present in the spiking circuit model, revealing an explanation of how it may arise through neural dynamics. We then investigated the effects of NMDA-R hypofunction in the circuit model, by perturbing NMDA-R function at distinct synaptic sites. Perturbations could either elevate or lower the E/I ratio, strengthening or weakening recurrent circuit dynamics, with each effect making dissociable predictions for evidence accumulation behaviour. These model predictions were tested experimentally by administering monkeys with a subanaesthetic dose of the NMDA-R antagonist ketamine (0.5 mg/kg, intramuscular injection). Ketamine produced decision-making deficits consistent with a lowering of the cortical E/I ratio.

Results

To study evidence accumulation behaviour in non-human primates, we developed a novel two-alternative perceptual decision-making task (Figure 1A). Subjects were presented with two series of eight bars (evidence samples), one on either side of central fixation. Their task was to decide which evidence stream had the taller or shorter average bar height, with the relevant judgement indicated by a contextual cue shown at the start of the trial. The individual evidence samples were drawn from Gaussian distributions, which could have different variances for different options (Figure 1B). This task design had several advantages over evidence accumulation paradigms previously employed with animal subjects. Subjects were given eight evidence samples with distinct magnitudes (Figure 1C), encouraging a temporal-integration decision-making strategy. Precise experimental control of the stimuli facilitated analytical approaches probing the influence of evidence variability and time course on choice, and allowed us to design specific trials that attempted to induce systematic biases in choice behaviour.
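
To make the generating process concrete, the sketch below draws one trial’s stimulus streams (a minimal Python sketch; the clipping range and random seed are illustrative assumptions, and the exact sampling constraints are described in Materials and methods).

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_stream(mean, sd, n_samples=8, lo=1.0, hi=99.0):
    """Draw one stream of eight bar heights from a Gaussian distribution;
    the clipping range (lo, hi) is an illustrative assumption."""
    return np.clip(rng.normal(mean, sd, size=n_samples), lo, hi)

# One example trial: a narrow (sd = 12) and a broad (sd = 24) stream.
narrow = generate_stream(mean=rng.uniform(44, 56), sd=12)
broad = generate_stream(mean=rng.uniform(44, 56), sd=24)
```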

Figure 1. An evidence-varying decision-making task for macaque monkeys.

(A) Task design. Two streams of stimuli were presented to a monkey, both consisting of a sequence of eight bars of varying heights. Depending on the contextual cue shown at the start of the trial, the monkey had to report the stream with the taller or shorter mean bar height. On correct trials, the monkey was rewarded proportionally to the mean evidence for the correct stream; incorrect trials were not rewarded. The monkey was required to fixate centrally while the evidence was presented, indicated by the dashed red fixation zone (not visible to the subject). (B) Generating process of each stimulus stream. The generating mean for each trial was chosen from a uniform distribution (see Materials and methods), while the generating standard deviation was 12 for the narrow (brown) stream and 24 for the broad (blue) stream. (C) Example trial. The bar heights in both streams varied over time. The dotted lines illustrate the mean of the eight stimuli for the narrow/broad streams. In this example, the narrow stream has a taller mean bar height and thus greater mean evidence, so is the correct choice. The narrow/broad streams are randomly assigned to the left/right options on different trials; in the example trial shown here (A and C), the narrow stream is assigned to the right option and the broad stream to the left option.

Two monkeys (Macaca mulatta) completed 29,726 trials (Monkey A: 10,748; Monkey H: 18,978). Despite the challenging nature of the task, subjects were able to perform it with high accuracy (Figure 2A–B, Figure 2—figure supplement 1A–B). The precise control of the discrete stimuli allowed us to evaluate the impact of evidence presented at each time point on the final behavioural choice, via logistic regression (see Materials and methods). Stimuli presented at a time point with a larger regression coefficient had a stronger impact on choice, relative to time points with smaller coefficients. We found that the subjects used all eight stimuli throughout the trial to inform their decision, and demonstrated a primacy bias such that early stimuli had stronger temporal weights than later stimuli (Figure 2C–D, Figure 2—figure supplement 1C–D). A primacy bias has been reported in prior studies in monkeys, and is consistent with a decision-making strategy of bounded evidence integration (Kiani et al., 2008; Nienborg and Cumming, 2009; Wimmer et al., 2015). As both monkeys could clearly perform the task accurately, all subsequent figures present data collapsed across subjects for conciseness; results separated by subject are consistent (see supplementary figures).
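
A minimal sketch of this temporal-weights regression is given below, assuming `left` and `right` are n_trials × 8 arrays of bar heights and `chose_left` is a boolean vector (these names are illustrative; the exact regressor definitions are in Materials and methods).

```python
import numpy as np
import statsmodels.api as sm

def temporal_weights(left, right, chose_left):
    """Logistic regression of choice on the per-sample evidence difference;
    returns one coefficient (and standard error) per time point."""
    X = sm.add_constant(left - right)  # eight evidence-difference regressors
    fit = sm.Logit(chose_left.astype(int), X).fit(disp=0)
    return fit.params[1:], fit.bse[1:]
```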

Figure 2. Subjects use evidence presented throughout the trial to guide their choices.

(A-B) Choice accuracy plotted as a function of the amount of evidence in favour of the best option. Lines are a psychometric fit to the data. (C-D) Logistic regression coefficients reveal the contribution (weight) of all eight stimuli on subjects’ choices (see Materials and methods). Although subjects used all eight stimuli to guide their choices, they weighed the initially presented evidence more strongly. All errorbars indicate the standard error.

Figure 2—figure supplement 1. Subjects use evidence presented throughout the trial to guide their choices – data separated by ‘ChooseTall’ and ‘ChooseShort’ trials.

The psychometric functions and evidence weightings are very similar on ‘ChooseTall’ (red) and ‘ChooseShort’ (blue) trials. (A–B) Choice accuracy plotted as a function of the amount of evidence in favour of the best option. Lines are a psychometric fit to the data. (C–D) Logistic regression coefficients reveal the contribution (weight) of all eight stimuli on subjects’ choices (see Materials and methods). Although subjects used all eight stimuli to guide their choices, they weighed the initially presented evidence more strongly. All errorbars indicate the standard error.

We next probed the influence of evidence variability on choice. We designed specific choice options with different levels of standard deviation across samples, in an attempt to replicate the pro-variance bias previously reported in human subjects (see Materials and methods) (Tsetsos et al., 2016; Tsetsos et al., 2012). On each trial, one option was allocated a narrow distribution of bar heights, and the other a broad distribution. In different conditions, either the broad or the narrow stimulus stream could be the correct choice (‘Broad Correct’ trials or ‘Narrow Correct’ trials), or there could be no clear correct answer (‘Ambiguous’ trials) (Figure 3A, Figure 3—figure supplements 1 and 2). If subjects chose optimally, and only the mean bar height influenced their choice, their accuracy would be the same on ‘Broad Correct’ and ‘Narrow Correct’ trials, and they would be indifferent to the variance of the distributions on ‘Ambiguous’ trials. Our monkeys deviated from this pattern. They were more accurate on ‘Broad Correct’ trials than on ‘Narrow Correct’ trials (Figure 3B, Figure 3—figure supplements 1 and 2). Furthermore, on ‘Ambiguous’ trials, they demonstrated a preference for the broadly distributed stream, which has greater variability across samples (Figure 3C, Figure 3—figure supplements 1 and 2). This pro-variance pattern of decision behaviour is similar to that previously found in human subjects (Tsetsos et al., 2016; Tsetsos et al., 2012; Figure 3D–E).

Figure 3. Subjects show a pro-variance bias in their choices on Narrow-Broad Trials, mirroring previous findings in human subjects.

(A) The narrow-broad trials include three types of conditions, where either the narrow stream is correct (brown), the broad stream is correct (blue), or the difference in mean evidence is small (grey, ‘Ambiguous’ trials). See Materials and methods and Figure 3—figure supplement 1 for details of the generating process. (B–C) Monkey choice performance on Narrow-Broad trials. (B) Subjects were significantly more accurate on ‘Broad-correct’ trials (Chi-squared test, chi = 99.05, p<1×10−10). Errorbars indicate the standard error. (C) Preference for the broad option on ‘Ambiguous’ trials. Subjects were significantly more likely to choose the broad option (Binomial test, p<1×10−10). Errorbar indicates the standard error. (D–E) Human choice performance on Narrow-Broad trials previously reported by Tsetsos et al., 2012. (D) Choice accuracy when either the narrow or the broad stream is correct. Subjects were more accurate on ‘Broad-correct’ trials. (E) Preference for the broad option on ‘Ambiguous’ trials. Subjects were more likely to choose the broad option.

Figure 3—figure supplement 1. Extra Information on Narrow-Broad Trials, separated by subjects.

(A) The generating process of the narrow-correct trials, for each narrow (brown) and broad (blue) stimuli sample. A full stream sequentially presents 8 such stimuli, each for 200 ms with a 50 ms inter-sample interval in between. In each trial where the narrow choice is correct, the generating mean of the narrow stream, μN, is uniformly sampled from [48,60]. The generating mean of the broad stream, μB, is then set to μN − 8. For all trials, the generating standard deviations of the narrow and broad streams are σN = 12 and σB = 24 respectively. The lines above the distributions denote the ranges of μN and μB. The particular values of μN and μB in this figure are shown for one trial, and chosen arbitrarily for illustration purposes. Given the generating means and standard deviations in a trial, a sequence of 8 stimuli samples is generated from a Gaussian process with certain constraints, for each of the narrow and broad options (see Materials and methods). (B) Sampled distribution of the mean evidence of the narrow and broad streams, across all trials for both monkeys where the narrow option is correct. (C, D) Same as (A, B) but for broad-correct trials. Here, μB is uniformly sampled from [48,60], and μN is set to μB − 8. (E, F) Same as (A, B) but for ambiguous trials. Here, μN and μB are equal and uniformly sampled from [44,56]. (G) The accuracy of Monkey A in the narrow-correct and broad-correct trials. Monkey A was significantly more accurate on ‘Broad-correct’ trials (Chi-squared test, chi = 38.39, p = 5.80×10−10). Errorbars show the standard error. (H) The probability for Monkey A to choose the broad option in ambiguous trials. Monkey A was significantly more likely to choose the broad option (Binomial test, p < 1×10−10). (I) Same as (G) but for Monkey H (Chi-squared test, chi = 59.46, p < 1×10−10). (J) Same as (H) but for Monkey H (Binomial test, p = 3.00×10−6).
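
As an illustration of the three generating conditions described above, the sketch below draws the generating means per trial (the helper name is hypothetical; the generating SDs are fixed at σN = 12 and σB = 24).

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_means(condition):
    """Draw the generating means (mu_N, mu_B) for one Narrow-Broad trial."""
    if condition == "narrow_correct":
        mu_n = rng.uniform(48, 60)
        mu_b = mu_n - 8
    elif condition == "broad_correct":
        mu_b = rng.uniform(48, 60)
        mu_n = mu_b - 8
    else:  # 'ambiguous': equal generating means
        mu_n = mu_b = rng.uniform(44, 56)
    return mu_n, mu_b
```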

Figure 3—figure supplement 2. Extra Information on Narrow-Broad Trials, separated by ‘ChooseTall’ and ‘ChooseShort’ trials.

The findings are very similar on ‘ChooseTall’ (red) and ‘ChooseShort’ (blue) trials. (A) The accuracy of Monkey A in the narrow-correct and broad-correct trials. Monkey A was significantly more accurate on ‘Broad-correct’ trials (Chi-squared test, chi(ChooseTall) = 18.66, p(ChooseTall) = 1.57×10−5; chi(ChooseShort) = 19.87, p(ChooseShort) = 8.30×10−6). Errorbars show the standard error. (B) The probability for Monkey A to choose the broad option in ambiguous trials. Monkey A was significantly more likely to choose the broad option (Binomial test, p(ChooseTall) = 2.29×10−5, p(ChooseShort) < 10−10). (C) Same as (A) but for Monkey H (Chi-squared test, chi(ChooseTall) = 43.52, p(ChooseTall) < 10−10; chi(ChooseShort) = 16.19, p(ChooseShort) = 5.74×10−5). (D) Same as (B) but for Monkey H (Binomial test, p(ChooseTall) = 3.47×10−6, p(ChooseShort) = 0.0314).

To further probe the pro-variance bias, we studied choices from a larger pool of ‘Regular’ trials, in which the mean evidence and variability of the two streams were set independently on each trial (Figure 4A–B, Figure 4—figure supplements 1 and 2). ‘Regular’ trials allowed us to explore the pro-variance bias across a greater range of choice difficulties (Figure 4C) and to quantitatively characterise its effect using regression analysis. On ‘Regular’ trials, subjects also demonstrated a preference for options with broadly distributed evidence. Regression analysis confirmed that evidence variability was a significant predictor of choice (Figure 4D; see Materials and methods).
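
A sketch of this regression (Figure 4D), under the same assumed array layout as the temporal-weights sketch above:

```python
import numpy as np
import statsmodels.api as sm

def provariance_fit(left, right, chose_left):
    """Predict left choices from the left-right differences in sample mean
    and sample standard deviation."""
    d_mean = left.mean(axis=1) - right.mean(axis=1)
    d_sd = left.std(axis=1) - right.std(axis=1)
    X = sm.add_constant(np.column_stack([d_mean, d_sd]))
    return sm.Logit(chose_left.astype(int), X).fit(disp=0)
```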

Figure 4. Subjects show a pro-variance bias in their choices on regular trials.

For these analyses, stimulus streams were divided into ‘Lower SD’ or ‘Higher SD’ options post-hoc, on a trial-wise basis. (A) On regular trials, the mean evidence of each stream was independent. (B) Each stream is sampled from either a narrow or a broad distribution, such that about 50% of the trials have one broad stream and one narrow stream, 25% of the trials have two broad streams, and 25% of the trials have two narrow streams. (C) Psychometric function when either the ‘Lower SD’ (brown) or ‘Higher SD’ (blue) stream is correct in the regular trials. (D) Regression analysis using the left-right differences of the mean and standard deviation of the stimuli evidence to predict left choice. The beta coefficients quantify the contribution of both statistics to the decision-making processes of the monkeys (Mean Evidence: t = 74.78, p<10−10; Evidence Standard Deviation: t = 19.65, p<10−10). Notably, a significantly positive evidence SD coefficient indicates the subjects preferred to choose options which were more variable across samples. Errorbars indicate the standard error.

Figure 4—figure supplement 1. Extra information on Regular Trials, separated by subjects.

In the regular trials, each of the two streams is randomly chosen to be either narrow (μN ∈ [47,53], σN = 12) or broad (μB ∈ [44,56], σB = 24), then divided into ‘Lower SD’ or ‘Higher SD’ options post-hoc, depending on the sampled standard deviation of evidence relative to the other option. (A) The distribution of the mean evidence of ‘Lower SD’ and ‘Higher SD’ streams, across all regular trials for both monkeys. (B) The distribution of the evidence variability of ‘Lower SD’ and ‘Higher SD’ streams, across all regular trials for both monkeys. (C) The psychometric function of Monkey A when either the ‘Lower SD’ (brown) or ‘Higher SD’ (blue) stream is correct. (D) A regression model using evidence mean and variability to predict the animals’ choices. Each regressor is the left-right difference of the mean and standard deviation of the evidence streams. This shows that both statistics are utilised by Monkey A to solve the task (Mean Evidence: t = 45.90, p < 10−10; Evidence Standard Deviation: t = 16.68, p < 10−10). (E) A regression model including the mean, maximum, minimum, first, and last evidence values of both the left and right streams as regressors, in order to evaluate the contribution of each quantity and the possibility that the monkey is utilising strategies alternative to evidence integration and pro-variance bias. Evidently, Monkey A mainly relies on temporal integration to solve the task, as indicated by a strong mean evidence coefficient in both regression models. See also Supplementary files 1, 2, 3 for cross-validation analysis comparing regression models including various combinations of these predictors. (F–H) Same as (C–E) but for Monkey H. The statistics of the regression model in (G) are (Mean Evidence: t = 58.88, p < 10−10; Evidence Standard Deviation: t = 12.08, p < 10−10). All errorbars indicate the standard error.

Figure 4—figure supplement 2. Extra information on Regular Trials, separated by ‘ChooseTall’ and ‘ChooseShort’ trials.

The findings are very similar on both trial types. (A) The psychometric function of Monkey A when either the ‘Lower SD’ (brown) or ‘Higher SD’ (blue) stream is correct, on ‘ChooseTall’ trials. (B) A regression model using evidence mean and variability to predict Monkey A’s choices on ‘ChooseTall’ trials. Each regressor is the left-right difference of the mean and standard deviation of the evidence streams. This shows that both statistics are utilised by Monkey A to solve the task (mean evidence: t(ChooseTall) = 32.78, p(ChooseTall)<10−10; evidence standard deviation: t(ChooseTall) = 6.81, p(ChooseTall)<10−10). (C) A regression model including the mean, maximum, minimum, first, and last evidence values of both the left and right streams as regressors, in order to evaluate the contribution of each quantity on choices on ‘ChooseTall’ trials and the possibility that Monkey A is utilising strategies alternative to evidence integration and pro-variance bias. Evidently, Monkey A mainly relies on temporal integration to solve the task, as indicated by a strong mean evidence coefficient in both regression models. (D) The psychometric function of Monkey A when either the ‘Lower SD’ (brown) or ‘Higher SD’ (blue) stream is correct, on ‘ChooseShort’ trials. (E) The same regression model as (B) applied to Monkey A’s choices on ‘ChooseShort’ trials (mean evidence: t(ChooseShort) = 32.09, p(ChooseShort)<10−10; evidence standard deviation: t(ChooseShort) = 16.47, p(ChooseShort)<10−10). (F) The same regression model as (C) applied to Monkey A’s choices on ‘ChooseShort’ trials. (G–L) Same as (A–F) but for Monkey H. The statistics of the regression model in (H) are (mean evidence: t(ChooseTall) = 42.76, p(ChooseTall)<10−10; evidence standard deviation: t(ChooseTall) = 6.19, p(ChooseTall) = 5.92×10−10), and in (K) are (mean evidence: t(ChooseShort) = 40.43, p(ChooseShort)<10−10; evidence standard deviation: t(ChooseShort) = 10.97, p(ChooseShort)<10−10). All errorbars indicate the standard error.

Figure 4—figure supplement 3. Extra information on Regular Trials – the subjects do not show a frequent winner bias.

(A) A regression model using evidence mean and the number of local winners to predict Monkey A’s choices. This shows that after controlling for mean evidence, Monkey A did not have a frequent winner bias (Mean Evidence: t = 34.86, p<10−10; Local Wins: t = 1.26, p=0.2068). (B) Same as (A) but for Monkey H. The statistics of the regression model in (B) are (Mean Evidence: t = 44.33, p<10−10; Local Wins: t = 0.048, p=0.9614). All errorbars indicate the standard error.

In addition, we defined the pro-variance bias (PVB) index as the ratio of the regression coefficient for evidence standard deviation to the regression coefficient for mean evidence. Evidence standard deviation was irrelevant for determining the correct option in the task, and we are not suggesting it is explicitly computed by the monkey subjects; rather, sensitivity of choice behaviour to evidence standard deviation could arise as a by-product of the neural computations that evaluate the task-relevant mean evidence (as shown later in the Results). The PVB index thus served as a unitless, descriptive measure quantifying the subjects’ sensitivity to evidence standard deviation relative to their sensitivity to mean evidence: a value of 0 indicates no pro-variance bias, whereas a value of 1 indicates the subject is as sensitive to evidence standard deviation as to mean evidence. A key motivation for defining the PVB index is as a potentially useful measure for assessing changes in decision-making behaviour, such as under pharmacological perturbation (performed in later experiments in this paper). For example, if a perturbation simply weakened the overall sensitivity of choice to stimulus information, it would presumably down-scale the mean and standard deviation regression coefficients proportionally, leaving the PVB index unchanged as a ratio. In contrast, if a perturbation differentially altered how evidence mean versus standard deviation impact choice, this would be reflected as a change in the PVB index. From the ‘Regular’ trials, the PVB index across both monkeys was 0.173 (Monkey A = 0.230; Monkey H = 0.138).
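
In terms of the provariance_fit sketch above (with the same assumed input arrays), the PVB index is simply the ratio of the two fitted coefficients:

```python
fit = provariance_fit(left, right, chose_left)
beta_mean, beta_sd = fit.params[1], fit.params[2]
pvb_index = beta_sd / beta_mean  # 0 = no pro-variance bias; 1 = equal sensitivity
```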

Recent work has suggested that when traditional evidence accumulation tasks are performed, it is hard to dissociate whether subjects are combining information across samples, or whether conventional analyses may be disguising a simpler heuristic (Waskom and Kiani, 2018; Stine et al., 2020). In particular, an alternative decision-making strategy which does not involve temporal accumulation of evidence is to detect the single most extreme sample. Because the extreme sample occurs at a different time on each trial, a subject employing this strategy would still produce choice regression weights distributed across time points as in Figure 2C–D. Such findings could therefore be mistakenly interpreted as reflecting evidence accumulation. We wanted to quantitatively confirm that subjects were using the strategy we envisioned when designing our task, namely evidence accumulation, and to further investigate the relative contributions of mean evidence and evidence variability to choices. A logistic regression approach probed the influence upon choice of mean evidence, evidence variability, first/last samples, and the most extreme samples within each stream (Figure 4—figure supplements 1E,H and 2C,F,I,L; see Materials and methods). A cross-validation approach revealed choice was principally driven by the mean evidence, verifying that subjects performed the task using evidence accumulation (Supplementary file 1, see Materials and methods).

Although this analysis revealed choices were not primarily driven by an ‘extreme sample detection’ strategy, another concern was whether partially employing this strategy could explain the pro-variance effect we observed. To address this, we compared the influence of evidence variability versus that of extreme samples on subjects’ choices. Cross-validation revealed that choices were better described by a model incorporating evidence variability, rather than the extreme sample values (Supplementary file 2). We also demonstrated that including evidence variability as a co-regressor improved the performance of all combinations of nested models (Supplementary file 3). In summary, although subjects integrated evidence across samples, their choices were additionally influenced by sample variability.
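
One way to implement such a model comparison is via held-out log-likelihood (a sketch; `X_mean_sd` and `X_mean_extreme` stand for hypothetical design matrices built from the regressors named above, and the paper’s exact cross-validation procedure is described in Materials and methods).

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def cv_score(X, y, folds=10):
    """Mean held-out log-likelihood per trial (higher is better)."""
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=folds, scoring="neg_log_loss")
    return scores.mean()

# The variability model wins if, for example:
# cv_score(X_mean_sd, chose_left) > cv_score(X_mean_extreme, chose_left)
```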

Previous studies have revealed that a ‘frequent winner’ bias – whereby subjects prefer the option that provides the stronger of the two simultaneously presented samples on a greater number of time steps – coexists with the pro-variance bias (Tsetsos et al., 2016). Furthermore, both of these biases may arise from the same selective accumulation mechanism (Tsetsos et al., 2016). Therefore, we next analysed whether our subjects’ choices were also influenced by a ‘frequent winner’ bias (Figure 4—figure supplement 3; Materials and methods). After controlling for the influence of mean evidence on choices, we found that neither subject demonstrated a ‘frequent winner’ bias.

Existing algorithmic-level proposals for generating a pro-variance bias in human decision-making rely on discarding sensory information before it enters the accumulation process, depending on its salience (Tsetsos et al., 2016). To investigate a possible alternative basis for the pro-variance bias, at the level of neural implementation, we sought to characterise decision-making behaviour in a biophysically plausible spiking cortical circuit model (Figure 5A–B, Figure 5—figure supplement 1; Wang, 2002; Lam, 2017). In the circuit architecture, two groups of excitatory pyramidal neurons are assigned to the left and right options, such that high activity in one group signals the response to the respective option. Excitatory neurons within each group are recurrently connected via AMPA and NMDA receptors, and this recurrent excitation supports ramping activity and evidence accumulation. Both groups of excitatory neurons are jointly connected to a group of inhibitory interneurons, resulting in feedback inhibition and winner-take-all competition (Wang, 2002; Wong and Wang, 2006). The two groups of excitatory neurons receive separate inputs, with each group receiving information about one of the two options (i.e. Group A receives IA reflecting the left option; Group B receives IB reflecting the right option). Specifically, we assume the bar heights from each stream are remapped, upstream of the simulated decision-making circuit, to evidence for the corresponding option depending on the cued context. Therefore, taller bars correspond to larger inputs on a ‘ChooseTall’ trial and smaller inputs on a ‘ChooseShort’ trial. Together, this synaptic architecture endows the circuit model with decision-making functions.
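
The assumed upstream remapping can be sketched as below; the text above specifies only that bar heights are remapped depending on the cued context, so the linear form, gain, and baseline here are illustrative assumptions.

```python
def bar_to_input(height, context, gain=1.0, baseline=0.0):
    """Map a bar height (1-99) to a momentary evidence input for one
    population; in the 'short' context the mapping is inverted."""
    evidence = height if context == "tall" else 100 - height
    return baseline + gain * evidence
```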

Figure 5. Spiking cortical circuit model reproduces pro-variance bias.

(A) Circuit model schematic. The model consists of two excitatory neural populations which receive separate inputs (IA and IB), each reflecting the momentary evidence for one of the two stimuli streams. Each population integrates evidence due to recurrent excitation, and competes with the other via lateral inhibition mediated by a population of interneurons. (B) Example firing rate trajectories of the two populations on a single trial where option A is chosen. (C, D) Narrow-Broad Trials. (C) The circuit model is significantly more accurate when the broad stream is correct, than when the narrow stream is correct (Chi-squared test, chi = 1981, p<1×10−10). (D) On ‘Ambiguous trials’, the circuit model is significantly more likely to choose the broad option (Binomial test, p<1×10−10). (E–G) Regular trials. (E) The psychometric function of the circuit model when either the ‘Lower SD’ (brown) or ‘Higher SD’ (blue) stream is correct, respectively. (F) Regression analysis of the circuit model choices on regular trials, using evidence mean and variability as predictors of choice. Both quantities contribute to the decision-making process of the circuit model (Mean Evidence: t = 129.50, p<10−10; Evidence Standard Deviation: t = 45.27, p<10−10). (G) Regression coefficients of the stimuli at different time-steps, showing the time course of evidence integration. The circuit demonstrates a temporal profile which decays over time, similar to the monkeys. All errorbars indicate the standard error.

Figure 5—figure supplement 1. Extended regression results on the circuit model performance.

(A) Circuit model schematic. The model consists of two excitatory populations which receive separate inputs, reflecting evidence for the two stimuli streams. Each population integrates evidence due to recurrent excitation, and competes with the other due to lateral inhibition. (B) Regression analysis of the regular trial circuit model data, using the mean, maximum, minimum, first, and last evidence values of both the left and right streams, in order to evaluate the possibility of decision-making strategies alternative to evidence integration and pro-variance bias. Similar to the monkeys, the circuit model mainly relies on mean evidence to solve the task. See also Supplementary files 1–3 for cross-validation analysis comparing regression models including various combinations of these predictors. All errorbars indicate the standard error.

Figure 5—figure supplement 2. Pro-variance bias and temporal weightings in trials separated with more or less total evidence, for circuit model and monkey data.

(A–C) Regression analysis of the circuit model choices, using evidence mean and variability as predictors, on all regular trials (grey), the half of regular trials with more total evidence (pink), and the half with less total evidence (blue). (A) Regression coefficient for mean evidence. The regression coefficient for mean evidence is not significantly different between trials with more total evidence and trials with less total evidence (permutation test, p=0.8366). (B) Regression coefficient for evidence standard deviation. The regression coefficient for evidence standard deviation is not significantly different between trials with more total evidence and trials with less total evidence (permutation test, p=0.8710). (C) PVB index. The PVB index is significantly higher for trials with less total evidence than for trials with more total evidence (permutation test, p=7×10−5). (D) Temporal regression weights using the three sets of trials. (E–H) Same as (A–D) but using the monkey behavioural data. The regression coefficients for mean evidence and evidence standard deviation are not significantly different between trials with more total evidence and trials with less total evidence (permutation tests, p(mean evidence)=0.9452, p(evidence standard deviation)=0.9869). The PVB index tends to be higher for trials with less total evidence than for trials with more total evidence, but the effect is not significant (permutation test, p=0.3677). All errorbars indicate the standard error.

The spiking circuit model was tested with the same trial types as the monkey experiment. Importantly, not only could the circuit model perform the evidence accumulation task, it also demonstrated a pro-variance bias comparable to the monkeys’ (Figure 5C–F). Regression analysis showed that the circuit model utilises a strategy similar to the monkeys’ to solve the decision-making task (Figure 5—figure supplement 1B). The temporal process of evidence integration in the circuit model disproportionately weighted early stimuli over late stimuli (Figure 5G), similar to the evidence integration patterns observed in both monkeys. However, the circuit model demonstrated an initial ramp-up in stimuli weights, reflecting the time needed for it to reach an integrative state.

To understand the origin of the pro-variance bias in the spiking circuit, we mathematically reduced the circuit model to a mean-field model (Figure 6A), which demonstrated similar decision-making behaviour to the spiking circuit (Figure 6B–C, Figure 6—figure supplement 1). The mean-field model, with two variables representing the integrated evidence for the two choices, allowed phase-plane analysis to further investigate the pro-variance bias. We considered a simplified case in which the broad and narrow streams have the same mean evidence, and the stimulus evidence varies over time in the broad stream but not the narrow stream (i.e. σN = 0) (Figure 6E–H). This example provides an intuitive explanation for the pro-variance bias: a momentarily strong stimulus has an asymmetrically greater influence on the decision-making process than a momentarily weak stimulus. Streams with larger variability are more likely to contain both strong and weak inputs, and can therefore exploit this asymmetry more than streams with small variability, resulting in a pro-variance bias. The asymmetry itself arises from the expansive non-linearity of the firing rate profiles (Figure 6D, see Materials and methods).
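
For concreteness, a minimal two-variable reduction in the spirit of Wong and Wang, 2006 is sketched below; the f-I curve and parameter values are the generic published ones, not necessarily those fitted here, and the closing comment states the convexity property that produces the asymmetry.

```python
import numpy as np

def f(I, a=270.0, b=108.0, d=0.154):
    """Expansive f-I curve (Hz): near zero at low input, near-linear above."""
    x = a * I - b
    return x / (1.0 - np.exp(-d * x))

def step(S, I_ext, dt=1e-3, tau=0.1, gamma=0.641,
         J_s=0.2609, J_c=0.0497, I0=0.3255):
    """One Euler step of the two NMDA gating variables S = (S_N, S_B)."""
    I_N = J_s * S[0] - J_c * S[1] + I0 + I_ext[0]
    I_B = J_s * S[1] - J_c * S[0] + I0 + I_ext[1]
    r = f(np.array([I_N, I_B]))
    return S + dt * (-S / tau + (1.0 - S) * gamma * r)

# Convexity of f implies f(I + delta) - f(I) > f(I) - f(I - delta) for delta > 0:
# a momentarily strong input gains more than a momentarily weak input loses.
```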

Figure 6. Mean-Field model explanation for pro-variance bias.

(A) The mean-field model of the circuit, with two variables representing evidence for the two options. For simplicity, we assume one stream is narrow and one is broad, and label the populations receiving the inputs as N and B respectively. (B) Psychometric function of regular trials as in (Figure 5E). (C) Regression analysis of the regular trial data as in (Figure 5F) (mean: t = 143.42, p < 10−10; standard deviation: t = 30.76, p < 10−10). Errorbars indicate the standard error. (D) The mean-field model uses a generic firing rate profile (black), with zero firing rate at small inputs, then a near-linear response as input increases (see Materials and methods). Such profiles have an expansive non-linearity (with a positive second order derivative (grey)) that can generate pro-variance bias. (E–H) An explanation of the pro-variance bias using phase-plane analysis. (E) A momentarily strong stimulus from the broad stream will drive the model to choose broad (large SB, small SN). Blue and brown lines correspond to nullclines. (F) A momentarily weak stimulus in the broad stream will drive the model to choose narrow (large SN, small SB). (G) The net effect of one strong and one weak broad stimulus, compared with two average stimuli, is to drive the system to the broad choice. That is, a momentarily strong stimulus has an asymmetrically greater influence on the decision-making process than a momentarily weak stimulus, leading to pro-variance bias. (H) The net drive to the broad or narrow option when the broad stimulus is momentarily strong (red) or weak (blue), along the diagonal (SB=SN in G).

Figure 6—figure supplement 1. Extended regression results on the mean-field model performance.

(A) The mean-field model consists of two variables which represent the accumulated evidence for the two choice options. The two variables demonstrate self-excitation and mutual inhibition. (B) Regression model on the regular trial model data, using the mean, maximum, minimum, first, and last evidence values of both the left and right streams, in order to evaluate the possibility of decision-making strategies alternative to evidence integration and pro-variance bias. Similar to the monkeys, the model mainly relies on mean evidence to solve the task. All errorbars indicate the standard error.

To explore whether this explanation may account for the pro-variance bias in the circuit model and monkey behaviour (Figures 4 and 5), we re-analysed the data after splitting trials into two halves: those with more or less total evidence (summed across both streams) (Figure 5—figure supplement 2). The circuit model demonstrated a smaller PVB index (larger mean evidence and smaller evidence standard deviation regression weights) for trials with more total evidence than for trials with less total evidence (Figure 5—figure supplement 2C). This was consistent with the prediction from the F-I non-linearity: trials with more total evidence, and thus larger total input, drive the neurons more strongly into the near-linear regime of the firing rate profile, where the effect of the expansive non-linearity is weaker (Figure 6D). Similar analysis of the monkey behavioural data revealed a similar trend of smaller PVB index (larger mean evidence and smaller evidence standard deviation regression weights) for trials with more total evidence than less total evidence, although the effect was not statistically significant (Figure 5—figure supplement 2G). In addition, distinct temporal weightings of stimuli were observed in both the circuit model and experimental data, for trials with more versus less total evidence (Figure 5—figure supplement 2D,H).
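
The trial-split comparison can be implemented with a standard two-sided permutation test on the PVB index (a sketch; `pvb_stat` stands for a hypothetical function mapping a set of trial indices to a PVB index, and the paper’s exact procedure is in Materials and methods).

```python
import numpy as np

def permutation_test(pvb_stat, idx_a, idx_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference in PVB index between
    two sets of trials (given as index arrays)."""
    rng = np.random.default_rng(seed)
    observed = pvb_stat(idx_a) - pvb_stat(idx_b)
    pooled = np.concatenate([idx_a, idx_b])
    n_a = len(idx_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pvb_stat(pooled[:n_a]) - pvb_stat(pooled[n_a:])
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm
```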

An advantage of the circuit model over existing algorithmic-level explanations of the pro-variance bias is that it can be used to make testable behavioural predictions in response to different synaptic or cellular perturbations, including excitation-inhibition (E/I) imbalance. In turn, perturbation experiments can constrain and refine model components. We therefore studied the behavioural effects of distinct E/I perturbations, and of an upstream sensory deficit, on decision-making and, in particular, on the pro-variance bias (Figure 7, Figure 7—figure supplement 1). Three perturbations were introduced to the circuit model: lowered E/I balance (via NMDA-R hypofunction on excitatory pyramidal neurons), elevated E/I balance (via NMDA-R hypofunction on inhibitory interneurons), or sensory deficit (as weakened scaling of external inputs to stimuli evidence) (Figure 7A).
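
The three perturbations can be summarised as fractional scalings of model parameters (a schematic parameterisation; the parameter names and values below are hypothetical, not those used in the simulations).

```python
# G_EE / G_EI: NMDA conductance scaling onto excitatory / inhibitory cells;
# input_gain: scaling of the stimulus-to-current mapping.
control     = dict(G_EE=1.000, G_EI=1.000, input_gain=1.0)
lowered_EI  = dict(G_EE=0.975, G_EI=1.000, input_gain=1.0)  # hypofunction on E cells
elevated_EI = dict(G_EE=1.000, G_EI=0.975, input_gain=1.0)  # hypofunction on I cells
sensory_def = dict(G_EE=1.000, G_EI=1.000, input_gain=0.8)  # weakened input scaling
```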

Figure 7. Predictions for E/I perturbations of the Spiking Circuit Model.

(A) Model perturbation schematic. Three potential perturbations are considered: lowered E/I (via NMDA-R hypofunction on excitatory pyramidal neurons), elevated E/I (via NMDA-R hypofunction on inhibitory interneurons), or sensory deficit (as weakened scaling of external inputs to stimuli evidence). (B–E) The regular-trial choice accuracy for each of the circuit perturbations (dark colour for when the ‘Higher SD’ stream is correct, light colour for when the ‘Lower SD’ stream is correct). (F–H) Regression analysis on the regular trial choices of the four models, using evidence mean and evidence variability to predict choice. (F) The mean evidence regression coefficients in the four models. Lowering E/I, elevating E/I, and inducing sensory deficits similarly reduce the coefficient, reflecting a drop in choice accuracy. (G) The evidence standard deviation regression coefficients in the four models. All three perturbations reduce the coefficient, but to different extents. (H) The PVB index (ratio of the evidence standard deviation coefficient over the mean evidence coefficient) provides dissociable predictions for the perturbations. The lowered E/I circuit increases the PVB index relative to the control model (permutation test, p=6×10−5), while the elevated E/I circuit decreases the PVB index (permutation test, p<10−5). The PVB index is roughly maintained in the sensory deficit circuit (permutation test, p=0.6933). The dashed line indicates the PVB index for the control circuit; * indicates a significant difference from the control circuit. (I) The regression weights of stimuli at different time-steps for the four models. All errorbars indicate the standard error.

Figure 7—figure supplement 1. Model perturbations do not influence decision-making strategy.

(A) Model perturbation schematic. Three potential perturbations are considered: lowered E/I (via NMDA-R hypofunction on excitatory pyramidal neurons), elevated E/I (via NMDA-R hypofunction on inhibitory interneurons), or sensory deficit (as weakened scaling of external inputs to stimuli evidence). (B) The regression model using mean, maximum, minimum, first, and last evidence values of each of the left and right streams as regressors, for the four models. Each bar shows the average of the left and right regressors of the corresponding variable. None of the perturbed models demonstrate a significant shift in decision-making strategies. The elevated E/I circuit has a larger first evidence regression coefficient, due to over-emphasis of early stimuli (Figure 7I). All errorbars indicate the standard error. (C) Same as (B) but with a proportion of the circuit models’ choices randomised according to the empirical lapse rate of Monkey A when administered ketamine, and the regression weights re-calculated (see also Figure 8—figure supplement 8; Equation 6). (D) Same as (B) but with lapsing adjusted to that observed in the Monkey H ketamine data (see also Figure 8—figure supplement 9).

Figure 7—figure supplement 2. Regression analysis using evidence mean and evidence variability to predict choice, under simultaneous NMDA-R hypofunctions on excitatory and inhibitory neurons.

(A) The mean evidence regression coefficient for various models of NMDA-R hypofunctions on excitatory (GEE) and inhibitory (GEI) neurons. (B) The evidence standard deviation regression coefficient for various models of NMDA-R hypofunctions. (C) PVB index for various models of NMDA-R hypofunctions. PVB index robustly varies with E/I perturbation. In each plot, the green, purple, and orange dots respectively denote the control, lowered E/I, and elevated E/I circuit models used in Figure 7.

Figure 7—figure supplement 3. Regression analysis using mean, maximum, minimum, first, and last evidence values of each of the left and right streams as regressors, under simultaneous NMDA-R hypofunctions on excitatory and inhibitory neurons.

Each subplot shows the average of the left and right regressors of the corresponding variable. (A) Mean evidence regression coefficient for various models of NMDA-R hypofunctions on excitatory (GEE) and inhibitory (GEI) neurons. (B) Maximum evidence regression coefficient for various models of NMDA-R hypofunctions. (C) Minimum evidence regression coefficient for various models of NMDA-R hypofunctions. (D) First evidence regression coefficient for various models of NMDA-R hypofunctions. (E) Last evidence regression coefficient for various models of NMDA-R hypofunctions. In each plot, the green, purple, and orange dots respectively denote the control, lowered E/I, and elevated E/I circuit models used in Figure 7.

Figure 7—figure supplement 4. Regression coefficients and PVB index as a function of sensory deficit.

(A–C) Regression model with mean evidence and evidence standard deviation, and the resulting PVB index. The PVB index increases at very large sensory deficits, in a regime with minimal decision-making performance that does not match monkey behaviour under ketamine (see Figure 8—figure supplement 7). (D–F) Regression model with mean, max, min, first, last evidence. In each plot, the black vertical dashed line denotes the sensory deficit perturbation strength used in Figure 7. All errorbars indicate the standard error.

While all circuit models were capable of performing the task (Figure 7B–E), the choice accuracy of each perturbed model was reduced compared to the control model, as quantified by the regression coefficient of mean evidence (Figure 7F). In addition, the regression coefficient for evidence standard deviation was reduced in each perturbed model relative to the control model, indicating a lesser influence of evidence variability on choice (Figure 7G). Finally, in a dissociation between the three model perturbations, the PVB index was increased by lowered E/I, decreased by elevated E/I, and roughly unaltered by sensory deficits (Figure 7H). Further regression analyses indicated no obvious shift in utilised strategies relative to the control model (Figure 7—figure supplement 1). Crucially, the effects of E/I and sensory perturbations on the PVB index and regression coefficients were generally robust to the strength and pathway of perturbation (Figure 7—figure supplements 2 and 3).

Disease- and pharmacology-related perturbations likely alter multiple sites concurrently, for instance NMDA-Rs on both excitatory and inhibitory neurons. We therefore parametrically induced NMDA-R hypofunction on both excitatory and inhibitory neurons in the circuit model. The net effect on E/I ratio depended on the relative perturbation strength to the two populations (Lam, 2017). Stronger NMDA-R hypofunction on excitatory neurons lowered the E/I ratio, while stronger NMDA-R hypofunction on inhibitory neurons elevated it. Notably, proportional reduction of both pathways preserved E/I balance and did not lower the mean evidence regression coefficient (a proxy for performance) (Figure 7—figure supplement 2A). Decision-making performance was, in contrast, maximally susceptible to perturbations in the orthogonal direction, along the E/I axis (Lam, 2017). Furthermore, along this axis the PVB index monotonically increased with lowered E/I ratio and decreased with elevated E/I ratio, a robust prediction of our circuit model (Figure 7—figure supplement 2C). Sensory deficit perturbations did not significantly alter the PVB index, except in the limit where decision-making performance was greatly impaired (Figure 7—figure supplement 4). Finally, the temporal weightings were distinctly altered by the elevated- and lowered-E/I perturbations (Figure 7I). The circuit model thus provided the basis for dissociable predictions about pharmacological agents that perturb E/I balance.

Decision-making accuracy depends on the E/I ratio along an inverted-U shape, with the control, E/I-balanced model sitting just beside the (slightly lowered E/I) peak (Lam, 2017). Both elevating and lowering the E/I ratio therefore drive the model away from the peak, lowering the mean evidence regression weight. The evidence standard deviation regression weight follows a similar inverted-U shape, but peaks at a more strongly lowered E/I ratio (Figure 7—figure supplement 2). As such, elevating the E/I ratio consistently lowers the evidence standard deviation regression weight, whereas lowering the E/I ratio initially increases it, and decreases it only once this peak is passed at greater perturbation strengths. Notably, regardless of the magnitude by which the E/I ratio is lowered, the PVB index is consistently increased, providing a robust measure of pro-variance bias.

To test these predictions experimentally, we collected behavioural data from both monkeys following administration of a subanaesthetic dose (0.5 mg/kg, intramuscular injection) of the NMDA-R antagonist ketamine (see Materials and methods, Figure 8, Figure 8—figure supplement 1). After a baseline period of task performance, either ketamine or saline was injected intramuscularly (Monkey A: 13 saline sessions, 15 ketamine sessions; Monkey H: 17 saline sessions, 18 ketamine sessions). Ketamine affected behaviour for around 30 min in both subjects. The data collected during this period formed a behavioural database of 8521 completed trials (Monkey A Saline: 1710; Monkey A Ketamine: 2276; Monkey H Saline: 2669; Monkey H Ketamine: 1866). Following ketamine administration, subjects’ choice accuracy was markedly decreased (Figure 8A), without a significant shift in their strategies (Figure 8—figure supplement 1, Supplementary file 4).

Figure 8. Experimental effects of ketamine on evidence accumulation behaviour produce an increased pro-variance bias, consistent with lowered excitation-inhibition balance.

(A) Mean percentage of correct choices across sessions made by monkeys relative to the injection of ketamine (red) or saline (blue). Shaded region denotes ‘on-drug’ trials (trials 5–30 min after injection), which are used for analysis in the rest of the figure. (B, C) The psychometric function when either the ‘Lower SD’ or ‘Higher SD’ streams are correct, with saline (B) or ketamine (C) injection. (D–F) Ketamine injection impairs the decision-making of the monkeys, in a manner consistent with the prediction of the lowered E/I circuit model. Dashed lines indicate pre-injection values in each plot. (D) The regression coefficient for mean evidence, under injection of saline or ketamine. Ketamine significantly reduces the coefficient (permutation test, p<1×10−6), reflecting a drop in choice accuracy. (E) The evidence standard deviation regression coefficient, under injection of saline or ketamine. Ketamine does not significantly reduce the coefficient (permutation test, p=0.152). (F) Ketamine increases the PVB index (permutation test, p=8×10−6), consistent with the model prediction of the lowered E/I circuit. (G) The regression weights of stimuli at different time-steps, for the monkeys with saline or ketamine injection. Ketamine injection lowers and flattens the curve of temporal weights, consistent with the lowered E/I circuit model. Errorbars in (A) indicate the standard error of the mean; in all other panels, errorbars indicate the standard error.

Figure 8—figure supplement 1. Extra information on ketamine experiments, separated by subjects.

(A) Mean percentage of correct choices across sessions made by Monkey A relative to the injection of ketamine (red) or saline (blue). (B) The psychometric function of Monkey A when either the ‘Lower SD’ or ‘Higher SD’ streams are correct with saline (left) or ketamine (right) injection. (C) Ketamine injection impairs the behaviour of Monkey A, in a manner consistent with the prediction of the lowered E/I circuit model. Dashed lines indicate pre-injection values in each plot. (Left) The regression coefficient for mean evidence, under injection of saline or ketamine. Ketamine significantly reduces the coefficient (permutation test p<1×10−6), reflecting a drop in choice accuracy. (Middle) The regression coefficient for evidence standard deviation, under injection of saline or ketamine. Ketamine significantly reduces the coefficient (permutation test p=4.98×10−3), but to a lesser extent than that of the mean evidence regression coefficient. (Right) Ketamine increases the PVB index (permutation test p=1.16×10−3), consistent with the model prediction of the lowered E/I circuit. (D) The regression model using mean, maximum, minimum, first, and last evidence values of each of the left and right streams as regressors, under injection of saline or ketamine. Each bar shows the average of the left and right regressors of the corresponding variable. Ketamine injection does not alter decision-making strategies. (E) The regression weights of stimuli at different time-steps, for Monkey A with saline or ketamine injection. Ketamine injection lowers and flattens the curve of temporal weights, consistent with the lowered E/I circuit model. (F–J) Same as (A–E) but for Monkey H. (H) Ketamine significantly reduces the regression coefficient for mean evidence (permutation test p<1×10−6), does not significantly reduce the regression coefficient for evidence standard deviation (permutation test p=0.871), and significantly increases the PVB index (permutation test p=5.92×10−3). All errorbars indicate the standard error.

Figure 8—figure supplement 2. Behavioural effects of ketamine on the pro-variance bias and temporal weightings are not explained by lapsing.

Results from Figure 8—figure supplement 1 are replicated with an extended model which included a lapse term. (A) Ketamine injection impairs the behaviour of Monkey A, in a manner consistent with the prediction of the lowered E/I circuit model. (Left) The coefficient for mean evidence, under injection of saline or ketamine. Ketamine significantly reduces the coefficient (permutation test, p<0.0001) reflecting a drop in choice accuracy. (Middle) The coefficient for evidence standard deviation, under injection of saline or ketamine. Ketamine does not significantly reduce the coefficient (permutation test, p=0.636). (Right) Ketamine increases the PVB index (permutation test, p=0.0006), consistent with the model prediction of the lowered E/I circuit. (B) The regression model using mean, maximum, minimum, first, and last evidence values of each of the left and right streams as regressors, under injection of saline or ketamine. Each bar shows the average of the left and right regressors of the corresponding variable. Ketamine injection does not alter decision-making strategies. Black asterisks denote a significant difference between saline and ketamine conditions at the respective regressor (permutation test, p<0.05). (C) The weights of stimuli at different time-steps, for Monkey A with saline or ketamine injection. Ketamine injection lowers the curve of temporal weights, consistent with the lowered E/I circuit model. Black asterisks denote a significant difference between saline and ketamine conditions at the respective sample number (permutation test, p<0.05). (D–F) Same as (A–C) but for Monkey H. (D) Ketamine significantly reduces the coefficient for mean evidence (permutation test, p<0.0001), does not significantly reduce the coefficient for evidence standard deviation (permutation test, p=0.731), and significantly increases the PVB index (permutation test, p=0.0056). All errorbars denote the 95% confidence interval generated through a bootstrap procedure.
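
A standard lapse-extended choice model has the form sketched below; this generic form is an assumption for illustration, and the paper’s exact formulation (Equation 6 in Materials and methods) may differ in detail.

```python
import numpy as np

def p_choose_left(X, beta, lapse):
    """Choice probability with a lapse term: with probability `lapse` the
    choice is random (p = 0.5); otherwise it follows the logistic model."""
    logistic = 1.0 / (1.0 + np.exp(-X @ beta))
    return lapse / 2.0 + (1.0 - lapse) * logistic
```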

Figure 8—figure supplement 3. Time course of ketamine’s influence on pro-variance bias.

(A) The time course of the pro-variance bias index (PVB) is shown for Monkey A. The PVB index is significantly raised for around 20 min following ketamine administration (red). The black horizontal bar at the top of the graph denotes a significant difference between saline (blue) and ketamine conditions at the respective time points (cluster-based permutation test, p=0.0080; see Materials and methods). (B) As in (A), except for Monkey H (cluster-based permutation test, p=0.0332).

Figure 8—figure supplement 4. Cosine similarity of various perturbation effects in the circuit model, to the effect of ketamine injections on monkey behaviour.

(A) Schematic of the similarity measure. The effect of ketamine perturbation on monkey choice behaviour (with lapse rate accounted for) is represented as the relative change in regression coefficients for mean evidence and evidence SD under ketamine vs saline (red arrows), and similarly for perturbed models vs the control circuit model. Cosine similarity is the cosine of the angle between them, and measures the similarity between distinct perturbations. The angle between the vectors of Monkey A and H is 20.1°, corresponding to a cosine similarity value of 0.94. The example model perturbation (black arrow) with weak NMDA-R hypofunction on excitatory neurons (purple dot in B,C) shows both a high cosine similarity and a small Euclidean distance to the ketamine effect in both monkeys (see B-D,G, Figure 8—figure supplement 5). (B) Circuit models with various degrees of NMDA-R hypofunction on excitatory (GEE) and inhibitory (GEI) neurons are considered. In each case, the regression coefficients of mean evidence and evidence standard deviation are compared to those of the control circuit model. The direction of alteration in each perturbed model, in the 2-dimensional space of mean evidence and evidence standard deviation betas, is compared to the direction of alteration by ketamine injection in Monkey A choice behaviour. A higher cosine similarity means the relative extent (and direction) of alteration, to the regression coefficients of mean evidence and evidence standard deviation, is more similar between the perturbations in the circuit model and the monkey data. Note that the lower left corner point (i.e. the control model) is marked white to clarify that cosine similarity is undefined at that point. The green, purple, and orange dots respectively denote the control, lowered E/I, and elevated E/I circuit models used in Figure 7. (C) Same as (B) but with comparisons made with Monkey H choice behaviour. (D) Circuit models with varying degrees of NMDA-R hypofunction on excitatory neurons (GEE) are considered. In each case, the cosine similarity between the perturbation vector in the model and Monkey A data is computed, in the same manner as (B). The large dot denotes the perturbation strength used in the lowered E/I model in Figure 7. Note that cosine similarity is undefined at the point with no perturbation (i.e. the control model). (E) Same as (D) but with various degrees of NMDA-R hypofunction on inhibitory neurons (GEI) in the circuit model. The large dot denotes the perturbation strength used in the elevated E/I model in Figure 7. (F) Same as (D) but with various degrees of sensory deficits in the circuit model. The large dot denotes the perturbation strength used in the sensory deficit model in Figure 7. Purple dashed lines indicate the largest cosine similarity between the ketamine effect on Monkey A and any model perturbation across (D–F) (which is achieved by a lowered E/I model). (G–I) Same as (D–F) but with comparisons made with Monkey H choice behaviour.

Figure 8—figure supplement 5. Euclidean distance of various perturbation effects in the circuit model, to the effect of ketamine injections on monkey behaviour.

(A) Schematic of the dissimilarity measure. The effect of ketamine perturbation on monkey choice behaviour (with lapse rate accounted for) is represented as the relative change in regression coefficients for mean evidence and evidence SD under ketamine vs saline (red arrows), and similarly for perturbed models vs the control circuit model. The Euclidean distance between the vectors measures the dissimilarity between distinct perturbations. The example model perturbation (black arrow) with weak NMDA-R hypofunction on excitatory neurons (purple dot in B,C) shows both a high cosine similarity and a small Euclidean distance to the ketamine effect in both monkeys (see B-D,G, Figure 8—figure supplement 4). (B) Circuit models with various degrees of NMDA-R hypofunction on excitatory (GEE) and inhibitory (GEI) neurons are considered. In each case, the regression coefficients of mean evidence and evidence standard deviation are compared to those of the control circuit model. The direction of alteration in each perturbed model, in the 2-dimensional space of mean evidence and evidence standard deviation betas, is compared to the direction of alteration by ketamine injection in Monkey A choice behaviour. A smaller Euclidean distance means the relative extent (and direction) of alteration, to the regression coefficients of mean evidence and evidence standard deviation, is more similar between the perturbations in the circuit model and the monkey data. The green, purple, and orange dots respectively denote the control, lowered E/I, and elevated E/I circuit models used in Figure 7. (C) Same as (B) but with comparisons made with Monkey H choice behaviour. (D) Circuit models with varying degrees of NMDA-R hypofunction on excitatory neurons (GEE) are considered. In each case, the Euclidean distance between the perturbation vector in the model and Monkey A data is computed, in the same manner as (B). The large dot denotes the perturbation strength used in the lowered E/I model in Figure 7. (E) Same as (D) but with various degrees of NMDA-R hypofunction on inhibitory (GEI) neurons in the circuit model. The large dot denotes the perturbation strength used in the elevated E/I model in Figure 7. (F) Same as (D) but with various degrees of sensory deficits in the circuit model. The large dot denotes the perturbation strength used in the sensory deficit model in Figure 7. Purple dashed lines indicate the smallest Euclidean distance between the ketamine effect on Monkey A and any model perturbation across (D–F) (which is achieved by a lowered E/I model). (G–I) Same as (D–F) but with comparisons made with Monkey H choice behaviour.

Figure 8—figure supplement 6. Kullback–Leibler (KL) divergence from monkey behavior with saline or ketamine to circuit models with various perturbations.

(A) KL divergence between Monkey A saline data and circuit models with various degrees of NMDA-R hypofunction on excitatory (GEE) and inhibitory (GEI) neurons. A lower KL divergence between data and model means the psychometric functions are more similar between the two. The green, purple, and orange dots respectively denote the control, lowered E/I, and elevated E/I circuit models used in Figure 7. (B) KL divergence between Monkey A saline data and circuit models with various degrees of sensory deficit. The green dashed line shows the KL divergence between Monkey A saline data and the control model, which is the lowest among the four circuit models used in Figure 7 (see C). (C) KL divergence between Monkey A saline data and the control, lowered E/I, elevated E/I, and sensory deficit circuit models used in Figure 7. The control model has the lowest KL divergence compared to Monkey A saline data. (D–F) Same as (A–C) but with comparisons made with Monkey H saline data. (G–I) Same as (A–C) but with comparisons made with Monkey A ketamine data. Monkey A’s empirical lapse rate is incorporated into the psychometric function of the circuit models for computing the KL divergence. (H) The purple dashed line shows the KL divergence between Monkey A ketamine data and the lowered E/I model, which is the lowest among the four circuit models in (I). (J–L) Same as (G–I) but with comparisons made with Monkey H ketamine data.

Figure 8—figure supplement 7. Psychometric function of monkeys under ketamine injection, and circuit model with large sensory deficit.

(A) Psychometric function of Monkey A under ketamine injection, replotted from Figure 8—figure supplement 1. (B) Psychometric function of Monkey H under ketamine injection, replotted from Figure 8—figure supplement 1. (C) Psychometric function of a circuit model with 40% sensory deficit, with lapse rate fitted to Monkey A ketamine data. (D) Psychometric function of a circuit model with 40% sensory deficit, with lapse rate fitted to Monkey H ketamine data. All error bars indicate the standard error.

Figure 8—figure supplement 8. Predictions for E/I perturbations of the Spiking Circuit Model, with built-in Monkey A lapse rate, compared with Monkey A behaviour.

For each circuit model (control, lowered E/I, elevated E/I, sensory deficit), a proportion of trials are selected and the corresponding choices are randomly shuffled to one of the two choices, with the proportion determined by the empirically measured lapse rate for Monkey A. The regression weights (M–P) are then calculated from the revised choices of the circuit model using the equations without a lapse rate (Equations 4 and 5), to allow direct comparison with the subject data, which did not control for lapse rate in Figure 8—figure supplement 1. The choice accuracy (dots in I–L) is similarly calculated for the revised choices of the circuit model, while the psychometric fit functions (lines in I–L), which are for visualisation purposes only, are fitted using an equation which incorporated the same fixed lapse rate (Equation 12). These procedures are repeated 100 times for each circuit model, with the average values taken. (A–G) Monkey A data under saline and ketamine injection. The data are identical to those in Figure 8—figure supplement 1, and are repeated here for comparison to the model. Note that the psychometric fit function in (C) here is modified to incorporate a lapse effect for better comparison, as explained above (see Materials and methods, Equation 12). (H) Model perturbation schematic. Three potential perturbations are considered: lowered E/I (via NMDA-R hypofunction on excitatory pyramidal neurons), elevated E/I (via NMDA-R hypofunction on inhibitory interneurons), or sensory deficit (as weakened scaling of external inputs to stimulus evidence). (I–L) The regular-trial choice accuracy for each of the circuit perturbations (dark colour for when the ‘Higher SD’ stream is correct, light colour for when the ‘Lower SD’ stream is correct). Note that the psychometric fit functions here are modified to incorporate a lapse effect for better comparison, as explained above (see Materials and methods, Equation 12). (M–O) Regression analysis on the regular trial choices of the four models, using evidence mean and evidence variability to predict choice (see Materials and methods, Equation 5). (M) The mean evidence regression coefficients in the four models. Lowering E/I, elevating E/I, and inducing sensory deficits similarly reduce the coefficient, reflecting a drop in choice accuracy. (N) The evidence standard deviation regression coefficients in the four models. All three perturbations reduce the coefficient, but to different extents. (O) The PVB index (ratio of the evidence standard deviation coefficient to the mean evidence coefficient) provides dissociable predictions for the perturbations. The lowered E/I circuit increases the PVB index relative to the control model (permutation test, p<10−5), while the elevated E/I circuit decreases the PVB index (permutation test, p<10−5). The PVB index is roughly maintained in the sensory deficit circuit (permutation test, p=0.4718). The dashed line indicates the PVB index for the control circuit; * indicates a significant difference when the PVB index is compared with the control circuit. (P) The regression weights of stimuli at different time-steps for the four models (see Materials and methods, Equation 4). All error bars indicate the standard error.

Figure 8—figure supplement 9. Predictions for E/I perturbations of the Spiking Circuit Model, with built-in Monkey H lapse rate, compared with Monkey H behaviour.

For each circuit model (control, lowered E/I, elevated E/I, sensory deficit), a proportion of trials are selected and the corresponding choices are randomly shuffled to one of the two choices, with the proportion determined by the empirically measured lapse rate for Monkey H. The regression weights (M–P) are then calculated from the revised choices of the circuit model using the equations without a lapse rate (Equations 4 and 5), to allow direct comparison with the subject data, which did not control for lapse rate in Figure 8—figure supplement 1. The choice accuracy (dots in I–L) is similarly calculated for the revised choices of the circuit model, while the psychometric fit functions (lines in I–L), which are for visualisation purposes only, are fitted using an equation which incorporated the same fixed lapse rate (Equation 12). These procedures are repeated 100 times for each circuit model, with the average values taken. (A–G) Monkey H data under saline and ketamine injection. The data are identical to those in Figure 8—figure supplement 1, and are repeated here for comparison to the model. Note that the psychometric fit function in (C) here is modified to incorporate a lapse effect for better comparison, as explained above (see Materials and methods, Equation 12). (H) Model perturbation schematic. Three potential perturbations are considered: lowered E/I (via NMDA-R hypofunction on excitatory pyramidal neurons), elevated E/I (via NMDA-R hypofunction on inhibitory interneurons), or sensory deficit (as weakened scaling of external inputs to stimulus evidence). (I–L) The regular-trial choice accuracy for each of the circuit perturbations (dark colour for when the ‘Higher SD’ stream is correct, light colour for when the ‘Lower SD’ stream is correct). Note that the psychometric fit functions here are modified to incorporate a lapse effect for better comparison, as explained above (see Materials and methods, Equation 12). (M–O) Regression analysis on the regular trial choices of the four models, using evidence mean and evidence variability to predict choice (see Materials and methods, Equation 5). (M) The mean evidence regression coefficients in the four models. Lowering E/I, elevating E/I, and inducing sensory deficits similarly reduce the coefficient, reflecting a drop in choice accuracy. (N) The evidence standard deviation regression coefficients in the four models. All three perturbations reduce the coefficient, but to different extents. (O) The PVB index (ratio of the evidence standard deviation coefficient to the mean evidence coefficient) provides dissociable predictions for the perturbations. The lowered E/I circuit increases the PVB index relative to the control model (permutation test, p<10−5), while the elevated E/I circuit decreases the PVB index (permutation test, p<10−5). The PVB index is roughly maintained in the sensory deficit circuit (permutation test, p=0.3817). The dashed line indicates the PVB index for the control circuit; * indicates a significant difference when the PVB index is compared with the control circuit. (P) The regression weights of stimuli at different time-steps for the four models (see Materials and methods, Equation 4). All error bars indicate the standard error.

To understand the nature of this deficit, we studied the effect of drug administration on the pro-variance bias (Figure 8B–F). Although subjects were less accurate following ketamine injection, they retained a pro-variance bias (Figure 8C). Regression analysis confirmed that ketamine caused choices to be substantially less driven by mean evidence (Figure 8D), but still strongly influenced by the standard deviation of evidence across samples (Figure 8E). The PVB index was significantly higher under ketamine than under saline (permutation test p=8×10−6, Figure 8F). Of all the circuit model perturbations, only lowered E/I balance was consistent with this pattern (Figure 7H).
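
For concreteness, the following sketch shows how such a permutation test on the PVB index could be implemented. The arrays, the trial-level label shuffle, and the use of scikit-learn are illustrative assumptions rather than the exact analysis pipeline (see Materials and methods for the fitted models).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pvb_index(X, y):
    """Fit Equation 5 (see Materials and methods) and return the PVB index.

    X: (n_trials, 2) array of [mean(L) - mean(R), std(L) - std(R)];
    y: 1 if the left option was chosen on that trial, else 0.
    """
    fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)  # ~no shrinkage
    b_mean, b_sd = fit.coef_[0]
    return b_sd / b_mean

def pvb_permutation_test(X_ket, y_ket, X_sal, y_sal, n_perm=1000, seed=0):
    """Two-sided permutation test on the ketamine-minus-saline PVB difference,
    shuffling trial-to-condition labels to build a null distribution."""
    rng = np.random.default_rng(seed)
    observed = pvb_index(X_ket, y_ket) - pvb_index(X_sal, y_sal)
    X = np.vstack([X_ket, X_sal])
    y = np.concatenate([y_ket, y_sal])
    n_ket = len(y_ket)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(y))
        null[i] = (pvb_index(X[idx[:n_ket]], y[idx[:n_ket]])
                   - pvb_index(X[idx[n_ket:]], y[idx[n_ket:]]))
    return np.mean(np.abs(null) >= np.abs(observed))
```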

In further analysis, we also controlled for the influence of ketamine on the subjects’ lapse rate – that is, the propensity of the animals to respond randomly regardless of trial difficulty. We modelled this lapse rate using an additional term that bounded the logistic function at Y0 and (1−Y0), rather than 0 and 1 (Figure 8—figure supplement 2; see Materials and methods, Equation 9). In other words, the lapse rate is the asymptotic error rate at the limit of strong evidence. Consistent with the psychometric function (Figure 8C), we found that ketamine significantly increased the subjects’ lapse rate (Subject A: Lapse(Saline)=1.49×10−11, Lapse(Ketamine)=0.118, permutation test, p<0.0001; Subject H: Lapse(Saline)=0.012, Lapse(Ketamine)=0.0684, permutation test, p=0.019). Crucially, however, the PVB effect was still present in the regression model that included the effect of lapses, confirming that the change in lapse rate was not responsible for the behavioural effects of ketamine outlined above. We also investigated the time-course of ketamine’s influence on the PVB index (Figure 8—figure supplement 3). This confirmed that the raised PVB index was a consistent feature of behaviour throughout the period of ketamine’s action.

Additional observations further supported the lowered E/I hypothesis for the effect of ketamine on monkey choice behaviour. Quantitative model comparison, using cosine similarity, Euclidean distance, and Kullback–Leibler (KL) divergence, revealed that the effect of ketamine injection on monkey choice behaviour was better explained by lowered E/I perturbations in the circuit model than by sensory deficit or elevated E/I perturbations (Figure 8—figure supplements 4–6). A very strong sensory deficit may also increase the PVB index, but only with near-chance decision-making performance and a psychometric function very different from the monkey data (Figure 7—figure supplement 4, Figure 8—figure supplement 7). In addition, we investigated the effect of ketamine on the time course of evidence weighting (Figure 8G). Ketamine caused a general downward shift of the temporal weights, but had no strong effect on how each stimulus was weighted relative to the others in the stream. This shifting of the weights could reflect a sensory deficit, but given the results of the pro-variance analysis, the behavioural effects of ketamine are collectively most consistent with lowered E/I balance and weakened recurrent connections. Notably, the saline data demonstrate a U-shaped pattern different from the primacy pattern observed in non-drug experiments (Figure 2C,D) and spiking circuit models (Figure 7I). This may be due to task modifications for the ketamine/saline experiments compared with the non-drug experiments, but could also potentially arise from distinct regimes of decision-making attractor dynamics (e.g. see Prat-Ortega et al., 2020).
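
The vector comparison underlying Figure 8—figure supplements 4–5 reduces to elementary operations; a minimal sketch with placeholder numbers (not the fitted values) is:

```python
import numpy as np

# Perturbation vectors: relative changes in the (beta_mean, beta_SD)
# regression coefficients -- ketamine vs. saline for the monkey, and
# perturbed vs. control for the circuit model. Placeholder values only.
monkey_effect = np.array([-0.55, -0.10])
model_effect = np.array([-0.50, -0.05])

cosine_similarity = (monkey_effect @ model_effect
                     / (np.linalg.norm(monkey_effect)
                        * np.linalg.norm(model_effect)))
euclidean_distance = np.linalg.norm(monkey_effect - model_effect)
```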

To quantify the effect of lapse rate on evidence sensitivity and regression weights more generally, we examined the effect of a lapse mechanism downstream of the spiking circuit models (Figure 8—figure supplements 8–9). Using the lapse rates fitted to the experimental data from the two monkeys, we assigned randomly selected choices to the corresponding proportion of trials in each circuit model, and repeated the analysis to obtain psychometric functions and regression weights. Crucially, while the psychometric function and the evidence mean and standard deviation regression weights were suppressed, the findings on the PVB index were not qualitatively altered in the circuit models, further supporting the conclusion that the lapse rate does not account for changes in PVB under ketamine.
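
A minimal sketch of this downstream lapse mechanism, assuming model choices are coded 0/1 and the fitted lapse rate is available, is:

```python
import numpy as np

def inject_lapses(choices, lapse_rate, rng):
    """Reassign a `lapse_rate` fraction of circuit-model choices at random,
    mimicking a lapse process downstream of the decision circuit."""
    lapsed = choices.copy()
    n_lapse = int(round(lapse_rate * len(choices)))
    idx = rng.choice(len(choices), size=n_lapse, replace=False)
    lapsed[idx] = rng.integers(0, 2, size=n_lapse)  # random left/right
    return lapsed

# As described above, the regression analyses would then be repeated on
# inject_lapses(model_choices, fitted_lapse_rate, rng) 100 times and averaged.
```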

Discussion

Previous studies have shown that human participants exhibit choice biases when options differ in the standard deviation of their evidence samples, preferring options drawn from a more variable distribution (Tsetsos et al., 2016; Tsetsos et al., 2012). By utilising a behavioural task with precise experimenter control over the distributions of time-varying evidence, we show that macaque monkeys exhibit a pro-variance bias in their choices akin to that of human participants. This pro-variance bias was also present in a spiking circuit model, which demonstrated a neural mechanism for this behaviour. We then introduced perturbations at distinct synaptic sites of the circuit, which revealed dissociable predictions for the effects of NMDA-R antagonism. Ketamine produced decision-making deficits consistent with a lowering of the cortical excitation-inhibition balance.

Biophysically grounded neural circuit modelling is a powerful tool to link cellular level observations to behaviour. Previous studies have shown recurrent cortical circuit models reproduce normative decision-making and working memory behaviour, and replicate the corresponding neurophysiological activity (Wang et al., 2013; Murray et al., 2014; Wang, 2002; Wong and Wang, 2006; Murray et al., 2017; Wong et al., 2007; Wimmer et al., 2014). However, whether they are also capable of reproducing idiosyncratic cognitive biases has not previously been explored. Here we demonstrated pro-variance and primacy biases in a spiking circuit model. The primacy bias results from the formation of attractor states before all of the evidence has been presented. This neural implementation for bounded evidence accumulation corresponds with previous algorithmic explanations (Kiani et al., 2008).

The results from our spiking circuit modelling also provide a parsimonious candidate mechanism for the pro-variance bias within the evidence accumulation process. Specifically, strong evidence in favour of an option pushes the network towards an attractor state more than symmetrically weak evidence pushes it away. Previous explanations for the pro-variance bias proposed computations at the level of sensory processing, upstream of evidence accumulation. In particular, a ‘selective integration’ model proposed that information for the momentarily weaker option is discarded before it enters the evidence accumulation process (Tsetsos et al., 2016). Conceptually, our model is analogous to these previous models in that weak evidence is weighted less than strong evidence. However, there are key differences between the two models. In ‘selective integration’ and similar models of sensory processing, an asymmetric filter is applied to the stimuli before they are evaluated by the decision-making process, in some upstream area that can potentially be modulated based on task demands. In contrast, in our circuit model the pro-variance bias arose from the non-linear activity profile of model neurons (Figure 6D, see Materials and methods). In that sense, the pro-variance bias was an intrinsic phenomenon of the evidence integration process in our circuit model.

Despite the conceptual analogy between our circuit model and the ‘selective integration’ model, in which weak stimuli are asymmetrically weighted, our circuit model cannot be directly mapped onto the latter. In the ‘selective integration’ model, the asymmetry is realised as a discounting of the momentarily weaker stimuli by a constant factor. In our circuit model, the asymmetry arose from the non-linearity of the transfer function. However, the transfer function was not static, but dynamically evolved with the state of the model (e.g. in the mean-field model, the transfer function depended on the two decision variables; see Materials and methods). Due to this complexity, the asymmetry of the circuit model cannot be reduced to one simple expression, and was instead closely entangled with the attractor dynamics of the system.

Crucially, our circuit model generated dissociable predictions for the effects of NMDA-R hypofunction on the pro-variance bias (PVB) index, which were tested by follow-up ketamine experiments. While it remains an open question where and how in the brain the selective integration process takes place, our modelling results suggest that purely sensory deficits may not capture the alterations in choice behaviour observed under ketamine, in contrast to E/I perturbations in decision-making circuits (Figure 7H). Multiple complementary processes may simultaneously contribute to the pro-variance bias during decision making, especially in complex behaviours over longer timescales. Future work will aim to contrast these two models with neurophysiological data recorded while monkeys perform this task.

On the other hand, there may also be limits to the extent to which our findings can be directly compared with those from previous studies in humans (Tsetsos et al., 2016; Tsetsos et al., 2012). For example, human studies have revealed that a ‘frequent winner’ bias coexists with the pro-variance bias and may arise from the same selective integration mechanism. Unlike in those studies, our subjects did not exhibit a ‘frequent winner’ bias. Furthermore, although both the human studies and ours demonstrate a PVB, the temporal weighting of evidence in the previous human studies exhibits recency, unlike the primacy found in the present study. This may be partly due to differences in the underlying computational regimes used for evidence integration, or due to more trivial differences between the experimental paradigms – for example, different paradigms have identified primacy (Kiani et al., 2008), recency (Cheadle et al., 2014), or noiseless sensory evidence integration (Brunton et al., 2013). A stronger test will be to record neurophysiological data while monkeys perform our task; this would help to distinguish between the ‘selective integration’ hypothesis and the cortical circuit mechanism proposed here.

The PVB index, as the ratio of the standard deviation to the mean evidence regression weights, serves as a conceptually useful measure to interpret changes in pro-variance bias due to ketamine perturbation in this study. Given that the model does not feature any explicit processes that mediate pro-variance bias, PVB should be understood as an emergent phenomenon arising from the decision-making process. In this context, a sensory-deficit perturbation, which down-scales the incoming evidence strength without perturbing the decision-making process, should proportionally down-scale the evidence mean and standard deviation regression weights, thus maintaining the PVB index. In contrast, lowering and elevating the E/I ratio distinctly alter the dynamics of the decision-making process and thus differentially perturb the PVB index. It is also important to study how changes in the PVB index are driven by changes in the mean vs. standard deviation regression coefficients, as considering the PVB index alone can obscure these effects. For instance, based on the model, the increase in PVB index from lowering E/I is generally due to a stronger decrease in the mean regression coefficient than in the standard deviation regression coefficient (Figure 7—figure supplement 2). However, small perturbations lowering E/I may actually increase the PVB index through an increase in the standard deviation regression coefficient alongside a decrease in the mean regression coefficient. In support of this model finding, while both monkeys demonstrated a significant ketamine-induced decrease in the mean regression weight, one monkey showed a trend towards a decrease in the standard deviation regression weight, and the other a trend towards an increase (Figure 8—figure supplement 2). The two monkeys, both interpreted as showing a lowered E/I ratio using the model-based approach in this study, may therefore experience slightly different degrees of E/I reduction when administered ketamine, as shown through concurrent changes in NMDA-R conductances in the circuit model (Figure 7—figure supplement 2).
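
To first order, the invariance of the PVB index under a purely sensory perturbation can be stated compactly. With β1 and β2 the mean and standard deviation coefficients of Equation 5, a sensory deficit that scales the effective evidence by a factor k scales both coefficients together, leaving their ratio unchanged:

$$\mathrm{PVB} \;=\; \frac{\beta_2}{\beta_1} \;\longrightarrow\; \frac{k\,\beta_2}{k\,\beta_1} \;=\; \frac{\beta_2}{\beta_1},$$

whereas E/I perturbations alter β1 and β2 disproportionately and therefore move the ratio.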

In this study we did not undertake quantitative fitting of the circuit model parameters to match the empirical data. Rather, we took a previously developed circuit model and manually adjusted only the input strengths to be loosely in the regime of the experimental behaviour. There are technical and theoretical challenges in quantitatively fitting biophysically-based circuit models, including reduced mean-field models, to psychophysical behaviour, which have impeded such applications in the field. Critical challenges include the computational cost of simulation, a large number of parameters with unknown effective degeneracies on behaviour, and the treatment of noise in mean-field reductions. Future work, beyond the scope of the present study, is needed to bridge these gaps in relating circuit models to psychophysical behaviour.

Instead of direct model fitting, here we studied biophysically-based spiking circuit models for two primary purposes: to examine whether a behavioural phenomenon, such as the pro-variance bias, can emerge from a regime of circuit dynamics, and through what dynamical circuit mechanisms; and to characterise how the phenomenon and its underlying dynamics are altered by modulation of neurobiologically grounded parameters, such as NMDA-R conductance. The circuit modelling in this study demonstrates a set of mechanisms sufficient to produce the phenomenon of interest. The bottom-up mechanistic approach taken here, which links to the physiological effects of pharmacology and makes testable predictions for neural recordings and perturbations, is complementary to top-down algorithmic modelling approaches.

Our pharmacological intervention experimentally verified the significance of NMDA-R function for decision-making. In the spiking circuit model, NMDA-Rs expressed on pyramidal cells are necessary for reverberatory excitation, without which evidence cannot be accumulated and stable working memory activity cannot be maintained. NMDA-Rs on interneurons are necessary for maintaining background inhibition and preventing the circuit from reaching an attractor state prematurely (Murray et al., 2014; Wang, 2002). Administering ketamine, an NMDA-R antagonist, induced specific short-term deficits in choice behaviour, which were consistent with a lowering of the cortical excitation-inhibition balance in the circuit model. This suggests the NMDA-R antagonist we administered systemically was primarily acting to inhibit neurotransmission onto pyramidal cells and weaken the recurrent connection strength across neurons. It is important to note that, in addition to its main action as an NMDA-R antagonist, ketamine might also target other receptor sites (Chen et al., 2009; Zanos et al., 2016; Moaddel et al., 2013). However, of all receptors, ketamine has by far the highest affinity for the NMDA-R (Frohlich and Van Horn, 2014). The effects of synaptic perturbations could be interpreted in terms of their net effect on E/I balance, at least to first order (Murray et al., 2014; Lam, 2017). For instance, in the circuit model, proportional NMDA-R hypofunction on both E and I neurons maintains E/I balance and minimally impairs circuit computation, while the effect of disproportionate NMDA-R hypofunction on E and I neurons is well captured by the direction of net change in E/I ratio (Figure 7—figure supplements 2 and 3). Given that ketamine’s affinity is highest for NMDA-Rs, the effect of NMDA-R hypofunction should dominantly determine the direction of E/I imbalance, and should not be counter-balanced by the effects of other perturbations. Finally, other receptors and brain areas are likely altered by systemic ketamine administration, which is beyond the scope of the microcircuit model in this study.

The physiological effects of NMDA-R antagonism on in vivo cortical circuits remain an unresolved question. A number of studies have proposed a net cortical disinhibition through NMDA-R hypofunction on inhibitory interneurons (Nakazawa et al., 2012; Krystal et al., 2003; Lisman et al., 2008; Lewis et al., 2012). The disinhibition hypothesis is supported by studies finding that NMDA-R antagonists mediate an increase in the firing of prefrontal cortical neurons, in rodents (Jackson et al., 2004; Homayoun and Moghaddam, 2007) and monkeys (Ma et al., 2018; Ma et al., 2015; Skoblenick and Everling, 2012; Skoblenick et al., 2016). On the other hand, the effects of NMDA-R antagonists on E/I balance may vary across neuronal sub-circuits within a brain area. For instance, in a working memory task, ketamine was found to increase the spiking activity of response-selective cells, but decrease the activity of task-relevant delay-tuned cells in primate prefrontal cortex (Wang et al., 2013). Such specificity might explain why several studies reported less conclusive effects of NMDA-R antagonists on overall prefrontal firing rates in monkeys (Wang et al., 2013; Zick et al., 2018). In vitro work has also revealed that the excitatory post-synaptic potentials (EPSPs) of prefrontal pyramidal neurons are much more reliant on NMDA-R conductance than those of parvalbumin interneurons (Rotaru et al., 2011). Other investigators combining neurophysiological recordings with modelling approaches have also concluded that the action of NMDA-R antagonists is primarily upon pyramidal cells (Wang et al., 2013; Moran et al., 2015). Our present findings, integrating pharmacological manipulation of behaviour with biophysically-based spiking circuit modelling, suggest that the ketamine-induced behavioural biases are most consistent with a lowering of excitation-inhibition balance and a weakening of recurrent dynamics. Future work with electrophysiological recordings during the performance of our task, under pharmacological interventions, could dissociate the effect of ketamine on E/I balance specifically in cortical neurons exhibiting decision-related signals. Notably, the decision-making behaviours in our circuit model arise from attractor dynamics relying on unstructured interneurons to provide lateral feedback inhibition. Recent experiments found that, in mouse parietal cortex during a decision-making task, inhibitory parvalbumin (PV) interneurons – thought to provide feedback inhibition – may be as selective as excitatory pyramidal neurons (Najafi et al., 2020). Depending on the pattern and connectivity of their feedback projections to pyramidal neurons, such a circuit structure can support different forms of evidence accumulation in cortical circuits (Lim and Goldman, 2013). It remains to be seen how the pro-variance bias effect and the current predictions extend to circuit models with selective inhibitory interneurons.

The minutes-long timescale of the NMDA-R-mediated decision-making deficit we observed was also consistent with the psychotomimetic effects of subanaesthetic doses of ketamine in healthy humans (Krystal et al., 1994; Krystal et al., 2003). As NMDA-R hypofunction is hypothesised to play a role in the pathophysiology of schizophrenia (Kehrer et al., 2008; Olney and Farber, 1995; Krystal et al., 2003; Lisman et al., 2008), our findings have important clinical relevance. Previous studies have demonstrated impaired perceptual discrimination in patients with schizophrenia performing the random-dot motion (RDM) decision-making task (Chen et al., 2003; Chen et al., 2004; Chen et al., 2005). Although RDM tasks have been extensively used to study evidence accumulation (Gold and Shadlen, 2007), this performance deficit in schizophrenia was previously interpreted as reflecting a diminished representation of sensory evidence in visual cortex (Chen et al., 2003; Butler et al., 2008). Based on our task, with its precise temporal control of the stimuli, our findings suggest that NMDA-R antagonism alters the decision-making process in association cortical circuits. Dysfunction in these association circuits may therefore provide an important contribution to cognitive deficits – one potentially complementary to upstream sensory impairment. Crucially, our task uniquely allowed us to rigorously verify that the subjects used an accumulation strategy to guide their choices (cf. previous animal studies [Gold and Shadlen, 2007; Roitman and Shadlen, 2002; Hanks et al., 2015; Morcos and Harvey, 2016; Katz et al., 2016]), with these analyses suggesting that the strategy our subjects employed was relatively consistent with findings in human participants. This consistency further suggests that our findings may translate across species, in particular to clinical populations.

Another related line of schizophrenia research has shown a decision-making bias known as jumping to conclusions (JTC) (Ross et al., 2015; Huq et al., 1988). The JTC bias has predominantly been demonstrated in the ‘beads task’, a paradigm where participants are shown two jars of beads, one mostly pink and the other mostly green (typically 85%). The jars are hidden, and the participants are presented with a sequence of beads drawn from a single jar. Following each draw, they are asked whether they are ready to commit to a decision about which jar the beads are being drawn from. Patients with schizophrenia typically make decisions based on fewer beads than controls. Importantly, this JTC bias has been proposed as a mechanism for delusion formation. Based on the JTC literature, one plausible hypothesis for behavioural alteration under NMDA-R antagonism in our task would be a strong increase in the primacy bias, whereby only the initially presented bar samples would be used to guide the subjects’ decisions. However, following ketamine administration, we did not observe a strong primacy bias – instead, all samples received roughly the same weighting. There are important differences between our task and the beads task. In our task, the stimulus presentation is shorter (2 s, compared with slower sampling across bead draws) and of fixed duration, rather than terminated by the subject’s choice; it therefore may not involve the perceived sampling cost of the beads task (Ermakova et al., 2019).

Our precise experimental paradigm and complementary modelling approach allowed us to meticulously quantify how monkeys weight time-varying evidence and robustly dissociate sensory and decision-making deficits – unlike prior studies using the RDM and beads tasks. Our approach can be readily applied to experimental and clinical studies to yield insights into the nature of cognitive deficits and their potential underlying E/I alterations in pharmacological manipulations and pathophysiologies across neuropsychiatric disorders, such as schizophrenia (Wang and Krystal, 2014; Huys et al., 2016) and autism (Wang and Krystal, 2014; Yizhar et al., 2011; Lee et al., 2017; Marín, 2012). Finally, our study highlights how precise task design, combined with computational modelling, can yield translational insights across species, including through pharmacological perturbations, and across levels of analysis, from synapses to cognition.

Materials and methods

Subjects

Two adult male rhesus monkeys (M. mulatta), subjects A and H, were used. The subjects weighed 12–13.3 kg, and both were ~6 years old at the start of the data collection period. We regulated their daily fluid intake to maintain motivation in the task. All experimental procedures were approved by the UCL Local Ethical Procedures Committee and the UK Home Office (PPL Number 70/8842), and carried out in accordance with the UK Animals (Scientific Procedures) Act.

Behavioural protocol

Subjects sat head-restrained in a primate behavioural chair facing a 19-inch computer screen (1280 × 1024 px screen resolution, 60 Hz refresh rate) in a dark room. The monitor was positioned 59.5 cm away from their eyes, with the height set so that the centre of the screen aligned with neutral eye level for the subject. Eye position was tracked using an infrared camera (ISCAN ETL-200) sampled at 240 Hz. The behavioural paradigm was run in the MATLAB-based toolbox MonkeyLogic (http://www.monkeylogic.net/, Brown University) (Asaad and Eskandar, 2008a; Asaad and Eskandar, 2008b; Asaad et al., 2013). Eye position data were relayed to MonkeyLogic for use online during the task, and were recorded for subsequent offline analysis. Following successful trials, juice reward was delivered to the subject using a precision peristaltic pump (ISMATEC IPC). Subjects performed two types of behavioural sessions: standard and pharmacological. In pharmacological sessions, following a baseline period, either an NMDA-R antagonist (ketamine) or saline was administered via intramuscular injection. Monkey A completed 41 standard sessions and 28 pharmacological sessions (15 ketamine; 13 saline). Monkey H completed 68 standard sessions and 35 pharmacological sessions (18 ketamine; 17 saline).

Injection protocol

Typically, two pharmacological sessions were performed each week, at least 3 days apart. Subjects received either a saline or ketamine injection into the trapezius muscle while seated in the primate chair. Approximately 12 min into the session, local anaesthetic cream was applied to the muscle. At 28 min, the injection was administered. The task was briefly paused for this intervention (64.82 ± 10.85 secs). Drug dose was determined through extensive piloting, and a review of the relevant literature (Wang et al., 2013; Blackman et al., 2013). The dose used was 0.5 mg/kg.

Task

Subjects were trained to perform a two-alternative value-based decision-making task. A series of bars, each with a different height, were presented on the left and right side of the computer monitor. Following a post-stimulus delay, subjects were rewarded for saccading towards the side with either the taller or the shorter average bar-height, depending upon a contextual cue displayed at the start of the trial (see Figure 1A inset). The number of pairs of bars in each series was either four (‘4SampleTrial’) or eight (‘8SampleTrial’) during trials in each standard behavioural session. In this report, we only consider the results from the eight sample trials, though similar results were obtained from the four sample trials. The number of bar pairs was always six during pharmacological sessions.

The bars were presented inside fixed-height rectangular placeholders (width, 84 px; height, 318 px). The placeholders had a black border (thickness, 9 px) and a grey centre where the stimuli were presented (width, 66 px; height, 300 px). The bar heights could take discrete percentiles, occupying between 1% and 99% of the grey space. The height of the bar was indicated by a horizontal black line (thickness, 6 px). Beneath the black line, there was 45° grey Gabor shading.

An overview of the trial timings is outlined in Figure 1A. Subjects initiated a trial by maintaining their gaze on a central, red fixation point for 750 ms. After this fixation was completed, one of four contextual cues (see Figure 1A inset) was centrally presented for 350 ms. Subjects had previously learned that two of these cues instructed them to choose the side with the taller average bar-height (‘ChooseTall’ trial), and the other two instructed them to choose the side with the shorter average bar-height (‘ChooseShort’ trial). Next, two black masks (width, 84 px; height, 318 px) were presented for 200 ms in the location of the forthcoming bar stimuli. These were positioned on either side of the fixation spot (6° visual angle from centre). Each bar stimulus was presented for 200 ms, followed by a 50 ms inter-stimulus-interval where only the fixation point remained on the screen. Once all of the bar stimuli had been presented, the mask stimuli returned for a further 200 ms. There was then a post-stimulus delay (250–750 ms, uniformly sampled across trials). Following this, the colour of the fixation point changed to green (go cue), and two circular saccade targets appeared on each side of the screen where the bars had previously been presented. This cued the subject to indicate their choice by making a saccade to one of the targets. Once the subject reported their decision, there were two stages of feedback. Immediately following choice, the green go cue was extinguished, and the contextual cue was re-presented centrally, along with the average bar heights of the two series of stimuli previously presented. The option the subject chose was indicated by a purple outline surrounding the relevant bar placeholder (width, 3.8°; height, 10°). After 500 ms, the second stage of feedback began. The correct answer was indicated by a white outline surrounding the bar placeholder (width, 5.7°; height, 15°). On correct trials, the subject was rewarded for a length of time proportional to the average height of the chosen option (increasing with average height on a ‘ChooseTall’ trial, decreasing with average height on a ‘ChooseShort’ trial). On incorrect trials, there was no reward. Regardless of the reward amount, the second feedback stage lasted 1200 ms. This was followed by an inter-trial-interval (1.946 ± 0.051 secs for standard sessions, across all completed included trials). The inter-trial-interval duration was longer on ‘4SampleTrials’ than ‘8SampleTrials’, so that trials were of equal duration, facilitating a similar reward rate between the two conditions.

Subjects were required to maintain central fixation from the fixation period until they indicated their choice. If the initial fixation period was not completed, or fixation was subsequently broken, the trial was aborted and the subject received a 3000 ms timeout (Trials in standard sessions: Monkey A – 22.46%, Monkey H – 15.27%). On the following trial, the experimental condition was not repeated. If subjects failed to indicate their choice within 8000 ms, a 5000 ms timeout was initiated (Trials in standard sessions: Monkey A – 0%, Monkey H – 0%).

Experimental conditions were blocked according to the contextual cue and evidence length. This produced four block types (ChooseTall4SampleTrial (T4), ChooseTall8SampleTrial (T8), ChooseShort4SampleTrial (S4), ChooseShort8SampleTrial (S8)). At the start of each session, subjects performed a short block of memory-guided saccades (MGS) (Hikosaka and Wurtz, 1983), completing 10 trials. Data from these trials are not presented in this report. Following the MGS block, the first block of decision-making trials was selected at random. After the subject completed 15 trials in a block, a new block was selected without replacement. Each new block had to have either the same evidence length or the same contextual cue as the previous block. After all four blocks had been completed, there was another interval of MGS trials. A new starting block of decision-making trials was then randomly selected. As there were four block types, and either the evidence length or the contextual cue had to be preserved across a block switch, there were two ‘sequences’ in which the blocks could transition (i.e. T4→T8→S8→S4; or T4→S4→S8→T8, if starting from T4). Following the intervening MGS trials, the blocks transitioned in the opposite sequence to that used previously, starting from the new randomly chosen block. This block switching protocol was continued throughout the session. At the start of each block, the background of the screen changed for 5000 ms to indicate the evidence length of the forthcoming block: a burgundy colour indicated that an eight sample block was beginning; a teal colour indicated a four sample block.

Trial generation

The heights of the bars on each trial were precisely controlled. On the majority of trials (Regular Trials, Completed trials in standard sessions: Monkey A – 76.67%, Monkey H – 76.23%), the heights of each option were generated from independent Gaussian distributions (Figure 4A-B). There were two levels of variance for the distributions, designated as ‘Narrow’ and ‘Broad’. The mean of each distribution, μ, was calculated as μ = 50 + Z × σ, where Z ~ 𝒰(−0.25, 0.25), and σ was either 12 or 24 for narrow and broad stimulus streams, respectively. The individual bar heights were then drawn from 𝒩(μ, σ). The trial generation process was constrained so that the samples reasonably reflected the generative parameters. These restrictions required bar heights to range from 1 to 99, and the actual σ for each stream to be no more than 4 from the generative value. On any given trial, subjects could be presented with two narrow streams, two broad streams, or one of each. The evidence variability was therefore independent between the two streams. For post-hoc analysis (Figure 4) we defined one stream as the ‘Lower SD’ option on each trial, and the other as the ‘Higher SD’ option, based upon the sampled/actual σ.
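
A minimal sketch of this generative procedure, implemented as rejection sampling, is given below. Whether the constraint uses the population or the sample standard deviation is not specified above, so ddof=1 is an assumption.

```python
import numpy as np

def generate_stream(sigma, n_samples=8, rng=None):
    """Draw bar heights for one stream of a Regular trial.
    sigma: 12 ('Narrow') or 24 ('Broad'). Resample until constraints hold."""
    rng = rng or np.random.default_rng()
    while True:
        mu = 50 + rng.uniform(-0.25, 0.25) * sigma
        bars = np.round(rng.normal(mu, sigma, n_samples))  # discrete percentiles
        if (bars.min() >= 1 and bars.max() <= 99
                and abs(bars.std(ddof=1) - sigma) <= 4):
            return bars

# e.g. one 'Broad' and one 'Narrow' stream for a single trial:
rng = np.random.default_rng(0)
left, right = generate_stream(24, rng=rng), generate_stream(12, rng=rng)
```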

A proportion of ‘decision-bias trials’ were also specifically designed to elucidate the effects of evidence variability on choice, and whether subjects displayed primacy/recency biases (Tsetsos et al., 2012). These trials occurred in equal proportions within all four block types. Only one of these decision-bias trial types was tested in each behavioural session.

Narrow-broad trials (Completed trials in standard sessions: Monkey A – 14.87%, Monkey H – 15.78%) probed the effect of evidence variability on choice (Tsetsos et al., 2012). Within this category of trials, there were three conditions (Figure 3A). In each, the bar heights of one alternative were drawn from a narrow Gaussian distribution (𝒩(μN, 12)), and the bar heights of the other from a broad Gaussian distribution (𝒩(μB, 24)). In the first two conditions, ‘Narrow Correct’ (μN ~ 𝒰(48, 60), μB = μN − 8) and ‘Broad Correct’ (μB ~ 𝒰(48, 60), μN = μB − 8), there was a clear correct answer. In the third condition, ‘Ambiguous’ (μB ~ 𝒰(44, 56), μN = μB), there was only weak evidence in favour of the correct answer. In all of these conditions, the generated samples had to be within 4 of the generating σ. Furthermore, on ‘Narrow Correct’ and ‘Broad Correct’ trials the difference between the mean evidence of the intended correct and incorrect streams had to range from +2 to +14. On the ‘Ambiguous’ trials, the mean evidence in favour of one option over the other was constrained to be <4. A visualisation of the net evidence in each of these trial types is displayed in Figure 3A. For the purposes of illustration, the probability density was smoothed by a sliding window of ±1, within the generating constraints described above (‘Narrow Correct’ and ‘Broad Correct’ trials have net evidence for the correct option within [2, 14]; ‘Ambiguous’ trials have net evidence within [−4, 4]). A very small number of trials were excluded from this visualisation because their net evidence fell marginally outside the constraints. This was because bar heights were rounded to the nearest integer (due to the limited number of pixels on the computer monitor) after the generating procedure, and the plot reflects the presented bar heights.

Half-half trials (Completed trials in standard sessions: Monkey A – 8.46%, Monkey H – 8.00%) probed the effect of temporal weighting biases on choice (Tsetsos et al., 2012). The heights of each option were generated using the same Gaussian distribution (X ~ 𝒩(μHH, 12), where μHH ~ 𝒰(40, 60)). This distribution was truncated to form two distributions: XTall, with support [mean(X) − 0.5 × SD(X), ∞); and XShort, with support (−∞, mean(X) + 0.5 × SD(X)]. On each trial, one option was designated ‘TallFirst’ – the first half of its bar heights was drawn from XTall and the second half from XShort. This process was also constrained so that the mean of the samples drawn from XTall had to be at least 7.5 greater than that of those taken from XShort. The other option was ‘ShortFirst’, where the samples were drawn from the two distributions in the reverse order.
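
A sketch of the Half-half stream construction, assuming the truncation points are taken at the generating mean μHH with SD 12, is:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
mu_hh, sd = rng.uniform(40, 60), 12.0

# scipy's truncnorm takes its bounds in standard-deviation units.
x_tall = truncnorm(-0.5, np.inf, loc=mu_hh, scale=sd)    # X_Tall
x_short = truncnorm(-np.inf, 0.5, loc=mu_hh, scale=sd)   # X_Short

# A 'TallFirst' stream: first half from X_Tall, second half from X_Short,
# resampled until the halves differ by at least 7.5 on average.
while True:
    first = x_tall.rvs(4, random_state=rng)
    second = x_short.rvs(4, random_state=rng)
    if first.mean() - second.mean() >= 7.5:
        break
tall_first = np.concatenate([first, second])
```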

Task modifications for pharmacological sessions

Minor adjustments were made to the task during the pharmacological sessions to maximise the trial counts available for statistical analysis. Trial length was fixed at 6 pairs of samples. The block was switched between ‘ChooseTall6Sample’ and ‘ChooseShort6Sample’ after 30 completed trials, without intervening MGS trials. From our pilot data, it was clear that ketamine reduced choice accuracy. In order to maintain subject motivation, the most difficult ‘Regular’ and ‘HalfHalf’ trials were not presented: following the trial generation procedures described above, in pharmacological sessions these trials were additionally required to have a difference in mean evidence strength greater than 4. Of the ‘Narrow-Broad’ trials, only the ‘Ambiguous’ condition was used, with no further constraints applied to these trials. In some sessions, a small number of control trials were used, in which the bar heights for each option were fixed across all of the samples. All analyses utilised ‘Regular’, ‘Half-Half’, and ‘Narrow-Broad’ trials. Monkey H did not always complete sufficient trials once ketamine was administered. Sessions where the number of completed trials was fewer than the minimum recorded in the saline sessions were discarded (6 of 18 sessions). Following ketamine administration, Monkey A never completed fewer trials in a session than the minimum recorded in a saline session.

Behavioural data analysis

To assess decision-making accuracy during standard sessions, we initially fitted a psychometric function (Kiani et al., 2008; Roitman and Shadlen, 2002) to subjects’ choices pooled across ‘Regular’ and ‘Narrow-Broad’ trials (Figure 2A-B). This defines the choice accuracy (P) as a function of the difference in mean evidence in favour of the correct choice (evidence strength, x):

P(x) = 0.5 + 0.5 (1 − exp(−(x/α)^β)) (1)

where α and β are respectively the discrimination threshold and order of the psychometric function, and exp is the exponential function. To illustrate the effect of the pro-variance bias, we also fitted a three-parameter psychometric function to the subjects’ probability of choosing the higher SD option (PHSD) in the ‘Regular’ trials, as a function of the difference in mean evidence in favour of the higher SD option on each trial (xHSD):

PHSD(xHSD) = 0.5 + 0.5 · sign(xHSD + δ) · (1 − exp(−(|xHSD + δ|/α)^β)) (2)

where δ is the psychometric function shift, and sign returns 1 and -1 for positive and negative inputs respectively. To be explicitly clear, on ‘ChooseTall’ trials, the mean evidence in favour of the higher SD option was calculated by subtracting the mean bar height of the lower SD option from that of the higher SD option. On ‘ChooseShort’ trials, the mean evidence in favour of the higher SD option was calculated by subtracting [100 - mean bar height of the lower SD option] from [100 – mean bar height of the higher SD option].

In both cases, the psychometric function is fitted by maximum-likelihood estimation (MLE), maximising the log-likelihood

∑i [𝟙i log(P(xi)) + (1 − 𝟙i) log(1 − P(xi))] (3)

(and similarly for PHSD and xHSD), where the sum runs over trials i. 𝟙i = 1 if the correct (higher SD) option was chosen in trial i, and 0 otherwise.
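
For concreteness, a sketch of this MLE fit in Python (scipy's Nelder-Mead as a derivative-free optimiser; starting values are arbitrary assumptions) is:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, chose_correct):
    """Negative of the Equation 3 log-likelihood under Equation 1."""
    alpha, beta = params
    p = 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))
    p = np.clip(p, 1e-10, 1 - 1e-10)   # numerical safety
    return -np.sum(chose_correct * np.log(p)
                   + (1 - chose_correct) * np.log(1 - p))

# x: evidence strength per trial; chose_correct: 1 if the correct option
# was chosen, else 0.
# fit = minimize(neg_log_lik, x0=[10.0, 1.0], args=(x, chose_correct),
#                method='Nelder-Mead')
```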

The temporal weights of stimuli were calculated using logistic regression. This function defined the probability (PL) of choosing the left option:

ln(PL / (1 − PL)) = β′0 + ∑n=1…8 β′n (Ln − Rn) (4)

where β′0 is a bias term, β′n reflects the weighting given to the nth pair of stimuli, and Ln and Rn reflect the evidence for the left and right options at each time point.

Regression analysis was used to probe the influence of evidence mean, and evidence variability on choice during the ‘Regular’ trials (Figures 4D, 5F, 6C, 7F–H and 8D-F, Figure 4—figure supplement 1D,G, Figure 8—figure supplement 1C,H). This function defined the probability (PL) of choosing the left option:

ln(PL / (1 − PL)) = β0 + β1 (mean(L) − mean(R)) + β2 (std(L) − std(R)) (5)

where β0 is a bias term, β1 reflects the influence of evidence mean, and β2 reflects the influence of standard deviation of evidence (evidence variability).
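
A self-contained sketch of this regression and the resulting PVB index, run on synthetic stand-in data rather than the monkey dataset, is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_samples = 5000, 8

# Synthetic stand-in streams and noisy, mean-driven choices.
L = rng.normal(50, 18, (n_trials, n_samples))
R = rng.normal(50, 18, (n_trials, n_samples))
chose_left = (L.mean(1) - R.mean(1)
              + rng.normal(0, 5, n_trials) > 0).astype(int)

# Equation 5 regressors: differences in evidence mean and standard deviation.
X = np.column_stack([L.mean(1) - R.mean(1),
                     L.std(1, ddof=1) - R.std(1, ddof=1)])
fit = LogisticRegression(C=1e6, max_iter=1000).fit(X, chose_left)
beta_mean, beta_sd = fit.coef_[0]
pvb_index = beta_sd / beta_mean   # the ratio used throughout the paper
```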

This approach was extended to probe other potential influences on the decision-making process. An expanded regression model was defined as follows:

ln(PL / (1 − PL)) = β0 + β1 mean(L) + β2 std(L) + β3 Max(L) + β4 Min(L) + β5 L1 + β6 L8 + β7 mean(R) + β8 std(R) + β9 Max(R) + β10 Min(R) + β11 R1 + β12 R8 (6)

where β0 is a bias term, β1 reflects the influence of evidence mean of the left samples, β2 reflects the influence of evidence variability of the left samples, β3 reflects the influence of the maximum left sample, β4 reflects the influence of the minimum left sample, β5 reflects the influence of the first left sample, β6 reflects the influence of the last left sample. β7 to β12 reflect the same attributes for samples on the right side of the screen. Due to strong correlations among evidence standard deviation, maximum, and minimum, the regression model without β2 and β8 is used to evaluate the contribution of regressors other than evidence mean and standard deviation to the decision making process (Figure 4—figure supplement 1E,H, Figure 5—figure supplement 1B, Figure 6—figure supplement 1B, Figure 7—figure supplement 1B, Figure 8—figure supplement 1D,I).

To explore whether the subjects demonstrated a frequent-winner bias (Tsetsos et al., 2012), whereby they prefer to choose options that more frequently have the greater evidence across samples, we used a regression approach (Figure 4—figure supplement 3). The regression equation defined the probability (PL) of choosing the left option:

\ln\left(\frac{P_L}{1 - P_L}\right) = \beta_0 + \beta_1\left(\mathrm{mean}(L) - \mathrm{mean}(R)\right) + \beta_2\left(\mathrm{LocalWins}(L) - \mathrm{LocalWins}(R)\right) (7)

where β0 is a bias term, β1 reflects the influence of evidence mean, and β2 reflects the influence of local winners (frequent-winner bias). The number of local wins for each option ranges between 0 and 8, and is the number of times that the momentary evidence is stronger for that option. For example, consider a trial where the evidence values were Left: [50 55 56 48 80 45 30 50], Right: [55 48 90 34 70 50 50 70]. Here, there would be 3 local wins for the left option and 5 local wins for the right option.
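
The local-wins count from the example above can be reproduced directly (a sketch; ties are treated here as awarding no win to either side, which is an assumption):

```python
import numpy as np

left = np.array([50, 55, 56, 48, 80, 45, 30, 50])
right = np.array([55, 48, 90, 34, 70, 50, 50, 70])
local_wins_left = int(np.sum(left > right))    # -> 3
local_wins_right = int(np.sum(right > left))   # -> 5
```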

To control for possible lapse effects induced by ketamine, where the animal responded randomly regardless of the trial difficulty, the behavioural models described above were extended to include an extra ‘lapse parameter’, Y0. The purpose of this parameter was to quantify the frequency of lapses, and to isolate the effect of lapsing from our other analyses of interest (i.e. the effect of ketamine on PVB index). In other words, lapse rate refers to the asymptote error rate at the limit of strong evidence. Equations 4-6 were extended as follows:

\ln\left(\frac{P_L - Y_0}{1 - P_L - Y_0}\right) = \beta_0 + \sum_{n=1}^{8} \beta_n \left(L_n - R_n\right) (8)
\ln\left(\frac{P_L - Y_0}{1 - P_L - Y_0}\right) = \beta_0 + \beta_1\left(\mathrm{mean}(L) - \mathrm{mean}(R)\right) + \beta_2\left(\mathrm{std}(L) - \mathrm{std}(R)\right) (9)
\ln\left(\frac{P_L - Y_0}{1 - P_L - Y_0}\right) = \beta_0 + \beta_1\,\mathrm{mean}(L) + \beta_2\,\mathrm{std}(L) + \beta_3\,\mathrm{Max}(L) + \beta_4\,\mathrm{Min}(L) + \beta_5 L_1 + \beta_6 L_8 + \beta_7\,\mathrm{mean}(R) + \beta_8\,\mathrm{std}(R) + \beta_9\,\mathrm{Max}(R) + \beta_{10}\,\mathrm{Min}(R) + \beta_{11} R_1 + \beta_{12} R_8 (10)

The models including a lapse term (Equations 8-10) were fitted by minimising the following L2-regularised negative log-likelihood cost function (using the fminsearch algorithm in MATLAB):

-\sum_i \left[\mathbb{1}_i \log\left(P(x_i)\right) + \left(1 - \mathbb{1}_i\right)\log\left(1 - P(x_i)\right)\right] + \lambda\left(\sum_{j=0}^{M} \beta_j^2 + Y_0^2\right) (11)

where the sum runs over trials i. \mathbb{1}_i = 1 if the left option was chosen on trial i, and 0 otherwise. λ is an L2 regularisation constant, which was set to 0.01. Bootstrapping was used to generate error estimates for the parameters of these models (10,000 iterations). As our analyses demonstrate that the animals very rarely lapse when administered saline, we did not deem it necessary to apply the lapsing models to the standard session experiment (i.e. Figures 2, 3, 4, 5, 6).
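
A minimal sketch of this lapse-extended fit for Equation 9, mirroring MATLAB’s fminsearch with scipy’s Nelder-Mead (toy data; variable names hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def choice_prob(params, dmean, dstd):
    beta0, beta1, beta2, y0 = params
    y0 = np.clip(y0, 0.0, 0.5)                  # keep lapse rate sensible
    p_core = 1 / (1 + np.exp(-(beta0 + beta1 * dmean + beta2 * dstd)))
    # Equation 9 rearranged: lapses compress asymptotes to [y0, 1 - y0]
    return y0 + (1 - 2 * y0) * p_core

def cost(params, dmean, dstd, chose_left, lam=0.01):
    p = np.clip(choice_prob(params, dmean, dstd), 1e-9, 1 - 1e-9)
    nll = -np.sum(chose_left * np.log(p) + (1 - chose_left) * np.log(1 - p))
    return nll + lam * np.sum(np.asarray(params) ** 2)  # L2 term (Eq. 11)

rng = np.random.default_rng(1)
dmean, dstd = rng.normal(0, 5, 400), rng.normal(0, 3, 400)  # toy regressors
chose_left = (rng.random(400) < 0.5).astype(int)            # toy choices
fit = minimize(cost, x0=[0.0, 0.1, 0.05, 0.02],
               args=(dmean, dstd, chose_left), method='Nelder-Mead')
```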

To visualise the influence of lapsing upon the psychometric functions, and to allow a comparison between the monkey behaviour and circuit model performance, we extended Equation 2:

P_{HSD}(x_{HSD}) = 0.5 + 0.5\left(1 - 2Y_0\right)\mathrm{sign}(x_{HSD} + \delta)\left(1 - \exp\left(-\left(\frac{|x_{HSD} + \delta|}{\alpha}\right)^{\beta}\right)\right) (12)

Here, Y_0 was fixed to the lapse rate calculated from the relevant monkey’s behavioural data.

The goodness-of-fit of various regression models with combinations of the predictors in the full model (Equation 6) were compared using a 10-fold cross-validation procedure (Supplementary files 1–4). Trials were initially divided into 10 groups. Data from 9 of the groups were used to train each regression model and calculate regression coefficients. The likelihood of the subjects’ choices in the left-out group (testing group), given the regression coefficients, could then be determined. The log-likelihood was then summed across these left-out trials. This process was repeated so that each of the 10 groups acted as the testing group. The whole cross-validation procedure was performed 100 times, and the average log-likelihood values were taken.
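
A sketch of this cross-validated log-likelihood comparison (assuming a design matrix `X` with one column per regressor and binary choices `y`; a large `C` makes the sklearn fit effectively unregularised):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def cv_log_likelihood(X, y, n_splits=10, n_repeats=100, seed=0):
    totals = []
    for rep in range(n_repeats):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed + rep)
        ll = 0.0
        for train, test in kf.split(X):
            clf = LogisticRegression(C=1e6, max_iter=1000)
            clf.fit(X[train], y[train])
            p = np.clip(clf.predict_proba(X[test])[:, 1], 1e-9, 1 - 1e-9)
            ll += np.sum(y[test] * np.log(p) + (1 - y[test]) * np.log(1 - p))
        totals.append(ll)
    return np.mean(totals)   # average summed log-likelihood over repeats
```

Models with higher cross-validated log-likelihood explain the held-out choices better; differences between models give values of the kind reported in the supplementary files.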

To initially explore the time course of drug effects on decision-making, we plotted choice accuracy (combined across ‘Regular’, ‘Half-Half’ and ‘Narrow-Broad’ trials) relative to drug administration (Figure 8A). Trials were binned relative to the time of injection. Within each session, choice accuracy was estimated at every minute, using a 6 min window around the bin centre. Accuracy was then averaged across sessions. To further probe the influence of drug administration on decision-making, we defined an analysis window based upon the time course of behavioural effects. All trials before the time of injection were classified as ‘pre-drug’. All trials beginning 5–30 min after injection were defined as ‘on-drug’ trials. These trials were then analysed using the same methods as described for the Standard sessions.

To quantify the effect of ketamine administration on the PVB index (Figure 8F, Figure 8—figure supplement 1C,H), we performed a permutation test. Trials collected during ketamine administration were compared with those collected during saline administration. The test statistic was the difference between the PVB indices in the ketamine and saline conditions. For each permutation, trials from the two sets of data were pooled, before two shuffled sets with the same numbers of trials as the original ketamine and saline data were extracted. The PVB index was then computed in each permuted set, and the difference between the two PVB indices calculated. These difference measures were used to build a null distribution with 1,000,000 entries. The difference measure from the true data was compared with the null distribution to calculate a p-value. For the models including a lapse term (Figure 8—figure supplement 2), the same test was performed with 10,000 permutations.
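
A sketch of this permutation test (here `pvb_index` is a placeholder callable computing the index from a set of trials, e.g. via the regression sketch above; the two-sided p-value is an assumption):

```python
import numpy as np

def permutation_p_value(ket_trials, sal_trials, pvb_index,
                        n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = pvb_index(ket_trials) - pvb_index(sal_trials)
    pooled = np.concatenate([ket_trials, sal_trials])
    n_ket = len(ket_trials)
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(pooled)        # shuffle condition labels
        null[k] = pvb_index(perm[:n_ket]) - pvb_index(perm[n_ket:])
    return np.mean(np.abs(null) >= np.abs(observed))
```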

We later revisited the time course of drug effects by running our regression analyses at each of the binned windows described above (Figure 8—figure supplement 3). To identify the time window in which a parameter differed between ketamine and saline conditions, we used a cluster-based permutation test (Nichols and Holmes, 2002; Cavanagh et al., 2018; Cavanagh et al., 2016). These tests allowed us to correct for multiple comparisons while assessing the significance of time series data. The difference in the parameter of interest (PVB index) was calculated in the true data at each timepoint. All consecutive timepoints at which this statistic exceeded a threshold (|PVB_{\mathrm{saline}} - PVB_{\mathrm{ketamine}}| > 0.15) were designated a ‘cluster’. The sizes of the clusters were compared to a null distribution constructed using a permutation test. The drug administered (ketamine or saline) in each session was randomly permuted 10,000 times and the cluster analysis was repeated for each permutation. The size of the largest cluster for each permutation was entered into the null distribution. A true cluster was significant at the p < 0.05 level if its length exceeded the 95th percentile of the null distribution.
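
A compact sketch of the cluster statistic (here `pvb_diff(labels)` is a placeholder returning |PVB_saline - PVB_ketamine| per time bin given session drug labels; for simplicity only the largest cluster is tracked, which is the quantity entering the null distribution):

```python
import numpy as np

def largest_cluster(stat, threshold=0.15):
    # Longest run of consecutive time bins exceeding the threshold
    best = run = 0
    for above in (stat > threshold):
        run = run + 1 if above else 0
        best = max(best, run)
    return best

def cluster_p_value(labels, pvb_diff, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    true_cluster = largest_cluster(pvb_diff(labels))
    null = np.array([largest_cluster(pvb_diff(rng.permutation(labels)))
                     for _ in range(n_perm)])
    return np.mean(null >= true_cluster)
```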

Spiking circuit model

A biophysically-based spiking circuit model was used to replicate decision-making dynamics in a local association cortical microcircuit. The model was based on Wang, 2002, with minor modifications from a previous study (Lam, 2017). The current model had one additional change in the input representation of the stimulus, described in detail below.

The circuit model consisted of N_E = 1600 excitatory pyramidal neurons and N_I = 400 inhibitory interneurons, all simulated as leaky integrate-and-fire neurons. All neurons were recurrently connected, with NMDA and AMPA conductances mediating excitatory connections, and GABA_A conductances mediating inhibitory connections. All neurons also received background inputs, while selective groups of excitatory neurons (see below) received stimulus inputs. Both background and stimulus inputs were mediated by AMPA conductances driven by Poisson spike trains.

Within the population of excitatory neurons were two non-overlapping groups, each of size N_{E,G} = 240. Neurons in the two groups received separate inputs reflecting the left and right stimulus streams. Neurons in the same group connected preferentially to each other (with a multiplicative factor w_+ > 1 applied to the connection strength), allowing integration of the stimulus input. The connection strength to any other excitatory neuron was reduced by a factor w_- < 1 in a manner which preserved the total connection strength. Due to lateral inhibition mediated by the interneurons, the excitatory neurons in the two groups competed with each other. Inhibitory neurons, as well as excitatory neurons outside the two groups, were insensitive to the presented stimuli and were non-selective toward either choice or the respective neuron groups.

Momentary bar-stimulus evidence was modelled as Poisson inputs (from an upstream sensory area) to the two groups of excitatory neurons (Figure 5A). The mean rate of the Poisson input to each group, μ, scaled linearly with the corresponding stimulus evidence:

\mu = \mu_0 + \mu'(h - 50) (13)

where h \in [0, 100] represented the momentary stimulus evidence, equal to the bar height in ‘ChooseTall’ trials and 100 minus the bar height in ‘ChooseShort’ trials. \mu_0 = 30 Hz was the input strength when h = 50, and \mu' = 1 Hz. For simplicity, we assumed each bar stimulus lasted 250 ms, rather than 200 ms followed by a 50 ms inter-stimulus interval as in the experiment.
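
A sketch of the input mapping of Equation 13 with Poisson spike generation for one 250 ms bar sample (parameter values from the text; clipping negative rates to zero is an added assumption):

```python
import numpy as np

mu0, mu_prime = 30.0, 1.0        # Hz; Hz per unit of evidence (Equation 13)
dt, duration = 0.02e-3, 0.25     # simulation step (s); bar duration (s)

def poisson_input_spikes(h, rng):
    rate = max(0.0, mu0 + mu_prime * (h - 50.0))   # Equation 13, clipped
    n_steps = int(round(duration / dt))
    return rng.random(n_steps) < rate * dt          # Boolean spike train

rng = np.random.default_rng(0)
spikes = poisson_input_spikes(h=65, rng=rng)        # one bar sample, h = 65
```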

The circuit model simulation outputs spike data for the two excitatory populations, which are then converted to population activity with a 0.001 s time-step, smoothed via a causal exponential filter. In particular, for each spike of a given neuron, histogram bins at times before the spike receive no weight, while the bin at time \Delta t after the spike receives a weight of \frac{1}{\tau_{\mathrm{filter}}}\exp\left(-\frac{\Delta t}{\tau_{\mathrm{filter}}}\right), where \tau_{\mathrm{filter}} = 20 ms.
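
A sketch of this causal smoothing (assuming `spike_counts` holds the population spike count per 1 ms bin; function and argument names are hypothetical):

```python
import numpy as np

def smooth_population_rate(spike_counts, n_neurons, dt=0.001, tau=0.020):
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau) / tau      # causal: zero weight before a spike
    smoothed = np.convolve(spike_counts, kernel, mode='full')
    return smoothed[:len(spike_counts)] / n_neurons   # mean rate in Hz
```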

From the population activity of the two excitatory populations, a choice is selected 2 s after stimulus offset, based on the population with the higher activity. Stimulus inputs generally drive a categorical, winner-take-all competition, such that the winning population ramps up its activity to a high attractor state (>30 Hz, compared with a baseline firing rate of approximately 1.5 Hz), while suppressing the activity of the other population below baseline via lateral inhibition (Figure 5B). It is also possible that neither population reaches the high-activity state; both populations then remain at the spontaneous state with similarly low activities, such that the decision readout is effectively random.

In addition to the control model, three perturbed spiking circuit models were considered (Murray et al., 2014; Lam, 2017): lowered E/I balance, elevated E/I balance, and sensory deficit. E/I perturbations were implemented through hypofunction of NMDA-Rs (Figure 7A), as this is a leading hypothesis in the pathophysiology of schizophrenia (Nakazawa et al., 2012; Kehrer et al., 2008; Lisman et al., 2008). NMDA-R antagonists such as ketamine also provide a leading pharmacological model of schizophrenia (Krystal et al., 1994; Krystal et al., 2003). NMDA-R hypofunction on excitatory neurons (reduced G_{EE}) lowered the E/I ratio, whereas NMDA-R hypofunction on interneurons (reduced G_{EI}) elevated the E/I ratio through disinhibition (Lam, 2017). Sensory deficit was implemented as a weakened scaling of external inputs to stimulus evidence, that is, a reduced \mu'. For the exact parameters, the lowered E/I model reduced G_{EE} by 1.3125%, the elevated E/I model reduced G_{EI} by 2.625%, and the sensory deficit model had a sensory deficit of 20% (such that \mu' was reduced by 20%) (Figure 7, Figure 7—figure supplement 1). The G_{EE} reduction was chosen as the perturbation strength that best fitted the effect of ketamine on the monkeys’ behaviour (Figure 8—figure supplements 4, 5). The G_{EI} reduction and sensory deficit parameters were chosen to match the reduction of the mean evidence regression coefficient in the G_{EE} perturbation (Figure 7—figure supplements 2, 4).

The control circuit model completed 94,000 ‘Regular’ trials, in which both streams were narrow in 25% of trials, both streams were broad in 25% of trials, and one stream was narrow and one broad in 50% of trials (Figure 5, Figure 5—figure supplements 1 and 2). All trials were generated in the same way as in the standard session experiments. The control model also completed 47,000 standard session Narrow-Broad trials. To evaluate the effect of circuit perturbations, the control model, the lowered E/I model, the elevated E/I model, and the sensory deficit model all completed an identical set of 40,000 ‘Regular’ trials with the same narrow/broad proportions (Figure 7, Figure 7—figure supplement 1). The same permutation test described earlier for comparing the PVB index between ketamine and saline conditions was used to quantify whether the perturbed circuit models had different PVB indices relative to the control model (Figure 7H).

Testing the versatility of model predictions

To examine the versatility of the model predictions for the effect of perturbations on the pro-variance bias, we parametrically reduced both G_{EE} and G_{EI} concurrently, by {0%, 0.4375%, 0.875%, 1.3125%, 1.75%, 2.1875%, 2.625%} for G_{EE} and {0%, 0.875%, 1.75%, 2.625%, 3.5%, 4.375%, 5.25%} for G_{EI} (Figure 7—figure supplements 2, 3). In addition, we parametrically varied the sensory deficit over {0%, 5%, 10%, 15%, 20%, 25%, 30%, 35%, 40%, 45%, 50%} (Figure 7—figure supplement 4). 12,000 ‘Regular’ trials were completed for each condition in the parameter scans, with the same distribution of narrow/broad streams as in the four main circuit models.

The effect of various perturbations to the circuit model was compared to the ketamine effect on the choice behaviour of the two monkeys, using coefficients from the regression model with the left-right differences in mean evidence and evidence standard deviation as regressors (Equation 5). In particular, for each perturbation condition, we computed the relative difference in the mean evidence regression coefficient between the perturbed circuit model and the control model, and the relative difference in the evidence standard deviation regression coefficient between the perturbed circuit model and the control model. Similarly, the relative differences in the two regression coefficients between the monkey data under ketamine versus saline injection were computed (with lapse rate accounted for). These alterations were mapped to the two-dimensional space of relative coefficient differences for mean evidence and evidence standard deviation, and the model perturbations were compared with the ketamine effect on monkey choice behaviour using cosine similarity (CS) and Euclidean distance (ED) (Figure 8—figure supplements 4 and 5):

CS = \frac{\delta\beta_{\mathrm{mean}}^{\mathrm{monkey}}\,\delta\beta_{\mathrm{mean}}^{\mathrm{model}} + \delta\beta_{\mathrm{std}}^{\mathrm{monkey}}\,\delta\beta_{\mathrm{std}}^{\mathrm{model}}}{\sqrt{\left(\delta\beta_{\mathrm{mean}}^{\mathrm{monkey}}\right)^2 + \left(\delta\beta_{\mathrm{std}}^{\mathrm{monkey}}\right)^2}\sqrt{\left(\delta\beta_{\mathrm{mean}}^{\mathrm{model}}\right)^2 + \left(\delta\beta_{\mathrm{std}}^{\mathrm{model}}\right)^2}} (14)
ED = \sqrt{\left(\delta\beta_{\mathrm{mean}}^{\mathrm{monkey}} - \delta\beta_{\mathrm{mean}}^{\mathrm{model}}\right)^2 + \left(\delta\beta_{\mathrm{std}}^{\mathrm{monkey}} - \delta\beta_{\mathrm{std}}^{\mathrm{model}}\right)^2} (15)
\delta\beta_{\mathrm{mean}}^{\mathrm{monkey}} = \frac{\beta_{\mathrm{mean}}^{\mathrm{monkey,\,ketamine}} - \beta_{\mathrm{mean}}^{\mathrm{monkey,\,saline}}}{\beta_{\mathrm{mean}}^{\mathrm{monkey,\,saline}}}, \quad \delta\beta_{\mathrm{mean}}^{\mathrm{model}} = \frac{\beta_{\mathrm{mean}}^{\mathrm{model,\,pert.}} - \beta_{\mathrm{mean}}^{\mathrm{model,\,control}}}{\beta_{\mathrm{mean}}^{\mathrm{model,\,control}}}, \quad \delta\beta_{\mathrm{std}}^{\mathrm{monkey}} = \frac{\beta_{\mathrm{std}}^{\mathrm{monkey,\,ketamine}} - \beta_{\mathrm{std}}^{\mathrm{monkey,\,saline}}}{\beta_{\mathrm{std}}^{\mathrm{monkey,\,saline}}}, \quad \delta\beta_{\mathrm{std}}^{\mathrm{model}} = \frac{\beta_{\mathrm{std}}^{\mathrm{model,\,pert.}} - \beta_{\mathrm{std}}^{\mathrm{model,\,control}}}{\beta_{\mathrm{std}}^{\mathrm{model,\,control}}}

where the subscript denoted the regression coefficient (mean evidence or evidence standard deviation), and the superscript denoted the data entering the regression analysis (monkey under ketamine injection, monkey under saline injection, control circuit model, or the model with the perturbation condition of interest). A higher cosine similarity (and lower Euclidean distance) meant that the direction and relative extent of the alterations to the two regression coefficients were more similar between the circuit-model perturbation and the monkey data.
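
A sketch of Equations 14-15 on toy coefficient values (all numbers illustrative):

```python
import numpy as np

def rel_change(pert, ctrl):
    return (pert - ctrl) / ctrl

# Toy regression coefficients (mean evidence, evidence SD)
b_mean_sal, b_mean_ket = 0.20, 0.12
b_std_sal, b_std_ket = 0.05, 0.06
b_mean_ctrl, b_mean_pert = 0.22, 0.14
b_std_ctrl, b_std_pert = 0.05, 0.055

d_monkey = np.array([rel_change(b_mean_ket, b_mean_sal),
                     rel_change(b_std_ket, b_std_sal)])
d_model = np.array([rel_change(b_mean_pert, b_mean_ctrl),
                    rel_change(b_std_pert, b_std_ctrl)])

cs = d_monkey @ d_model / (np.linalg.norm(d_monkey) * np.linalg.norm(d_model))
ed = np.linalg.norm(d_monkey - d_model)
```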

In contrast to the two measures above, which evaluate the effect of a perturbation (e.g. by ketamine), the Kullback–Leibler (KL) divergence allows a direct comparison between monkey behaviour under saline or ketamine injection and the various model conditions. More explicitly, for each monkey’s data collected under ketamine or saline, and for each model condition in the parameter space, we computed the KL divergence of the choice behaviour from the model to the monkey data:

D_{KL} = \sum_{x_{HSD}} P_{\mathrm{monkey}}(x_{HSD}) \log\left(\frac{P_{\mathrm{monkey}}(x_{HSD})}{P_{\mathrm{model}}(x_{HSD})}\right) (16)

where the sum over x_{HSD} runs over the range of net evidence strengths in favour of the higher SD option, and P_monkey and P_model are the probabilities that the monkey and the model, respectively, choose the higher SD (broad) option given x_{HSD}.
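
A sketch of Equation 16 as written, assuming `p_monkey` and `p_model` are arrays of choice probabilities per x_HSD bin (the clipping guard is an addition):

```python
import numpy as np

def kl_divergence(p_monkey, p_model, eps=1e-9):
    p = np.clip(p_monkey, eps, 1 - eps)
    q = np.clip(p_model, eps, 1 - eps)
    return np.sum(p * np.log(p / q))   # Equation 16, summed over x_HSD bins
```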

Mean field model

The current spiking circuit model was mathematically reduced to a mean-field model, as outlined in Niyogi and Wong-Lin, 2013, in the same manner as from Wang, 2002 to Wong and Wang, 2006. The mean-field model consisted of two variables (S1, S2), namely the NMDA-R gating variables of the two groups of excitatory neurons representing the integrated evidence for the two choices. The two gating variables evolved according to:

\frac{dS_i}{dt} = -\frac{S_i}{\tau_{\mathrm{NMDA}}} + \left(1 - S_i\right)\gamma r_i (17)

for i = 1, 2. \tau_{\mathrm{NMDA}} = 100 ms and \gamma = 0.641 were the synaptic time constant and saturation factor of NMDA-R. r_1 and r_2 were the firing rates of the two populations, computed quasi-statically from the transfer function based on the total input currents I_1, I_2 (Figure 6D). The input currents

I_1 = \alpha_1 S_1 + \alpha_2 S_2 + \beta_1 r_1 + \beta_2 r_2 + I_1^{\mathrm{ext}} (18)
I_2 = \alpha_1 S_2 + \alpha_2 S_1 + \beta_1 r_2 + \beta_2 r_1 + I_2^{\mathrm{ext}} (19)

arose from the NMDA-Rs of the same population (e.g. \alpha_1 S_1 in Equation 18) and the competing population (e.g. \alpha_2 S_2 in Equation 18), the AMPA-Rs of the same population (e.g. \beta_1 r_1 in Equation 18) and the competing population (e.g. \beta_2 r_2 in Equation 18), and external inputs (e.g. I_1^{\mathrm{ext}} in Equation 18). The contribution of GABA-Rs was also absorbed into \alpha_i and \beta_i to account for lateral inhibition. Using the change of variables x_1 = \alpha_1 S_1 + \alpha_2 S_2 + I_1^{\mathrm{ext}} and x_2 = \alpha_1 S_2 + \alpha_2 S_1 + I_2^{\mathrm{ext}}, the transfer function can be written as

r_1 = \frac{a x_1 - f(x_2) - b}{1 - \exp\left[-d\left(a x_1 - f(x_2) - b\right)\right]} (20)
r_2 = \frac{a x_2 - f(x_1) - b}{1 - \exp\left[-d\left(a x_2 - f(x_1) - b\right)\right]} (21)

where a, b, and d were constants that depended on \beta_1, and f was a function of x_i that depended on \beta_2. We omit the expressions for \alpha_i, \beta_1, a, b, d, f, and I_i^{\mathrm{ext}} for simplicity; see Wong and Wang, 2006 for details. The resulting transfer function (Figure 6D) is such that small inputs below a threshold generate no response, while very large inputs generate a linear response (note that Figure 6D shows r_1 as a function of x_1, with x_2 = 0). This results in an expansive non-linearity between the two limits, allowing strong inputs to drive the system more strongly than weak inputs. Input streams with large variability, which are more likely to contain both very strong and very weak inputs, can thus exploit this asymmetry better than input streams with small variability, resulting in a pro-variance bias (Figure 6).
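
The pro-variance consequence of this expansive non-linearity can be seen in a few lines (constants a, b, d set to values of the kind used in Wong and Wang, 2006, and the coupling term f set to zero, as in Figure 6D; treat the numbers as illustrative):

```python
import numpy as np

def transfer(x, a=270.0, b=108.0, d=0.154):
    # Equation 20 with f(x2) = 0: expansive below, near-linear above
    z = a * x - b
    return z / (1.0 - np.exp(-d * z))

narrow = np.array([0.45, 0.50, 0.55])   # equal mean, low variability
broad = np.array([0.30, 0.50, 0.70])    # equal mean, high variability
print(transfer(narrow).mean())          # ~27.8
print(transfer(broad).mean())           # ~36.3 -> broad stream drives harder
```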

r_i, as a transfer function of I_i, was sensitive to NMDA-R hypofunction through \alpha_1 and \alpha_2 in the first two terms of Equations 18 and 19. Through \alpha_1 and \alpha_2, NMDA-R hypofunction altered the transfer function and thus the expansive non-linearity, thereby altering the pro-variance bias. In addition, parameter changes due to NMDA-R hypofunction (via \alpha_1 and \alpha_2) also altered the attractor dynamics of the circuit model, such that the perturbed circuit had different dynamics and different ranges of S_1 and S_2, producing a second, indirect effect on the pro-variance bias.

The mean-field model completed 94,000 standard session ‘Regular’ trials, in the same manner as the circuit models. We only simulated the control circuit for the mean-field model. Predictions of perturbations from the spiking circuit models generally held for the mean-field model. However, due to detailed distinctions between the dynamics of the spiking circuit model and the mean-field model, perturbation-induced decision deficits arose from different mechanisms in the two sets of models (Lam, 2017). This complicated translation between the two sets of models, so we focused on the control circuit.

Code and data availability

Stimuli generation and data analysis for the experiment were performed in MATLAB. The spiking circuit model was implemented using the Python-based Brian2 neural simulator (Goodman and Brette, 2008), with a simulation time step of 0.02 ms. Further analyses of both experimental and model data were completed using custom-written Python and MATLAB code. Data and analysis scripts to reproduce figures from the paper will be made publicly available for download from an online repository upon publication. Data have been uploaded to Dryad under doi:10.5061/dryad.pnvx0k6k3. Code is available on GitHub at https://github.com/normanlam1217/CavanaghLam2020CodeRepository (copy archived at https://github.com/elifesciences-publications/CavanaghLam2020CodeRepository; Lam, 2020).

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Sean Edward Cavanagh, Email: sean.cavanagh.12@ucl.ac.uk.

John D Murray, Email: john.murray@yale.edu.

Laurence Tudor Hunt, Email: laurence.hunt@psych.ox.ac.uk.

Steven Wayne Kennerley, Email: s.kennerley@ucl.ac.uk.

Tobias H Donner, University Medical Center Hamburg-Eppendorf, Germany.

Michael J Frank, Brown University, United States.

Funding Information

This paper was supported by the following grants:

  • National Institute of Mental Health R01MH112746 to John D Murray.

  • Wellcome 098830/Z/12/Z to Laurence Tudor Hunt.

  • Wellcome 208789/Z/17/Z to Laurence Tudor Hunt.

  • Brain and Behavior Research Foundation to Laurence Tudor Hunt.

  • National Institute for Health Research Oxford Health Biomedical Research Centre to Laurence Tudor Hunt.

  • Middlesex Hospital Medical School General Charitable Trust to Sean Edward Cavanagh.

  • NSERC PGSD2 - 502866 - 2017 to Norman H Lam.

  • Wellcome 096689/Z/11/Z to Steven Wayne Kennerley.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Data curation, Software, Formal analysis, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing.

Conceptualization, Data curation, Software, Formal analysis, Investigation, Visualization, Methodology, Writing - review and editing.

Conceptualization, Resources, Supervision, Funding acquisition, Writing - review and editing.

Conceptualization, Resources, Supervision, Funding acquisition, Writing - review and editing.

Conceptualization, Resources, Supervision, Funding acquisition, Project administration, Writing - review and editing.

Ethics

Animal experimentation: All experimental procedures were approved by the UCL Local Ethical Procedures Committee and the UK Home Office (PPL Number 70/8842), and carried out in accordance with the UK Animals (Scientific Procedures) Act.

Additional files

Supplementary file 1. Difference in log-likelihood of Full regression model (mean, SD, max, min, first, last of evidence values; Equation 6 in Materials and methods) vs reduced model, for each monkey and the circuit model.

Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Column label refers to the removed regressor. Positive values indicate the full regression model performs better. Values depend on the number of completed trials, which differed both between subjects and the circuit model. For both monkeys and the circuit model, mean evidence is clearly the most important driver of choice behaviour, followed by first and last evidence samples which reflects the primacy bias. Finally, evidence standard deviation (SD) has a stronger effect than maximum and minimum evidence samples (Max and Min).

elife-53664-supp1.docx (13.4KB, docx)
Supplementary file 2. Difference in log-likelihood of regression models including either evidence standard deviation (SD) or both maximum and minimum evidence (Max and Min) as regressors, for each monkey and the circuit model.

Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Column label refers to the regressors additional to either SD or Max and Min. Positive values indicate the regression model with SD performs better than that with Max and Min. Values depend on the number of completed trials, which differed both between subjects and the circuit model. Regardless of whether first and last evidence sample regressors are included, the models with standard deviation of evidence have higher log-likelihoods than the models with maximum and minimum evidence samples, indicating a better explanation of the data by standard deviation than by maximum and minimum evidence samples.

elife-53664-supp2.docx (12.9KB, docx)
Supplementary file 3. Increase in log-likelihood of various regression models (regressors in column labels) due to inclusion of evidence standard deviation as a regressor, for each monkey and the circuit model.

Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Values depend on the number of completed trials, which differed both between subjects and the circuit model. Positive values across the table indicates the evidence standard deviation regressor robustly improves model performance for all models examined.

elife-53664-supp3.docx (12.9KB, docx)
Supplementary file 4. Difference in log-likelihood of regression models including either evidence standard deviation (SD) or both maximum and minimum evidence (Max and Min) as regressors, for each monkey with saline or ketamine injection.

Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Column label refers to the regressors additional to either SD or Max and Min. Positive values indicate the regression model with SD performs better than that with Max and Min. Values depend on the number of completed trials, which differed across conditions. Regardless of whether first and last evidence sample regressors are included, the models with standard deviation of evidence have higher log-likelihoods than the models with maximum and minimum evidence samples, indicating a better explanation of the data by standard deviation than by maximum and minimum evidence samples. In particular, under ketamine injection, monkeys did not switch their strategy to primarily use maximum and minimum evidence samples (over standard deviation of evidence) to guide their choice.

elife-53664-supp4.docx (15.2KB, docx)
Transparent reporting form

Data availability

Stimuli generation and data analysis for the experiment were performed in MATLAB. The spiking circuit model was implemented using the Python-based Brian2 neural simulator, with a simulation time step of 0.02 ms. Further analyses of both experimental and model data were completed using custom-written Python and MATLAB code. Data have been uploaded to Dryad under doi:10.5061/dryad.pnvx0k6k3. Code is available on GitHub at https://github.com/normanlam1217/CavanaghLam2020CodeRepository (copy archived at https://github.com/elifesciences-publications/CavanaghLam2020CodeRepository).

The following dataset was generated:

Cavanagh SE, Lam NH, Murray JD, Hunt LT, Kennerley SW. 2020. Data from: A circuit mechanism for decision making biases and NMDA receptor hypofunction. Dryad Digital Repository.

References

  1. Asaad WF, Santhanam N, McClellan S, Freedman DJ. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB. Journal of Neurophysiology. 2013;109:249–260. doi: 10.1152/jn.00527.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Asaad WF, Eskandar EN. A flexible software tool for temporally-precise behavioral control in matlab. Journal of Neuroscience Methods. 2008a;174:245–258. doi: 10.1016/j.jneumeth.2008.07.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Asaad WF, Eskandar EN. Achieving behavioral control with millisecond resolution in a high-level programming environment. Journal of Neuroscience Methods. 2008b;173:235–240. doi: 10.1016/j.jneumeth.2008.06.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Blackman RK, Macdonald AW, Chafee MV. Effects of ketamine on context-processing performance in monkeys: a new animal model of cognitive deficits in schizophrenia. Neuropsychopharmacology. 2013;38:2090–2100. doi: 10.1038/npp.2013.118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Brunton BW, Botvinick MM, Brody CD. Rats and humans can optimally accumulate evidence for decision-making. Science. 2013;340:95–98. doi: 10.1126/science.1233912. [DOI] [PubMed] [Google Scholar]
  6. Butler PD, Silverstein SM, Dakin SC. Visual perception and its impairment in schizophrenia. Biological Psychiatry. 2008;64:40–47. doi: 10.1016/j.biopsych.2008.03.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Cavanagh SE, Wallis JD, Kennerley SW, Hunt LT. Autocorrelation structure at rest predicts value correlates of single neurons during reward-guided choice. eLife. 2016;5:e18937. doi: 10.7554/eLife.18937. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Cavanagh SE, Towers JP, Wallis JD, Hunt LT, Kennerley SW. Reconciling persistent and dynamic hypotheses of working memory coding in prefrontal cortex. Nature Communications. 2018;9:3498. doi: 10.1038/s41467-018-05873-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Cheadle S, Wyart V, Tsetsos K, Myers N, de Gardelle V, Herce Castañón S, Summerfield C. Adaptive gain control during human perceptual choice. Neuron. 2014;81:1429–1441. doi: 10.1016/j.neuron.2014.01.020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Chen Y, Nakayama K, Levy D, Matthysse S, Holzman P. Processing of global, but not local, motion direction is deficient in schizophrenia. Schizophrenia Research. 2003;61:215–227. doi: 10.1016/S0920-9964(02)00222-0. [DOI] [PubMed] [Google Scholar]
  11. Chen Y, Levy DL, Sheremata S, Holzman PS. Compromised late-stage motion processing in schizophrenia. Biological Psychiatry. 2004;55:834–841. doi: 10.1016/j.biopsych.2003.12.024. [DOI] [PubMed] [Google Scholar]
  12. Chen Y, Bidwell LC, Holzman PS. Visual motion integration in schizophrenia patients, their first-degree relatives, and patients with bipolar disorder. Schizophrenia Research. 2005;74:271–281. doi: 10.1016/j.schres.2004.04.002. [DOI] [PubMed] [Google Scholar]
  13. Chen X, Shu S, Bayliss DA. HCN1 channel subunits are a molecular substrate for hypnotic actions of ketamine. Journal of Neuroscience. 2009;29:600–609. doi: 10.1523/JNEUROSCI.3481-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Ermakova AO, Gileadi N, Knolle F, Justicia A, Anderson R, Fletcher PC, Moutoussis M, Murray GK. Cost evaluation during Decision-Making in patients at early stages of psychosis. Computational Psychiatry. 2019;3:18–39. doi: 10.1162/cpsy_a_00020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Evans S, Almahdi B, Sultan P, Sohanpal I, Brandner B, Collier T, Shergill SS, Cregg R, Averbeck BB. Performance on a probabilistic inference task in healthy subjects receiving ketamine compared with patients with schizophrenia. Journal of Psychopharmacology. 2012;26:1211–1217. doi: 10.1177/0269881111435252. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Frohlich J, Van Horn JD. Reviewing the ketamine model for schizophrenia. Journal of Psychopharmacology. 2014;28:287–302. doi: 10.1177/0269881113512909. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Prat-Ortega G, Wimmer K, Roxin A, de la Rocha J. Flexible categorization in perceptual decision making. bioRxiv. 2020 doi: 10.1101/2020.05.23.110460. [DOI] [PMC free article] [PubMed]
  18. Gold JI, Shadlen MN. The neural basis of decision making. Annual Review of Neuroscience. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038. [DOI] [PubMed] [Google Scholar]
  19. Goodman D, Brette R. Brian: a simulator for spiking neural networks in Python. Frontiers in Neuroinformatics. 2008;2:5. doi: 10.3389/neuro.11.005.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Hanks TD, Kopec CD, Brunton BW, Duan CA, Erlich JC, Brody CD. Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature. 2015;520:220–223. doi: 10.1038/nature14066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Hikosaka O, Wurtz RH. Visual and oculomotor functions of monkey substantia nigra pars reticulata. III. Memory-contingent visual and saccade responses. Journal of Neurophysiology. 1983;49:1268–1284. doi: 10.1152/jn.1983.49.5.1268. [DOI] [PubMed] [Google Scholar]
  22. Homayoun H, Moghaddam B. NMDA receptor hypofunction produces opposite effects on prefrontal cortex interneurons and pyramidal neurons. Journal of Neuroscience. 2007;27:11496–11500. doi: 10.1523/JNEUROSCI.2213-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Huq SF, Garety PA, Hemsley DR. Probabilistic judgements in deluded and non-deluded subjects. The Quarterly Journal of Experimental Psychology Section A. 1988;40:801–812. doi: 10.1080/14640748808402300. [DOI] [PubMed] [Google Scholar]
  24. Huys QJ, Maia TV, Frank MJ. Computational psychiatry as a bridge from neuroscience to clinical applications. Nature Neuroscience. 2016;19:404–413. doi: 10.1038/nn.4238. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Jackson ME, Homayoun H, Moghaddam B. NMDA receptor hypofunction produces concomitant firing rate potentiation and burst activity reduction in the prefrontal cortex. PNAS. 2004;101:8467–8472. doi: 10.1073/pnas.0308455101. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Katz LN, Yates JL, Pillow JW, Huk AC. Dissociated functional significance of decision-related activity in the primate dorsal stream. Nature. 2016;535:285–288. doi: 10.1038/nature18617. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Kehrer C, Maziashvili N, Dugladze T, Gloveli T. Altered Excitatory-Inhibitory balance in the NMDA-Hypofunction model of schizophrenia. Frontiers in Molecular Neuroscience. 2008;1:6. doi: 10.3389/neuro.02.006.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Kiani R, Hanks TD, Shadlen MN. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. Journal of Neuroscience. 2008;28:3017–3029. doi: 10.1523/JNEUROSCI.4761-07.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Krystal JH, Karper LP, Seibyl JP, Freeman GK, Delaney R, Bremner JD, Heninger GR, Bowers MB, Charney DS. Subanesthetic effects of the noncompetitive NMDA antagonist, ketamine, in humans. Psychotomimetic, perceptual, cognitive, and neuroendocrine responses. Archives of General Psychiatry. 1994;51:199–214. doi: 10.1001/archpsyc.1994.03950030035004. [DOI] [PubMed] [Google Scholar]
  30. Krystal JH, D'Souza DC, Mathalon D, Perry E, Belger A, Hoffman R. NMDA receptor antagonist effects, cortical glutamatergic function, and schizophrenia: toward a paradigm shift in medication development. Psychopharmacology. 2003;169:215–233. doi: 10.1007/s00213-003-1582-z. [DOI] [PubMed] [Google Scholar]
  31. Lam NH. Effects of altered excitation-inhibition balance on decision making in a cortical circuit model. bioRxiv. 2017 doi: 10.1101/100347. [DOI] [PMC free article] [PubMed]
  32. Lam NH. CavanaghLam2020CodeRepository. GitHub. 2020. a0a12bc. https://github.com/normanlam1217/CavanaghLam2020CodeRepository
  33. Lee E, Lee J, Kim E. Excitation/Inhibition imbalance in animal models of autism spectrum disorders. Biological Psychiatry. 2017;81:838–847. doi: 10.1016/j.biopsych.2016.05.011. [DOI] [PubMed] [Google Scholar]
  34. Lewis DA, Curley AA, Glausier JR, Volk DW. Cortical parvalbumin interneurons and cognitive dysfunction in schizophrenia. Trends in Neurosciences. 2012;35:57–67. doi: 10.1016/j.tins.2011.10.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Lim S, Goldman MS. Balanced cortical microcircuitry for maintaining information in working memory. Nature Neuroscience. 2013;16:1306–1314. doi: 10.1038/nn.3492. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Lisman JE, Coyle JT, Green RW, Javitt DC, Benes FM, Heckers S, Grace AA. Circuit-based framework for understanding neurotransmitter and risk gene interactions in schizophrenia. Trends in Neurosciences. 2008;31:234–242. doi: 10.1016/j.tins.2008.02.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Ma L, Skoblenick K, Seamans JK, Everling S. Ketamine-Induced changes in the signal and noise of rule representation in working memory by lateral prefrontal neurons. The Journal of Neuroscience. 2015;35:11612–11622. doi: 10.1523/JNEUROSCI.1839-15.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Ma L, Skoblenick K, Johnston K, Everling S. Ketamine alters lateral prefrontal oscillations in a Rule-Based working memory task. The Journal of Neuroscience. 2018;38:2482–2494. doi: 10.1523/JNEUROSCI.2659-17.2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Malhotra AK, Pinals DA, Weingartner H, Sirocco K, Missar CD, Pickar D, Breier A. NMDA receptor function and human cognition: the effects of ketamine in healthy volunteers. Neuropsychopharmacology. 1996;14:301–307. doi: 10.1016/0893-133X(95)00137-3. [DOI] [PubMed] [Google Scholar]
  40. Marín O. Interneuron dysfunction in psychiatric disorders. Nature Reviews Neuroscience. 2012;13:107–120. doi: 10.1038/nrn3155. [DOI] [PubMed] [Google Scholar]
  41. Moaddel R, Abdrakhmanova G, Kozak J, Jozwiak K, Toll L, Jimenez L, Rosenberg A, Tran T, Xiao Y, Zarate CA, Wainer IW. Sub-anesthetic concentrations of (R,S)-ketamine metabolites inhibit acetylcholine-evoked currents in α7 nicotinic acetylcholine receptors. European Journal of Pharmacology. 2013;698:228–234. doi: 10.1016/j.ejphar.2012.11.023. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Moran RJ, Jones MW, Blockeel AJ, Adams RA, Stephan KE, Friston KJ. Losing control under ketamine: suppressed cortico-hippocampal drive following acute ketamine in rats. Neuropsychopharmacology. 2015;40:268–277. doi: 10.1038/npp.2014.184. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Morcos AS, Harvey CD. History-dependent variability in population dynamics during evidence accumulation in cortex. Nature Neuroscience. 2016;19:1672–1681. doi: 10.1038/nn.4403. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Murray JD, Anticevic A, Gancsos M, Ichinose M, Corlett PR, Krystal JH, Wang XJ. Linking microcircuit dysfunction to cognitive impairment: effects of disinhibition associated with schizophrenia in a cortical working memory model. Cerebral Cortex. 2014;24:859–872. doi: 10.1093/cercor/bhs370. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Murray JD, Jaramillo J, Wang XJ. Working memory and Decision-Making in a frontoparietal circuit model. The Journal of Neuroscience. 2017;37:12167–12186. doi: 10.1523/JNEUROSCI.0343-17.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Najafi F, Elsayed GF, Cao R, Pnevmatikakis E, Latham PE, Cunningham JP, Churchland AK. Excitatory and inhibitory subnetworks are equally selective during Decision-Making and emerge simultaneously during learning. Neuron. 2020;105:165–179. doi: 10.1016/j.neuron.2019.09.045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Nakazawa K, Zsiros V, Jiang Z, Nakao K, Kolata S, Zhang S, Belforte JE. GABAergic interneuron origin of schizophrenia pathophysiology. Neuropharmacology. 2012;62:1574–1583. doi: 10.1016/j.neuropharm.2011.01.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Nichols TE, Holmes AP. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping. 2002;15:1–25. doi: 10.1002/hbm.1058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Nienborg H, Cumming BG. Decision-related activity in sensory neurons reflects more than a neuron's causal effect. Nature. 2009;459:89–92. doi: 10.1038/nature07821. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Niyogi RK, Wong-Lin K. Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making. PLOS Computational Biology. 2013;9:e1003099. doi: 10.1371/journal.pcbi.1003099. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Olney JW, Farber NB. Glutamate receptor dysfunction and schizophrenia. Archives of General Psychiatry. 1995;52:998–1007. doi: 10.1001/archpsyc.1995.03950240016004. [DOI] [PubMed] [Google Scholar]
  52. Roitman JD, Shadlen MN. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. The Journal of Neuroscience. 2002;22:9475–9489. doi: 10.1523/JNEUROSCI.22-21-09475.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Ross RM, McKay R, Coltheart M, Langdon R. Jumping to conclusions about the beads task? A Meta-analysis of delusional ideation and Data-Gathering. Schizophrenia Bulletin. 2015;41:1183–1191. doi: 10.1093/schbul/sbu187. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Rotaru DC, Yoshino H, Lewis DA, Ermentrout GB, Gonzalez-Burgos G. Glutamate receptor subtypes mediating synaptic activation of prefrontal cortex neurons: relevance for schizophrenia. Journal of Neuroscience. 2011;31:142–156. doi: 10.1523/JNEUROSCI.1970-10.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Shen K, Kalwarowsky S, Clarence W, Brunamonti E, Paré M. Beneficial effects of the NMDA antagonist ketamine on decision processes in visual search. Journal of Neuroscience. 2010;30:9947–9953. doi: 10.1523/JNEUROSCI.6317-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Skoblenick KJ, Womelsdorf T, Everling S. Ketamine alters Outcome-Related local field potentials in monkey prefrontal cortex. Cerebral Cortex. 2016;26:2743–2752. doi: 10.1093/cercor/bhv128. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Skoblenick K, Everling S. NMDA antagonist ketamine reduces task selectivity in macaque dorsolateral prefrontal neurons and impairs performance of randomly interleaved prosaccades and antisaccades. Journal of Neuroscience. 2012;32:12018–12027. doi: 10.1523/JNEUROSCI.1510-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Starc M, Murray JD, Santamauro N, Savic A, Diehl C, Cho YT, Srihari V, Morgan PT, Krystal JH, Wang XJ, Repovs G, Anticevic A. Schizophrenia is associated with a pattern of spatial working memory deficits consistent with cortical disinhibition. Schizophrenia Research. 2017;181:107–116. doi: 10.1016/j.schres.2016.10.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Stine GM, Zylberberg A, Ditterich J, Shadlen MN. Differentiating between integration and non-integration strategies in perceptual decision making. eLife. 2020;9:e55365. doi: 10.7554/eLife.55365. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Tsetsos K, Chater N, Usher M. Salience driven value integration explains decision biases and preference reversal. PNAS. 2012;109:9659–9664. doi: 10.1073/pnas.1119569109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Tsetsos K, Moran R, Moreland J, Chater N, Usher M, Summerfield C. Economic irrationality is optimal during noisy decision making. PNAS. 2016;113:3102–3107. doi: 10.1073/pnas.1519157113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Umbricht D, Schmid L, Koller R, Vollenweider FX, Hell D, Javitt DC. Ketamine-induced deficits in auditory and visual context-dependent processing in healthy volunteers: implications for models of cognitive deficits in schizophrenia. Archives of General Psychiatry. 2000;57:1139–1147. doi: 10.1001/archpsyc.57.12.1139. [DOI] [PubMed] [Google Scholar]
  63. Wang XJ. Probabilistic decision making by slow reverberation in cortical circuits. Neuron. 2002;36:955–968. doi: 10.1016/S0896-6273(02)01092-9. [DOI] [PubMed] [Google Scholar]
  64. Wang M, Yang Y, Wang CJ, Gamo NJ, Jin LE, Mazer JA, Morrison JH, Wang XJ, Arnsten AF. NMDA receptors subserve persistent neuronal firing during working memory in dorsolateral prefrontal cortex. Neuron. 2013;77:736–749. doi: 10.1016/j.neuron.2012.12.032. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Wang XJ, Krystal JH. Computational psychiatry. Neuron. 2014;84:638–654. doi: 10.1016/j.neuron.2014.10.018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Waskom ML, Kiani R. Decision making through integration of sensory evidence at prolonged timescales. Current Biology. 2018;28:3850–3856. doi: 10.1016/j.cub.2018.10.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Wimmer K, Nykamp DQ, Constantinidis C, Compte A. Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nature Neuroscience. 2014;17:431–439. doi: 10.1038/nn.3645. [DOI] [PubMed] [Google Scholar]
  68. Wimmer K, Compte A, Roxin A, Peixoto D, Renart A, de la Rocha J. Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT. Nature Communications. 2015;6:6177. doi: 10.1038/ncomms7177. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Wong KF, Huk AC, Shadlen MN, Wang XJ. Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making. Frontiers in Computational Neuroscience. 2007;1:6. doi: 10.3389/neuro.10.006.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Wong KF, Wang XJ. A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience. 2006;26:1314–1328. doi: 10.1523/JNEUROSCI.3733-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Yizhar O, Fenno LE, Prigge M, Schneider F, Davidson TJ, O'Shea DJ, Sohal VS, Goshen I, Finkelstein J, Paz JT, Stehfest K, Fudim R, Ramakrishnan C, Huguenard JR, Hegemann P, Deisseroth K. Neocortical excitation/inhibition balance in information processing and social dysfunction. Nature. 2011;477:171–178. doi: 10.1038/nature10360. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Zanos P, Moaddel R, Morris PJ, Georgiou P, Fischell J, Elmer GI, Alkondon M, Yuan P, Pribut HJ, Singh NS, Dossou KS, Fang Y, Huang XP, Mayo CL, Wainer IW, Albuquerque EX, Thompson SM, Thomas CJ, Zarate CA, Gould TD. NMDAR inhibition-independent antidepressant actions of ketamine metabolites. Nature. 2016;533:481–486. doi: 10.1038/nature17998. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Zick JL, Blackman RK, Crowe DA, Amirikian B, DeNicola AL, Netoff TI, Chafee MV. Blocking NMDAR disrupts spike timing and decouples monkey prefrontal circuits: implications for Activity-Dependent disconnection in schizophrenia. Neuron. 2018;98:1243–1255. doi: 10.1016/j.neuron.2018.05.010. [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision letter

Editor: Tobias H Donner1
Reviewed by: Konstantinos Tsetsos2, Valentin Wyart3

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This study uses a combination of neural circuit modeling with pharmacological intervention and behavioral psychophysics in monkeys to dissect the mechanisms of decision-making. It implicates the N-methyl-D-aspartate (NMDA) receptor in the accumulation of decision evidence, linking NMDA-mediated recurrent excitation of pyramidal neurons to a well-known behavioral phenomenon: a bias to choose options exhibiting larger variations in value. The approach opens up new perspectives for the mechanistic assessment of decision computations in the brain.

Decision letter after peer review:

Thank you for submitting your article "A circuit mechanism for decision making irrationalities and NMDA-R hypofunction: behaviour, modelling and pharmacology" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by Tobias Donner as Reviewing Editor and Michael Frank as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Konstantinos Tsetsos (Reviewer #1); Valentin Wyart (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

While editors and reviewers found your work interesting in principle, all reviewers raised some substantial concerns that would need to be addressed before we can reach a final decision on your paper. The essential revisions are listed below. Indeed, it seems possible that the results of these requested analyses will require a substantial toning-down of several of your claims pertaining to E/I balance, in a way that could undermine the specificity of conclusions and the suitability of your paper for eLife. Even so, we agreed to give you the chance to address the concerns, for which two months should be a realistic time frame.

Summary:

This manuscript reports a computational and pharmacological study in monkeys, into a question of interest to a broad research community: The role of the NMDA receptor in evidence accumulation and decision-making. The authors used a protocol developed and tested in humans by Tsetsos and colleagues, in which subjects compare the average length of two sequences of visual bar stimuli. The monkeys exhibit a so-called “pro-variance bias” (PVB) toward choosing the more variable stream, although the monkey behavior differs from humans in other aspects (see below). The authors show that a neural spiking circuit model of bounded evidence accumulation shows a similar PVB, and that a lowered E/I ratio simultaneously decreases accuracy and increases PVB. Finally, they report that intramuscular injection of ketamine transiently decreases accuracy and increases PVB, as predicted by a lowered E/I ratio. The authors interpret their findings in the context of the previous work on PVB as well as pseudo-psychotic effects of ketamine in human subjects.

Essential revisions:

1) Specificity of the pharmacological claim within the circuit model.

You should show that the ketamine behavioural effects are robustly obtained under the lowered E/I hypothesis (e.g. for various magnitudes of E/I reduction) and, crucially, incompatible with a) sensory deficit, b) elevated E/I, c) concurrent changes in both NMDA receptors. Practically, this means the following.

a) For each hypothesis, model predictions should be shown by varying the relevant model parameter(s) gradually within a range.

b) The similarity between model predictions and behavioural data should always be quantified using a goodness-of-fit metric (currently this is done by eyeballing). Please focus on perturbations which provide a good quantitative fit to the data.

c) Perturbations appear to be implemented in the same fashion as in Lam et al., 2017. There, the authors also changed other parameters besides the relevant synaptic weights, in order to maintain stability in the model dynamics. It is not clear if and how these extra changes could be pharmacologically induced by ketamine. Please clarify this aspect and derive predictions when stability adjustments are not performed.

2) Effect of drug on lapses.

Please test for a ketamine effect (sedation) on lapse rates. The psychometric functions under ketamine indicate a large change in the lapse rate which is currently not taken into account. All descriptive analyses (logistic regressions) and model simulations should take into account lapse rates. Can an increase in lapse rates explain away the changes in the PVB effect, psychometric curves, and kernels?

3) Validity of circuit model.

Currently, the circuit model is presented as a black box. You devote a couple of sentences to describing how the expansive non-linearities in the F-I curve give rise to the pro-variance effect. This part is not very well developed. One way to test whether the non-linearities are indeed crucial for the pro-variance effect the monkeys show is to separately analyse trials with "high" (total sum of both streams high) vs. "low" (total sum of both streams low) evidence and see if the PVB effect changes. Or add the total sum as a regressor and compare the regression weights in the model and in the data. In addition to non-linearities in F-I curves, the attractor dynamics of the circuit model may (or may not) promote the PVB effect. Are these dynamics even necessary to produce the pro-variance effect in the model? And is there any link between signatures of attractor dynamics (e.g. kernel shapes) and the PVB effect in the data? If dynamics were redundant in the model, would this undermine the claim that the PVB can be diagnostic of E-I balance?

This relates to the question concerning the way the PVB is quantified: in the model, how can the PVB index change even if the F-I non-linearity remains unchanged? It thus seems that the PVB index is sensitive to the overall signal-to-noise associated with the model and it is not a pure marker of the pro-variance propensity.

Please clarify what the PVB index stands for.

4) Results for both task framings.

Please present separate results for the two framings, i.e. "select higher" and "select lower" trials, which is interesting from an empirical viewpoint. Also: Have you mis-labelled the "high-variance correct" and "low-variance correct" trials in the "select the lower" conditions? (If not, then the quantification of the PVB may be wrong.)

5) Generalizability of findings to humans.

Reviewers raised doubts about the suggested analogy of monkey and human performance, and the underlying computations: Showing that both humans and monkeys have a PVB is not sufficient to establish a cross-species link. In the human work by Tsetsos et al. (PNAS, 2012, 2016), the temporal weighting of evidence on choice exhibits recency, in sharp contrast to the primacy found here in monkeys. What does this imply in terms of the relationship at a mechanistic level? This point needs to be discussed.

6) Link to schizophrenia.

Reviewers remarked that the link to schizophrenia is very loose: no patients are tested and overall behavioral signatures are different even from healthy human subjects (see point 3). Reviewers agreed that this point should at least be toned down substantially or dropped altogether. This tentative link could be brought up as speculation in Discussion, but not used as the basis for setting up the study.

7) Discuss limitations of pharmacological protocol.

a) The physiological effects of ketamine on cortical circuits remain speculative. The drug is unlikely to have the single, simple effect, as assumed in the model. This should be acknowledged in Discussion. Also, what happens in the model when NMDA hypofunction is implemented in both neuron types?

b) The use of an intramuscular injection of ketamine at 0.5 mg/kg (about an order of magnitude stronger than what would be used in humans) produces a massive transient effect on task behavior, which has potential important drawbacks. First, the effect is massive, with decision accuracy dropping from about 85% correct to less than 60% correct after 5 minutes, followed by a sustained recovery over the next 30 minutes. This effect of ketamine is so strong that it is hard to know whether it is truly NMDA receptor hypofunction that produces the behavioral deficit, or task disengagement due to the substantial decrease in reward delivery (for example). The time window chosen for the analysis is also strongly non-stationary, and it is difficult to assess how much an average taken over this window is truly an accurate depiction of a common behavioral deficit throughout this time period (where accuracy goes from 60% correct to 80% correct). Again, the presence of possible attentional lapses should be accounted for (and reported in the manuscript) in all model fits and analyses, given the strength of ketamine-induced deficits triggered by this pharmacological protocol. We realize that this aspect of the study cannot be changed at this point, but it should be acknowledged as an important limitation.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your article "A circuit mechanism for decision making biases and NMDA receptor hypofunction" for consideration by eLife. Your revised article has been reviewed by 2 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Michael Frank as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Konstantinos Tsetsos (Reviewer #1); Valentin Wyart (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, when editors judge that a submitted work as a whole belongs in eLife but that some conclusions require a modest amount of new analyses, as they do with your paper, we are asking that the manuscript be revised to either limit claims to those supported by data in hand, or to explicitly state that the relevant conclusions require additional supporting analyses.

Our expectation is that the authors will eventually carry out the additional analyses and report on how they affect the relevant conclusions either in a preprint on bioRxiv or medRxiv, or if appropriate, as a Research Advance in eLife, either of which would be linked to the original paper.

Summary:

The authors have provided an extensive response to the reviewers' comments based on several additional analyses of their data; they have successfully addressed a large subset of the comments. Specifically, they have performed several additional analyses to (i) test alternative hypotheses as well as the robustness of the favored hypothesis, (ii) examine lapses under ketamine, (iii) unpack the workings of the circuit model, and (iv) examine the frequent-winner effect in the data so they can assess the generalizability of this study to humans. We acknowledge that all these analyses have led to a significant improvement. Nevertheless, we remain uncertain about the validity of the overall conclusion, that ketamine induces NMDA-R hypo-function in excitatory neurons, and that this effect is behaviorally manifested as an increase in a pro-variance bias.

Revisions for this paper:

1) Motivate modeling approach.

Given that you opted not to fit the model (which would be done with the mean-field reduction), or tune its parameters so that it matches the above behavioral patterns, we believe you should unpack the reasoning underlying this particular modeling approach.

2) Plot model predictions along with data.

As we pointed out in our first review, there seem to be some discrepancies between the data and the model, which we remain concerned about:

i) Ketamine data asymptote at a lower than 100% level. The lapse rates are still not plugged into the circuit model so as to bring the model predictions closer to the data.

ii) The control kernel in Figure 7I and the monkey kernels in Figure 8C look different. In the model, there is a primacy pattern (except for the first item), but in the data we see a flat/U-shaped pattern. Plotting those together could reveal the degree of discrepancy.

iii) In the control condition, the psychometric functions in Figure 7B and in Figure 8B look different (for example in terms of convergence of the light and dark coloured lines). The elevated E/I plot in Figure 7D appears to be closer to the saline psychometric curve.

Such discrepancies, if true, matter: if the "baseline" model does not capture behavior in the control condition well, we cannot be confident about the validity of the subsequent perturbations performed to emulate the ketamine effect. To allow a better assessment of the match, we strongly encourage you to always plot model predictions with the data.

Ideally, you would also assess the goodness of fit using maximum likelihood. (A certain parametrization could exhibit similarity with the data in terms of the logistic regression weights (PVB) but at the same time largely miss capturing the psychometric function.) We believe this would be straightforward, given the simulations you have already performed, but leave the decision of whether or not to do this to you.

We realize that this point was not explicitly raised in the previous round. Then, reviewers had asked for a quantification of the goodness of fit. The approach you chose (logistic regression) is specific to the PVB index (not applied to psychometric functions and kernels) and did not fully convince reviewers.

3) Assess effects of concurrent NMDA-blockade on E and I neurons.

You establish that E/I increase reduces the PVB index while E/I decrease has the opposite effect. However, you have not examined the effect of concurrent changes of NMDA-Rs of both E and I cells, which we had suggested to do. Please comment on the fact that concurrent changes could mimic the effect of E-E reduction (Figure 7—figure supplement 2: moving the purple point up diagonally would result in equivalent behavior). Unless there is strong support in favor of the selective NMDA change over the concurrent change (assessed via maximum likelihood), the conclusions should be reframed.

4) Add a lapse rate downstream from circuit model.

You have now assessed lapse rates in your analysis, but reviewers remarked that you do not report the best-fitting lapse rates. This makes it impossible to judge just how much lapses contribute to the decrease in task performance in the initial period following ketamine injection (which is included in all analyses). We are concerned that this massive performance drop under ketamine is not only triggered by a PVB increase, but also (perhaps largely) by an increase in lapses and a decrease in evidence sensitivity.

We would expect a lapse mechanism to be in play in the circuit model when emulating the ketamine effect. You could use the fraction of lapses best fitted to psychometric curves (which clearly do not saturate at p(correct) = 1) for the circuit model simulations. It seems conceivable that allowing the circuit model to lapse will reduce the weight applied on the mean evidence.

5) Different quantification of pro-variance bias.

We do not understand the motivation for compressing sensitivity to mean and to variance into a single PVB index. Our reading is that the pro-variance effect, quantified as a higher probability of choosing a more variable stream (see Tsetsos et al., 2012), can just be directly mapped onto the variance regressor. Combining the weights into a PVB index and framing the general discussion around this index seems unnecessary. The main behavioral result of ketamine can be parsimoniously summarized as a reduced sensitivity to the mean evidence. Relatedly, please discuss if and how the ketamine-induced increase in the PVB effect, the way you quantified it, rides over a strong decrease of the sensitivity to mean evidence under ketamine.

It does seem to be the case that sensitivity to variance remains statistically indistinguishable between saline and ketamine (if anything it is slightly reduced). The E/I increase model consistently predicts that the variance regressor is reduced. This is not the case with the E/I decrease model, which occasionally predicts increases in the sensitivity to the variance (see yellow grids in Figure 7—figure supplement 2). This feature of the E/I decrease model should be discussed, as it seems to undermine the statement that the E/I perturbation produces robust predictions regardless of perturbation magnitude (i.e. depending on the strength of E/I reduction the model can produce a decrease or increase on variance sensitivity, and the relationship is non-monotonic). Overall, we believe that combining sensitivity to mean and variance obscures the interpretation of the data and model predictions.

Again, we realize that this point appears to be new. But reviewers feel they could not really have a strong case regarding this metric without seeing the more detailed model predictions (in a 2-d grid) that you have presented in your revision.

eLife. 2020 Sep 29;9:e53664. doi: 10.7554/eLife.53664.sa2

Author response


Essential revisions:

1) Specificity of the pharmacological claim within the circuit model.

You should show that the ketamine behavioural effects are robustly obtained under the lowered E/I hypothesis (e.g. for various magnitudes of E/I reduction) and, crucially, incompatible with a) sensory deficit, b) elevated E/I, c) concurrent changes in both NMDA receptors. Practically, this means the following.

a) For each hypothesis, model predictions should be shown by varying the relevant model parameter(s) gradually within a range.

b) The similarity between model predictions and behavioural data should always be quantified using a goodness-of-fit metric (currently this is done by eyeballing). Please focus on perturbations which provide a good quantitative fit to the data.

c) Perturbations appear to be implemented in the same fashion as in Lam et al., 2017. There, the authors also changed other parameters besides the relevant synaptic weights, in order to maintain stability in the model dynamics. It is not clear if and how these extra changes could be pharmacologically induced by ketamine. Please clarify this aspect and derive predictions when stability adjustments are not performed.

We thank the reviewers for this comment. We agree it is important to demonstrate the robustness of our model predictions. We have therefore included a 2-dimensional parameter scan with simultaneous NMDA-R hypofunction on excitatory (which lowers E/I) and inhibitory (which elevates E/I) neurons in the circuit model. We have also included a 1-dimensional parameter scan of the sensory deficit perturbation strength. Crucially, these parameter scans demonstrate robust effects of the perturbations on the PVB index and the majority of the regression coefficients, in the three directions of lowered E/I, elevated E/I, and sensory deficit (new Figure 7—figure supplements 2, 3, and 4). In particular, the PVB index is consistently increased by lowered E/I, decreased by elevated E/I, and unaltered by sensory deficit. Extremely strong sensory deficit resulted in an increase in PVB index, but this effect occurred at the limit where the model can barely perform the task (Figure 7—figure supplement 4), with a psychometric function qualitatively different from the monkey behaviour under ketamine (Figure 8—figure supplement 6).

To address comment 1b, we need to define an appropriate measure to quantify the degree to which the perturbation in the model alters decision-making behaviour in a similar manner as does ketamine in the monkeys. Importantly, the control parameters of the biophysically-based spiking circuit model were not at all fit to the monkey’s baseline behaviour (which is typical for spiking circuit modelling), and instead were the same as in Lam et al., 2017. Despite differences between model and monkey in control psychometric performance, we can quantify whether a perturbation produces a similar change in performance. The same could be applied for the two monkeys – despite baseline differences, does ketamine alter behaviour similarly between them?

Here, we focused on two key aspects of behavioural alteration: the relative changes in the (i) evidence mean and (ii) evidence standard deviation regression weights. We then quantified the comparison between two sets of changes (e.g., model to monkey, or between two monkeys) as the cosine similarity (CS) of the two vectors composed of these relative changes (Figure 8—figure supplement 4A). Applying this measure to compare between the two monkeys, we find CS = 0.94, corresponding to an angle of 20.1 degrees, which shows the consistency of ketamine effects between the monkeys.
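To make this measure concrete, a minimal Python sketch of the comparison (using made-up, illustrative change vectors; the actual values are those reported in the figure supplements) might look as follows:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors of behavioural changes."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical relative changes (ketamine vs. saline) in the two regression
# weights: [change in mean-evidence weight, change in evidence-SD weight].
# These numbers are illustrative only, not values from the paper.
monkey_change = np.array([-0.60, -0.05])
model_change = np.array([-0.55, -0.02])

cs = cosine_similarity(monkey_change, model_change)
angle = np.degrees(np.arccos(np.clip(cs, -1.0, 1.0)))
print(f"CS = {cs:.4f}, angle = {angle:.1f} degrees")
```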

We applied this analysis to quantify the similarity between a monkey's behaviour change under ketamine and the model under a range of parameter perturbations (2D sweeps of NMDA-R hypofunction, and sensory deficit) (Figure 8—figure supplement 4B-I). These analyses found that the lowered E/I perturbation robustly yielded a performance change similar to that measured in the monkeys under ketamine, with higher CS values than the elevated E/I or sensory deficit perturbations. Specifically, the 1D sweep of lowered E/I yielded maximum CS values of 0.9972 and 0.9968 for Monkeys A and H, respectively (comparable to the between-monkey CS of 0.9391). These results were replicated by a model comparison analysis using Euclidean distance, a metric which also accounts for the magnitude of the vectors (Figure 8—figure supplement 5).

It is important to note that our modelling results support the hypothesis of lowered E/I in decision making circuits contributing to the pro-variance effect, but cannot exclude possible contributions from sensory deficits (which will not alter the pro-variance bias in our model). For the same reason, we did not consider a 2-dimensional parameter scan with both lowered E/I and sensory deficit perturbations, as no dissociable predictions can be inferred from that analysis.

The cosine similarity and Euclidean distance analyses, motivated by comment 1b, informed us that a moderately weaker perturbation of lowered E/I (by ~25%) yielded a better fit to the pattern of behavioural alteration observed under ketamine than the perturbation strength in our original submission (Figure 8—figure supplement 4D,G and 5D,G). We have therefore updated the main Figure 7 with a lowered E/I perturbation strength that is a better fit by this measure, along with other perturbations matched to the reduction in the mean evidence regression weight.

Regarding comment 1c, we would like to clarify that the control circuit model in the current study is identical to that in Lam et al., 2017. The only parameter which is different is μ, which scales the input current as a function of the visual stimulus; given the different task paradigms, we believe it is reasonable to retune μ to better match the observed experimental data. The control circuit model in both the current study and in Lam et al., 2017 are different from the model presented in Wang, 2002. As originally noted in Lam et al., 2017, adjustments were made to the Wang, 2002 parameters to have stability of baseline and memory states under a wider range of E/I perturbations. (We note that all of the same qualitative effects of altered E/I can be observed in the Wang, 2002 parameters, but within a smaller range of perturbation strengths.)

Importantly, in both the current study and in Lam et al., 2017, we considered the control circuit model as the default state, corresponding to no pharmacological E/I perturbation. Therefore, the adjustments to control parameters from Wang, 2002 to the present study are not part of the simulated effects of pharmacological perturbation. The simulated effect of the perturbation on the local circuit, corresponding to ketamine, is solely mediated by reducing the conductance of recurrent NMDA receptors.

For the reviewers’ convenience, we included additions to the manuscript in response to this comment. In response to comment 1a, we added the following text to the Results:

“While all circuit models were capable of performing the task (Figure 7B-E), the choice accuracy of each perturbed model was reduced when compared to the control model. […] Together, the circuit model thus provided the basis of dissociable prediction by E/I-balance perturbing pharmacological agents.”

We also added the details of parameter scans to test the robustness of model prediction in the Materials and methods, in response to comments 1a and 1b:

“[…] For the exact parameters, the lowered E/I model reduced G_EE by 1.3125%, the elevated E/I model reduced G_EI by 2.625%, and the sensory deficit model had a sensory deficit of 20% (such that μ was reduced by 20%) (Figure 7, Figure 7—figure supplement 1). […] A higher cosine similarity (and lower Euclidean distance) meant the relative extent (and direction) of alteration, to the regression coefficients of mean evidence and evidence standard deviation, was more similar between the perturbations in the circuit model and the monkey data.”

In response to comment 1b, we added the following text to the Results:

“Additional observations further supported the lowered E/I hypothesis for the effect of ketamine on monkey choice behaviour. […] This shifting of the weights could reflect a sensory deficit, but given the results of the pro-variance analysis, collectively the behavioural effects of ketamine are most consistent with lowered E/I balance and weakened recurrent connections.”

2) Effect of drug on lapses.

Please test for a ketamine effect (sedation) on lapse rates. The psychometric functions under ketamine indicate a large change in the lapse rate which is currently not taken into account. All descriptive analyses (logistic regressions) and model simulations should take into account lapse rates. Can an increase in lapse rates explain away the changes in the PVB effect, psychometric curves, and kernels?

Thank you for raising this important point. As the term lapse rate is slightly ambiguous, we will initially provide some clarification. Lapse rate may refer to the rate at which incomplete trials occur (i.e. due to the subject not responding, or breaking fixation). Alternatively, it may refer to the animal responding randomly, regardless of the trial difficulty, on a certain proportion of trials. Our response below will address both of these factors.

Firstly, in our initial submission, all incomplete trials (i.e. those where the animal did not commit to a choice, or broke fixation) were excluded from the analyses. The only trials included in the analyses were those where the animal completed a choice. Hence, any change in our accuracy measure (i.e. as in Figure 8A) relates specifically to changes in their actual choices, rather than task engagement. It is also important to stress that these “incomplete trials” occurred rarely, even when the animals were administered with ketamine (see Author response image 1).

The second type of lapsing, random responses, is an important consideration that our initial submission did not address. As the reviewers suggest, it is possible that an increase in these types of lapses could account for the animals' reduction in accuracy when administered with ketamine. To address this point, we extended our existing logistic regression models to incorporate an extra parameter which could account for these lapses. The benefits of including this parameter were twofold:

1) To quantify the lapse rate

2) To control for lapsing, and isolate its effect from our other analyses (i.e. PVB index, kernels).

The updated models are listed below (description taken from the revised Materials and methods):

“To control for possible lapse effects induced by ketamine, where the animal responded randomly regardless of the trial difficulty, the behavioural models described above were extended to include an extra “lapse parameter”, Y0. […] Bootstrapping was used to generate error estimates for the parameters of these models (10,000 iterations). As our analyses demonstrate that the animals very rarely lapse when administered with saline, we did not deem it necessary to apply the lapsing models to the standard session experiment (i.e. Figures 2-6).”
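As an illustration of this class of model, a minimal sketch of a lapse-augmented logistic regression fitted by maximum likelihood is given below. The mixing form P = lapse/2 + (1 − lapse) × logistic(·) is one common parameterization, assumed here for illustration rather than taken verbatim from the paper's equations; the printed PVB index assumes the ratio definition β2/β1, and all data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

def neg_log_likelihood(params, d_mean, d_std, chose_left):
    """Equation 5 style logistic regression with an added lapse parameter."""
    b0, b1, b2, lapse = params
    p_logit = expit(b0 + b1 * d_mean + b2 * d_std)
    # On a lapse trial the subject responds randomly (p = 0.5).
    p_left = lapse / 2.0 + (1.0 - lapse) * p_logit
    p_left = np.clip(p_left, 1e-9, 1.0 - 1e-9)
    return -np.sum(chose_left * np.log(p_left)
                   + (1.0 - chose_left) * np.log(1.0 - p_left))

# Simulated trials: left-minus-right differences in evidence mean and SD.
rng = np.random.default_rng(0)
n = 2000
d_mean, d_std = rng.normal(size=n), rng.normal(size=n)
chose_left = (rng.random(n) < expit(2.0 * d_mean + 0.6 * d_std)).astype(float)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0, 0.5, 0.05],
               args=(d_mean, d_std, chose_left),
               bounds=[(None, None)] * 3 + [(0.0, 0.5)])
b0, b1, b2, lapse = fit.x
print(f"beta_mean = {b1:.2f}, beta_sd = {b2:.2f}, "
      f"lapse = {lapse:.3f}, PVB index = {b2 / b1:.2f}")
```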

Crucially, our existing analyses of the ketamine data were not affected when controlling for lapses. It was clear that accounting for lapse rates did not explain away the changes in the PVB effect or the kernels. We have included these new results as a supplementary figure to the main Figure 8. See Figure 8—figure supplement 2.

For the reviewers’ convenience, we have also included Author response image 2 which compares the results from the original submission with the updated results utilising the lapsing model:

Author response image 2. Incorporating a lapsing parameter does not greatly influence coefficients from the original logistic model for the PVB analysis.

(A) The mean evidence regression coefficient under saline (blue) and ketamine (red) under logistic regression with (no hatches) or without (hatched) a lapse term, using Monkey A data. (B) Same as (A) but using Monkey H data instead. (C-D) Same as (A-B) but for the evidence standard deviation regression coefficient instead. (E-F) Same as (A-B) but for the PVB index instead. Note that while both the mean evidence and evidence standard deviation regression coefficients under ketamine injection vary with or without the lapse term, the PVB index is robust to inclusion of the lapse term. All error bars denote the 95% confidence interval generated through a bootstrap procedure.

As the reviewers implied, the subjects' lapsing did increase with ketamine. Whilst we have robustly established that this is not the cause of our behavioural effects, we felt this was an important point to include in the manuscript. We have therefore updated the main text in the Results section, together with changes from comment 1b:

“To understand the nature of this deficit, we studied the effect of drug administration on the pro-variance bias (Figure 8B-F). […] This confirmed that the rise in PVB was an accurate description of a common behavioural deficit throughout the duration of ketamine administration.”

As mentioned in the Materials and methods, our analyses demonstrate that the animals very rarely lapse when administered with saline. As such, we did not deem it necessary to apply the lapsing models to the standard session experiment (i.e. Figures 2-6). With regards to the psychometric functions (e.g. Figures 8B-C), these have not been updated. This is because the three parameters in this model (Equation 2) are already sufficient to capture lapsing behaviour. Regardless, these psychometrics are purely illustrative and are not used in any of the statistical reporting.

3) Validity of circuit model.

Currently, the circuit model is presented as a black box. You devote a couple of sentences to describing how the expansive non-linearities in the F-I curve give rise to the pro-variance effect. This part is not very well developed. One way to test whether the non-linearities are indeed crucial to the pro-variance effect the monkeys show is to separately analyse trials with "high" (total sum of both streams high) vs. "low" (total sum of both streams low) evidence and see if the PVB effect changes. Or add the total sum as a regressor and compare the regression weights in the model and in the data. In addition to non-linearities in F-I curves, the attractor dynamics of the circuit model may (or may not) promote the PVB effect. Are these dynamics even necessary to produce the pro-variance effect in the model? And is there any link between signatures of attractor dynamics (e.g. kernel shapes) and the PVB effect in the data? If dynamics were redundant in the model, would this undermine the claim that the PVB can be diagnostic of E/I balance?

This relates to the question concerning the way the PVB is quantified: in the model, how can the PVB index change even if the F-I non-linearity remains unchanged? It thus seems that the PVB index is sensitive to the overall signal-to-noise ratio of the model, and is not a pure marker of the pro-variance propensity.

Please clarify what the PVB index stands for.

We thank the reviewers for raising these important issues, and for suggesting an interesting analysis which we now include. We agree the mechanism of the pro-variance effect within the decision-making process could be further analysed and explained, especially regarding the expansive non-linearities in the F-I curve. We have now expanded the Results, Materials and methods, and Discussion to discuss how the evidence integration process can generate a pro-variance effect. We also discuss the relation of this mechanism to attractor dynamics, the comparison of this mechanism with the selective integration model (Tsetsos et al., 2016), and how E/I balance disruption may change the F-I non-linearity in the mean-field model and thus impact the PVB index.

In particular, regarding the reviewers' comment on how attractor dynamics may contribute to a pro-variance bias, we want to highlight that in recurrent circuit models there is no clean separation between attractor dynamics and the other factors impacting evidence integration, which makes it difficult to disentangle their respective contributions to PVB. This is in contrast to the Tsetsos et al., 2016 model, which has separable stages from nonlinear transformation of evidence to the process of integrating that transformed evidence. Figure 6E-H illustrates that in the recurrent circuit, the temporal change of the system's state (S1, S2) depends on the current state (S1, S2) itself, exhibiting an attractor landscape. Furthermore, Figure 6D-H shows that this attractor landscape itself reconfigures dynamically as the stimulus input changes. In a sense, the “gain” of how the stimulus impacts the state (i.e. how it is integrated) varies dynamically as a function of both the stimulus and the stochastically evolving state of the system (see Materials and methods). This is why these factors cannot be disentangled. These points are now included in the Discussion. Nonetheless, we do agree that future theoretical analysis would be useful to help link biophysical circuit models, reduced as nonlinear dynamical systems, to more tractable evidence accumulation models (e.g. selective integration). Such algorithmic models may allow us to unveil how various signatures of attractor dynamics are linked to the PVB effect, as raised by the reviewers. For instance, a short integration timescale demonstrated by elevated E/I circuits (Figure 7I) would prevent within-trial variability of the stimulus from being inferred, especially when only one or two bars are integrated.

Based on the suggestion for a new analysis, we tested for differential effects of “high” vs. “low” amounts of total evidence, in both the model and the monkeys (Figure 5—figure supplement 2). In the circuit model, trials with more total evidence more strongly drive the neurons into the near-linear regime of the F-I curve, and thus have a smaller PVB index than trials with less total evidence. Interestingly, the monkeys also demonstrated a consistent trend, though this effect did not achieve statistical significance. The temporal regression weights were also different between more vs. less total evidence, consistently between model and monkeys. The Results section is now expanded to discuss the support this analysis provides for the F-I non-linearity and, more generally, attractor dynamics.

Finally, in relation to the question about how the PVB index can change even if the F-I non-linearity remains unchanged, we now include more details on our mean-field model in the Materials and methods section, in order to explain how E/I balance disruption may lead to changes in the PVB index. The transfer function as a function of the variables x1 and x2 (Equations 18, 19) is unchanged across the circuit models. However, x1 and x2 can be expressed in terms of the underlying input currents, and the transfer function thus expressed as a function of the synaptic currents (I1 and I2) depends on NMDA-R mediated recurrent interactions. As a result, the effective transfer function on the stimulus input is actually altered by E/I perturbation (because E/I perturbation changes the recurrent contributions to the synaptic currents). As such, NMDA-R hypofunction alters the PVB index, both through changes in the NMDA-R coupling strengths (α1 and α2), and also through the distinct dynamics and ranges of S1 and S2 that result from different α1 and α2.
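For reference, a sketch of the standard transfer-function form in this class of reduction (following Wong and Wang, 2006; the paper's Equations 18-19 should be consulted for the exact parameterization used here) is:

```latex
% Standard reduced-model transfer function (Wong & Wang, 2006), shown as a
% sketch of the form discussed above, not the paper's exact equations.
H(x_i) = \frac{a\,x_i - b}{1 - \exp\left[ -d\,(a\,x_i - b) \right]},
\qquad
x_1 = J_{11} S_1 - J_{12} S_2 + I_0 + I_1
```

Because the couplings J11 and J12 scale with the NMDA-R conductances, an E/I perturbation reshapes x1 and x2 as functions of the stimulus, and hence the effective non-linearity applied to the input, even though H itself is unchanged.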

The updated texts are included below for the reviewers’ convenience.

In Results:

“To understand the origin of the pro-variance bias in the spiking circuit, we mathematically reduced the circuit model to a mean-field model (Figure 6A), which demonstrated similar decision-making behaviour to the spiking circuit (Figure 6B-C, Figure 6—figure supplement 1). […] In addition, distinct temporal weighting on stimuli were observed in both the circuit model and experimental data, for trials with more versus less total evidence (Figure 5—figure supplement 2D,H).”

In Materials and methods:

“The current spiking circuit model was mathematically reduced to a mean-field model, as outlined in (Niyogi and Wong-Lin, 2013), in the same manner as from (Wang, 2002) to (Wong and Wang, 2006). […] This complicated the translatability between the two sets of models, so we focused on the control circuit.”

In Discussion:

“The results from our spiking circuit modelling also provided a parsimonious explanation for the cause of the pro-variance bias within the evidence accumulation process. […] While other phenomenological models may also explain pro-variance bias, their link to our circuit model is similarly indirect, and were out of the scope of this study.”

4) Results for both task framings.

Please present separate results for the two framings, i.e. "select higher" and "select lower" trials, which is interesting from an empirical viewpoint. Also: Have you mis-labelled the "high-variance correct" and "low-variance correct" trials in the "select the lower" conditions? (If not, then the quantification of the PVB may be wrong.)

Thank you for this suggestion. In response to this point, we have included three additional supplementary figures (Figure 2—figure supplement 1, Figure 3—figure supplement 2, Figure 4—figure supplement 2). It is clear from these figures that very similar results are attained for all analyses regardless of the task framing.

Unfortunately, we are slightly unclear about what the reviewers meant regarding the mislabelling of conditions. To clarify, the quantification of the PVB is determined by Equation 5:

ln(P_L / (1 − P_L)) = β_0 + β_1 (mean(L) − mean(R)) + β_2 (std(L) − std(R))   (5)

where P_L refers to the probability of choosing the left option, β_0 is a bias term, β_1 reflects the influence of the evidence mean, and β_2 reflects the influence of the standard deviation of evidence (evidence variability). Author response table 1 outlines how this relates to the bar heights in each of the conditions:

Author response table 1.

Condition | Variable | Description
“Select Higher” | mean(L) | Average height of the 8 bars on the left side of the screen
“Select Higher” | mean(R) | Average height of the 8 bars on the right side of the screen
“Select Higher” | std(L) | Standard deviation of the heights of the 8 bars on the left side of the screen
“Select Higher” | std(R) | Standard deviation of the heights of the 8 bars on the right side of the screen
“Select Lower” | mean(L) | Average of (100 − Bar Height) for the 8 stimuli on the left side of the screen
“Select Lower” | mean(R) | Average of (100 − Bar Height) for the 8 stimuli on the right side of the screen
“Select Lower” | std(L) | Standard deviation of (100 − Bar Height) for the 8 stimuli on the left side of the screen
“Select Lower” | std(R) | Standard deviation of (100 − Bar Height) for the 8 stimuli on the right side of the screen

In the main paper (i.e. Figure 4D), the analysis is not calculated separately for “select higher” and “select lower” conditions. Furthermore, it does not depend on whether the trial is labelled as “high-variance correct” or “low-variance correct”. The purpose of these labels was only for visualisation as part of the psychometric plots (Figure 4C).

We believe some of this confusion may be resulting from the terminology we are using. To address this, we have updated references to “select higher” and “select lower” to “select taller” and “select shorter”. For example,

“Subjects were presented with two series of eight bars (evidence samples), one on either side of central fixation. Their task was to decide which evidence stream had the taller/shorter average bar height, and indicate their choice contingent on a contextual cue shown at the start of the trial.”

“Subjects had previously learned that two of these cues instructed to choose the side with the taller average bar-height (“ChooseTallTrial”), and the other two instructed to choose the side with the shorter average bar-height (“ChooseShortTrial”).”

We have also added the following sentences to the Materials and methods section to add some clarity with how this ties in with the illustrative psychometric plots of the pro-variance:

“To illustrate the effect of pro-variance bias, we also fitted a three-parameter psychometric function to the subjects' probability of choosing the higher SD option (P_HSD) in the “Regular” trials, as a function of the difference in mean evidence in favour of the higher SD option on each trial (x_HSD). […] On “ChooseShortTrials”, the mean evidence in favour of the higher SD option was calculated by subtracting (100 − mean bar height of the lower SD option) from (100 − mean bar height of the higher SD option).”

To clarify, it is not necessary to split the results for the two framings for the circuit model data. This is because the inputs to the circuit model are the transformed evidence values (i.e. bar height on “Select Higher” trials; 100 − bar height on “Select Lower” trials). Therefore, the circuit model will not show any difference in results between the two task framings.
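As a concrete sketch of this transformation (hypothetical function and variable names, mirroring Author response table 1):

```python
import numpy as np

def evidence_values(bar_heights, framing):
    """Map raw bar heights (0-100) onto evidence values for each framing.

    On "Select Lower" trials, taller-is-better is inverted via (100 - height),
    as in Author response table 1. Expected shape: (n_trials, 8) per side.
    """
    h = np.asarray(bar_heights, dtype=float)
    return 100.0 - h if framing == "select_lower" else h

rng = np.random.default_rng(1)
left_bars = rng.uniform(0, 100, size=(5, 8))       # 5 trials, 8 bars
ev = evidence_values(left_bars, "select_lower")
print(ev.mean(axis=1), ev.std(axis=1))             # mean(L) and std(L) regressors
```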

5) Generalizability of findings to humans.

Reviewers raised doubts about the suggested analogy of monkey and human performance, and the underlying computations: Showing that both humans and monkeys have a PVB is not sufficient to establish a cross-species link. In the human work by Tsetsos et al. (PNAS, 2012, 2016), the temporal weighting of evidence on choice exhibits recency, in sharp contrast to the primacy found here in monkeys. What does this imply in terms of the relationship at a mechanistic level? This point needs to be discussed.

Thanks for raising this point. We agree that there are differences between the primacy bias found in our paradigm and the recency bias found in the previous Tsetsos papers. We now discuss this point in the Discussion:

“Crucially, our circuit model generated dissociable predictions for the effects of NMDA-R hypofunction on the pro-variance bias (PVB) index that were tested by follow-up ketamine experiments. […] A stronger test will be to record neurophysiological data while monkeys are performing our task; this would help to distinguish between the “selective integration” hypothesis and the cortical circuit mechanism proposed here.”

6) Link to schizophrenia.

Reviewers remarked that the link to schizophrenia is very loose: no patients are tested and overall behavioral signatures are different even from healthy human subjects (see point 3). Reviewers agreed that this point should at least be toned down substantially or dropped altogether. This tentative link could be brought up as speculation in Discussion, but not used as the basis for setting up the study.

Thanks for this comment. We agree that our previous version focussed too heavily on the potential link to schizophrenia, and that it is indeed unreasonable for us to do this without including data from patients or human volunteers. As such, we have extensively rewritten the Abstract, significance statement, and Introduction to tone them down substantially. In particular, we have removed most of the references to schizophrenia that were found throughout the previous version.

On the other hand, we think that it is reasonable to discuss the relationship between NMDA-receptor hypofunction and its effects on cognition and behaviour (we are directly manipulating/measuring these in the present study). We also feel that it is important to motivate this with an initial reference to the (vast) literature on NMDA-R antagonism via ketamine administration as an acute model of schizophrenia in humans. This was, after all, one of the main motivating factors for wanting to characterise the effects of ketamine in the present task.

We have therefore kept an initial reference to this relationship at the beginning of the Introduction, and then in the rest of the Introduction have limited our discussion to those of mechanisms of action of ketamine and NMDA-R hypofunction, rather than schizophrenia. We hope that the reviewers find this to be a reasonable compromise.

7) Discuss limitations of pharmacological protocol.

a) The physiological effects of ketamine on cortical circuits remain speculative. The drug is unlikely to have the single, simple effect assumed in the model. This should be acknowledged in the Discussion. Also, what happens in the model when NMDA hypofunction is implemented in both neuron types?

We thank the reviewers for this excellent point and agree that we should address the complex effect of ketamine on the brain. We now discuss that point in Discussion (see below). Regarding the effects when NMDA hypofunction is implemented in both neuron types, this is covered in our response to major comment 1.

“Our pharmacological intervention experimentally verified the significance of NMDA-R function for decision-making. […] Finally, receptors of other brain areas might also be altered by intramuscular ketamine injection, which is beyond the scope of the microcircuit model in this study.”

b) The use of an intramuscular injection of ketamine at 0.5 mg/kg (about an order of magnitude stronger than what would be used in humans) produces a massive transient effect on task behavior, which has potentially important drawbacks. First, the effect is massive, with decision accuracy dropping from about 85% correct to less than 60% correct after 5 minutes, followed by a sustained recovery over the next 30 minutes. This effect of ketamine is so strong that it is hard to know whether it is truly NMDA receptor hypofunction that produces the behavioral deficit, or task disengagement due to the substantial decrease in reward delivery (for example). The time window chosen for the analysis is also strongly non-stationary, and it is difficult to assess how well an average taken over this window truly depicts a common behavioral deficit throughout this time period (where accuracy goes from 60% correct to 80% correct). Again, the presence of possible attentional lapses should be accounted for (and reported in the manuscript) in all model fits and analyses, given the strength of ketamine-induced deficits triggered by this pharmacological protocol. We realize that this aspect of the study cannot be changed at this point, but it should be acknowledged as an important limitation.

Thank you for this comment. We have structured our response to first address the reviewers’ concerns regarding the drug dose and administration route. Then we address the reviewers’ point about task disengagement. Finally, we address the point regarding the analysis time window. We have previously addressed accounting for attentional lapses in our response to reviewer comment 2.

i) Firstly, we acknowledge that an intravenous infusion approach would have advantages over intramuscular injections. However, this was not possible because it was not within the remit of the ethical approval granted by the local ethical procedures committee and UK Home Office. Despite this, it is important to stress that intramuscular injections of ketamine at around 0.5 mg/kg have been the standard approach used in several previous non-human primate studies (see Author response table 2). We are not aware of any non-human primate cognitive neuroscience studies that have used an infusion approach.

Author response table 2.

Authors | Journal | Intramuscular Ketamine Doses Used
(M. Wang, Yang et al., 2013) | Neuron | 0.5-1.5 mg/kg
(Blackman, Macdonald et al., 2013) | Neuropsychopharmacology | 0.32-0.57 mg/kg
(Ma, Skoblenick et al., 2015) | Journal of Neuroscience | 0.4 mg/kg
(Ma, Skoblenick et al., 2018) | Journal of Neuroscience | 0.4-0.7 mg/kg
(Shen, Kalwarowsky et al., 2010) | Journal of Neuroscience | 0.25-1 mg/kg
(K. J. Skoblenick, Womelsdorf et al., 2016) | Cerebral Cortex | 0.4 mg/kg
(K. Skoblenick and Everling, 2014) | Journal of Cognitive Neuroscience | 0.4 mg/kg
(K. Skoblenick and Everling, 2012) | Journal of Neuroscience | 0.4-0.8 mg/kg
(Taffe, Davis et al., 2002) | Psychopharmacology | 0.3-1.7 mg/kg
(Condy, Wattiez et al., 2005) | Biological Psychiatry | 0.2-1.2 mg/kg
(Stoet and Snyder, 2006) | Neuropsychopharmacology | 0.07-1 mg/kg

Secondly, as stated in our original submission, we extensively piloted different doses ranging from 0.1-1.0 mg/kg before data collection began. 0.5 mg/kg was chosen as it consistently induced a performance deficit while not causing significant task disengagement.

Finally, with regards to the chosen dose, we respectfully disagree that it is an order of magnitude stronger than that used in humans. Although it is somewhat difficult to compare with relevant human studies, as the vast majority of these have used infusion approaches, we will consider one such protocol (Anticevic, Gancsos et al., 2012; Corlett, Honey et al., 2006). In these studies, the authors gave an initial intravenous bolus of 0.23 mg/kg over 1 minute, followed by a subsequent continuous target-controlled infusion (0.58 mg/kg over 1 h; plasma target, 200 ng/mL). This dose is relatively similar to what could be expected shortly after a 0.5 mg/kg intramuscular injection. Furthermore, in the most relevant intramuscular study we could find, Ghoneim, Hinrichs et al., 1985 did use intramuscular injections of ketamine to study its cognitive effects in humans. The dose they used was 0.25-0.5 mg/kg.

ii) With regards to task disengagement, we did not find evidence of a significant increase in incomplete trials (see response to reviewer comment 2, Author response image 1). Although we did find the animals lapse more frequently when administered ketamine, our behavioural effects were still present when controlling for this (see response to reviewer comment 2).

Author response image 1. Animals rarely fail to complete trials when administered with ketamine.

The proportion of incomplete trials that occurred between 5 and 30 minutes relative to drug administration. Error bars indicate the standard error; each dot represents an individual session. Monkey A counterintuitively completed a higher proportion of trials when administered with ketamine, as this appeared to reduce his slightly stronger tendency to break central fixation early in order to directly view the stimuli. For Monkey H, the proportion of incomplete trials was relatively uninfluenced by ketamine.

iii) The reviewers make a good point with regards to the analysis time window. Firstly, a similar approach of averaging across all trials after an intramuscular injection has been used in previous non-human primate studies (Blackman, Macdonald et al., 2013; Ma, Skoblenick et al., 2018; Ma, Skoblenick et al., 2015; K. Skoblenick and Everling, 2012, 2014; K. J. Skoblenick, Womelsdorf et al., 2016; M. Wang, Yang et al., 2013). However, we agree that it would be beneficial to investigate this further. To determine the time course of ketamine’s influence on the PVB index, we ran a sliding regression analysis:

“We later revisited the time course of drug effects by running our regression analyses at each of the binned windows described above (Figure 8—figure supplement 3). […] The true cluster size was significant at the p < 0.05 level if the true cluster length exceeded the 95th percentile of the null distribution.”

This reveals that the increase in PVB index is present when data from individual time periods are analysed. We believe this should allay the reviewers' concern that the increase in PVB index might not reflect a common behavioural deficit throughout this time period.
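For readers interested in the logic of the cluster-length test quoted above, a schematic sketch is shown below (simulated statistics and an assumed bin-wise |z| > 1.96 threshold; the actual analysis details are given in the Materials and methods):

```python
import numpy as np

def max_cluster_length(significant):
    """Longest run of consecutive significant time bins."""
    best = run = 0
    for s in significant:
        run = run + 1 if s else 0
        best = max(best, run)
    return best

rng = np.random.default_rng(2)
# Hypothetical per-bin drug-effect statistics (e.g., z-scores from the
# sliding-window regression), for 25 post-injection time bins.
true_z = rng.normal(1.5, 1.0, size=25)
true_cluster = max_cluster_length(np.abs(true_z) > 1.96)

# Null distribution: the same statistic recomputed with drug labels permuted
# across sessions (simulated here as label-exchanged noise).
null = [max_cluster_length(np.abs(rng.normal(0.0, 1.0, size=25)) > 1.96)
        for _ in range(10_000)]
print("significant" if true_cluster > np.percentile(null, 95) else "n.s.")
```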

iv) The presence of possible attentional lapses has been accounted for in all the drug-day analyses. This was covered in our response to reviewer comment 2 above.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Summary:

The authors have provided an extensive response to the reviewers' comments based on several additional analyses of their data; they have successfully addressed a large subset of the comments. Specifically, they have performed several additional analyses to (i) test alternative hypotheses as well as the robustness of the favored hypothesis, (ii) examine lapses under ketamine, (iii) unpack the workings of the circuit model, and (iv) examine the frequent-winner effect in the data so they can assess the generalizability of this study to humans. We acknowledge that all these analyses have led to a significant improvement. Nevertheless, we remain uncertain about the validity of the overall conclusion, that ketamine induces NMDA-R hypo-function in excitatory neurons, and that this effect is behaviorally manifested as an increase in a pro-variance bias.

Thank you for the summary of our revisions, and the opportunity to incorporate this new round of feedback. We believe these revisions, which include new figures and text, address the reviewers’ concerns and improve the manuscript through increased clarity. Importantly, we believe we have provided strong evidence to further support our main conclusion that ketamine induces NMDA-R hypofunction to lower E/I balance (by acting predominantly, but not necessarily exclusively, on excitatory neurons), and that this effect is behaviorally manifested as an increase in pro-variance bias.

Revisions for this paper:

1) Motivate modeling approach.

Given that you opted not to fit the model (which would be done with the mean-field reduction), or tune its parameters so that it matches the above behavioral patterns, we believe you should unpack the reasoning underlying this particular modeling approach.

We agree that further text would help to explain the reasoning behind our modeling approach.

We did not include direct fitting of the psychophysical data with circuit models for several reasons:

- We are not aware of any prior literature which has quantitatively fit this class of circuit model – for either the spiking model or the mean-field reduction – directly to psychophysical behavior. We believe that developing approaches to do so is an important methodological challenge, but that it is beyond the scope of the present paper.

- Simulation of the spiking circuit model is too computationally expensive for model fitting.

- Fitting via the mean-field model reduction is a potentially tractable strategy. However, there are issues with the mean-field model, related to its reduction, which make it less than ideal. In particular, the effective noise parameter is added back in by hand, as a free parameter, after the reduction. As such, the mean-field model does not derive what the magnitude of that noise parameter should be, nor how the strength of effective noise changes under a parameter perturbation. For this reason, we do not use the mean-field model to examine E/I perturbations, as there is no way to derive how the effective noise should vary across E/I perturbations, which we expect would be important. (Instead, we used the mean-field model to examine the circuit mechanisms of the PVB phenomenon for a generic circuit.)

- Both spiking and mean-field models have a large number of parameters. It is not clear which parameters should be free and fitted vs. fixed. Even toward the conservative end, the number of plausibly fittable parameters is well over 10. Numerical simulation of the mean-field model needed for model fitting is still too computationally expensive in such a high-dimensional parameter space. There is not a principled reason to only fit over 2 dimensions. (In contrast, we did a 2D sweep over the NMDAR conductances, motivated by ketamine as a perturbation, to characterize their impact.)

- The parameterization of the mean-field model is not amenable to model fitting. Within the large number of parameters, there is a high degree of degeneracy, or “sloppiness” in how a parameter impacts psychophysical behavior, and parameters can effectively trade off each other at least locally. This is because the model is parameterized for biophysical mechanism rather than parameter parsimony at the level of behavioral output. This poses important – and largely unexplored – challenges for parameter identifiability and estimation, which are beyond the scope of the current study. Given these challenges, even if a model could be fit in the high-dimensional parameter space, it would be unclear how to interpret the set of fitted parameter values in light of potential degeneracies and how they may map onto lower-dimensional effective parameters (e.g., related to E/I ratio).

Although not well suited for model fitting to empirical behavioral data, biophysically-based circuit modeling can be fruitfully applied and interpreted for at least two purposes, which is why we chose this approach for this particular study:

- To examine whether, and through what dynamical circuit mechanism, a behavioral phenomenon (here, pro-variance bias) can emerge in biophysical circuit models within a particular dynamical regime (here, one previously developed to study decision making).

- To characterize how modulation of a biophysical parameter (here, NMDAR conductance, motivated by the pharmacological actions of ketamine) changes an emergent phenomenon (here, choice behavior) within a dynamical circuit regime.

Circuit modeling can demonstrate that a set of mechanisms is sufficient to produce a phenomenon. Furthermore, the pharmacological component of our study with ketamine naturally raises the question of how NMDAR hypofunction within this influential circuit model of decision making (Wang, 2002) impacts the behavioral phenomena studied here, which we examined from a bottom-up approach. Such a bottom-up approach is complementary to more top-down approaches of fitting behavior with computational- and algorithmic-level models. We believe that this modeling approach thereby provides useful insights even without behavioral model fitting, and furthermore it generates circuit-level predictions which can be investigated in future studies through experimental methods including electrophysiology and brain perturbations.

We have now added the following paragraph to the Discussion to note these issues with model fitting and the reasoning underlying our modeling approach:

“In this study we did not undertake quantitative fitting of the circuit model parameters to match the empirical data. […] The bottom-up mechanistic approach in this study, which makes links to the physiological effects of pharmacology and makes testable predictions for neural recordings and perturbations, is complementary to top-down algorithmic modeling approaches.”

2) Plot model predictions along with data.

As we pointed out in our first review, there seem to be some discrepancies between the data and the model, which we remain concerned about:

i) Ketamine data asymptote at a lower than 100% level. The lapse rates are still not plugged into the circuit model so as to bring the model predictions closer to the data.

We thank the reviewers for bringing up these issues. For clarity, here lapse rate is defined as the asymptotic error rate at strong evidence. (Note that lapse rate is measured in completed trials, and therefore does not reflect uncompleted trials.) First, we would like to clarify our methods in the previous revision (previous Figure 8—figure supplements 4 and 5), which perhaps did not clearly emphasize how they accounted for lapses. The model comparisons to monkey data did indeed account for lapse rates for each monkey. Specifically, these analyses compared regression β weights between models and monkeys. Preceding that comparison, the regression β weights for the monkeys were calculated from a regression model that includes a lapse rate term (see Equations 8-11). Therefore, the model β weights were compared to lapse-corrected monkey β weights.

We see that for greater clarity it would be beneficial to visualize the model data with lapse rates included at empirically-set levels, to facilitate direct comparison to the ketamine data. We have also decided to combine our response to this point with the related suggestion from comment 4 below, by adding a lapse mechanism downstream of the spiking circuit model. Specifically, we select a random subset of trials in the model, at a proportion according to the monkey's empirical lapse rate, and then randomize the responses for those trials (see the sketch below).
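A minimal sketch of this downstream lapse mechanism (hypothetical variable names; the empirical lapse rate would come from the fits described in our response to comment 2 above):

```python
import numpy as np

def apply_lapses(model_choices, lapse_rate, rng):
    """Randomize a fraction of circuit-model choices at the empirical lapse rate.

    model_choices: boolean array of simulated left/right choices.
    """
    choices = model_choices.copy()
    lapse_trials = rng.random(choices.size) < lapse_rate
    choices[lapse_trials] = rng.random(lapse_trials.sum()) < 0.5
    return choices

rng = np.random.default_rng(3)
sim_choices = rng.random(10_000) < 0.85      # hypothetical model choices
lapsed = apply_lapses(sim_choices, lapse_rate=0.08, rng=rng)
```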

Finally, we have chosen to maintain Figure 7 without empirically-set lapses. We believe this is most logically consistent, with Figure 7 appearing chronologically first in the paper as a model prediction based on non-drug results, before lapses are demonstrated in ketamine data in Figure 8. Instead, the spiking circuit models with added empirically-set lapse rates are demonstrated in new supplementary figures (new Figure 8—figure supplements 8-9), for direct visual comparison to empirical ketamine results. In addition, we have added the empirically derived lapse rates to the results already presented in Figure 7—figure supplement 1.

ii) The control kernel in Figure 7I and the monkey kernels in Figure 8C look different. In the model, there is a primacy pattern (except for the first item), but in the data we see a flat/U-shaped pattern. Plotting those together could reveal the degree of discrepancy.

Following the reviewers’ suggestion, we now include the juxtaposition of model and empirical plots, for the ketamine data, as new supplementary figures (new Figure 8—figure supplement 8-9; please see the prior comment above for details).

We also want to emphasize that the comparison of temporal weights might be more informative between the control model (Figure 7I) and the non-drug data (Figure 2C,D), which both show a primacy effect. It is also interesting, and potentially important, to note that although the kernels differ somewhat between the control data in Figure 2 and the saline data in Figure 8 – namely, between showing more primacy vs. flat/U-shaped – both datasets show a similar and robust pro-variance bias, which suggests that the precise shape of the kernel is not determinative of the pro-variance bias phenomenon.

The saline data (Figure 8G, Figure 8—figure supplement 1) might demonstrate a flat/U-shaped pattern, distinct from both the model and the non-drug experimental data, for other reasons. For instance, the task structure for the saline/ketamine trials is different from that of the non-drug (and model) trials, with 6 instead of 8 stimuli, and was also made easier to keep the monkeys motivated (please see “Task Modifications for Pharmacological Sessions” in Materials and methods for details). The difference between Figure 2C,D and Figure 8G might instead be due in part to such task modifications. Furthermore, we note that Figure 2 is based on about 7 times more trials than the saline data in Figure 8 and should therefore be more reliable. On the other hand, it is also possible that the U-shaped pattern in the saline data reflects dynamical regimes in circuit models distinct from that considered here.

Motivated by this comment, we have added the following text to the Results:

“Additional observations further supported the lowered E/I hypothesis for the effect of ketamine on monkey choice behaviour. […] This may be due to task modifications for the ketamine/saline experiments compared with the non-drug experiments, but could also potentially arise from distinct regimes of decision making attractor dynamics (e.g. see Ortega et al., 2020).”

iii) In the control condition, the psychometric functions in Figure 7B and in Figure 8B look different (for example in terms of convergence of the light and dark coloured lines). The elevated E/I plot in Figure 7D appears to be closer to the saline psychometric curve.

Such discrepancies, if true, matter: if the "baseline" model does not capture behavior in the control condition well, we cannot be confident about the validity of the subsequent perturbations performed to emulate the ketamine effect. To allow a better assessment of the match, we strongly encourage you to always plot model predictions with the data.

Ideally, you would also assess the goodness of fit using maximum likelihood. (A certain parametrization could exhibit similarity with the data in terms of the logistic regression weights (PVB) but at the same time largely miss capturing the psychometric function.) We believe this would be straightforward, given the simulations you have already performed, but leave the decision of whether or not to do this to you.

We realize that this point was not explicitly raised in the previous round. Then, reviewers had asked for a quantification of the goodness of fit. The approach you chose (logistic regression) is specific to the PVB index (not applied to psychometric functions and kernels) and did not fully convince reviewers.

We thank the reviewer for raising these important issues. We agree it will be beneficial to add a more direct comparison of the model behavior to each of the saline and ketamine datasets, in both visualizations and quantitative measures, in parallel to those in the last revision (which compared the effect of the perturbation).

In brief, in the last revision, we focused primarily on comparing the model to the monkey ketamine behavior in terms of characterizing the change in behavioral measures of interest: namely, the mean weight, SD weight, and PVB index. We believe this is especially of interest because the change under ketamine is also important for comparing the two monkeys: for instance, two subjects may have very different baseline psychophysical performance, yet show a very consistent change from that baseline under ketamine. The same perspective applies to comparing the model to the monkeys, by focusing on the similarity of their change in behavior under a perturbation.

Nonetheless, we agree it is also of interest to assess how well the models' psychometric performance agrees with the monkeys' performance in the saline and ketamine conditions. We have thus added a new supplementary figure which computes the Kullback-Leibler (KL) divergence between the saline and ketamine data of both monkeys and models of various perturbations (new Figure 8—figure supplement 6; NB Figure 8—figure supplement 6 from the previous submission has moved to Figure 8—figure supplement 7). We chose KL divergence, instead of likelihood, because it was a more robust measure, less sensitive to extreme responses in the behavior where the model had negligible likelihood (e.g., an error at strong evidence).

We note that we do not approach this as model fitting (for reasons elaborated in the response to the reviewers’ first comment above), but rather as providing a quantitative measure of psychometric similarity.

In this new analysis, we demonstrated that the saline data for each monkey is more similar to the control model (green symbol) than to the lowered E/I model (purple) (Figure 8—figure supplement 6C, F), consistent with our previous conclusion. Importantly, while the elevated E/I plot in Figure 7D may appear visually more similar to the saline plot (combined across monkeys) in Figure 8B (as the reviewers point out), quantitatively comparing the model data to the saline data (separated by monkey) using KL divergence shows the control model is more similar to the saline data (Figure 8—figure supplement 6C, F). Finally, we would like to reiterate that the key model comparison determining the perturbation parameters in Figure 7 is still the previous perturbed-vs-baseline comparison (Figure 8—figure supplements 4, 5 and 7), which we believe is at least as critical as the comparison in Figure 8—figure supplement 6. We have also changed the text to mention KL divergence wherever model comparison is mentioned.

In the new text, we now include these KL divergence results, alongside the measures of change in behavioral features. We have also expanded the “Testing the versatility of model predictions” section of Materials and methods to include KL divergence.

3) Assess effects of concurrent NMDA-blockade on E and I neurons.

You establish that an E/I increase reduces the PVB index while an E/I decrease has the opposite effect. However, you have not examined the effect of concurrent changes to NMDA-Rs on both E and I cells, which we had suggested. Please comment on the fact that concurrent changes could mimic the effect of an E-E reduction (Figure 7—figure supplement 2: moving the purple point up diagonally would result in equivalent behavior). Unless there is strong support in favor of the selective NMDA change over the concurrent change (assessed via maximum likelihood), the conclusions should be reframed.

We would like to clarify that we did examine the effect of concurrent changes to NMDA-Rs on both E and I cells (e.g. see Figure 7—figure supplements 2 and 3, Figure 8—figure supplements 4 and 5, and the newly added Figure 8—figure supplement 7, where we explicitly computed the resulting E/I ratio across E and I cell perturbation conditions). These 2D sweep analyses illustrate that the net effect on E/I ratio is a key effective parameter, and that concurrent changes can cause the same effects as the ‘pure’ perturbations. We have also decided to more strongly emphasize the discussion of concurrent changes to NMDA-Rs on both E and I cells, with a focus on the net effect on E/I ratio that such changes can produce. We take care not to suggest that our findings require a pure perturbation acting on a single cell type; a preferential impact on one cell type (e.g. via differential NMDA-R subunit expression), yielding a net impact on E/I ratio, is sufficient. We further added the following text in the Results section to justify why our main figures primarily considered perturbations to either E or I cells, but not both:

“[…] Crucially, the effects of E/I and sensory perturbations on PVB index and regression coefficients were generally robust to the strength and pathway of perturbation (Figure 7—figure supplement 2, 3).

Disease and pharmacology-related perturbations likely concurrently alter multiple sites, for instance NMDA-Rs of both excitatory and inhibitory neurons. We thus parametrically induced NMDA-R hypofunction on both excitatory and inhibitory neurons in the circuit model. The net effect on E/I ratio depended on the relative perturbation strength to the two populations. Stronger NMDA-R hypofunction on excitatory neurons lowered the E/I ratio, while stronger NMDA-R hypofunction on inhibitory neurons elevated the E/I ratio. Notably, proportional reduction to both pathways preserved E/I balance and did not lower the mean evidence regression coefficient (a proxy of performance) (Figure 7—figure supplement 2A). […]”

We have also added an additional clarification to the Abstract to make clear that our main conclusion is that ketamine induces NMDA-R hypofunction to lower E/I balance (by acting predominantly, but not necessarily exclusively, on excitatory neurons):

“[…] Ketamine yielded an increase in subjects' PVB, consistent with lowered cortical excitation/inhibition balance from NMDA-R hypofunction predominantly onto excitatory neurons.”

4) Add a lapse rate downstream from circuit model.

You have now assessed lapse rates in your analysis, but reviewers remarked that you do not report the best-fitting lapse rates. This makes it impossible to judge how much lapses contribute to the decrease in task performance in the initial period following ketamine injection (which is included in all analyses). We are concerned that this massive performance drop under ketamine is triggered not only by a PVB, but also (perhaps largely) by an increase in lapses and a decrease in evidence sensitivity.

We would expect a lapse mechanism to be at play in the circuit model when emulating the ketamine effect. You could use the fraction of lapses best fitted to the psychometric curves (which clearly do not saturate at p(correct) = 1) for the circuit model simulations. It seems conceivable that allowing the circuit model to lapse will reduce the weight applied to the mean evidence.

We thank the reviewers for this excellent suggestion. We have added two supplementary figures in which we incorporated a ‘downstream’ lapse mechanism into the circuit model, with the lapse rate fitted to the two monkeys’ ketamine data (Figure 8—figure supplements 8 and 9). We believe this allows readers to better evaluate the effect of lapses using the circuit models.

We would also like to clarify that we reported our best-fitted lapse rates in the previous submission. As shown there, accounting for these lapse rates did not significantly change the regression weights or evidence sensitivities when analyzing the subjects’ behavior (Figure 8—figure supplement 2). Our further analyses have also shown that the regression weights and evidence sensitivities of the circuit models’ behavior are unaffected when empirical lapses are incorporated (Figure 8—figure supplements 8 and 9; please also see the new Figure 7—figure supplement 1). For clarity, we have further expanded our explanation of the lapse rate:

“In further analysis, we also controlled for the influence of ketamine on the subjects’ lapse rate – i.e. the propensity for the animals to respond randomly regardless of trial difficulty. […] This confirmed that the rise in PVB was an accurate description of a common behavioral deficit throughout the duration of ketamine administration.”

Figure 8—figure supplements 8 and 9 can be found above in the response to comment 2. We also added the following text together with Figure 8—figure supplements 8 and 9:

“To quantify the effect of lapse rate on evidence sensitivity and regression weights in general, we examined the effect of a lapse mechanism downstream of spiking circuit models (Figure 8—figure supplement 8-9). Using the lapse rate fitted to the experimental data collected from the two monkeys, we assigned such portions of trials to have randomly selected choices for each circuit model, and repeated the analysis to obtain psychometric functions and various regression weights. Crucially, while the psychometric function as well as evidence mean and standard deviation regression weights were suppressed, the findings on PVB index were not qualitatively altered in the circuit models, further supporting the finding that the lapse rate does not account for changes in PVB under ketamine.”
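A minimal sketch of the downstream lapse mechanism described above (our own illustration: the function name and the example lapse rate are hypothetical, and choices are assumed to be coded 0/1):

import numpy as np

rng = np.random.default_rng(0)

def apply_downstream_lapse(model_choices, lapse_rate):
    # Replace a random fraction `lapse_rate` of the circuit model's choices
    # with coin-flip choices, emulating a lapse mechanism downstream of the
    # spiking circuit. Choices are coded 0/1 for the two options.
    choices = np.asarray(model_choices).copy()
    lapse_trials = rng.random(choices.shape) < lapse_rate
    choices[lapse_trials] = rng.integers(0, 2, size=lapse_trials.sum())
    return choices

# e.g., with a hypothetical lapse rate fitted to one monkey's ketamine data:
# lapsed_choices = apply_downstream_lapse(model_choices, lapse_rate=0.10)
# ...then re-run the psychometric and regression analyses on lapsed_choices.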

5) Different quantification of pro-variance bias.

We do not understand the motivation for compressing the sensitivity to the mean and to the variance into a single PVB index. Our reading is that the pro-variance effect, quantified as a higher probability of choosing the more variable stream (see Tsetsos et al., 2012), can simply be mapped directly onto the variance regressor. Combining the weights into a PVB index and framing the general discussion around this index seems unnecessary. The main behavioral result of ketamine can be parsimoniously summarized as a reduced sensitivity to the mean evidence. Relatedly, please discuss if and how the ketamine-induced increase in the PVB effect, the way you quantified it, rides on top of a strong decrease in the sensitivity to mean evidence under ketamine.

It does seem to be the case that the sensitivity to variance remains statistically indistinguishable between saline and ketamine (if anything, it is slightly reduced). The E/I increase model consistently predicts that the variance regressor is reduced. This is not the case for the E/I decrease model, which occasionally predicts increases in the sensitivity to variance (see the yellow grids in Figure 7—figure supplement 2). This feature of the E/I decrease model should be discussed, as it seems to undermine the statement that the E/I perturbation produces robust predictions regardless of perturbation magnitude (i.e. depending on the strength of the E/I reduction, the model can produce a decrease or an increase in variance sensitivity, and the relationship is non-monotonic). Overall, we believe that combining the sensitivities to mean and variance obscures the interpretation of the data and the model predictions.

Again, we realize that this point appears to be new. But reviewers felt they could not make a strong case regarding this metric without seeing the more detailed model predictions (in a 2D grid) that you have presented in your revision.

We thank the reviewer for raising this point. We agree that further explanation of our choice to define a PVB measure as the ratio would improve the paper. We also agree that it is important to clearly report the effects of the mean and variance terms separately as well, to be explicit about what is driving the change in the ratio measure of PVB index.

First, as the reviewers note, there is a downside to reporting a ratio: it can obscure whether a change in the ratio is driven by changes in the numerator (SD) or the denominator (mean). Therefore, to accommodate the reviewers’ suggestion, we believe the best solution for clarity is to report the changes in SD and mean weights individually, wherever changes in the ratio PVB index are reported. We have now included this information throughout the text, so that readers can readily keep track of how the mean and SD terms are impacted alongside the PVB index.

We believe that describing the PVB index as the ratio of SD to mean weights is conceptually useful when interpreting changes in these behavioral sensitivities (as induced here by ketamine).

A key motivation relates to a point raised by the reviewers in the previous round of review, which said: “You should stress that the model does not feature any explicit PVB, and that PVB emerges through sample-by-sample competition between the two streams.” We agree that PVB should be understood as an emergent phenomenon arising from the decision-making process.

In evidence accumulation models, it is a non-trivial problem to reduce the sensitivity to the mean of the evidence without a proportional reduction in the sensitivity to the SD of the evidence. The simplest way to reduce the mean sensitivity would be to down-scale the incoming evidence strength, but this would presumably down-scale the SD sensitivity by the same factor. Indeed, this is what our “upstream deficit” perturbation demonstrates: the mean weight is reduced, and the SD weight is reduced by the same proportion, leaving the PVB index unchanged. This is therefore a useful feature of our definition of the PVB index: the ‘reference’ case, in which the proportional change in SD weight equals the proportional change in mean weight, results in no change in PVB index.
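To make this scaling argument concrete, consider a simplified illustration (ours, not the circuit model itself) in which choice follows a logistic model in the experimenter’s regressors, and the upstream deficit is approximated as a pure multiplicative gain $s < 1$ on the incoming evidence:

\[
\operatorname{logit} P(\text{choose left}) = \beta_0 + \beta_{\text{mean}}\,\Delta\mu + \beta_{\text{SD}}\,\Delta\sigma ,
\]

where $\Delta\mu$ and $\Delta\sigma$ are the left-minus-right differences in the mean and standard deviation of evidence. If the decision process instead receives $s\,\Delta\mu$ and $s\,\Delta\sigma$ with unchanged internal sensitivity, the fitted weights become $s\,\beta_{\text{mean}}$ and $s\,\beta_{\text{SD}}$, so

\[
\text{PVB index} = \frac{s\,\beta_{\text{SD}}}{s\,\beta_{\text{mean}}} = \frac{\beta_{\text{SD}}}{\beta_{\text{mean}}}
\]

is invariant under such an upstream deficit.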

All three of our circuit perturbations (lowered E/I, elevated E/I, upstream deficit) reduce the sensitivity to the mean, and therefore consideration of the mean evidence regressor is not sufficient to dissociate these three circuit perturbations in the model. The qualitative behavioral dissociation between the three is their impact on the PVB index: increased PVB index for lowered-E/I, decreased PVB index for elevated-E/I, and unchanged PVB index for upstream deficit. Therefore, the key question to dissociate these three circuit perturbations is how the SD weight changes relative to the mean weight, which is captured by their ratio.

The reviewers are correct in pointing out that elevating E/I ratio consistently predicts a reduction in the evidence standard deviation regressor, while lowering E/I ratio can predict an increase in the sensitivity to the variance (as shown in Figure 7—figure supplement 2). This is, in fact, a non-trivial property of the circuit model. In the strongly recurrent regime, the circuit model’s choice accuracy follows an inverted-U shape as a function of E/I ratio (e.g. see Lam et al., 2017). The control model is not at the peak of this inverted U, but slightly to one side; the peak occurs for a weakly lowered-E/I circuit. Like choice accuracy, the evidence standard deviation regression weight also follows an inverted-U shape as a function of E/I ratio (Figure 7—figure supplement 2B). In contrast to choice accuracy, however, the control model sits even further from the peak of this inverted U, which occurs at an E/I ratio lower than that of the peak for choice accuracy (e.g. compare Figure 7—figure supplement 2A and B).

Given the distinct locations of the control model on the inverted-U curves for choice accuracy and for the evidence standard deviation regression weight, the effects of elevating and lowering E/I ratio on the mean evidence regressor, the standard deviation regressor, and the PVB index become clearer and more interpretable. Elevating E/I ratio always moves the model down the inverted-U curves for both choice accuracy and standard deviation weight, resulting in a consistent effect regardless of the scale of the perturbation. In contrast, weakly lowering E/I ratio drives the model up the inverted-U curve for the standard deviation weight while driving it down the curve for choice accuracy, producing a decrease in the mean evidence regression weight but an increase in the standard deviation weight. Only if the perturbation is strong enough to push the model past the peak (of the standard deviation weight curve) does lowering E/I ratio decrease both the mean and standard deviation regressors. Notably, in both regimes the PVB index increases. This is another reason we utilized the PVB index: it serves as a robust measure of E/I ratio while sparing readers the detailed changes and mechanisms of the mean and standard deviation regression weights. (As an aside, for completeness: an even weaker lowered-E/I perturbation would drive the control model up towards the peaks of both inverted-U curves. However, this range is smaller than the perturbation strengths used in this study. Moreover, the slight increase in the mean evidence weight, bounded by the nearby choice-accuracy peak, would be dominated by the larger increase in the standard deviation weight, for which the control model is further from the peak and far less bounded. This therefore remains consistent with our argument that E/I perturbations produce robust effects regardless of perturbation magnitude.)

We note that of our two monkey subjects, both showed strong, significant decreases in mean weight, while only one showed a significant decrease in SD weight; yet both showed a consistent proportional change in PVB index. This is consistent with, and can be explained by, the inverted-U description above.

Finally, another attractive property of presenting the PVB index as a ratio is that it is a dimensionless quantity, which facilitates comparisons between the monkeys and the model.

In the new revision, we have added text noting the changes in mean and SD weights wherever changes in PVB index are noted. We have also expanded the text introducing the SD/mean ratio as a PVB index, to better motivate why it is an interesting measure of this phenomenon:

“In addition, we defined the pro-variance bias (PVB) index as the ratio of the regression coefficient for evidence standard deviation over the regression coefficient for mean evidence. […] From the ‘Regular’ trials, the PVB index across both monkeys was 0.173 (Monkey A = 0.230; Monkey H = 0.138).”
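For concreteness, here is a minimal sketch of how a PVB index of this kind can be computed from trial-level data (our own illustration: the variable names and the synthetic demo are hypothetical, and the large C value merely approximates an unregularized logistic regression):

import numpy as np
from sklearn.linear_model import LogisticRegression

def pvb_index(evid_L, evid_R, chose_L):
    # Regress choice on the difference in mean evidence and the difference
    # in evidence standard deviation between the two streams; return the
    # mean weight, the SD weight, and their ratio (the PVB index).
    d_mean = evid_L.mean(axis=1) - evid_R.mean(axis=1)
    d_sd = evid_L.std(axis=1) - evid_R.std(axis=1)
    X = np.column_stack([d_mean, d_sd])
    fit = LogisticRegression(C=1e6).fit(X, chose_L)  # large C ~ no penalty
    beta_mean, beta_sd = fit.coef_[0]
    return beta_mean, beta_sd, beta_sd / beta_mean

# Synthetic demo: 6-sample evidence streams and choices generated with a
# modest built-in pro-variance preference (SD weight / mean weight = 0.2).
rng = np.random.default_rng(1)
n_trials, n_samples = 20000, 6
evid_L = 50 + rng.uniform(6, 24, (n_trials, 1)) * rng.standard_normal((n_trials, n_samples))
evid_R = 50 + rng.uniform(6, 24, (n_trials, 1)) * rng.standard_normal((n_trials, n_samples))
d_mean = evid_L.mean(axis=1) - evid_R.mean(axis=1)
d_sd = evid_L.std(axis=1) - evid_R.std(axis=1)
p_L = 1.0 / (1.0 + np.exp(-(0.20 * d_mean + 0.04 * d_sd)))
chose_L = rng.random(n_trials) < p_L
print(pvb_index(evid_L, evid_R, chose_L))  # recovered ratio should be near 0.2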

We have also added the following text in the Results to explain how the inverted-U phenomena relates to PVB index:

“Since the decision making choice accuracy depends on E/I ratio along an inverted-U shape – where the control, E/I balanced model is right next to the (slightly lowered E/I) peak (Lam et al., 2017) – both elevating and lowering E/I ratio drive the model away from the peak, resulting in lowered mean evidence regression weight. […] Notably, regardless of the magnitude with which E/I ratio is lowered, PVB index is consistently increased, providing a robust measure of pro-variance bias.”

We have also added the following text in Discussion to provide further motivation of the PVB index as a useful measure:

“The PVB index, as the ratio of standard deviation to mean evidence regression weights, serves as a conceptually useful measure to interpret changes in pro-variance bias due to ketamine perturbation in this study. […] The two monkeys, both interpreted as lowered E/I ratio using the model-based approach in this study, may therefore experience slightly different degrees of E/I reduction when administered with ketamine, as shown through concurrent changes in NMDA-R conductances in the circuit model (Figure 7—figure supplement 2).”

Associated Data


    Data Citations

    1. Cavanagh SE, Lam NH, Murray JD, Hunt LT, Kennerley SW. 2020. Data from: A circuit mechanism for decision making biases and NMDA receptor hypofunction. Dryad Digital Repository. doi:10.5061/dryad.pnvx0k6k3

    Supplementary Materials

    Supplementary file 1. Difference in log-likelihood of Full regression model (mean, SD, max, min, first, last of evidence values; Equation 6 in Materials and methods) vs reduced model, for each monkey and the circuit model.

    Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Each column label refers to the removed regressor. Positive values indicate the full regression model performs better. Values depend on the number of completed trials, which differed both between subjects and the circuit model. For both monkeys and the circuit model, mean evidence is clearly the most important driver of choice behaviour, followed by the first and last evidence samples, reflecting the primacy bias. Finally, evidence standard deviation (SD) has a stronger effect than the maximum and minimum evidence samples (Max and Min).

    Supplementary file 2. Difference in log-likelihood of regression models including either evidence standard deviation (SD) or both maximum and minimum evidence (Max and Min) as regressors, for each monkey and the circuit model.

    Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Each column label lists the regressors included in addition to either SD or Max and Min. Positive values indicate the regression model with SD performs better than that with Max and Min. Values depend on the number of completed trials, which differed both between subjects and the circuit model. Regardless of whether the first and last evidence sample regressors are included, the models with the standard deviation of evidence have higher log-likelihoods than the models with the maximum and minimum evidence samples, indicating that the data are better explained by the standard deviation than by the maximum and minimum evidence samples.

    Supplementary file 3. Increase in log-likelihood of various regression models (regressors in column labels) due to inclusion of evidence standard deviation as a regressor, for each monkey and the circuit model.

    Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Values depend on the number of completed trials, which differed both between subjects and the circuit model. Positive values across the table indicate that the evidence standard deviation regressor robustly improves model performance for all models examined.

    Supplementary file 4. Difference in log-likelihood of regression models including either evidence standard deviation (SD) or both maximum and minimum evidence (Max and Min) as regressors, for each monkey with saline or ketamine injection.

    Log-likelihood values were calculated using a cross-validation procedure (see Materials and methods). Each column label lists the regressors included in addition to either SD or Max and Min. Positive values indicate the regression model with SD performs better than that with Max and Min. Values depend on the number of completed trials, which differed across conditions. Regardless of whether the first and last evidence sample regressors are included, the models with the standard deviation of evidence have higher log-likelihoods than the models with the maximum and minimum evidence samples, indicating a better explanation of the data by the standard deviation. In particular, under ketamine injection, the monkeys did not switch their strategy to primarily use the maximum and minimum evidence samples (over the standard deviation of evidence) to guide their choices.

    Transparent reporting form

    Data Availability Statement

    Stimuli generation and data analysis for the experiment were performed in MATLAB. The spiking circuit model was implemented using the Python-based Brian2 neural simulator (Goodman and Brette, 2008), with a simulation time step of 0.02 ms. Further analyses of both experimental and model data were completed using custom-written Python and MATLAB code. Data and analysis scripts to reproduce the figures from the paper are publicly available: data have been uploaded to Dryad under doi:10.5061/dryad.pnvx0k6k3, and code is available on GitHub at https://github.com/normanlam1217/CavanaghLam2020CodeRepository (copy archived at https://github.com/elifesciences-publications/CavanaghLam2020CodeRepository; Lam, 2020).



