eLife. 2021 Jan 4;10:e59907. doi: 10.7554/eLife.59907

Human complex exploration strategies are enriched by noradrenaline-modulated heuristics

Magda Dubois 1,2, Johanna Habicht 1,2, Jochen Michely 1,2,3, Rani Moran 1,2, Ray J Dolan 1,2, Tobias U Hauser 1,2
Editors: Thorsten Kahnt 4, Christian Büchel 5
PMCID: PMC7815309  PMID: 33393461

Abstract

An exploration-exploitation trade-off, the arbitration between sampling a lesser-known option against a known rich one, is thought to be solved using computationally demanding exploration algorithms. Given known limitations in human cognitive resources, we hypothesised the presence of additional, cheaper strategies. Examining choice behaviour for such heuristics, we show that it involves a value-free random exploration, which ignores all prior knowledge, and a novelty exploration, which targets novel options alone. In a double-blind, placebo-controlled drug study assessing contributions of dopamine (400 mg amisulpride) and noradrenaline (40 mg propranolol), we show that value-free random exploration is attenuated under the influence of propranolol, but not under amisulpride. Our findings demonstrate that humans deploy distinct computationally cheap exploration strategies and that value-free random exploration is under noradrenergic control.

Research organism: Human

Introduction

Chocolate, Toblerone, spinach, or hibiscus ice-cream? Do you go for the flavour you like the most (chocolate), or another one? In such an exploration-exploitation dilemma, you need to decide whether to go for the option with the highest known subjective value (exploitation) or opt instead for less known or valued options (exploration) so as to not miss out on possibly even higher rewards. In the latter case, you can opt to either choose an option that you have previously enjoyed (Toblerone), an option you are curious about because you do not know what to expect (hibiscus), or even an option that you have disliked in the past (spinach). Depending on your exploration strategy, you may end up with a highly disappointing ice cream encounter, or a life-changing gustatory epiphany.

A common approach to the study of complex decision making, for example an exploration-exploitation trade-off, is to take computational algorithms developed in the field of artificial intelligence and test whether key signatures of these are evident in human behaviour. This approach has revealed that humans use strategies reflecting an implementation of computationally demanding exploration algorithms (Gershman, 2018; Schulz and Gershman, 2019). One such strategy, directed exploration, involves awarding an ‘information bonus’ to choice options, a bonus that scales with uncertainty. This is captured in algorithms such as the Upper Confidence Bound (UCB) (Auer, 2003; Carpentier et al., 2011) and leads to an exploration of choice options the agent knows little about (Gershman, 2018; Schwartenbeck et al., 2019) (e.g. the hibiscus ice-cream). An alternative strategy, sometimes termed ‘random’ exploration, is to induce stochasticity after value computations in the decision process. This can be realised using a fixed parameter as a source of stochasticity, such as a softmax temperature parameter (Daw et al., 2006; Wilson et al., 2014), which can be combined with the UCB algorithm (Gershman, 2018). Alternatively, one can use a dynamic source of stochasticity, such as in Thompson sampling (Thompson, 1933), where stochasticity adapts to the uncertainty about choice options; this exploration is essentially a more sophisticated, uncertainty-driven version of a softmax. By accounting for stochasticity when comparing choice options’ expected values, in effect choosing based on both uncertainty and value, these exploration strategies increase the likelihood of choosing ‘good’ options that are only slightly less valuable than the best (e.g. the Toblerone ice-cream if you are a chocolate lover).
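To make these value-based strategies concrete, the following minimal sketch implements the two choice rules in Python. It is our own illustration under simplifying assumptions (Gaussian posterior beliefs per bandit, illustrative parameter values), not the implementation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_ucb_choice(means, stds, info_bonus=1.0, inv_temp=2.0):
    # Directed exploration (UCB): expected value plus an uncertainty-scaled
    # information bonus, passed through a softmax that injects a fixed
    # amount of value-based choice stochasticity.
    values = means + info_bonus * stds
    p = np.exp(inv_temp * (values - values.max()))
    p /= p.sum()
    return int(rng.choice(len(means), p=p))

def thompson_choice(means, stds):
    # Uncertainty-driven random exploration (Thompson sampling): draw one
    # sample from each option's posterior belief and pick the maximum, so
    # stochasticity adapts to each option's uncertainty.
    return int(np.argmax(rng.normal(means, stds)))

# Example: three options with equal means but unequal uncertainty; both
# rules favour the more uncertain options more often than a pure greedy rule.
means, stds = np.array([5.0, 5.0, 5.0]), np.array([0.5, 1.0, 2.0])
print(softmax_ucb_choice(means, stds), thompson_choice(means, stds))
```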

The above processes are computationally demanding, especially when facing real-life multiple-alternative decision problems (Daw et al., 2006; Cohen et al., 2007; Cogliati Dezza et al., 2019). Human cognitive resources are constrained by capacity limitations (Papadopetraki et al., 2019) and metabolic consumption (Zénon et al., 2019), but also by resource allocation to parallel tasks (e.g. Wahn and König, 2017; Marois and Ivanoff, 2005). This directly relates to an agent's motivation to perform a given task (Papadopetraki et al., 2019; Botvinick and Braver, 2015; Froböse et al., 2020), as increasing an information demand in one process automatically reduces its availability for others (Zénon et al., 2019). In real-world, highly dynamic environments, this arbitration is critical as humans need to maintain resources for alternative opportunities (i.e. flexibility; Papadopetraki et al., 2019; Kool et al., 2010; Cools, 2015). This accords with previous studies showing humans are demand-avoidant (Kool et al., 2010; Froböse and Cools, 2018) and suggests that exploration computations tend to be minimised. Here, we examine the explanatory power of two additional, computationally less costly forms of exploration, namely value-free random exploration and novelty exploration.

Computationally, the least resource-demanding way to explore is to ignore all prior information and to choose entirely randomly, de facto assigning the same probability to all options. Such ‘value-free’ random exploration, known as an ϵ-greedy algorithmic strategy in reinforcement learning (Sutton and Barto, 1998), forgoes any costly computation (i.e. of value means and uncertainties), in contrast to the two previously considered ‘value-based’ random explorations that add stochasticity during choice value computation (for simulations comparing their effects, see Figure 1—figure supplement 2). Computational efficiency, however, comes at the cost of sub-optimality due to occasional selection of options of low expected value (e.g. the repulsive spinach ice cream).
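As a sketch under the same assumptions as above, an ϵ-greedy choice rule reduces to a few lines, with no uncertainty tracking needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy_choice(means, epsilon=0.1):
    # Value-free random exploration: with probability epsilon, ignore all
    # prior information and choose uniformly at random; otherwise exploit
    # the option with the highest estimated mean.
    if rng.random() < epsilon:
        return int(rng.integers(len(means)))  # every option equally likely
    return int(np.argmax(means))
```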

Despite its sub-optimality, value-free random exploration has neurobiological plausibility. Of relevance in this context is a view that exploration strategies depend on dissociable neural mechanisms (Zajkowski et al., 2017). Influences from noradrenaline and dopamine are plausible candidates in this regard based on prior evidence (Cohen et al., 2007; Hauser et al., 2016). Amongst other roles (such as memory [Sara et al., 1994], or energisation of behaviour [Varazzani et al., 2015; Silvetti et al., 2018]), the neuromodulator noradrenaline has been ascribed a function of indexing uncertainty (Silvetti et al., 2013; Yu and Dayan, 2005; Nassar et al., 2012) or of acting as a ‘reset button’ that interrupts ongoing information processing (David Johnson, 2003; Bouret and Sara, 2005; Dayan and Yu, 2006). Prior experimental work in rats shows that boosting noradrenaline leads to behaviour resembling value-free random exploration (Tervo et al., 2014), while pharmacological manipulations in monkeys indicate that reducing noradrenergic activity increases choice consistency (Jahn et al., 2018).

In human pharmacological studies, interpreting the specific function of noradrenaline on exploration strategies is problematic as many drugs, such as atomoxetine (e.g. Warren et al., 2017), impact multiple neurotransmitter systems. Here, to avoid this issue, we chose the highly specific β-adrenoceptor antagonist propranolol, which has only minimal impact on other neurotransmitter systems (Fraundorfer et al., 1994; Hauser et al., 2019). Using this antagonist, we examine whether signatures of value-free random exploration are impacted by attenuated noradrenergic functioning.

An alternative computationally efficient exploration heuristic to random exploration is to simply choose an option not encountered previously, which we term novelty exploration. Humans often show novelty seeking (Bunzeck et al., 2012; Wittmann et al., 2008; Gershman and Niv, 2015; Stojić et al., 2020), and this strategy can be used in exploration as a low-cost version of the UCB algorithm: a novelty bonus (Krebs et al., 2009) is added if a choice option has not been seen previously, so that the agent does not have to rely on precise uncertainty estimates. The neuromodulator dopamine is implicated not only in exploration in general (Frank et al., 2009), but also in signalling such novelty bonuses, where evidence indicates a role in processing and exploring novel and salient states (Wittmann et al., 2008; Bromberg-Martin et al., 2010; Costa et al., 2014; Düzel et al., 2010; Iigaya et al., 2019). Although pharmacological dopaminergic studies in humans have demonstrated effects on exploration as a whole (Kayser et al., 2015), they have not identified specific exploration strategies. Here, we used the highly specific D2/D3 antagonist amisulpride to disentangle the specific roles of dopamine and noradrenaline in different exploration strategies.
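On this view, the novelty heuristic can be read as a one-bit simplification of UCB's graded information bonus. A minimal illustration follows; the function, parameter names and values are our own assumptions (the paper's novelty bonus η is a fitted parameter):

```python
import numpy as np

def novelty_augmented_values(means, n_samples, novelty_bonus=2.0, prior_mean=5.0):
    # Cheap stand-in for UCB's uncertainty bonus: a fixed bonus is added
    # only to never-sampled options, so no per-option uncertainty estimate
    # is required. Unseen options fall back on a prior mean belief.
    values = np.where(n_samples > 0, means, prior_mean)
    return values + novelty_bonus * (n_samples == 0)
```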

Thus, in the current study, we examine the contributions of value-free random exploration and novelty exploration to human choice behaviour. We developed a novel exploration task combined with computational modelling to probe the contributions of noradrenaline and dopamine. Under double-blind, placebo-controlled conditions, we assessed the impact of two antagonists with high affinity and specificity for dopamine (amisulpride) and noradrenaline (propranolol) receptors, respectively. Our results provide evidence that both exploration heuristics supplement computationally more demanding exploration strategies, and that value-free random exploration is particularly sensitive to noradrenergic modulation, with no effect of amisulpride.

Results

Probing the contributions of heuristic exploration strategies

We developed a novel multi-round three-armed bandit task (Figure 1; bandits depicted as trees), enabling us to assess the contributions of value-free random exploration and novelty exploration in addition to Thompson sampling and UCB (combined with a softmax). In particular, we exploited the fact that both heuristic strategies make specific predictions about choice patterns. Novelty exploration assigns a ‘novelty bonus’ only to bandits for which subjects have no prior information, but not to other bandits. This can be seen as a low-resolution version of UCB, which assigns a bonus to all choice options proportionally to how informative they are, in effect a graded bonus that scales with each bandit's uncertainty. Thus, to capture this heuristic, we manipulated the amount of prior information, with bandits carrying either little information (i.e. 1 vs 3 initial samples) or no information (0 initial samples). High novelty exploration predicts a higher frequency of selecting the novel option (Figure 1f). This is in contrast to high exploration using other strategies, which do not predict such a strong effect on the novel option (Figure 1—figure supplement 5).

Figure 1. Study design.

In the Maggie’s farm task, subjects had to choose from three bandits (depicted as trees) to maximise an outcome (sum of rewards). The rewards (apple size) of each bandit followed a normal distribution with a fixed sampling variance. (a) At the beginning of each trial, subjects were provided with some initial samples on the wooden crate at the bottom of the screen and had to select which bandit they wanted to sample from next. (b) Depending on the condition, they could either perform one draw (short horizon) or six draws (long horizon). The empty spaces on the wooden crate (and the sun's position) indicated how many draws they had left. The first draw in both conditions was the main focus of the analysis. (c) In each trial, three bandits were displayed, selected from four possible bandits, with different generative processes that varied in terms of their sample mean and number of initial samples (i.e. samples shown at the beginning of a trial). The ‘certain-standard bandit’ and the ‘standard bandit’ had comparable means but different levels of uncertainty about their expected mean: they provided three and one initial sample, respectively; the ‘low-value bandit’ had a low mean and displayed one initial sample; the ‘novel bandit’ did not show any initial sample and its mean was comparable with that of the standard bandits. (d) Prior to the task, subjects were administered different drugs: 400 mg amisulpride that blocks dopaminergic D2/D3 receptors, 40 mg propranolol to block noradrenergic β-receptors, and inert substances for the placebo group. Different administration times were chosen to comply with the different drug pharmacokinetics (placebo matching the other groups’ administration schedule). (e) Simulating value-free random behaviour with a low vs high model parameter (ϵ) in this task shows that in a high regime, agents choose the low-value bandit more often (left panel; mean ± 1 SD) and are less consistent in their choices when facing identical choice options (right panel). (f) Novelty exploration exclusively promotes choosing options for which subjects have no prior information, captured by the ‘novel bandit’ in our task. For details about simulations cf. Materials and methods. For details about the task display, see Figure 1—figure supplement 1. For simulations of different exploration strategies and their impact on the different bandits, see Figure 1—figure supplements 2–5.


Figure 1—figure supplement 1. Visualisation of the nine different sizes that the apples could take.


The associated rewards ranged from 2 (small apple on the left) to 10 (big apple on the right).
Figure 1—figure supplement 2. Comparison of value-based (softmax) and value-free (ϵ-greedy) random exploration.


(a) Changing the softmax inverse temperature affects the slope of the sigmoid, while changing the ϵ-greedy parameter (b) affects the compression of the sigmoid. Conceptually, in a softmax exploration mode, as each bandit's expected value is taken into account, (c) the second-best bandit (medium-value bandit) is favoured over one with a lower value (low-value bandit) when injecting noise. In contrast, in an ϵ-greedy exploration mode, (d) bandits are explored equally often irrespective of their expected value. Both simulations were performed on trials without a novel bandit. When simulating on all trials, we observe that this also has a consequence for choice consistency. (e) Choices are more consistent in a low (versus high) softmax exploration mode (i.e. high and low values of β, respectively), and similarly (f) choices are more consistent in a low (versus high) ϵ-greedy exploration mode (i.e. low and high values of ϵ, respectively). When comparing the overall consistency of the two random exploration strategies, consistency is higher in the value-based mode, reflecting a higher probability of (consistently) exploring the second-best option, compared to an equal probability of exploring any non-optimal option (inconsistently) in the value-free mode.
Figure 1—figure supplement 3. Simulation illustrations of high and low exploration on the frequency of picking the low-value bandit using different exploration strategies show that (a) a high (versus low) value-free random exploration increases the selection of the low-value bandit, whereas neither (b) a high (versus low) novelty exploration, (c) a high (versus low) Thompson-sampling exploration, nor (d) a high (versus low) UCB exploration affects this frequency.


To illustrate the long (versus short) horizon condition, we accommodated the fact that not only the key exploration strategy but also other exploration strategies were enhanced in the long horizon, as found in our experimental data, by increasing multiple exploration parameters (Appendix 2—table 7 for parameter values). Please note that the difference between low and high exploration is critical here, rather than a comparison of the absolute height of the bars between strategies (which is influenced in the models by multiple different exploration strategies). For simulations fitting participants’ data, please see Figure 5—figure supplements 1 and 3.
Figure 1—figure supplement 4. Simulation illustrations of high and low exploration on choice consistency using different exploration strategies show that (a) a high (versus low) value-free random exploration decreases the proportion of same choices, whereas neither (b) a high (versus low) novelty exploration, (c) a high (versus low) Thompson-sampling exploration, nor (d) a high (versus low) UCB exploration affects this measure.


To illustrate the long (versus short) horizon condition, we accommodated the fact that not only the key exploration strategy but also other exploration strategies were enhanced in the long horizon, as found in our experimental data, by increasing multiple exploration parameters (Appendix 2—table 7 for parameter values). Please note that the difference between low and high exploration is critical here, rather than a comparison of the absolute height of the bars between strategies (which is influenced in the models by multiple different exploration strategies). For simulations fitting participants’ data, please see Figure 5—figure supplements 1 and 3.
Figure 1—figure supplement 5. Simulation illustrations of high and low exploration on the frequency of picking the novel bandit using different exploration strategies show that (a) a high (versus low) value-free random exploration has little effect on the selection of the novel bandit, whereas (b) a high (versus low) novelty exploration increases this frequency.


(c) A high (versus low) Thompson-sampling exploration had little effect, and (d) a high (versus low) UCB exploration affected this frequency, but to a lesser extent than novelty exploration. To illustrate the long (versus short) horizon condition, we accommodated the fact that not only the key exploration strategy but also other exploration strategies were enhanced in the long horizon, as found in our experimental data, by increasing multiple exploration parameters (Appendix 2—table 7 for parameter values). Please note that the difference between low and high exploration is critical here, rather than a comparison of the absolute height of the bars between strategies (which is influenced in the models by multiple different exploration strategies). For simulations fitting participants’ data, please see Figure 5—figure supplements 1 and 3.

Value-free random exploration, captured here by ϵ-greedy, predicts that all prior information is discarded entirely and that equal probability is attached to all choice options. This strategy is distinct from other exploration strategies as it is likely to choose bandits known to be substantially worse than the other bandits. Thus, high value-free random exploration predicts a higher frequency of selecting the low-value option (Figure 1e), whereas high exploration using other strategies does not predict such an effect (Figure 1—figure supplement 3). A second prediction is that choice consistency across repeated trials is directly affected by value-free random exploration, in particular by comparison to other, more deterministic exploration strategies (e.g. directed exploration) that are value-guided and thus will consistently select the most informative and valuable options. Given that value-free random exploration splits its choice probability equally (i.e. a 33.3% chance of choosing any bandit out of the three displayed), an increase in such exploration predicts a lower likelihood of choosing the same bandit again, even under identical choice options (Figure 1e). This contrasts with other strategies that make consistent exploration predictions (e.g. UCB would consistently explore the choice option that carries a high information bonus; Figure 1—figure supplement 4).
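Both predictions follow directly from the uniform choice probability. A brief simulation (our own illustration, with arbitrary bandit values) reproduces them:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_signatures(epsilon, n_trials=10_000):
    # Two ϵ-greedy signatures on a fixed 3-bandit choice set (expected
    # values: high, medium, low): the frequency of low-value picks rises
    # with ϵ, and consistency across a repeated identical trial falls.
    means = np.array([7.0, 6.0, 3.0])  # low-value bandit is index 2
    def choose():
        if rng.random() < epsilon:
            return rng.integers(3)     # uniform over all three bandits
        return int(np.argmax(means))
    first = np.array([choose() for _ in range(n_trials)])
    repeat = np.array([choose() for _ in range(n_trials)])  # duplicated trials
    return (first == 2).mean(), (first == repeat).mean()

for eps in (0.1, 0.4):
    low_freq, consistency = simulate_signatures(eps)
    print(f"eps={eps}: p(low-value)={low_freq:.2f}, consistency={consistency:.2f}")
```

With these values, raising ϵ from 0.1 to 0.4 roughly quadruples the probability of picking the low-value bandit (ϵ/3) and drops consistency from about 0.87 to about 0.57, the qualitative pattern shown in Figure 1e.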

We generated bandits from four different generative processes (Figure 1c) with distinct sample means (but a fixed sampling variance) and number of initial samples (i.e. samples shown at the beginning of a trial for this specific bandit). Subjects were exposed to these bandits before making their first draw. The ‘certain-standard bandit’ and the (less certain) ‘standard bandit’ were bandits with comparable means but varying levels of uncertainty, providing either three or one initial samples (depicted as apples; similar to the horizon task [Wilson et al., 2014]). The ‘low-value bandit’ was a bandit with one initial sample from a substantially lower generative mean, thus appealing to a value-free random exploration strategy alone. The last bandit, with a mean comparable with that of the standard bandits, was a ‘novel bandit’ for which no initial sample was shown, primarily appealing to a novelty exploration strategy (cf. Materials and methods for a full description of bandit generative processes). To assess choice consistency, all trials were repeated once. In the pilot experiments (data not shown), we noted some exploration strategies tended to overshadow other strategies. To effectively assess all exploration strategies, we opted to present only three of the four different bandit types on each trial, as different bandit triples allow different explorations to manifest. Lastly, to assess whether subjects’ behaviour captured exploration, we manipulated the degree to which subjects could interact with the same bandits. Similar to previous studies (Wilson et al., 2014), subjects could perform either one draw, encouraging exploitation (short horizon condition), or six draws, encouraging more substantial explorative behaviour (long horizon condition) (Wilson et al., 2014; Warren et al., 2017).

Testing the role of catecholamines noradrenaline and dopamine

In a double-blind, placebo-controlled, between-subjects study design, we assigned subjects (N=60) randomly to one of three experimental groups: amisulpride, propranolol, or placebo. One group received 40 mg of the β-adrenoceptor antagonist propranolol to alter noradrenaline function, while another group was administered 400 mg of the D2/D3 antagonist amisulpride to alter dopamine function. Because of different pharmacokinetic properties, these drugs were administered at different times (Figure 1d); the placebo group received inert substances at both administration times to match the two antagonists' schedules. One subject (amisulpride group) was excluded from the analysis due to a lack of engagement with the task. Reported findings were corrected for IQ and mood, as drug groups differed marginally in those measures (Appendix 2—table 7), by adding WASI (Wechsler, 2013) and PANAS (Watson et al., 1988a) negative scores as covariates in each ANOVA. Similar results were obtained in analyses that corrected for physiological effects and in analyses without covariates (cf. Appendix 1).

Increased exploration when information can subsequently be exploited

Our task embodied two decision-horizon conditions, a short and a long one. To assess whether subjects explored more in the long horizon condition, in which additional information can inform later choices, we examined which bandit subjects chose in their first draw (in accordance with the horizon task [Wilson et al., 2014]), irrespective of their drug group. A marker of exploration here is evident if subjects chose bandits with lower expected values, computed as the mean value of the initial samples shown (trials where the novel bandit was chosen were excluded). As expected, subjects chose bandits with a lower expected value in the long compared to the short horizon (repeated-measures ANOVA for the expected value: F(1, 56) = 19.457, p<0.001, η2 = 0.258; Figure 2a). To confirm that this was a consequence of increased exploration, we analysed the proportion of trials in which the high-value option was chosen (i.e. the bandit with the highest expected reward based on its initial samples) and found that subjects (especially those with higher IQ) sampled from it more in the short compared to the long horizon (WASI-by-horizon interaction: F(1, 54) = 13.304, p = 0.001, η2 = 0.198; horizon main effect: F(1, 54) = 3.909, p = 0.053, η2 = 0.068; Figure 3a), confirming a reduction in exploitation when this information could be subsequently used. Interestingly, this frequency seemed to be marginally higher in the amisulpride group, suggesting an overall higher tendency to exploitation following dopamine blockade (cf. Appendix 1). This horizon-specific behaviour resulted in a lower reward on the first sample in the long compared to the short horizon (F(1, 56) = 23.922, p<0.001, η2 = 0.299; Figure 2c). When we tested whether subjects were more likely to choose options they knew less about (computed as the mean number of initial samples shown), we found that subjects chose less known (i.e. more informative) bandits more often in the long compared to the short horizon (F(1, 56) = 58.78, p<0.001, η2 = 0.512; Figure 2b).

Figure 2. Benefits of exploration.

To investigate the effect of information on performance, we collapsed subjects over all three treatment groups. (a) The expected value (average of its initial samples) of the first chosen bandit as a function of horizon. Subjects chose bandits with a lower expected value (i.e. they explored more) in the long horizon compared to the short horizon. (b) The mean number of samples for the first chosen bandit as a function of horizon. Subjects chose less known (i.e. more informative) bandits more in the long compared to the short horizon. (c) The first draw in the long horizon led to a lower reward than the first draw in the short horizon, indicating that subjects sacrificed larger initial outcomes for the benefit of more information. This additional information helped subjects make better decisions in the long run, leading to higher earnings over all draws in the long horizon. For values and statistics, see Appendix 2—table 3. For response times and details about all long-horizon samples, see Figure 2—figure supplement 1. *** = p<0.001. Data are shown as mean ± 1 SEM and each dot/line represents a subject.


Figure 2—figure supplement 1. Further analysis of long horizon draws.


(a) The first draw in the long horizon led to a lower reward than in the short horizon, indicating more exploration, while the subsequent draws led to a higher reward, indicating that this additional information helped making better decisions in the long run. (b) The first draw's response time was the highest, decreasing with each subsequent draw. Long horizon trials in which subjects started with (c) an exploitation draw (choosing the bandit with the highest expected value) led to little increase in reward (y-axis: difference between obtained reward and highest reward of initial samples; linear regression slope coefficient: mean = 0.118, sd = 0.038), whereas trials in which they started with (d) an exploration draw led to a large increase in reward (linear regression slope coefficient: mean = 0.028, sd = 0.041). This larger increase in reward when starting by exploring (slope is higher: t(58) = -12.161, p<0.001, d = −1.583) indicates that the information gained through exploration led to higher long-term outcomes. Data are shown as mean ± 1 SEM and each dot represents one subject.

Figure 3. Behavioural horizon and drug effects.

Choice patterns in the first draw for each horizon and drug group (propranolol, placebo and amisulpride). (a) Subjects sampled from the high-value bandit (i.e. the bandit with the highest average reward of initial samples) more in the short horizon compared to the long horizon, indicating reduced exploitation. (b) Subjects sampled from the low-value bandit more in the long horizon compared to the short horizon, indicating value-free random exploration, but subjects in the propranolol group sampled less from it overall, and (c) were more consistent in their choices overall, indicating that noradrenaline blockade reduces value-free random exploration. (d) Subjects sampled from the novel bandit more in the long horizon compared to the short horizon, indicating novelty exploration. Please note that some horizon effects were modulated by subjects’ intellectual abilities when additionally controlling for them (Appendix 2—table 4). Horizontal bars represent rm-ANOVA (thick) and pairwise comparisons (thin). ∼ = p<0.07, * = p<0.05, ** = p<0.01. Data are shown as mean ± 1 SEM and each line represents one subject. For values and statistics, see Appendix 2—table 4. For response times and frequencies specific to the displayed bandits, see Figure 3—figure supplements 1–2.


Figure 3—figure supplement 1. Response time (RT) analysis per bandit.


There was no difference in RT depending on which bandit was chosen. For details and statistics cf. Appendix 1.
Figure 3—figure supplement 2. Proportion of draws per bandit combination (x-axis).


(a) The high-value bandit was picked more when there was no novel bandit, and less when the high-value bandit was less certain. (b) The novel bandit was picked most when the high-value bandit was less certain, next most when the high-value bandit was more certain, and least when both the certain-standard and standard bandits were present. (c) The low-value bandit was picked less when the high-value bandit was more certain. For statistics see Appendix 1.

Next, to evaluate whether subjects used the additional information beneficially in the long horizon condition, we compared the average reward obtained in the long horizon (across all six draws) with that in the short horizon (one draw). We found that the average reward was higher in the long horizon (F(1, 56) = 103.759, p<0.001, η2 = 0.649; Figure 2c), indicating that subjects tended to choose less optimal bandits at first but subsequently learnt to appropriately exploit the harvested information to guide choices of better bandits in the long run. Additionally, when looking specifically at the long horizon condition, we found that subjects earned more when their first draw was explorative versus exploitative (Figure 2—figure supplement 1c–d; cf. Appendix 2 for details).

Subjects demonstrate value-free random behaviour

Value-free random exploration (analogous to ϵ-greedy) predicts that, ϵ% of the time, each option will have an equal probability of being chosen. In such a regime (compared to more complex strategies that would favour options with a higher expected value given similar uncertainty), the probability of choosing bandits with a low expected value (here the low-value bandit; Figure 1e) will be higher (Figure 1—figure supplement 3). We investigated whether the frequency of picking the low-value bandit was increased in the long horizon condition across all subjects (i.e. when exploration is useful), and found a significant main effect of horizon (F(1, 54) = 4.069, p = 0.049, η2 = 0.07; Figure 3b). This demonstrates that value-free random exploration is utilised more when exploration is beneficial.

Value-free random behaviour is modulated by noradrenaline function

When we tested whether value-free random exploration was sensitive to neuromodulatory influences, we found a difference in how often drug groups sampled from the low-value option (drug main effect: F(2, 54) = 7.003, p = 0.002, η2 = 0.206; drug-by-horizon interaction: F(2, 54) = 2.154, p = 0.126, η2 = 0.074; Figure 3b). This was driven by the propranolol group choosing the low-value option significantly less often than the other two groups (placebo vs propranolol: t(40) = 2.923, p = 0.005, d = 0.654; amisulpride vs propranolol: t(38) = 2.171, p = 0.034, d = 0.496), with no difference between amisulpride and placebo (t(38) = -0.587, p = 0.559, d = 0.133). These findings demonstrate that a key feature of value-free random exploration, the frequency of choosing low-value bandits, is sensitive to influences from noradrenaline.

To further examine drug effects on value-free random exploration, we assessed a second prediction, namely choice consistency. Because value-free random exploration ignores all prior information and chooses randomly, it should result in decreased choice consistency when subjects are presented with identical choice options (Figure 1—figure supplements 2 and 4), in contrast to more complex strategies that are consistently biased towards, for example, the rewarding or the information-providing bandit. To this end, each trial was duplicated in our task, allowing us to compute consistency as the percentage of times subjects sampled from an identical bandit when facing the exact same choice options. In line with the above analysis, we found a difference in the consistency with which drug groups sampled from the options (drug main effect: F(2, 54) = 7.154, p = 0.002, η2 = 0.209; horizon main effect: F(1, 54) = 1.333, p = 0.253, η2 = 0.024; drug-by-horizon interaction: F(2, 54) = 3.352, p = 0.042, η2 = 0.11; Figure 3c), driven by the fact that the propranolol group chose significantly more consistently than the other two groups (pairwise comparisons: placebo vs propranolol: t(40) = -3.525, p = 0.001, d = 0.788; amisulpride vs placebo: t(38) = 1.107, p = 0.272, d = 0.251; amisulpride vs propranolol: t(38) = -2.267, p = 0.026, d = 0.514). Please see Appendix 1 for further discussion and analysis of the drug-by-horizon interaction. Taken together, these results indicate that value-free random exploration depends critically on noradrenaline functioning, such that an attenuation of noradrenaline leads to a reduction in value-free random exploration.

Novelty exploration is unaffected by catecholaminergic drugs

Next, we examined whether subjects show evidence for novelty exploration by choosing the novel bandit for which there was no prior information (i.e. no initial samples), as predicted by model simulations (Figure 1f). We found a significant main effect of horizon (F(1, 54) = 5.593, p = 0.022, η2 = 0.094; WASI-by-horizon interaction: F(1, 54) = 13.897, p<0.001, η2 = 0.205; Figure 3d) indicating that subjects explored the novel bandit significantly more often in the long horizon condition, and this was particularly strong for subjects with a higher IQ. We next assessed whether novelty exploration was sensitive to our drug manipulation, but found no drug effects on the novel bandit (F(2, 54) = 1.498, p = 0.233, η2 = 0.053; drug-by-horizon interaction: F(2, 54) = 0.542, p = 0.584, η2 = 0.02; Figure 3d). Thus, there was no evidence that an attenuation of dopamine or noradrenaline function impacts novelty exploration in this task.

Subjects combine computationally demanding strategies and exploration heuristics

To examine the contributions of different exploration strategies to choice behaviour, we fitted a set of computational models to subjects’ behaviour, building on models developed in previous studies (Gershman, 2018). In particular, we compared models incorporating UCB, Thompson sampling, an ϵ-greedy algorithm and a novelty bonus (cf. Materials and methods). Essentially, each model makes different exploration predictions. In the Thompson model, Thompson sampling (Thompson, 1933; Agrawal and Goyal, 2012) leads to an uncertainty-driven, value-based random exploration, where both expected value and uncertainty contribute to choice. In this model, higher uncertainty leads to more exploration such that, instead of selecting the bandit with the highest mean, bandits are chosen relative to how often a random sample would yield the highest outcome, thus accounting for uncertainty (Schulz and Gershman, 2019). The UCB model (Auer, 2003; Carpentier et al., 2011), capturing directed exploration, predicts that each bandit is chosen according to a mixture of expected value and an additional expected information gain (Schulz and Gershman, 2019). This is realised by adding a bonus to the expected value of each option, proportional to how informative it would be to select this option (i.e. the higher the uncertainty in the option's value, the higher the information gain). This computation is then passed through a softmax decision model, capturing value-based random exploration. Novelty exploration is a simplified version of the information bonus in the UCB algorithm, which applies only to entirely novel options. It defines the intrinsic value of selecting a bandit about which nothing is known, and thus saves demanding computations of uncertainty for each bandit. Last, the value-free random ϵ-greedy algorithm selects any bandit ϵ% of the time, irrespective of prior information about that bandit. For additional models cf. Appendix 1.
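To make the model space concrete, the sketch below shows one way these components can be combined into a single choice rule, in the spirit of the best-fitting combinations reported below. How exactly η enters the valuation here, and all parameter values, are our own simplifying assumptions, not the paper's fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_choice(post_means, post_stds, seen, epsilon=0.1,
                    novelty_bonus=2.0, prior_mean=5.0, prior_std=2.0):
    # Thompson sampling with value-free (ϵ-greedy) and novelty (η)
    # components. `seen` flags bandits with at least one initial sample;
    # unseen bandits are valued at the prior belief plus a novelty bonus.
    if rng.random() < epsilon:                      # value-free random exploration
        return int(rng.integers(len(post_means)))
    mu = np.where(seen, post_means, prior_mean + novelty_bonus)
    sd = np.where(seen, post_stds, prior_std)
    samples = rng.normal(mu, sd)                    # one posterior sample per bandit
    return int(np.argmax(samples))                  # uncertainty-driven exploration
```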

We used cross-validation for model selection (Figure 4a), comparing the likelihood of held-out data across different models, an approach that adequately arbitrates between model accuracy and complexity. The winning model encompasses uncertainty-driven value-based random exploration (Thompson sampling) with value-free random exploration (ϵ-greedy parameter) and novelty exploration (novelty bonus parameter η). The winning model predicted held-out data with 55.25% accuracy (SD = 8.36%; chance level = 33.33%). Similarly to previous studies (Gershman, 2018), the hybrid model combining UCB and Thompson sampling explained the data better than each of those processes alone, but this was no longer the case when accounting for novelty and value-free random exploration (Figure 4a). All parameters of the winning model could be accurately recovered (Figure 4b; Figure 4—figure supplement 3). Interestingly, although the second- and third-place models made different predictions about the complex exploration strategy, using directed exploration with value-based random exploration (UCB) or a combination of complex strategies (hybrid), respectively, they share the characteristic of benefitting from value-free random and novelty exploration. This highlights that subjects used a mixture of computationally demanding and heuristic exploration strategies.
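For reference, a schematic of this model-selection procedure: k-fold cross-validation of held-out choice likelihood. `fit_model` and `predict_choice_prob` are placeholders for a candidate model's fitting and trial-wise prediction routines, not functions from the study's codebase.

```python
import numpy as np

def cv_heldout_score(trials, fit_model, predict_choice_prob, k=10, seed=0):
    # Mean probability assigned to held-out choices under k-fold
    # cross-validation; chance level with three bandits is 1/3.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(trials)), k)
    scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(trials)), test_idx)
        params = fit_model([trials[i] for i in train_idx])      # fit on k-1 folds
        probs = [predict_choice_prob(trials[i], params) for i in test_idx]
        scores.append(np.mean(probs))                           # held-out likelihood
    return float(np.mean(scores))
```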

Figure 4. Subjects use a mixture of exploration strategies.

(a) A 10-fold cross-validation of the likelihood of held-out data was used for model selection (chance level = 33.3%; for model selection at the individual level Figure 4—figure supplement 1). The Thompson model with both the ϵ-greedy parameter and the novelty bonus η best predicted held-out data. (b) Model simulations across all combinations of four values per parameter (4^7 = 16,384) demonstrated good recoverability of model parameters (for correlations between behaviour and model parameters Figure 4—figure supplement 2); σ0 is the prior variance and Q0 is the prior mean (for parameter recovery correlation plots Figure 4—figure supplement 3). Subscript 1 denotes short-horizon and subscript 2 long-horizon specific parameters. For values and parameter details Appendix 2—table 5.


Figure 4—figure supplement 1. Model comparison: further evaluations.


(a) The winning model at the group level (the Thompson model with both ϵ and η) was also the one that accounted best for the largest number of subjects. (b) The Thompson+ϵ+η model and the UCB+ϵ+η model are tied for first in subject count when comparing all models; the Thompson+ϵ+η model is therefore still the winning model, as it has the highest average likelihood of held-out data.
Figure 4—figure supplement 2. Correlations between model parameters and behaviour.


The behavioural indicators of (a) value-free random exploration (left panel: draws from the low-value bandit; right panel: consistency) correlated with the ϵ-greedy parameter, and those of (b) novelty exploration (draws from the novel bandit) correlated with the novelty bonus η.
Figure 4—figure supplement 3. Parameter recovery analysis details.


For each of the seven parameters of the winning model, we took four values, equally spread within the parameter range. We simulated behaviour using every combination (4^7 = 16,384), fitted the model and analysed how well the generative parameters (original values) correlated with the recovered ones (fitted parameters). Pearson correlation coefficient = r. Each dot represents one simulation.

Noradrenaline controls value-free random exploration

To more formally compare the impact of catecholaminergic drugs on different exploration strategies, we assessed the free parameters of the winning model between drug groups (Figure 5; cf. Appendix 2—table 6 for exact values). First, we examined the ϵ-greedy parameter, which captures the contribution of value-free random exploration to choice behaviour, and assessed how it differed between drug groups. A significant drug main effect (F(2, 54) = 6.722, p = 0.002, η2 = 0.199; drug-by-horizon interaction: F(2, 54) = 1.305, p = 0.28, η2 = 0.046; Figure 5a) demonstrates that the drug groups differed in how strongly they deployed this exploration strategy. Post-hoc analysis revealed that subjects with reduced noradrenaline functioning had the lowest values of ϵ (pairwise comparisons: placebo vs propranolol: t(40) = 3.177, p = 0.002, d = 0.71; amisulpride vs propranolol: t(38) = 2.723, p = 0.009, d = 0.626), with no significant difference between amisulpride and placebo (t(38) = 0.251, p = 0.802, d = 0.057). Critically, the effect on ϵ was also significant when the complex exploration strategy was directed exploration with value-based random exploration (second-place model) and marginally significant when it was a combination of the above (third-place model; cf. Appendix 1).

Figure 5. Drug effects on model parameters.

The winning model’s parameters were fitted to each subject’s first draw (for model simulations Figure 5—figure supplement 1). (a) Subjects had higher values of ϵ (value-free random exploration) in the long compared to the short horizon. Notably, subjects in the propranolol group had lower values of ϵ overall, indicating that attenuation of noradrenaline functioning reduces value-free random exploration. Subjects from all groups (b) assigned a similar value to novelty, captured by the novelty bonus η, which was higher (more novelty exploration) in the long compared to the short horizon. (c) The groups had similar beliefs Q0 about a bandit's mean before seeing any initial samples and (d) were similarly uncertain σ0 about it (for gender effects Figure 5—figure supplement 2). Please note that some horizon effects were modulated by subjects’ intellectual abilities when additionally controlling for them (Appendix 2—table 6). ** = p<0.01. Data are shown as mean ± 1 SEM and each dot/line represents one subject. For parameter values and statistics, see Appendix 2—table 6.


Figure 5—figure supplement 1. Simulated behaviour for Thompson+ϵ+η model.


We used each subject's fitted parameters to simulate behaviour (Ntrials = 4000). Data are shown as mean ± 1 SEM and each dot/line represents one agent.
Figure 5—figure supplement 2. Gender effect on prior variance parameter.


Mean values (across horizon conditions) of σ0 were larger for female subjects, whereas in the amisulpride group they were larger for male subjects. Data are shown as mean ± 1 SEM and each dot represents one subject.
Figure 5—figure supplement 3. Simulated behaviour for UCB+ϵ+η model.


We used each subject's fitted parameters to simulate behaviour (Ntrials = 4000). Data are shown as mean ± 1 SEM and each dot/line represents one agent.

The ϵ-greedy parameter was also closely linked to the above behavioural metrics (correlation of the ϵ-greedy parameter with draws from the low-value bandit: RPearson = 0.828, p<0.001; with choice consistency: RPearson = -0.596, p<0.001; Figure 4—figure supplement 2), and showed a similar horizon effect (horizon main effect: F(1, 54) = 1.968, p = 0.166, η2 = 0.035; WASI-by-horizon interaction: F(1, 54) = 6.08, p = 0.017, η2 = 0.101; Figure 5a). Our findings thus accord with the model-free analyses and demonstrate that noradrenaline blockade reduces value-free random exploration.

No drug effects on other parameters

The novelty bonus η captures the intrinsic reward of selecting a novel option. In line with the model-free behavioural findings, there was no difference between drug groups in terms of this effect (F(2, 54) = 0.249, p = 0.78, η2 = 0.009; drug-by-horizon interaction: F(2, 54) = 0.03, p = 0.971, η2 = 0.001). There was also a close alignment between model-based and model-agnostic analyses (correlation between the novelty bonus η and draws from the novel bandit: RPearson = 0.683, p<0.001; Figure 4—figure supplement 2), and we found a similarly increased novelty bonus effect in the long horizon in subjects with a higher IQ (WASI-by-horizon interaction: F(1, 54) = 8.416, p = 0.005, η2 = 0.135; horizon main effect: F(1, 54) = 1.839, p = 0.181, η2 = 0.033; Figure 5b).

When analysing the additional model parameters, we found that subjects had similar prior beliefs about bandits, given by the initial estimate of a bandit’s mean (prior mean Q0: F(2, 54) = 0.118, p = 0.889, η2 = 0.004; Figure 5c) and their uncertainty about it (prior variance σ0: horizon main effect: F(1, 54) = 0.129, p = 0.721, η2 = 0.002; drug main effect: F(2, 54) = 0.06, p = 0.942, η2 = 0.002; drug-by-horizon interaction: F(2, 54) = 2.162, p = 0.125, η2 = 0.074; WASI-by-horizon interaction: F(1, 54) = 0.022, p = 0.882, η2 < 0.001; Figure 5d). Interestingly, our dopamine manipulation seemed to affect this uncertainty in a gender-specific manner, with female subjects having larger values of σ0 compared to males in the placebo group, and with the opposite being true in the amisulpride group (cf. Appendix 1). Taken together, these findings show that value-free random exploration was most sensitive to our drug manipulations.

Discussion

Solving the exploration-exploitation problem is non-trivial, and one suggestion is that humans solve it using computationally demanding exploration strategies (Gershman, 2018; Schulz and Gershman, 2019), taking account of the uncertainty (variance) as well as the expected reward (mean) of each choice. Although tracking summary statistics (e.g. mean and variance) is less resource costly than keeping track of full distributions (D'Acremont and Bossaerts, 2008), it nevertheless carries considerable costs when one has to keep track of multiple options, as in exploration. Indeed, in a three-bandit task such as that considered here, this necessitates computing six key statistics (a mean and a variance for each bandit), drastically taxing computational resources when selecting among choice options (Cogliati Dezza et al., 2019). Real-life decisions often comprise an unlimited range of options, requiring a multitude of key statistics to be tracked, potentially mandating the deployment of alternative, more efficient strategies. Here, we demonstrate that two additional, less resource-hungry heuristics are at play during human decision-making: value-free random exploration and novelty exploration.

By assigning intrinsic value (a novelty bonus [Krebs et al., 2009]) to options not encountered before (Foley et al., 2014), novelty exploration can be seen as an efficient simplification of demanding algorithms such as UCB (Auer, 2003; Carpentier et al., 2011). It is interesting to note that our winning model did not include UCB, but instead novelty exploration. This indicates humans might use such a novelty shortcut to explore unseen, or rarely visited, states to conserve computational costs when such a strategy is possible. A second exploration heuristic that requires minimal computational resources, value-free random exploration, also plays a role in our task. Even though less optimal, its simplicity and neural plausibility render it a viable strategy. Indeed, we observe an increase in performance in each model after adding ϵ, supporting the notion that this strategy is a relevant additional human exploration heuristic. Interestingly, the benefit of ϵ is somewhat smaller in a simple UCB model (without novelty bonus), which probably arises because value-based random exploration partially captures some of the increased noisiness. We show through converging behavioural and modelling measures that both value-free random and novelty exploration were deployed in a goal-directed manner, coupled with increased levels of exploration when this was strategically useful. Importantly, these heuristics were observed in all three best models (first, second and third place) even though each incorporated different exploration strategies. This suggests that the complex models make similar predictions in our task. This is also observed in our simulations, and demonstrates that value-free random exploration is at play even when accounting for other, value-based forms of random exploration (Gershman, 2018; Wilson et al., 2014), whether fixed or uncertainty-driven.

Exploration was captured in a similar manner to previous studies (Wilson et al., 2014), by comparing, in the same setting (i.e. same prior information), the first choice in a long decision horizon, where reward can be increased in the long term through information gain, with that in a short decision horizon, where information cannot subsequently be put to use. By changing the opportunity to benefit from the information gained with the first sample, the long horizon invites extended exploration (Wilson et al., 2014), which is also what we find in our study. This experimental manipulation is a well-established means of altering exploration and has been used extensively in previous studies (Wilson et al., 2014; Zajkowski et al., 2017; Warren et al., 2017; Wu et al., 2018). Nevertheless, there remains a possibility that a longer horizon may also affect the psychological nature of the task. In our task, reward outcomes were presented immediately after every draw, rendering it unlikely that perception of reward delays (i.e. delay discounting) is impacted. Moreover, a monetary bonus was given only at the end of the task, and thus did not impact the horizon manipulation. We also consider that our manipulation was unlikely to change effort in each horizon, because the reward (i.e. size of the apple) remained the same at every draw, resulting in an equivalent reward-effort ratio (Skvortsova et al., 2014; Hauser et al., 2017a; Walton and Bouret, 2019; Salamone et al., 2016). However, this issue can be addressed in further studies, for example by equating the amount of button presses across both conditions.

Value-free random exploration might reflect other influences, such as attentional lapses or impulsive motor responses. We consider these unlikely to be a significant factor at play here. Indeed, there are two key features that would signify such effects. Firstly, these influences would be independent of task condition. Secondly, they would be expected to lead to shorter, or more variable, response latencies. In our data, we observe an increase in value-free exploration in the long horizon condition in both behavioural measures and model parameters, speaking against an explanation based upon simple mistakes. Moreover, we did not observe a difference in response latency for choices that were related to value-free random exploration (cf. Appendix 1), further arguing against mistakes. Lastly, the sensitivity of value-free random exploration to propranolol supports this being a separate process, and previous studies using the same drug did not find an effect on task mistakes (e.g. on accuracy [Hauser et al., 2018; Jahn et al., 2018; Salamone et al., 2016; Hauser et al., 2019; Sokol-Hessner et al., 2015]). However, future studies could explore these exploration strategies in more detail, including by reference to subjects’ own self-reports.

It is still unclear how exploration strategies are implemented neurobiologically. Noradrenaline inputs arising from the locus coeruleus (LC; Rajkowski et al., 1994) are thought to modulate exploration (Schulz and Gershman, 2019; Aston-Jones and Cohen, 2005; Servan-Schreiber et al., 1990), although empirical data on its precise mechanisms and means of action remain limited. In this study, we found that noradrenaline impacted value-free random exploration, in contrast to novelty exploration and complex exploration. This might suggest that noradrenaline influences ongoing valuation or choice processes in a way that discards prior information. Importantly, this effect was observed whether the complex exploration was an uncertainty-driven value-based random exploration (winning model), a directed exploration with value-based random exploration (second-place model) or a combination of the above (third-place model; cf. Appendix 1). This is consistent with findings in rodents where enhanced anterior cingulate noradrenaline release leads to more random behaviour (Tervo et al., 2014). It is also consistent with pharmacological findings in monkeys that show enhanced choice consistency after reducing LC noradrenaline firing rates (Jahn et al., 2018). It would be interesting for future studies to determine, in more detail, whether value-free random exploration corrupts the value computation itself, or whether it exclusively biases the choice process.

We note that pupil diameter has been used as an indirect marker of noradrenaline activity (Joshi et al., 2016), although the link between the two is not always straightforward (Hauser et al., 2019). Because the effect of pharmacologically induced changes of noradrenaline levels on pupil size remains poorly understood (Hauser et al., 2019; Joshi and Gold, 2020), including the fact that previous studies found no effect of propranolol on pupil diameter (Hauser et al., 2019; Koudas et al., 2009), we opted against using pupillometry in this study. However, our current findings align with a previous human study showing an association between this indirect marker and exploration, although that study did not dissociate between the different potential exploration strategies that subjects could deploy (Jepma and Nieuwenhuis, 2011). Future studies might usefully include indirect measures of noradrenaline activity, for example pupillometry, to examine a potential link between natural variations in noradrenaline levels and a propensity towards value-free random exploration.

The LC has two known modes of synaptic signalling (Rajkowski et al., 1994), tonic and phasic, thought to have complementary roles (Dayan and Yu, 2006). Phasic noradrenaline is thought to act as a reset button (Dayan and Yu, 2006), rendering an agent agnostic to all previously accumulated information, a de facto signature of value-free random exploration. Tonic noradrenaline has been associated, although not consistently (Jepma et al., 2010), with increased exploration (Aston-Jones and Cohen, 2005; Usher et al., 1999), decision noise in rats (Kane et al., 2017) and, more specifically, with random as opposed to directed exploration strategies (Warren et al., 2017). This latter study unexpectedly found that boosting noradrenaline decreased (rather than increased) random exploration, which the authors speculated was due to an interplay with phasic signalling. Importantly, the drug used in that study also affects dopamine function, making it difficult to assign a precise interpretation to the finding. A consideration of this study influenced our decision to opt for drugs with high specificity for either dopamine or noradrenaline (Hauser et al., 2018), enabling us to reveal highly specific effects on value-free random exploration. Although the contributions of tonic and phasic noradrenaline signalling cannot be disentangled in our study, our findings align with theoretical accounts and non-primate animal findings, indicating that phasic noradrenaline promotes value-free random exploration.

Aside from this ‘reset signal’ role, noradrenaline has been assigned other roles, including a role in memory function (Sara et al., 1994; Rossetti and Carboni, 2005; Gibbs et al., 2010). To minimise a possible memory-related impact, we designed the task such that all necessary information was visible on the screen at all times. This means subjects did not have to memorise values for a given trial, rendering the task less susceptible to forgetting or other memory effects. Another role for noradrenaline relates to volatility and uncertainty estimation (Silvetti et al., 2013; Yu and Dayan, 2005; Nassar et al., 2012), as well as the energisation of behaviour (Varazzani et al., 2015; Silvetti et al., 2018). Non-human primate studies demonstrate a higher LC activation for high-effort choices, suggesting that noradrenaline release facilitates energy mobilisation (Varazzani et al., 2015). Theoretical models also suggest that the LC is involved in the control of effort exertion: it is thought to contribute to trading off effortful actions leading to large rewards against ‘effortless’ actions leading to small rewards, by modulating ‘raw’ reward values as a function of the required effort (Silvetti et al., 2018). Our task can be interpreted as encapsulating such a trade-off: complex exploration strategies are effortful but optimal in terms of reward gain, while value-free random exploration requires little effort but occasionally leads to low reward. Applying this model, a noradrenaline boost could optimise cognitive effort allocation for high reward gain (Silvetti et al., 2018), thereby facilitating complex exploration strategies over value-free random exploration. In such a framework, blocking noradrenaline release should decrease usage of complex exploration strategies, leading to an increase in value-free random exploration, which is the opposite of what we observed in our data. Another interpretation of an effort-facilitation model of noradrenaline is that a boost would help overcome cost, that is, the lack of immediate reward when selecting the low-value bandit, essentially providing a significant increase to the value of information gain. In line with our results, a decrease would interrupt this boost in valuation, removing an incentive to choose the low-value option. However, this theory is currently limited by the absence of empirical evidence for noradrenaline boosting valuation.

Noradrenaline blockade by propranolol has previously been shown to enhance metacognition (Hauser et al., 2017b), decrease information gathering (Hauser et al., 2018), and attenuate arousal-induced boosts in incidental memory (Hauser et al., 2019). All these findings, including the decrease in value-free random exploration found here, suggest that propranolol influences how neural noise affects information processing. In particular, the results indicate that under propranolol behaviour is less stochastic and less influenced by ‘task-irrelevant’ distractions. This aligns with theoretical ideas proposing that noradrenaline infuses noise in a temporally targeted way (Dayan and Yu, 2006), as well as with recent optogenetic evidence (Tervo et al., 2014). It also accords with studies implicating noradrenaline in attention shifts (for a review, see Trofimova and Robbins, 2016). Other gain-modulation theories of noradrenaline/catecholamine function have also proposed an effect on stochasticity (Aston-Jones and Cohen, 2005; Servan-Schreiber et al., 1990), although the hypothesised direction of effect is different (i.e. noradrenaline decreases stochasticity). Several aspects of noradrenaline functioning may explain these contradictory accounts of its link with stochasticity. For example, they might capture different parts of an assumed U-shaped noradrenaline functioning curve, and/or distinct activity modes of noradrenaline (i.e. tonic and phasic firing) (Aston-Jones and Cohen, 2005). Further studies could shed light on how different modes of activity affect value-free random exploration. This idea also extends to tasks where propranolol has been shown to attenuate discrimination between different levels of loss (with no effect on the value-based exploration parameter, referred to in these studies as consistency) (Rogers et al., 2004) and to reduce loss aversion (Sokol-Hessner et al., 2015). This hints at additional roles for noradrenaline in processing prior information and task-distractibility during exploration in loss-frame environments. Future studies investigating exploration in loss contexts might provide important additional information on these questions.

It is important to mention here that β-adrenergic receptors, the primary target of propranolol, have been shown (unlike α-adrenergic receptors) to increase synaptic inhibition within rat cortex (Waterhouse et al., 1982), specifically through inhibitory GABA-mediated transmission (Waterhouse et al., 1984). Additionally, β-adrenergic receptors are more concentrated in the intermediate layers of the prefrontal area (Goldman-Rakic et al., 1990), within which inhibition is favoured (Isaacson and Scanziani, 2011). Thus, inhibitory mechanisms might account for noradrenaline-related task-distractibility and randomness, or for the role of β-adrenergic receptors in executive function impairments (Salgado et al., 2016). This raises the question of whether blocking β-adrenergic receptors might lead to an accumulation of synaptic noradrenaline, and therefore act via α-adrenergic receptors. To the best of our knowledge, evidence for such an effect is limited. A second question is whether the observed effects are a pure consequence of propranolol’s impact on the brain, or whether they reflect peripheral effects of propranolol. When we examined peripheral markers (i.e. heart rate) and behaviour, we found no evidence for an effect on any of our findings, rendering such influences unlikely. However, future studies using drugs that exclusively target peripheral, but not central, noradrenaline receptors (e.g. De Martino et al., 2008) are needed to answer this question conclusively.

Dopamine has been ascribed multiple functions besides reward learning (Schultz et al., 1997), such as novelty seeking (Düzel et al., 2010; Wittmann et al., 2008; Costa et al., 2014) or exploration in general (Frank et al., 2009). In fact, studies have demonstrated that there are different types of dopaminergic neurons in the ventral tegmental area, and that some contribute to non-reward signals, such as saliency and novelty (Bromberg-Martin et al., 2010), suggesting a role in novelty exploration. Moreover, dopamine has been suggested to be important in an exploration-exploitation arbitration (Zajkowski et al., 2017; Kayser et al., 2015; Chakroun et al., 2019), although its precise role remains unclear, given reported effects on random exploration (Cinotti et al., 2019), on directed exploration (Costa et al., 2014; Frank et al., 2009), or no effects at all (Krugel et al., 2009). A recent study found no effect following dopamine blockade using haloperidol (Chakroun et al., 2019), a drug which interestingly also affects noradrenaline function (e.g. Fang and Yu, 1995; Toru and Takashima, 1985). Our results did not demonstrate any main effect of dopamine manipulation on exploration strategies, even though blocking dopamine was associated with a trend-level increase in exploitation (cf. Appendix 1). We believe it unlikely that this reflects an ineffective drug dose, as previous studies have found neurocognitive effects with the same dose (Hauser et al., 2019; Hauser et al., 2018; Kahnt et al., 2015; Kahnt and Tobler, 2017).

One possible reason for an absence of significant findings is that our dopaminergic blockade targets D2/D3 receptors rather than D1 receptors, a limitation due to a lack of specific D1 receptor blockers available for use in humans. An expectation of greater D1 involvement arises from theoretical models (Humphries et al., 2012) and a prefrontal hypothesis of exploration (Frank et al., 2009). Interestingly, we observed a weak gender-specific differential drug effect on subjects’ uncertainty about an expected reward, with women being more uncertain than men in the placebo setting, but more certain in the dopamine blockade setting (cf. Appendix 1). This might be meaningful, as other studies using the same drug have also found gender-specific behavioural drug effects (Soutschek et al., 2017). Upcoming novel drugs (Soutschek et al., 2020) might help unravel a D1 contribution to different forms of exploration. Additionally, future studies could use approved D2/D3 agonists (e.g. ropinirole) in a similar design to probe further whether enhancing dopamine leads to a general increase in exploration.

In conclusion, humans supplement computationally expensive exploration strategies with less resource demanding exploration heuristics, and as shown here the latter include value-free random and novelty exploration. Our finding that noradrenaline specifically influences value-free random exploration demonstrates that distinct exploration strategies may be under specific neuromodulator influence. Our current findings may also be relevant to enabling a richer understanding of disorders of exploration, such as attention-deficit/hyperactivity disorder (Hauser et al., 2016; Hauser et al., 2014) including how aberrant catecholamine function might contribute to its core behavioural impairments.

Materials and methods

Subjects

Sixty healthy volunteers aged 18–35 (mean = 23.22, SD = 3.615) participated in a double-blind, placebo-controlled, between-subjects study. The sample size was determined using power calculations taking effect sizes from our prior studies that used the same drug manipulations (Hauser et al., 2019; Hauser et al., 2018; Hauser et al., 2017b). Each subject was randomly allocated to one of three drug groups, controlling for an equal gender balance across all groups (cf. Appendix 1). Candidate subjects with a history of neurological or psychiatric disorders, current health issues, regular medications (except contraceptives), or prior allergic reactions to drugs were excluded from the study. Subjects had (self-reported) normal or corrected-to-normal vision. The groups consisted of 20 subjects each, matched (Appendix 2—table 1) for gender and age. To evaluate peripheral drug effects, heart rate and systolic and diastolic blood pressure were collected at three different time points: ‘at arrival’, ‘pre-task’ and ‘post-task’ (cf. Appendix 1 for details). Fifty minutes after administration of the second drug, subjects filled in the PANAS questionnaire (Watson et al., 1988a) and completed the WASI Matrix Reasoning subtest (Wechsler, 2013). Groups differed in mood (PANAS negative affect, cf. Appendix 1 for details) and marginally in intellectual abilities (WASI), so we controlled for these potential confounders in our analyses (cf. Appendix 1 for uncorrected results). Subjects were reimbursed for their participation on an hourly basis and received a bonus according to their performance (proportional to the sum of all the collected apples’ sizes). One subject from the amisulpride group was excluded due to not engaging in the task and performing at chance level. The study was approved by the UCL research ethics committee and all subjects provided written informed consent.

Pharmacological manipulation

To reduce noradrenaline functioning, we administered 40 mg of the non-selective β-adrenoceptor antagonist propranolol 60 min before the task (Figure 1D). To reduce dopamine functioning, we administered 400 mg of the selective D2/D3 antagonist amisulpride 90 min before the task. Because of their different pharmacokinetic properties, the drugs were administered at different times. Each drug group received its drug at the corresponding time point and a placebo at the other time point. The placebo group received placebo at both time points, in line with our previous studies (Hauser et al., 2019; Hauser et al., 2018; Hauser et al., 2017b).

Experimental paradigm

To quantify different exploration strategies, we developed a multi-armed bandit task implemented using Cogent (http://www.vislab.ucl.ac.uk/cogent.php) for MATLAB (R2018a). Subjects had to choose between bandits (i.e. trees) that produced samples (i.e. apples) with varying reward (i.e. size) in two different horizon conditions (Figure 1a–b). Bandits were displayed during the entire duration of a trial and there was no time limit for sampling from (choosing) the bandits. The sizes of the apples collected were summed and converted to an amount of juice (feedback), which was displayed for 2000 ms at the end of each trial. Subjects were instructed to endeavour to make the most juice and were told that they would receive a cash bonus proportional to their performance. Overall, subjects received £10/hr and a mean bonus of £1.12 (std: £0.06).

Similar to the horizon task (Wilson et al., 2014), to induce different extents of exploration, we manipulated the horizon (i.e. number of apples to be picked: one in the short horizon, six in the long horizon) between trials. This horizon-manipulation, which has been extensively used to modulate exploratory behaviour (Zajkowski et al., 2017; Warren et al., 2017; Wu et al., 2018; Guo and Yu, 2018), promotes exploration in the long horizon condition as there are more opportunities to gather reward.

Within a single trial, each bandit had a different mean reward µ (i.e. apple size) and an associated uncertainty, captured by the number of initial samples (i.e. number of apples shown at the beginning of the trial). Each bandit (i.e. tree) i was from one of four generative processes (Figure 1c) characterised by different means μ_i and numbers of initial samples. The rewards (apple sizes) for each bandit were sampled from a normal distribution with mean μ_i, specific to the bandit, and with a fixed variance, S² = 0.8. The rewards were those sampled values rounded to the closest integer. Each distribution was truncated to [2, 10], meaning that rewards with values above or below this interval were excluded, resulting in a total of nine possible rewards (i.e. nine different apple sizes; Figure 1—figure supplement 1 for a representation). The ‘certain-standard bandit’ provided three initial samples and on every trial its mean μ_cs was sampled from a normal distribution: μ_cs ~ N(5.5, 1.4). The ‘standard bandit’ provided one initial sample and, to make sure that its mean μ_s was comparable to μ_cs, the trials were split equally between the four following cases: {μ_s = μ_cs + 1; μ_s = μ_cs − 1; μ_s = μ_cs + 2; μ_s = μ_cs − 2}. The ‘novel bandit’ provided no initial samples and its mean μ_n was made comparable to both μ_cs and μ_s by splitting the trials equally between the eight following cases:

{μ_n = μ_cs + 1; μ_n = μ_cs − 1; μ_n = μ_cs + 2; μ_n = μ_cs − 2; μ_n = μ_s + 1; μ_n = μ_s − 1; μ_n = μ_s + 2; μ_n = μ_s − 2}.

The ‘low bandit’ provided one initial sample, which was smaller than all the other bandits’ means on that trial: μ_l = min(μ_cs, μ_s, μ_n) − 1. We ensured that the initial sample from the low-value bandit was the smallest by resampling from each bandit in the trials where that was not the case. To make sure that our task captures heuristic exploration strategies, we simulated behaviour (Figure 1). Additionally, in each trial, to avoid some exploration strategies overshadowing others, only three of the four different bandit types were available to choose from. Based on the mean of the initial samples, we identified the high-value option (i.e. the bandit with the highest expected reward) in trials where both the certain-standard and the standard bandit were present.
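To make the generative process concrete, the following Python sketch (ours, not the task code, which was implemented in MATLAB; names are illustrative, the truncation is implemented by resampling, and the counterbalancing of mean offsets across trials is omitted) generates one trial's bandit means and initial samples:

```python
import numpy as np

rng = np.random.default_rng(0)
S2 = 0.8  # fixed sampling variance of the reward distributions

def draw_reward(mu):
    """Sample one apple size: normal draw, rounded, kept only if within [2, 10]."""
    while True:
        r = int(round(rng.normal(mu, np.sqrt(S2))))
        if 2 <= r <= 10:
            return r

# Certain-standard bandit: mean drawn from N(5.5, 1.4) (second argument treated as s.d. here).
mu_cs = rng.normal(5.5, 1.4)
# Standard bandit: mean offset from mu_cs by +/-1 or +/-2.
mu_s = mu_cs + rng.choice([-2, -1, 1, 2])
# Novel bandit: mean offset from either mu_cs or mu_s by +/-1 or +/-2.
mu_n = rng.choice([mu_cs, mu_s]) + rng.choice([-2, -1, 1, 2])
# Low-value bandit: its mean lies below all the other means on this trial.
mu_l = min(mu_cs, mu_s, mu_n) - 1

initial_samples = {
    'certain_standard': [draw_reward(mu_cs) for _ in range(3)],
    'standard': [draw_reward(mu_s)],
    'low': [draw_reward(mu_l)],
    'novel': [],  # no initial samples for the novel bandit
}
```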

There were 25 trials for each of the four three-bandit combinations, making a total of 100 different trials. These were then duplicated to measure choice consistency, defined as the frequency of making the same choice on identical trials (in contrast to a previous propranolol study where consistency was defined in terms of a value-based exploration parameter [Sokol-Hessner et al., 2015]). Each subject played these 200 trials in both a short and a long horizon setting, resulting in a total of 400 trials. The trials were randomly assigned to one of four blocks and subjects were given a short break at the end of each block. To prevent learning, the bandits’ positions (left, middle or right) as well as their colours (eight sets of three different colours) were shuffled between trials. To ensure subjects could distinguish different apple sizes and understood that apples from the same tree were always of similar size (generated from a normal distribution), they underwent training prior to the main experiment. In training, based on three displayed apples of similar size, they had to guess which of two further apples was most likely to come from the same tree, and then received feedback about their choice.

Statistical analyses

All statistical analyses were performed using the R Statistical Software (R Development Core Team, 2011). For computing ANOVA tests and pairwise comparisons the ‘rstatix’ package was used, and for computing effect sizes the ‘lsr’ package (Navarro, 2015) was used. To ensure consistent performance across all subjects, we excluded one outlier subject (belonging to the amisulpride group) from our analysis, due to not engaging in the task and performing at chance level (defined as randomly sampling one out of three bandits, that is 33%). Each bandit's selection frequency for a horizon condition was computed over all 200 trials and not only over the trials where this specific bandit was present (i.e. 3/4 of 200 = 150 trials). In all analyses comparing horizon conditions, except when looking at score values (Figure 2c), only the first draw of the long horizon was used. We compared behavioural measures and model parameters using (paired-samples) t-tests and repeated-measures (rm-) ANOVAs with a between-subject factor of drug group (propranolol group, amisulpride group, placebo group) and a within-subject factor of horizon (long, short). Information seeking, expected values and scores were analysed using rm-ANOVAs with a within-subject factor of horizon. Measures that were horizon-independent (e.g. prior mean) were analysed using one-way ANOVAs with a between-subject factor of drug group. As drug groups differed in negative affect (Appendix 2—table 1), which, through its relationship to anxiety (Watson et al., 1988b), is thought to affect cognition (Bishop and Gagne, 2018) and potentially exploration (de Visser et al., 2010), we corrected for negative affect (PANAS) and IQ (WASI) in each analysis by adding those two measures as covariates in each ANOVA mentioned above (cf. Appendix 1 for analyses without covariates and analyses with physiological effects as additional covariates). We report effect sizes using partial eta squared (η2) for ANOVAs and Cohen’s d for t-tests (Richardson, 2011).

Computational modelling

We adapted a set of Bayesian generative models from previous studies (Gershman, 2018), where each model assumed that different characteristics account for subjects’ behaviour. The binary indicators (c_vf, c_n) denote which components (value-free random exploration and novelty exploration, respectively) were included in a given model. The value of each bandit is represented as a distribution N(Q, S²), with the sampling variance S² = 0.8 fixed to its generative value. Subjects have prior beliefs about bandits’ values, which we assume to be Gaussian with mean Q0 and uncertainty σ0. The subject's initial estimate of a bandit’s mean (Q0; prior mean) and the uncertainty about it (σ0; prior variance) are free parameters.

These beliefs are updated according to Bayes rule (detailed below) for each initial sample (note that there are no updates for the novel bandit).

Mean and variance update rules

At each time point t at which a sample m from one of the bandits is presented, the expected mean Q and precision τ = 1/σ² of the corresponding bandit i are updated as follows:

$$Q_{i,t+1} = \frac{\tau_{i,t}\,Q_{i,t} + \tau_{samp}\,m}{\tau_{i,t} + \tau_{samp}}$$

$$\tau_{i,t+1} = \tau_{i,t} + \tau_{samp}$$

where τ_samp = 1/S² is the sampling precision, with the sampling variance S² = 0.8 fixed. These update rules are equivalent to using a Kalman filter (Bishop, 2006) in stationary bandits.
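As a minimal illustration, the update above can be written in a few lines of Python (our sketch with illustrative names; the original analysis was implemented in MATLAB):

```python
S2 = 0.8              # fixed sampling variance
tau_samp = 1.0 / S2   # sampling precision

def update_belief(Q, tau, m):
    """Precision-weighted update of a bandit's estimated mean Q and precision tau
    after observing a sample m (the two update equations above)."""
    Q_new = (tau * Q + tau_samp * m) / (tau + tau_samp)
    tau_new = tau + tau_samp
    return Q_new, tau_new

# Example: prior mean Q0 = 5 with prior variance sigma0 = 1.5, then an apple of size 7.
Q, tau = 5.0, 1.0 / 1.5
Q, tau = update_belief(Q, tau, 7)
```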

We examined three base models: the UCB model, the Thompson model, and the hybrid model. The UCB model encompasses the UCB algorithm (capturing directed exploration) and a softmax choice function (capturing value-based random exploration). The Thompson model reflects Thompson sampling (capturing an uncertainty-driven value-based random exploration). The hybrid model captures the contribution of the UCB model and the Thompson model, essentially a mixture of the two. We computed three extensions of each model by adding either value-free random exploration ((c_vf, c_n) = (1, 0)), novelty exploration ((c_vf, c_n) = (0, 1)) or both heuristics ((c_vf, c_n) = (1, 1)), leading to a total of 12 models (see the labels on the x-axis in Figure 4a; (c_vf, c_n) = (0, 0) is the model with no extension). For additional models cf. Appendix 1. A coefficient c_vf = 1 indicates that an ϵ-greedy component was added to the decision rule, ensuring that once in a while (a proportion ϵ of the time) an option other than the predicted one is selected. A coefficient c_n = 1 indicates that the novelty bonus η is added to the computation of the value of novel bandits; the Kronecker delta δ in front of this bonus ensures that it is only applied to the novel bandit. The models and their free parameters (summarised in Appendix 2—table 5) are described in detail below.

Choice rules

UCB model

In this model, an information bonus γ is added to the expected reward of each option, scaling with the option’s uncertainty (UCB). The value of each bandit i at time point t is:

$$V_{i,t} = Q_{i,t} + \gamma\,\sigma_{i,t} + c_n\,\eta\,\delta_{i,\text{novel}}$$

The probability of choosing bandit i was given by passing this into the softmax decision function:

$$P(c_t = i) = \frac{e^{\beta V_{i,t}}}{\sum_{x} e^{\beta V_{x,t}}}\,(1 - c_{vf}\,\epsilon) + c_{vf}\,\frac{\epsilon}{3}$$

where β is the inverse temperature of the softmax (lower values producing more value-based random exploration), and the coefficient c_vf adds the value-free random exploration component.
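A Python sketch of this choice rule (our illustrative names; `novel` is a 0/1 indicator playing the role of the Kronecker delta above):

```python
import numpy as np

def ucb_choice_probs(Q, sigma, beta, gamma, eps, eta, novel, c_vf=1, c_n=1):
    """UCB choice probabilities: information bonus (gamma), novelty bonus (eta,
    applied only where novel == 1) and the value-free epsilon mixture."""
    V = Q + gamma * sigma + c_n * eta * novel           # option values
    w = np.exp(beta * V - np.max(beta * V))             # softmax, shifted for stability
    p = w / w.sum()
    return (1 - c_vf * eps) * p + c_vf * eps / len(Q)   # epsilon spread over the bandits

probs = ucb_choice_probs(Q=np.array([6.0, 5.0, 5.0]),
                         sigma=np.array([0.5, 0.9, 1.5]),
                         beta=2.0, gamma=0.5, eps=0.1, eta=2.0,
                         novel=np.array([0, 0, 1]))
```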

Thompson model

In this model, based on Thompson sampling, the overall uncertainty can be seen as a more refined version of a decision temperature (Gershman, 2018). The value of each bandit i is as before:

$$V_{i,t} = Q_{i,t} + c_n\,\eta\,\delta_{i,\text{novel}}$$

A sample x_{i,t} ~ N(V_{i,t}, σ_{i,t}²) is taken from each bandit. The probability of choosing bandit i depends on the probability that all pairwise differences between the sample from bandit i and those from the other bandits j ≠ i are greater than or equal to 0 (see the probability of maximum utility choice rule [Speekenbrink and Konstantinidis, 2015]). In our task, because three bandits were present, two pairwise difference scores (contained in the two-dimensional vector u) were computed for each bandit. The probability of choosing bandit i is:

$$P(c_t = i) = P(\forall j \neq i : x_{i,t} > x_{j,t})\,(1 - c_{vf}\,\epsilon) + c_{vf}\,\frac{\epsilon}{3}$$

$$P(c_t = i) = \left(\int_0^{\infty}\!\int_0^{\infty} \phi(u;\,M_{i,t},\,C_{i,t})\,du\right)(1 - c_{vf}\,\epsilon) + c_{vf}\,\frac{\epsilon}{3}$$

where ɸ is the multivariate normal density function with mean vector

$$M_{i,t} = A_i \begin{pmatrix} V_{1,t} \\ V_{2,t} \\ V_{3,t} \end{pmatrix}$$

and covariance matrix

$$C_{i,t} = A_i \begin{pmatrix} \sigma_{1,t}^2 & 0 & 0 \\ 0 & \sigma_{2,t}^2 & 0 \\ 0 & 0 & \sigma_{3,t}^2 \end{pmatrix} A_i^{T}$$

where the matrix A_i computes the pairwise differences between bandit i and the other bandits. For example, for bandit i = 1:

$$A_1 = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \end{pmatrix}$$
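The same choice probabilities can be approximated by Monte Carlo sampling rather than by evaluating the multivariate normal integral; the sketch below (ours, with illustrative names) converges to the integral form as the number of draws grows:

```python
import numpy as np

def thompson_choice_probs(V, sigma, eps, c_vf=1, n_draws=100_000, seed=0):
    """Monte Carlo version of the Thompson choice rule: the probability that
    bandit i's sample exceeds all the others, mixed with the epsilon lapse."""
    rng = np.random.default_rng(seed)
    x = rng.normal(V, sigma, size=(n_draws, len(V)))  # one sample per bandit per draw
    wins = np.bincount(np.argmax(x, axis=1), minlength=len(V)) / n_draws
    return (1 - c_vf * eps) * wins + c_vf * eps / len(V)

p = thompson_choice_probs(V=np.array([6.0, 5.0, 5.0]),
                          sigma=np.array([0.5, 0.9, 1.5]), eps=0.1)
```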

Hybrid model

This model allows a combination of the UCB model and the Thompson model. The probability of choosing bandit i is:

$$P(c_t = i) = \big(w\,P_{UCB}(c_t = i) + (1 - w)\,P_{Thompson}(c_t = i)\big)\,(1 - c_{vf}\,\epsilon) + c_{vf}\,\frac{\epsilon}{3}$$

where w specifies the contribution of each of the two models; P_UCB and P_Thompson are calculated for c_vf = 0. If w = 1, only the UCB model is used, while if w = 0 only the Thompson model is used. Intermediate values indicate a mixture of the two models.

All parameters besides Q0 and w were free to vary as a function of the horizon (Appendix 2—table 5), as they capture different forms of exploration: directed exploration (information bonus γ; UCB model), novelty exploration (novelty bonus η), value-based random exploration (inverse temperature β; UCB model), uncertainty-directed exploration (prior variance σ0; Thompson model), and value-free random exploration (ϵ-greedy parameter). The prior mean Q0 was fitted to both horizons together, as we do not expect the belief about how good a bandit is to depend on the horizon. The same was done for w, as we assume the arbitration between the UCB model and the Thompson model does not depend on the horizon.

Parameter estimation

To fit the parameter values, we used the maximum a posteriori probability (MAP) estimate. The optimisation function used was fmincon in MATLAB. The parameters could vary within the following bounds: σ0 ∈ [0.01, 6], Q0 ∈ [1, 10], ϵ ∈ [0, 0.5], η ∈ [0, 5]. The prior distribution used for the prior mean parameter Q0 was the normal distribution Q0 ~ N(5, 2), which approximates the generative distributions. For the ϵ-greedy parameter, the novelty bonus η and the prior variance parameter σ0, a uniform prior (of range equal to the specific parameter's bounds) was used, which is equivalent to performing MLE for these parameters. A summary of the parameter values per group and per horizon can be found in Appendix 2—table 6.
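The following Python sketch illustrates the MAP/MLE machinery on a deliberately simplified stand-in model (a two-parameter softmax with an ϵ lapse, fitted to synthetic choices); the actual analysis fitted the full generative models above with fmincon in MATLAB, and all names and numbers here are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic stand-in data: per-trial values of three bandits and noisy choices.
values = rng.normal(5, 1, size=(200, 3))
choices = np.where(rng.random(200) < 0.9,
                   np.argmax(values, axis=1), rng.integers(0, 3, 200))

def neg_log_posterior(params):
    """Negative log posterior of the toy model. With flat priors within the
    bounds this reduces to MLE; for Q0 the paper adds a log N(5, 2) prior term."""
    beta, eps = params
    w = np.exp(beta * (values - values.max(axis=1, keepdims=True)))
    p = w / w.sum(axis=1, keepdims=True)
    p = (1 - eps) * p + eps / 3                      # epsilon lapse
    return -np.log(p[np.arange(len(choices)), choices]).sum()

res = minimize(neg_log_posterior, x0=[1.0, 0.1],
               bounds=[(0.01, 20), (0, 0.5)], method='L-BFGS-B')
beta_hat, eps_hat = res.x
```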

Model comparison

We performed K-fold cross-validation with K = 10. We partitioned the data of each subject (N_trials = 400; 200 in each horizon) into K folds (i.e. subsamples). For model fitting in our model selection, we used maximum likelihood estimation (MLE), maximising the likelihood for each subject individually (fmincon was run with eight randomly chosen starting points to overcome potential local minima). We fitted the model using K − 1 folds and validated it on the remaining fold. We repeated this process K times, so that each of the K folds was used as a validation set once, and averaged the likelihood over held-out trials. We did this for each model and each subject, and averaged across subjects. The model with the highest likelihood of held-out data (the winning model) was the Thompson model with (c_vf, c_n) = (1, 1). It was also the model which accounted best for the largest number of subjects (Figure 4—figure supplement 1).
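Schematically, the cross-validation loop looks as follows (a generic Python sketch; `fit_fn` and `loglik_fn` are hypothetical stand-ins for a model's MLE fitting and likelihood code):

```python
import numpy as np

def kfold_heldout_loglik(trials, fit_fn, loglik_fn, K=10, seed=0):
    """Fit on K-1 folds, evaluate the log likelihood of the held-out fold,
    repeat so that every fold is held out once, and average."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(trials)), K)
    scores = []
    for k in range(K):
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        params = fit_fn([trials[i] for i in train])    # MLE on the training folds
        scores.append(loglik_fn(params, [trials[i] for i in folds[k]]))
    return float(np.mean(scores))

# e.g. with trivial stand-ins:
# kfold_heldout_loglik(list(range(400)), lambda tr: None, lambda p, te: 0.0)
```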

Parameter recovery

To make sure that the parameters are interpretable, we performed a parameter recovery analysis. For each parameter, we took four values, equally spaced, within a reasonable parameter range (σ0 ∈ [0.5, 2.5], Q0 ∈ [1, 6], ϵ ∈ [0, 0.5], η ∈ [0, 5]). All parameters but Q0 were free to vary as a function of the horizon, yielding seven parameters in total. We simulated behaviour with one artificial agent for each of the 4⁷ parameter combinations, using a new set of trials for each. The model was fitted using MAP estimation (cf. Parameter estimation) and we analysed how well the generative parameters (generating parameters in Figure 5) correlated with the recovered ones (fitted parameters in Figure 5) using Pearson correlation (summarised in Figure 5c). In addition to the correlation, we examined the spread (Figure 4—figure supplement 3) of the recovered parameters. Overall, the parameters were well recoverable.
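The logic of the recovery check can be sketched as follows (Python; the simulate-and-refit step is faked here with Gaussian jitter for brevity, whereas in the real analysis each fitted value comes from refitting the model to behaviour simulated at the generating value):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Four equally spaced generating values for the epsilon-greedy parameter,
# each repeated; recovery noise is faked with Gaussian jitter (stand-in only).
gen_eps = np.repeat(np.linspace(0.0, 0.5, 4), 16)
fit_eps = np.clip(gen_eps + rng.normal(0, 0.05, gen_eps.size), 0, 0.5)
r, p = pearsonr(gen_eps, fit_eps)  # a high r indicates the parameter is recoverable
```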

Model validation

To validate our model, we used each subject's fitted parameters to simulate behaviour on our task (4000 trials per agent). The simulated data (Figure 5—figure supplement 3), although not perfect, resemble the real data reasonably well. Additionally, to validate the behavioural indicators of the two different exploration heuristics, we simulated the behaviour of 200 agents using the winning model in one horizon condition (i.e. 200 trials). For the indicators of value-free random exploration, we simulated behaviour with low (ϵ = 0) and high (ϵ = 0.2) values of the ϵ-greedy parameter. The other parameters were set to the mean parameter fits (σ0 = 1.312, η = 2.625, Q0 = 3.2). This confirms that higher amounts of value-free random exploration are captured by the proportion of low-value bandit selections (Figure 1f) and by choice consistency (Figure 1e). Similarly, for the indicator of novelty exploration, we simulated behaviour with low (η = 0) and high (η = 2) values of the novelty bonus η to validate the use of the proportion of novel-bandit selections (Figure 1g). Again, the remaining parameters were set to the mean parameter fits (σ0 = 1.312, ϵ = 0.1, Q0 = 3.2). Parameter values for high and low exploration were selected empirically from pilot and task data. Additionally, we simulated the effects of other exploration strategies in short and long horizon conditions (Figure 1—figure supplements 3–5). To simulate a long (versus short) horizon condition, we increased the overall exploration by increasing the other exploration strategies. Details about parameter values can be found in Appendix 2—table 7.

Acknowledgements

MD is a predoctoral fellow of the International Max Planck Research School on Computational Methods in Psychiatry and Ageing Research. The participating institutions are the Max Planck Institute for Human Development and the University College London (UCL). TUH is supported by a Wellcome Sir Henry Dale Fellowship (211155/Z/18/Z), a grant from the Jacobs Foundation (2017-1261-04), the Medical Research Foundation, a 2018 NARSAD Young Investigator Grant (27023) from the Brain and Behavior Research Foundation, and an ERC Starting Grant (946055). RJD holds a Wellcome Trust Investigator Award (098362/Z/12/Z). The Max Planck UCL Centre is a joint initiative supported by UCL and the Max Planck Society. The Wellcome Centre for Human Neuroimaging is supported by core funding from the Wellcome Trust (203147/Z/16/Z).

Appendix 1

Drug effect on response times

There were no differences in response times (RT) between drug groups in the one-way ANOVA, neither in the mean RT (F(2, 54) = 1.625, p = 0.206, η2 = 0.057) nor in its variability (standard deviation; F(2, 54) = 1.85, p = 0.16, η2 = 0.064).

Bandit effect on response times

There was no difference in response times between bandits in the repeated-measures ANOVA (bandit main effect: F(1.78, 99.44) = 1.634, p = 0.203, η2 = 0.028; Figure 3—figure supplement 1).

Interaction effects on response times

When looking at the first choice in both conditions, no differences were evident in RT in the repeated-measures ANOVA with a between-subject factor drug group and within-subject factors horizon and bandit (bandit main effect: F(1.71, 92.46) = 1.203, p = 0.3, η2 = 0.022; horizon main effect: F(1, 54) = 0.71, p = 0.403, η2 = 0.013; drug main effect: F(2, 54) = 2.299, p = 0.11, η2 = 0.078; drug-by-bandit interaction: F(3.42, 92.46) = 0.431, p = 0.757, η2 = 0.016; drug-by-horizon interaction: F(2, 54) = 0.204, p = 0.816, η2 = 0.008; bandit-by-horizon interaction: F(1.39, 75.01) = 0.298, p = 0.662, η2 = 0.005; drug-by-bandit-by-horizon interaction: F(2.78, 75.01) = 1.015, p = 0.387, η2 = 0.036).

In the long horizon, when looking at all six samples, no differences were evident in RT between drug groups in the repeated-measures ANOVA with a between-subject factor drug group and within-subject factors bandit and sample (drug main effect: F(2, 56) = 0.542, p = 0.585, η2 = 0.019). There was an effect of bandit (bandit main effect: F(1.61, 90.12) = 7.137, p = 0.003, η2 = 0.113), of sample (sample main effect: F(1.54, 86.15) = 427.047, p<0.001, η2 = 0.884) and an interaction between the two (bandit-by-sample interaction: F(3.33, 186.41) = 4.789, p = 0.002, η2 = 0.079; drug-by-bandit interaction: F(3.22, 90.12) = 0.525, p = 0.679, η2 = 0.018; drug-by-sample interaction: F(3.08, 86.15) = 1.039, p = 0.381, η2 = 0.036; drug-by-bandit-by-sample interaction: F(6.66, 186.41) = 0.645, p = 0.71, η2 = 0.023). Further analysis (not corrected for multiple comparisons) revealed that the interaction between bandit and sample reflected the fact that, when looking at samples individually, there was a bandit main effect in the second sample (bandit main effect: F(1.27, 70.88) = 27.783, p<0.001, η2 = 0.332; drug main effect: F(2, 56) = 0.201, p = 0.819, η2 = 0.007; drug-by-bandit interaction: F(2.53, 70.88) = 0.906, p = 0.429, η2 = 0.031) and in the third sample (bandit main effect: F(1.23, 68.93) = 21.318, p<0.001, η2 = 0.276; drug main effect: F(2, 56) = 0.102, p = 0.903, η2 = 0.004; drug-by-bandit interaction: F(2.46, 68.93) = 0.208, p = 0.855, η2 = 0.007), but not in the other samples (first sample: drug main effect: F(2, 56) = 1.108, p = 0.337, η2 = 0.038; bandit main effect: F(2, 112) = 0.339, p = 0.713, η2 = 0.006; drug-by-bandit interaction: F(4, 112) = 0.414, p = 0.798, η2 = 0.015; fourth sample: drug main effect: F(2, 56) = 0.43, p = 0.652, η2 = 0.015; bandit main effect: F(1.36, 76.22) = 1.348, p = 0.259, η2 = 0.024; drug-by-bandit interaction: F(2.72, 76.22) = 0.396, p = 0.737, η2 = 0.014; fifth sample: drug main effect: F(2, 56) = 0.216, p = 0.806, η2 = 0.008; bandit main effect: F(1.25, 69.79) = 0.218, p = 0.696, η2 = 0.004; drug-by-bandit interaction: F(2.49, 69.79) = 0.807, p = 0.474, η2 = 0.028; sixth sample: drug main effect: F(2, 56) = 1.026, p = 0.365, η2 = 0.035; bandit main effect: F(1.05, 58.81) = 0.614, p = 0.444, η2 = 0.011; drug-by-bandit interaction: F(2.1, 58.81) = 1.216, p = 0.305, η2 = 0.042). In the second sample, the high-value bandit was chosen faster (high-value bandit vs low-value bandit: t(59) = -5.736, p<0.001, d = 0.917; high-value bandit vs novel bandit: t(59) = -6.24, p<0.001, d = 0.599) and the low-value bandit was chosen slower (low-value bandit vs novel bandit: t(59) = 3.756, p<0.001, d = 0.432). In the third sample, the low-value bandit was chosen slower (high-value bandit vs low-value bandit: t(59) = -5.194, p<0.001, d = 0.571; low-value bandit vs novel bandit: t(59) = 4.448, p<0.001, d = 0.49; high-value bandit vs novel bandit: t(59) = -1.834, p = 0.072, d = 0.09).

Horizon effect on response times

There were no differences in RT between horizon conditions in the repeated-measures ANOVA with the between-subject factor drug group, the within-subject factor horizon condition and the covariates WASI and PANAS negative score (horizon main effect: F(1, 54) = 1.443, p = 0.235, η2 = 0.026; drug main effect: F(2, 54) = 1.625, p = 0.206, η2 = 0.057; drug-by-horizon interaction: F(2, 54) = 0.431, p = 0.652, η2 = 0.016). In the long horizon, the RT decreased with each sample (sample main effect: F(1.36, 73.5) = 13.626, p<0.001, η2 = 0.201; pairwise comparisons: sample 1 vs 2: t(59) = 20.968, p<0.001, d = 2.73; sample 2 vs 3: t(59) = 11.825, p<0.001, d = 1.539; sample 3 vs 4: t(59) = 7.862, p<0.001, d = 1.024; sample 4 vs 5: t(59) = 4.117, p<0.001, d = 1.539; sample 5 vs 6: t(59) = 2.646, p = 0.01, d = 1.024; Figure 2—figure supplement 1b).

PANAS

The Positive Affect and Negative Affect scale (PANAS; Watson et al., 1988a) was completed 50 min after the second drug administration and 10 min prior to the task. Groups had similar positive affect but differed in negative affect (Appendix 2—table 1), driven by a higher score in the placebo group (pairwise comparisons: placebo vs propranolol: t(56) = 2.801, p = 0.007, d = 0.799; amisulpride vs placebo: t(56) = -2.096, p = 0.041, d = 0.557; amisulpride vs propranolol: t(56) = 0.669, p = 0.506, d = 0.383). It is unclear whether this difference was driven by the drug manipulation, but similar studies have not reported such an effect (e.g. Hauser et al., 2019; Hauser et al., 2018; Campbell-Meiklejohn et al., 2011; Rogers et al., 2004; Hauser et al., 2017b). We controlled for a possible influence of these measures in all our analyses.

Physiological effects

Heart rate and systolic and diastolic blood pressure were obtained at three time points: at the beginning of the experiment before giving the drug (‘at arrival’), after giving the drug just before the task (‘pre-task’), and after finishing the task and questionnaires (‘post-task’). The post-task heart rate was lower for participants who received propranolol compared to the other two groups (one-way ANOVA: F(2, 55) = 7.249, p = 0.002, η2 = 0.209; Appendix 2—table 2). A two-way ANOVA with the between-subject factor of drug group and within-subject factor of time (all three time points) showed a time-dependent decrease in heart rate (F(1.74, 95.97) = 99.341, p<0.001, η2 = 0.644) and in systolic pressure (F(2, 110) = 8.967, p<0.001, η2 = 0.14), but not in diastolic pressure (F(2, 110) = 0.874, p = 0.42, η2 = 0.016), indicating subjects relaxed across the course of the study. Those reductions did not differ between drug groups (drug main effect: heart rate: F(2, 55) = 1.84, p = 0.169, η2 = 0.063; systolic pressure: F(2, 55) = 1.08, p = 0.347, η2 = 0.038; diastolic pressure: F(2, 55) = 0.239, p = 0.788, η2 = 0.009; drug-by-time interaction: heart rate: F(3.49, 95.97) = 1.928, p = 0.121, η2 = 0.066; systolic pressure: F(4, 110) = 1.6, p = 0.179, η2 = 0.055; diastolic pressure: F(4, 110) = 0.951, p = 0.438, η2 = 0.033).

Task performance score

Performance did not differ between drug groups (total score: drug main effect: F(2, 54) = 2.313, p = 0.109, η2 = 0.079) but was increased in subjects with higher IQ scores (WASI main effect: F(1, 54) = 17.172, p<0.001, η2 = 0.241).

In the long horizon, the score increased with each sample (sample main effect: F(3.12, 174.97) = 103.469, p<0.001, η2 = 0.649; pairwise comparisons: sample 1 vs 2: t(59) = -6.737, p<0.001, d = 0.877; sample 2 vs 3: t(59) = -3.69, p<0.001, d = 0.48; sample 3 vs 4: t(59) = -5.167, p<0.001, d = 0.673; sample 4 vs 5: t(59) = -2.832, p = 0.006, d = 0.48; sample 5 vs 6: t(59) = -2.344, p = 0.022, d = 0.673; Figure 2—figure supplement 1a). The increase in reward was larger in trials where the first draw was exploratory (linear regression slope coefficient: mean = 0.118, sd = 0.038) compared to when it was exploitative (linear regression slope coefficient: mean = 0.028, sd = 0.041; t-tests for slope coefficients: t(58) = -12.161, p<0.001, d = -1.583; Figure 2—figure supplement 1d), suggesting that exploration was used beneficially and subjects benefitted from their initial exploration.

Dopamine effect on high-value bandit sampling frequency

The amisulpride group had a marginal tendency towards selecting the high-value bandit more often, meaning that they were disposed to exploit more overall (propranolol group excluded: horizon main effect: F(1, 35) = 3.035, p = 0.09, η2 = 0.08; drug main effect: F(1, 35) = 3.602, p = 0.066, η2 = 0.093; drug-by-horizon interaction: F(1, 35) = 2.15, p = 0.151, η2 = 0.058). This trend effect was not observed when all three groups were included (horizon main effect: F(1, 54) = 3.909, p = 0.053, η2 = 0.068; drug main effect: F(2, 54) = 1.388, p = 0.258, η2 = 0.049; drug-by-horizon interaction: F(2, 54) = 0.834, p = 0.44, η2 = 0.03).

Gender effects

When adding gender as a between-subjects variable in the repeated-measures ANOVAs, none of the main results changed. Interestingly, we observed a drug-by-gender interaction in the prior variance σ0 (drug-by-gender interaction: F(2, 51) = 5.914, p = 0.005, η2 = 0.188; Figure 5—figure supplement 2), driven by the fact that female subjects in the placebo group had a larger average σ0 (across both horizon conditions) compared to males (t(20) = 2.836, p = 0.011, d = 1.268), whereas male subjects had a larger σ0 compared to females in the amisulpride group (t(19) = -2.466, p = 0.025, d = 1.124; propranolol group: t(20) = -0.04, p = 0.969, d = 0.018). This suggests that in a placebo setting, females are on average more uncertain about an option’s expected value, whereas in a dopamine blockade setting males are more uncertain. Besides this effect, we observed a trend-level effect on response times (RT), driven primarily by female subjects tending to have faster RTs in the long horizon compared to male subjects (gender main effect: F(1, 51) = 3.54, p = 0.066, η2 = 0.065).

Horizon and drug effects without covariate

When analysing the results without correcting for IQ (WASI) and negative affect (PANAS), similar results are obtained. The high-value bandit is picked more often in the short horizon condition, indicating exploitation (F(1, 56) = 44.844, p<0.001, η2 = 0.445), whereas the opposite pattern is observed for the low-value bandit (F(1, 56) = 24.24, p<0.001, η2 = 0.302) and the novel bandit (horizon main effect: F(1, 56) = 30.867, p<0.001, η2 = 0.355), indicating exploration. In line with these results, the model parameters for value-free random exploration (ϵ: F(1, 56) = 10.362, p = 0.002, η2 = 0.156) and novelty exploration (η: F(1, 56) = 38.103, p<0.001, η2 = 0.405) are larger in the long compared to the short horizon condition. Additionally, noradrenaline blockade reduces value-free random exploration as can be seen in the two behavioural signatures, the frequency of picking the low-value bandit (F(2, 56) = 2.523, p = 0.089, η2 = 0.083; pairwise comparisons: placebo vs propranolol: t(40) = 2.923, p = 0.005, d = 0.654; amisulpride vs placebo: t(38) = -0.587, p = 0.559, d = 0.133; amisulpride vs propranolol: t(38) = 2.171, p = 0.034, d = 0.496), and the consistency (F(2, 56) = 3.596, p = 0.034, η2 = 0.114; pairwise comparisons: placebo vs propranolol: t(40) = -3.525, p = 0.001, d = 0.788; amisulpride vs placebo: t(38) = 1.107, p = 0.272, d = 0.251; amisulpride vs propranolol: t(38) = -2.267, p = 0.026, d = 0.514), as well as in the model parameter for value-free random exploration (ϵ: F(2, 56) = 3.205, p = 0.048, η2 = 0.103; pairwise comparisons: placebo vs propranolol: t(40) = 3.177, p = 0.002, d = 0.71; amisulpride vs placebo: t(38) = 0.251, p = 0.802, d = 0.057; amisulpride vs propranolol: t(38) = 2.723, p = 0.009, d = 0.626).

Horizon and drug effects with heart rate as covariate

When analysing the results correcting for the post-experiment heart rate (Appendix 2—table 2) in addition to IQ (WASI) and negative affect (PANAS), we obtained similar results. Noradrenaline blockade reduced value-free random exploration as seen in the two behavioural signatures, the frequency of picking the low-value bandit (F(2, 52) = 4.014, p = 0.024, η2 = 0.134; pairwise comparisons: placebo vs propranolol: t(40) = 2.923, p = 0.005, d = 0.654; amisulpride vs propranolol: t(38) = 2.171, p = 0.034, d = 0.496; amisulpride vs placebo: t(38) = -0.587, p = 0.559, d = 0.133), and the consistency (F(2, 52) = 5.474, p = 0.007, η2 = 0.174; pairwise comparisons: placebo vs propranolol: t(40) = -3.525, p = 0.001, d = 0.788; amisulpride vs propranolol: t(38) = -2.267, p = 0.026, d = 0.514; amisulpride vs placebo: t(38) = 1.107, p = 0.272, d = 0.251), as well as in the model parameter for value-free random exploration (ϵ: F(2, 52) = 4.493, p = 0.016, η2 = 0.147; pairwise comparisons: placebo vs propranolol: t(40) = 3.177, p = 0.002, d = 0.71; amisulpride vs propranolol: t(38) = 2.723, p = 0.009, d = 0.626; amisulpride vs placebo: t(38) = 0.251, p = 0.802, d = 0.057).

Other model results

When analysing the fitted parameter values of both the second winning model (UCB + ϵ + η) and the third winning model (hybrid + ϵ + η), similar results pertain. Thus, the value-free random exploration parameter was reduced following noradrenaline blockade in the second winning model (ϵ: F(2, 54) = 4.503, p = 0.016, η2 = 0.143; pairwise comparisons: placebo vs propranolol: t(38) = 2.185, p = 0.033, d = 0.386; amisulpride vs propranolol: t(40) = 1.724, p = 0.089, d = 0.501; amisulpride vs placebo: t(40) = -0.665, p = 0.508, d = 0.151) and was affected at trend-level significance in the third winning model (ϵ: F(2, 54) = 3.04, p = 0.056, η2 = 0.101). These results reinforce our finding that value-free random exploration is modulated by noradrenaline and additionally demonstrate that this is independent of the specific complex exploration strategy and value function used.

Bandit combination effect

Behavioural results were additionally analysed for each bandit combination separately. The high-value bandit was chosen more when there was no novel bandit (pairwise comparisons: [certain-standard, standard, low] vs [certain-standard, standard, novel]: t(59) = 15.122, p<0.001, d = 1.969; [certain-standard, standard, low] vs [certain-standard, novel, low]: t(59) = 12.905, p<0.001, d = 2.389; [certain-standard, standard, low] vs [standard, novel, low]: t(59) = 18.348, p<0.001, d = 1.68), and less when its value was less certain ([standard, novel, low] vs [certain-standard, standard, novel]: t(59) = -6.986, p<0.001, d = 0.407; [standard, novel, low] vs [certain-standard, novel, low]: t(59) = -5.44, p<0.001, d = 0.708; bandit combination main effect: F(1.81, 101.33) = 237.051, p<0.001, η2 = 0.809; [certain-standard, standard, novel] vs [certain-standard, novel, low]: t(59) = 0.364, p = 0.717, d = 0.909; Figure 3—figure supplement 2a). The novel bandit was chosen most often when the high-value bandit was less certain, less often when the high-value bandit was more certain, and least often when both the certain-standard and standard bandits were present ([standard, novel, low] vs [certain-standard, novel, low]: t(59) = 5.001, p<0.001, d = 0.651; [standard, novel, low] vs [certain-standard, standard, novel]: t(59) = 9.414, p<0.001, d = 1.226; [certain-standard, novel, low] vs [certain-standard, standard, novel]: t(59) = 4.146, p<0.001, d = 0.54; bandit combination main effect: F(2, 112) = 42.44, p<0.001, η2 = 0.431; Figure 3—figure supplement 2b). The low-value bandit was chosen less when the high-value bandit was more certain ([certain-standard, novel, low] vs [certain-standard, standard, low]: t(59) = -2.731, p = 0.008, d = 0.356; [certain-standard, novel, low] vs [standard, novel, low]: t(59) = -1.958, p = 0.055, d = 0.255; bandit combination main effect: F(1.66, 92.74) = 4.534, p = 0.019, η2 = 0.075; [certain-standard, standard, low] vs [standard, novel, low]: t(59) = 1.32, p = 0.192, d = 0.172; Figure 3—figure supplement 2c).

Other effects on choice consistency

Our results demonstrate a drug-by-horizon interaction on choice consistency (F(2, 54) = 3.352, p = 0.042, η2 = 0.110; Figure 3), mainly driven by the fact that the frequency of selecting the same option is increased in the long (compared to the short) horizon in the amisulpride group, while there is no significant horizon difference in the other two drug groups (pairwise comparisons for horizon effect: amisulpride group: t(19) = 2.482, p = 0.023, d = 0.569; propranolol group: t(20) = -1.91, p = 0.071, d = 0.427; placebo group: t(20) = 0.505, p = 0.619, d = 0.113). It is not entirely clear why catecholamines would increase the differentiation between the horizon conditions, and this relatively weak effect should be replicated before being interpreted.

Stand-alone heuristic models

We also analysed stand-alone heuristic models, in which there is no value computation (value of each bandit i: V_i = 0). The held-out data likelihood for such a heuristic model combined with novelty exploration had a mean of m = 0.367 (sd = 0.005). The model in which we added value-free random exploration on top of novelty exploration had a mean of m = 0.384 (sd = 0.006). These models performed poorly, although better than chance level. Importantly, adding value-free random exploration improved performance. This highlights that subjects combine complex and heuristic modules in exploration.

Appendix 2

Appendix 2—table 1. Characteristics of drug groups.

The drug groups did not differ in gender, age, or intellectual abilities (adapted WASI matrix test).

Groups differed in negative affect (PANAS), driven by a higher score in the placebo group (pairwise comparisons: placebo vs propranolol: t(56) = 2.801, p = 0.007, d = 0.799; amisulpride vs placebo: t(56) = -2.096, p = 0.041, d = 0.557; amisulpride vs propranolol: t(56) = 0.669, p = 0.506, d = 0.383). For more details cf. Appendix 1. Mean (SD).

Measure | Propranolol | Placebo | Amisulpride | Statistics
Gender (M/F) | 10/10 | 10/10 | 10/9 | –
Age | 22.80 (3.59) | 23.80 (4.23) | 23.05 (3.01) | F(2, 56) = 0.404, p = 0.669, η2 = 0.014
Intellectual abilities | 22.8 (1.85) | 22.6 (3.70) | 24.37 (2.45) | F(2, 56) = 2.337, p = 0.106, η2 = 0.077
Positive affect | 24.55 (8.99) | 28.90 (7.56) | 29.58 (10.21) | F(2, 56) = 1.832, p = 0.170, η2 = 0.061
Negative affect | 10.65 (0.81) | 12.75 (3.63) | 11.16 (1.71) | F(2, 56) = 4.259, p = 0.019, η2 = 0.132

Appendix 2—table 2. Physiological effects on drug groups.

The drug groups also differed in post-experiment heart rate, driven by lower values in the propranolol group (pairwise comparisons: placebo vs propranolol: t(55) = 3.5, p = 0.001, d = 1.293; amisulpride vs placebo: t(55) = -0.394, p = 0.695, d = 0.119; amisulpride vs propranolol: t(55) = 3.013, p = 0.004, d = 0.921). For detailed statistics and analysis accounting for this cf. Appendix 1. Mean (SD).

Measure | Time point | Propranolol | Placebo | Amisulpride | Statistics
Heart rate (BPM) | At arrival | 74.9 (10.8) | 77.2 (12.6) | 77.7 (13.8) | F(2, 55) = 0.290, p = 0.749, η2 = 0.010
Heart rate (BPM) | Pre-task | 62.6 (8.5) | 65.8 (8.3) | 64.6 (9.8) | F(2, 55) = 0.667, p = 0.517, η2 = 0.024
Heart rate (BPM) | Post-task | 55.7 (6.7) | 64.4 (6.9) | 63.4 (10.0) | F(2, 55) = 7.249, p = 0.002, η2 = 0.209
Systolic blood pressure | At arrival | 117.2 (10.4) | 115.0 (9.7) | 117.9 (9.7) | F(2, 55) = 0.438, p = 0.648, η2 = 0.016
Systolic blood pressure | Pre-task | 109.4 (9.2) | 111.8 (8.6) | 114.9 (8.6) | F(2, 55) = 1.841, p = 0.168, η2 = 0.063
Systolic blood pressure | Post-task | 109.5 (8.2) | 113.9 (11.3) | 114.6 (9.3) | F(2, 55) = 1.584, p = 0.214, η2 = 0.054
Diastolic blood pressure | At arrival | 71.5 (7.8) | 71.2 (6.7) | 72.3 (6.7) | F(2, 55) = 0.115, p = 0.891, η2 = 0.004
Diastolic blood pressure | Pre-task | 68.3 (7.0) | 71.1 (10.6) | 72.0 (5.9) | F(2, 55) = 1.111, p = 0.337, η2 = 0.039
Diastolic blood pressure | Post-task | 70.8 (7.3) | 70.9 (8.0) | 70.3 (6.6) | F(2, 55) = 0.037, p = 0.964, η2 = 0.001

Appendix 2—table 3. Table of statistics and behavioural values of Figure 2.

All of those measures were modulated by the horizon condition.

Measure | Horizon | Mean (sd) | Main effect of horizon (two-way rm-ANOVA)
Expected value | Short | 6.368 (0.335) | F(1, 56) = 19.457, p<0.001, η2 = 0.258
Expected value | Long | 6.221 (0.379) |
Initial samples | Short | 1.282 (0.247) | F(1, 56) = 58.78, p<0.001, η2 = 0.512
Initial samples | Long | 1.084 (0.329) |
Score (first sample) | Short | 5.904 (0.192) | F(1, 56) = 58.78, p<0.001, η2 = 0.512
Score (first sample) | Long | 5.82 (0.182) |
Score (average) | Short | 5.904 (0.192) | F(1, 56) = 103.759, p<0.001, η2 = 0.649
Score (average) | Long | 6.098 (0.222) |

Appendix 2—table 4. Table of statistics and behavioural measure values of Figure 3.

The drug groups differed in low-value bandit picking frequency (pairwise comparisons: placebo vs propranolol: t(40) = 2.923, p = 0.005, d = 0.654; amisulpride vs placebo: t(38) = -0.587, p = 0.559, d = 0.133; amisulpride vs propranolol: t(38) = 2.171, p = 0.034, d = 0.496) and choice consistency (placebo vs propranolol: t(40) = -3.525, p = 0.001, d = 0.788; amisulpride vs placebo: t(38) = 1.107, p = 0.272, d = 0.251; amisulpride vs propranolol: t(38) = -2.267, p = 0.026, d = 0.514). The main effect is either of drug group (D) or of horizon (H). The interaction is either drug-by-horizon (DH) or horizon-by-WASI (measure of IQ; HW). Values are mean (sd).

Measure | Horizon | Amisulpride | Placebo | Propranolol | Main effect (rm-ANOVA) | Interaction (rm-ANOVA)
High-value bandit | Short | 54.55 (8.87) | 49.38 (9.10) | 50.98 (11.4) | D: F(2, 54) = 1.388, p = 0.258, η2 = 0.049 | DH: F(2, 54) = 0.834, p = 0.440, η2 = 0.030
High-value bandit | Long | 41.90 (8.47) | 44.10 (13.88) | 41.90 (13.57) | H: F(1, 54) = 3.909, p = 0.053, η2 = 0.068 | HW: F(1, 54) = 13.304, p = 0.001, η2 = 0.198
Low-value bandit | Short | 3.32 (2.33) | 4.28 (2.98) | 2.50 (2.48) | D: F(2, 54) = 7.003, p = 0.002, η2 = 0.206 | DH: F(2, 54) = 2.154, p = 0.126, η2 = 0.074
Low-value bandit | Long | 5.45 (3.76) | 5.35 (3.40) | 3.45 (2.18) | H: F(1, 54) = 4.069, p = 0.049, η2 = 0.070 | HW: F(1, 54) = 1.199, p = 0.278, η2 = 0.022
Novel bandit | Short | 36.87 (9.49) | 39.02 (10.94) | 40.15 (12.43) | D: F(2, 54) = 1.498, p = 0.233, η2 = 0.053 | DH: F(2, 54) = 0.542, p = 0.584, η2 = 0.020
Novel bandit | Long | 46.82 (12.1) | 43.62 (16.27) | 48.55 (16.59) | H: F(1, 54) = 5.593, p = 0.022, η2 = 0.094 | HW: F(1, 54) = 13.897, p<0.001, η2 = 0.205
Consistency | Short | 64.16 (12.27) | 62.70 (12.59) | 73.00 (11.33) | D: F(2, 54) = 7.154, p = 0.002, η2 = 0.209 | DH: F(2, 54) = 3.352, p = 0.042, η2 = 0.110
Consistency | Long | 68.11 (10.34) | 64.00 (8.93) | 70.55 (9.91) | H: F(1, 54) = 1.333, p = 0.253, η2 = 0.024 | HW: F(1, 54) = 0.409, p = 0.525, η2 = 0.008

Appendix 2—table 5. Table of parameters used for each model compared during model selection (Figure 4).

Each of the 12 columns indicates a model. The three ‘main models’ studied were the Thompson model, the UCB model and a hybrid of both. Variants were then created by adding the ϵ-greedy parameter, the novelty bonus, and a combination of both. All parameters besides Q0 and w were fitted to each horizon separately. Parameters: Q0 = prior mean (initial estimate of a bandit's mean); σ0 = prior variance (uncertainty about Q0); w = contribution of UCB vs Thompson; γ = information bonus; β = softmax inverse temperature; ϵ = ϵ-greedy parameter (stochasticity); η = novelty bonus. Model selection measures include the cross-validation held-out data likelihood averaged over subjects, mean (SD), as well as the subject count for which each model performed best over either all 12 models or over the 3 best models.

Model | Horizon-independent parameters | Horizon-dependent parameters | Mean held-out data likelihood | Best fit (out of 12) | Best fit (out of 3 best)
Thompson | Q0 | σ0 | 550.2 (8.1) | 0 | –
Thompson + ϵ | Q0 | σ0, ϵ | 552.7 (7.1) | 3 | –
Thompson + η | Q0 | σ0, η | 552.2 (8.7) | 2 | –
Thompson + ϵ + η | Q0 | σ0, ϵ, η | 555.3 (8.4) | 20 | 27
UCB | Q0 | γ, β | 552.9 (8.0) | 0 | –
UCB + ϵ | Q0 | γ, β, ϵ | 552.9 (8.0) | 0 | –
UCB + η | Q0 | γ, β, η | 553.4 (8.1) | 1 | –
UCB + ϵ + η | Q0 | γ, β, ϵ, η | 555.1 (8.8) | 20 | 22
Hybrid | w, Q0 | σ0, γ, β | 553.5 (8.1) | 0 | –
Hybrid + ϵ | w, Q0 | σ0, γ, β, ϵ | 553.8 (8.4) | 0 | –
Hybrid + η | w, Q0 | σ0, γ, β, η | 555.0 (8.4) | 7 | –
Hybrid + ϵ + η | w, Q0 | σ0, γ, β, ϵ, η | 555.1 (8.5) | 6 | 10

Appendix 2—table 6. Table of statistics and fitted model parameters of Figure 5.

The drug groups differed in the ϵ-greedy parameter value (pairwise comparisons: placebo vs propranolol: t(40) = 3.177, p = 0.002, d = 0.71; amisulpride vs placebo: t(38) = 0.251, p = 0.802, d = 0.057; amisulpride vs propranolol: t(38) = 2.723, p = 0.009, d = 0.626). The main effect is either of drug group (D) or of horizon (H). The interaction is either drug-by-horizon (DH) or horizon-by-WASI (measure of IQ; HW). Values are mean (sd).

Parameter | Horizon | Amisulpride | Placebo | Propranolol | Main effect (rm-ANOVA) | Interaction (rm-ANOVA)
ϵ-greedy parameter | Short | 0.10 (0.10) | 0.12 (0.08) | 0.07 (0.08) | D: F(2, 54) = 6.722, p = 0.002, η2 = 0.199 | DH: F(2, 54) = 1.305, p = 0.280, η2 = 0.046
ϵ-greedy parameter | Long | 0.17 (0.14) | 0.14 (0.10) | 0.08 (0.06) | H: F(1, 54) = 1.968, p = 0.166, η2 = 0.035 | HW: F(1, 54) = 6.08, p = 0.017, η2 = 0.101
Novelty bonus η | Short | 2.07 (0.98) | 2.26 (1.37) | 2.05 (1.16) | D: F(2, 54) = 0.249, p = 0.780, η2 = 0.009 | DH: F(2, 54) = 0.03, p = 0.971, η2 = 0.001
Novelty bonus η | Long | 3.24 (1.19) | 3.12 (1.63) | 2.95 (1.70) | H: F(1, 54) = 1.839, p = 0.181, η2 = 0.033 | HW: F(1, 54) = 8.416, p = 0.005, η2 = 0.135
Prior variance σ0 | Short | 1.18 (0.20) | 1.12 (0.43) | 1.25 (0.34) | D: F(2, 54) = 0.060, p = 0.942, η2 = 0.002 | DH: F(2, 54) = 2.162, p = 0.125, η2 = 0.074
Prior variance σ0 | Long | 1.41 (0.61) | 1.42 (0.59) | 1.21 (0.44) | H: F(1, 54) = 0.129, p = 0.721, η2 = 0.002 | HW: F(1, 54) = 0.022, p = 0.882, η2 < 0.001
Prior mean Q0 | – | 3.22 (1.05) | 3.20 (1.36) | 3.44 (1.05) | D: F(2, 54) = 0.118, p = 0.889, η2 = 0.004 | –

Appendix 2—table 7. Parameter values used for simulations in Figure 1—figure supplements 3–5.

Parameter values for high and low exploration were selected empirically from pilot and task data. Value-free random exploration and novelty exploration were simulated with an argmax decision function, which always selects the option with the highest expected value. For simulating the long (versus short) horizon condition, we assumed that not only the key parameter but also the other exploration strategies increased, as found in our experimental data. For each simulation Q0 = 5 and, unless otherwise stated, σ0 = 1.5.

Strategy | Horizon | Low exploration | High exploration | Additional parameters
Value-free random exploration | Short | ϵ = 0.1 | ϵ = 0.2 | η = 0
Value-free random exploration | Long | ϵ = 0.3 | ϵ = 0.4 | η = 2
Novelty exploration | Short | η = 0 | η = 1 | ϵ = 0
Novelty exploration | Long | η = 2 | η = 3 | ϵ = 0.2
Thompson-sampling exploration | Short | σ0 = 0.8 | σ0 = 1.2 | η = 0, ϵ = 0
Thompson-sampling exploration | Long | σ0 = 1.6 | σ0 = 2 | η = 2, ϵ = 0.2
UCB exploration | Short | γ = 0.1 | γ = 0.3 | β = 5, ϵ = 0
UCB exploration | Long | γ = 0.7 | γ = 1.5 | β = 1.5, ϵ = 0.2

Funding Statement

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Contributor Information

Magda Dubois, Email: magda.dubois.18@ucl.ac.uk.

Tobias U Hauser, Email: t.hauser@ucl.ac.uk.

Thorsten Kahnt, Northwestern University, United States.

Christian Büchel, University Medical Center Hamburg-Eppendorf, Germany.

Funding Information

This paper was supported by the following grants:

  • Max-Planck-Gesellschaft to Magda Dubois.

  • Wellcome Trust Sir Henry Dale Fellowship 211155/Z/18/Z to Tobias U Hauser.

  • Jacobs Foundation 2017-1261-04 to Tobias U Hauser.

  • Wellcome Trust Investigator Award 098362/Z/12/Z to Ray J Dolan.

  • Medical Research Foundation to Tobias U Hauser.

  • Brain and Behavior Research Foundation 27023 to Tobias U Hauser.

  • European Research Council 946055 to Tobias U Hauser.

  • Wellcome Trust Centre Award 203147/Z/16/Z to Ray J Dolan.

Additional information

Competing interests

No competing interests declared.

Author contributions

Conceptualization, Data curation, Software, Formal analysis, Writing - original draft, Writing - review and editing.

Data curation, Writing - review and editing.

Data curation, Writing - review and editing.

Formal analysis, Writing - review and editing.

Funding acquisition, Writing - review and editing.

Conceptualization, Software, Formal analysis, Supervision, Writing - original draft, Writing - review and editing.

Ethics

Human subjects: The study was approved by the UCL research committee (REC No 6218/002) and all subjects provided written informed consent.

Additional files

Transparent reporting form

Data availability

All necessary resources are publicly available at: https://github.com/MagDub/MFNADA-figures.

References

  1. Agrawal S, Goyal N. Analysis of Thompson sampling for the multi-armed bandit problem. Journal of Machine Learning Research : JMLR. 2012;23:1–26. [Google Scholar]
  2. Aston-Jones G, Cohen JD. An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annual Review of Neuroscience. 2005;28:403–450. doi: 10.1146/annurev.neuro.28.061604.135709. [DOI] [PubMed] [Google Scholar]
  3. Auer P. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research : JMLR. 2003;3:397–422. doi: 10.1162/153244303321897663. [DOI] [Google Scholar]
  4. Bishop CM. in Information Science and Statistics 2006
  5. Bishop SJ, Gagne C. Anxiety, depression, and decision making: a computational perspective. Annual Review of Neuroscience. 2018;41:371–388. doi: 10.1146/annurev-neuro-080317-062007.
  6. Botvinick M, Braver T. Motivation and cognitive control: from behavior to neural mechanism. Annual Review of Psychology. 2015;66:83–113. doi: 10.1146/annurev-psych-010814-015044.
  7. Bouret S, Sara SJ. Network reset: a simplified overarching theory of locus coeruleus noradrenaline function. Trends in Neurosciences. 2005;28:574–582. doi: 10.1016/j.tins.2005.09.002.
  8. Bromberg-Martin ES, Matsumoto M, Hikosaka O. Dopamine in motivational control: rewarding, aversive, and alerting. Neuron. 2010;68:815–834. doi: 10.1016/j.neuron.2010.11.022.
  9. Bunzeck N, Doeller CF, Dolan RJ, Duzel E. Contextual interaction between novelty and reward processing within the mesolimbic system. Human Brain Mapping. 2012;33:1309–1324. doi: 10.1002/hbm.21288.
  10. Campbell-Meiklejohn D, Wakeley J, Herbert V, Cook J, Scollo P, Ray MK, Selvaraj S, Passingham RE, Cowen P, Rogers RD. Serotonin and dopamine play complementary roles in gambling to recover losses. Neuropsychopharmacology. 2011;36:402–410. doi: 10.1038/npp.2010.170.
  11. Carpentier A, Lazaric A, Ghavamzadeh M, Munos R, Auer P. Upper-confidence-bound algorithms for active learning in multi-armed bandits. arXiv. 2011. https://arxiv.org/abs/1507.04523
  12. Chakroun K, Mathar D, Wiehler A, Ganzer F, Peters J. Dopaminergic modulation of the exploration/exploitation trade-off in human decision-making. bioRxiv. 2019. doi: 10.1101/706176.
  13. Cinotti F, Fresno V, Aklil N, Coutureau E, Girard B, Marchand AR, Khamassi M. Dopamine blockade impairs the exploration-exploitation trade-off in rats. Scientific Reports. 2019;9:1–14. doi: 10.1038/s41598-019-43245-z.
  14. Cogliati Dezza I, Cleeremans A, Alexander W. Should we control? The interplay between cognitive control and information integration in the resolution of the exploration-exploitation dilemma. Journal of Experimental Psychology: General. 2019;148:977–993. doi: 10.1037/xge0000546.
  15. Cohen JD, McClure SM, Yu AJ. Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philosophical Transactions of the Royal Society B: Biological Sciences. 2007;362:933–942. doi: 10.1098/rstb.2007.2098.
  16. Cools R. The cost of dopamine for dynamic cognitive control. Current Opinion in Behavioral Sciences. 2015;4:152–159. doi: 10.1016/j.cobeha.2015.05.007.
  17. Costa VD, Tran VL, Turchi J, Averbeck BB. Dopamine modulates novelty seeking behavior during decision making. Behavioral Neuroscience. 2014;128:556–566. doi: 10.1037/a0037128.
  18. D'Acremont M, Bossaerts P. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches. Cognitive, Affective, & Behavioral Neuroscience. 2008;8:363–374. doi: 10.3758/CABN.8.4.363.
  19. David Johnson J. Noradrenergic control of cognition: global attenuation and an interrupt function. Medical Hypotheses. 2003;60:689–692. doi: 10.1016/S0306-9877(03)00021-5.
  20. Daw ND, O'Doherty JP, Dayan P, Seymour B, Dolan RJ. Cortical substrates for exploratory decisions in humans. Nature. 2006;441:876–879. doi: 10.1038/nature04766.
  21. Dayan P, Yu AJ. Phasic norepinephrine: a neural interrupt signal for unexpected events. Network: Computation in Neural Systems. 2006;17:335–350. doi: 10.1080/09548980601004024.
  22. De Martino B, Strange BA, Dolan RJ. Noradrenergic neuromodulation of human attention for emotional and neutral stimuli. Psychopharmacology. 2008;197:127–136. doi: 10.1007/s00213-007-1015-5.
  23. de Visser L, van der Knaap LJ, van de Loo AJ, van der Weerd CM, Ohl F, van den Bos R. Trait anxiety affects decision-making differently in healthy men and women: towards gender-specific endophenotypes of anxiety. Neuropsychologia. 2010;48:1598–1606. doi: 10.1016/j.neuropsychologia.2010.01.027.
  24. Düzel E, Penny WD, Burgess N. Brain oscillations and memory. Current Opinion in Neurobiology. 2010;20:143–149. doi: 10.1016/j.conb.2010.01.004.
  25. Fang J, Yu PH. Effect of haloperidol and its metabolites on dopamine and noradrenaline uptake in rat brain slices. Psychopharmacology. 1995;121:379–384. doi: 10.1007/BF02246078.
  26. Foley NC, Jangraw DC, Peck C, Gottlieb J. Novelty enhances visual salience independently of reward in the parietal lobe. Journal of Neuroscience. 2014;34:7947–7957. doi: 10.1523/JNEUROSCI.4171-13.2014.
  27. Frank MJ, Doll BB, Oas-Terpstra J, Moreno F. Prefrontal and striatal dopaminergic genes predict individual differences in exploration and exploitation. Nature Neuroscience. 2009;12:1062–1068. doi: 10.1038/nn.2342.
  28. Fraundorfer PF, Fertel RH, Miller DD, Feller DR. Biochemical and pharmacological characterization of high-affinity trimetoquinol analogs on guinea pig and human beta adrenergic receptor subtypes: evidence for partial agonism. The Journal of Pharmacology and Experimental Therapeutics. 1994;270:665–674.
  29. Froböse MI, Westbrook A, Bloemendaal M, Aarts E, Cools R. Catecholaminergic modulation of the cost of cognitive control in healthy older adults. PLOS ONE. 2020;15:e0229294. doi: 10.1371/journal.pone.0229294.
  30. Froböse MI, Cools R. Chemical neuromodulation of cognitive control avoidance. Current Opinion in Behavioral Sciences. 2018;22:121–127. doi: 10.1016/j.cobeha.2018.01.027.
  31. Gershman SJ. Deconstructing the human algorithms for exploration. Cognition. 2018;173:34–42. doi: 10.1016/j.cognition.2017.12.014.
  32. Gershman SJ, Niv Y. Novelty and inductive generalization in human reinforcement learning. Topics in Cognitive Science. 2015;7:391–415. doi: 10.1111/tops.12138.
  33. Gibbs ME, Hutchinson DS, Summers RJ. Noradrenaline release in the locus coeruleus modulates memory formation and consolidation; roles for α- and β-adrenergic receptors. Neuroscience. 2010;170:1209–1222. doi: 10.1016/j.neuroscience.2010.07.052.
  34. Goldman-Rakic PS, Lidow MS, Gallager DW. Overlap of dopaminergic, adrenergic, and serotoninergic receptors and complementarity of their subtypes in primate prefrontal cortex. The Journal of Neuroscience. 1990;10:2125–2138. doi: 10.1523/JNEUROSCI.10-07-02125.1990.
  35. Guo D, Yu AJ. Advances in Neural Information Processing Systems. MIT Press; 2018.
  36. Hauser TU, Iannaccone R, Ball J, Mathys C, Brandeis D, Walitza S, Brem S. Role of the medial prefrontal cortex in impaired decision making in juvenile attention-deficit/hyperactivity disorder. JAMA Psychiatry. 2014;71:1165–1173. doi: 10.1001/jamapsychiatry.2014.1093.
  37. Hauser TU, Fiore VG, Moutoussis M, Dolan RJ. Computational psychiatry of ADHD: neural gain impairments across Marrian levels of analysis. Trends in Neurosciences. 2016;39:63–73. doi: 10.1016/j.tins.2015.12.009.
  38. Hauser TU, Eldar E, Dolan RJ. Separate mesocortical and mesolimbic pathways encode effort and reward learning signals. PNAS. 2017a;114:E7395–E7404. doi: 10.1073/pnas.1705643114.
  39. Hauser TU, Allen M, Purg N, Moutoussis M, Rees G, Dolan RJ. Noradrenaline blockade specifically enhances metacognitive performance. eLife. 2017b;6:e24901. doi: 10.7554/eLife.24901.
  40. Hauser TU, Moutoussis M, Purg N, Dayan P, Dolan RJ. Beta-blocker propranolol modulates decision urgency during sequential information gathering. The Journal of Neuroscience. 2018;38:7170–7178. doi: 10.1523/JNEUROSCI.0192-18.2018.
  41. Hauser TU, Eldar E, Purg N, Moutoussis M, Dolan RJ. Distinct roles of dopamine and noradrenaline in incidental memory. The Journal of Neuroscience. 2019;39:7715–7721. doi: 10.1523/JNEUROSCI.0401-19.2019.
  42. Humphries MD, Khamassi M, Gurney K. Dopaminergic control of the exploration-exploitation trade-off via the basal ganglia. Frontiers in Neuroscience. 2012;6:9. doi: 10.3389/fnins.2012.00009.
  43. Iigaya K, Hauser TU, Kurth-Nelson Z, O'Doherty JP, Dayan P, Dolan RJ. The value of what's to come: neural mechanisms coupling prediction error and the utility of anticipation. bioRxiv. 2019. doi: 10.1101/588699.
  44. Isaacson JS, Scanziani M. How inhibition shapes cortical activity. Neuron. 2011;72:231–243.
  45. Jahn CI, Gilardeau S, Varazzani C, Blain B, Sallet J, Walton ME, Bouret S. Dual contributions of noradrenaline to behavioural flexibility and motivation. Psychopharmacology. 2018;235:2687–2702. doi: 10.1007/s00213-018-4963-z.
  46. Jepma M, Te Beek ET, Wagenmakers EJ, van Gerven JM, Nieuwenhuis S. The role of the noradrenergic system in the exploration-exploitation trade-off: a psychopharmacological study. Frontiers in Human Neuroscience. 2010;4:170. doi: 10.3389/fnhum.2010.00170.
  47. Jepma M, Nieuwenhuis S. Pupil diameter predicts changes in the exploration-exploitation trade-off: evidence for the adaptive gain theory. Journal of Cognitive Neuroscience. 2011;23:1587–1596. doi: 10.1162/jocn.2010.21548.
  48. Joshi S, Li Y, Kalwani RM, Gold JI. Relationships between pupil diameter and neuronal activity in the locus coeruleus, colliculi, and cingulate cortex. Neuron. 2016;89:221–234. doi: 10.1016/j.neuron.2015.11.028.
  49. Joshi S, Gold JI. Pupil size as a window on neural substrates of cognition. Trends in Cognitive Sciences. 2020;24:466–480. doi: 10.1016/j.tics.2020.03.005.
  50. Kahnt T, Weber SC, Haker H, Robbins TW, Tobler PN. Dopamine D2-receptor blockade enhances decoding of prefrontal signals in humans. Journal of Neuroscience. 2015;35:4104–4111. doi: 10.1523/JNEUROSCI.4182-14.2015.
  51. Kahnt T, Tobler PN. Dopamine modulates the functional organization of the orbitofrontal cortex. The Journal of Neuroscience. 2017;37:1493–1504. doi: 10.1523/JNEUROSCI.2827-16.2016.
  52. Kane GA, Vazey EM, Wilson RC, Shenhav A, Daw ND, Aston-Jones G, Cohen JD. Increased locus coeruleus tonic activity causes disengagement from a patch-foraging task. Cognitive, Affective, & Behavioral Neuroscience. 2017;17:1073–1083. doi: 10.3758/s13415-017-0531-y.
  53. Kayser AS, Mitchell JM, Weinstein D, Frank MJ. Dopamine, locus of control, and the exploration-exploitation tradeoff. Neuropsychopharmacology. 2015;40:454–462. doi: 10.1038/npp.2014.193.
  54. Kool W, McGuire JT, Rosen ZB, Botvinick MM. Decision making and the avoidance of cognitive demand. Journal of Experimental Psychology: General. 2010;139:665–682. doi: 10.1037/a0020198.
  55. Koudas V, Nikolaou A, Hourdaki E, Giakoumaki SG, Roussos P, Bitsios P. Comparison of ketanserin, buspirone and propranolol on arousal, pupil size and autonomic function in healthy volunteers. Psychopharmacology. 2009;205:1–9. doi: 10.1007/s00213-009-1508-5.
  56. Krebs RM, Schott BH, Schütze H, Düzel E. The novelty exploration bonus and its attentional modulation. Neuropsychologia. 2009;47:2272–2281. doi: 10.1016/j.neuropsychologia.2009.01.015.
  57. Krugel LK, Biele G, Mohr PN, Li SC, Heekeren HR. Genetic variation in dopaminergic neuromodulation influences the ability to rapidly and flexibly adapt decisions. PNAS. 2009;106:17951–17956. doi: 10.1073/pnas.0905191106.
  58. Marois R, Ivanoff J. Capacity limits of information processing in the brain. Trends in Cognitive Sciences. 2005;9:296–305. doi: 10.1016/j.tics.2005.04.010.
  59. Nassar MR, Rumsey KM, Wilson RC, Parikh K, Heasly B, Gold JI. Rational regulation of learning dynamics by pupil-linked arousal systems. Nature Neuroscience. 2012;15:1040–1046. doi: 10.1038/nn.3130.
  60. Navarro D. Learning Statistics with R: A Tutorial for Psychology Students and Other Beginners (Version 0.5). 2015. http://ua.edu.au/ccs/teaching/lsr
  61. Papadopetraki D, Froböse M, Westbrook A, Zandbelt B, Cools R. Quantifying the cost of cognitive stability and flexibility. bioRxiv. 2019. doi: 10.1101/743120.
  62. R Development Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2011. http://www.r-project.org
  63. Rajkowski J, Kubiak P, Aston-Jones G. Locus coeruleus activity in monkey: phasic and tonic changes are associated with altered vigilance. Brain Research Bulletin. 1994;35:607–616. doi: 10.1016/0361-9230(94)90175-9.
  64. Richardson JTE. Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review. 2011;6:135–147. doi: 10.1016/j.edurev.2010.12.001.
  65. Rogers RD, Lancaster M, Wakeley J, Bhagwagar Z. Effects of beta-adrenoceptor blockade on components of human decision-making. Psychopharmacology. 2004;172:157–164. doi: 10.1007/s00213-003-1641-5.
  66. Rossetti ZL, Carboni S. Noradrenaline and dopamine elevations in the rat prefrontal cortex in spatial working memory. Journal of Neuroscience. 2005;25:2322–2329. doi: 10.1523/JNEUROSCI.3038-04.2005.
  67. Salamone JD, Yohn SE, López-Cruz L, San Miguel N, Correa M. Activational and effort-related aspects of motivation: neural mechanisms and implications for psychopathology. Brain. 2016;139:1325–1347. doi: 10.1093/brain/aww050.
  68. Salgado H, Treviño M, Atzori M. Layer- and area-specific actions of norepinephrine on cortical synaptic transmission. Brain Research. 2016;1641:163–176. doi: 10.1016/j.brainres.2016.01.033.
  69. Sara SJ, Vankov A, Hervé A. Locus coeruleus-evoked responses in behaving rats: a clue to the role of noradrenaline in memory. Brain Research Bulletin. 1994;35:457–465. doi: 10.1016/0361-9230(94)90159-7.
  70. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593.
  71. Schulz E, Gershman SJ. The algorithmic architecture of exploration in the human brain. Current Opinion in Neurobiology. 2019;55:7–14. doi: 10.1016/j.conb.2018.11.003.
  72. Schwartenbeck P, Passecker J, Hauser TU, FitzGerald TH, Kronbichler M, Friston KJ. Computational mechanisms of curiosity and goal-directed exploration. eLife. 2019;8:e41703. doi: 10.7554/eLife.41703.
  73. Servan-Schreiber D, Printz H, Cohen JD. A network model of catecholamine effects: gain, signal-to-noise ratio, and behavior. Science. 1990;249:892–895. doi: 10.1126/science.2392679.
  74. Silvetti M, Seurinck R, van Bochove ME, Verguts T. The influence of the noradrenergic system on optimal control of neural plasticity. Frontiers in Behavioral Neuroscience. 2013;7:1–6. doi: 10.3389/fnbeh.2013.00160.
  75. Silvetti M, Vassena E, Abrahamse E, Verguts T. Dorsal anterior cingulate-brainstem ensemble as a reinforcement meta-learner. PLOS Computational Biology. 2018;14:e1006370. doi: 10.1371/journal.pcbi.1006370.
  76. Skvortsova V, Palminteri S, Pessiglione M. Learning to minimize efforts versus maximizing rewards: computational principles and neural correlates. Journal of Neuroscience. 2014;34:15621–15630. doi: 10.1523/JNEUROSCI.1350-14.2014.
  77. Sokol-Hessner P, Lackovic SF, Tobe RH, Camerer CF, Leventhal BL, Phelps EA. Determinants of propranolol's selective effect on loss aversion. Psychological Science. 2015;26:1123–1130. doi: 10.1177/0956797615582026.
  78. Soutschek A, Burke CJ, Raja Beharelle A, Schreiber R, Weber SC, Karipidis II, Ten Velden J, Weber B, Haker H, Kalenscher T, Tobler PN. The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour. 2017;1:819–827. doi: 10.1038/s41562-017-0226-y.
  79. Soutschek A, Gvozdanovic G, Kozak R, Duvvuri S, de Martinis N, Harel B, Gray DL, Fehr E, Jetter A, Tobler PN. Dopaminergic D1 receptor stimulation affects effort and risk preferences. Biological Psychiatry. 2020;87:678–685. doi: 10.1016/j.biopsych.2019.09.002.
  80. Speekenbrink M, Konstantinidis E. Uncertainty and exploration in a restless bandit problem. Topics in Cognitive Science. 2015;7:351–367. doi: 10.1111/tops.12145.
  81. Stojić H, Schulz E, Analytis PP, Speekenbrink M. It's new, but is it good? How generalization and uncertainty guide the exploration of novel options. Journal of Experimental Psychology: General. 2020;149:1878–1907. doi: 10.1037/xge0000749.
  82. Sutton RS, Barto AG. Reinforcement Learning: An Introduction. MIT Press; 1998.
  83. Tervo DGR, Proskurin M, Manakov M, Kabra M, Vollmer A, Branson K, Karpova AY. Behavioral variability through stochastic choice and its gating by anterior cingulate cortex. Cell. 2014;159:21–32. doi: 10.1016/j.cell.2014.08.037.
  84. Thompson WR. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika. 1933;25:285–294. doi: 10.1093/biomet/25.3-4.285.
  85. Toru M, Takashima M. Haloperidol in large doses reduces the cataleptic response and increases noradrenaline metabolism in the brain of the rat. Neuropharmacology. 1985;24:231–236. doi: 10.1016/0028-3908(85)90079-6.
  86. Trofimova I, Robbins TW. Temperament and arousal systems: a new synthesis of differential psychology and functional neurochemistry. Neuroscience & Biobehavioral Reviews. 2016;64:382–402. doi: 10.1016/j.neubiorev.2016.03.008.
  87. Usher M, Cohen JD, Servan-Schreiber D, Rajkowski J, Aston-Jones G. The role of locus coeruleus in the regulation of cognitive performance. Science. 1999;283:549–554. doi: 10.1126/science.283.5401.549.
  88. Varazzani C, San-Galli A, Gilardeau S, Bouret S. Noradrenaline and dopamine neurons in the reward/effort trade-off: a direct electrophysiological comparison in behaving monkeys. Journal of Neuroscience. 2015;35:7866–7877. doi: 10.1523/JNEUROSCI.0454-15.2015.
  89. Wahn B, König P. Is attentional resource allocation across sensory modalities task-dependent? Advances in Cognitive Psychology. 2017;13:83–96. doi: 10.5709/acp-0209-2.
  90. Walton ME, Bouret S. What is the relationship between dopamine and effort? Trends in Neurosciences. 2019;42:79–91. doi: 10.1016/j.tins.2018.10.001.
  91. Warren CM, Wilson RC, van der Wee NJ, Giltay EJ, van Noorden MS, Cohen JD, Nieuwenhuis S. The effect of atomoxetine on random and directed exploration in humans. PLOS ONE. 2017;12:e0176034. doi: 10.1371/journal.pone.0176034.
  92. Waterhouse BD, Moises HC, Yeh HH, Woodward DJ. Norepinephrine enhancement of inhibitory synaptic mechanisms in cerebellum and cerebral cortex: mediation by beta adrenergic receptors. The Journal of Pharmacology and Experimental Therapeutics. 1982;221:495–506.
  93. Waterhouse BD, Moises HC, Yeh HH, Geller HM, Woodward DJ. Comparison of norepinephrine- and benzodiazepine-induced augmentation of Purkinje cell response to γ-aminobutyric acid (GABA). The Journal of Pharmacology and Experimental Therapeutics. 1984;228:257–267.
  94. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: the PANAS scales. Journal of Personality and Social Psychology. 1988a;54:1063–1070. doi: 10.1037/0022-3514.54.6.1063.
  95. Watson D, Clark LA, Carey G. Positive and negative affectivity and their relation to anxiety and depressive disorders. Journal of Abnormal Psychology. 1988b;97:346–353. doi: 10.1037/0021-843X.97.3.346.
  96. Wechsler D. WASI-II: Wechsler Abbreviated Scale of Intelligence - Second Edition. Journal of Psychoeducational Assessment. 2013;13:56. doi: 10.1177/0734282912467756.
  97. Wilson RC, Geana A, White JM, Ludvig EA, Cohen JD. Humans use directed and random exploration to solve the explore–exploit dilemma. Journal of Experimental Psychology: General. 2014;143:2074–2081. doi: 10.1037/a0038199.
  98. Wittmann BC, Daw ND, Seymour B, Dolan RJ. Striatal activity underlies novelty-based choice in humans. Neuron. 2008;58:967–973. doi: 10.1016/j.neuron.2008.04.027.
  99. Wu CM, Schulz E, Speekenbrink M, Nelson JD, Meder B. Generalization guides human exploration in vast decision spaces. Nature Human Behaviour. 2018;2:915–924. doi: 10.1038/s41562-018-0467-4.
  100. Yu AJ, Dayan P. Uncertainty, neuromodulation, and attention. Neuron. 2005;46:681–692. doi: 10.1016/j.neuron.2005.04.026.
  101. Zajkowski WK, Kossut M, Wilson RC. A causal role for right frontopolar cortex in directed, but not random, exploration. eLife. 2017;6:e27430. doi: 10.7554/eLife.27430.
  102. Zénon A, Solopchuk O, Pezzulo G. An information-theoretic perspective on the costs of cognition. Neuropsychologia. 2019;123:5–18. doi: 10.1016/j.neuropsychologia.2018.09.013.

Decision letter

Editor: Thorsten Kahnt
Reviewed by: Christopher Warren

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

How individuals decide to exploit known options or to explore alternatives with unknown payouts is a fundamental question in neuroscience. This study combines human pharmacology and computational modeling to disentangle the role of two neurotransmitters (noradrenaline and dopamine) in driving different exploration strategies. Results show that value-independent "random" exploration heuristics are mediated by noradrenaline. These findings contribute to a better understanding of the neural processes involved in the exploration-exploitation trade-off.

Decision letter after peer review:

Thank you for submitting your article "Noradrenaline modulates tabula-rasa exploration" for consideration by eLife. Your article has been reviewed by Christian Büchel as the Senior Editor, a Reviewing Editor, and three reviewers. The following individual involved in review of your submission has agreed to reveal their identity: Christopher Warren (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

Dubois and colleagues investigate how two modes of exploration – tabula-rasa and novelty-seeking – contribute to human choice behavior. They found that subjects used both tabula-rasa and novelty-seeking heuristics when the task conditions were in favor of exploration. Specifically, participants could, and had to, make more responses in the long-horizon condition, which favored exploration, compared to the short-horizon condition, which favored exploitation. Moreover, the authors provide evidence that blockade of norepinephrine β receptors leads to decreased tabula-rasa exploration and increased choice consistency, whereas blockade of D2/D3 dopamine receptors had little effect.

All reviewers agreed that this paper provides interesting evidence on exploration-exploitation trade-offs and the underlying pharmacological mechanisms. However, reviewers felt that there are a number of major conceptual, methodological and interpretational issues that should be addressed in a revised version of the manuscript.

Essential revisions:

1) The term "tabula rasa" exploration is slightly misleading and using "random" exploration would be simpler, and clearer. That is, "tabula rasa" has the connotation that both the current "tabula rasa" choice and all future choices will not take into account information obtained before that choice. Random exploration is a better term because it is easy and intuitive to see that random choices can be sprinkled in with choices based on previous information, whereas "tabula rasa" implies wiping previous information away from that point forward. Indeed, previous related work has not termed the random exploration associated with the e-greedy parameter "tabula rasa". Problematic in this regard is that there is another parameter in one or more of the models that reflects random exploration (Subsection “Choice rules”, inverse temperature). This may be why the authors opted to call the e-greedy parameter something else. However, this raises the question: what is the difference between the e-greedy parameter and the inverse temperature, mathematically, but more importantly, conceptually? At the very least, it would be important to provide a better explanation of the choice of term (tabula rasa) as well as a thorough explanation of the difference between tabula rasa and random exploration. We also recommend changing the term used, but we are amenable to accepting an argument for keeping it.

2) It is one thing to come up with computational terms and model-based quantities correlating with behavior but a different one to show their psychological meaning. Did the trials with tabula-rasa exploration or novelty exploration differ in terms of response times from the other types of responses? Did participants report that they indeed intended to explore in the tabula-rasa exploration trials? On a related note, how do the authors distinguish random (tabula-rasa) exploration from making a mistake? From how the task was designed, choosing the low value option appears to receive a more natural interpretation as a mistake rather than as exploration because this option was clearly dominated by the other options and remained so within and across trials.

3) Relatedly, successful performance in the task is based on the ability to discriminate between different reward types and to select the one with the higher value. From the experimental design description, one can see that in order to do so, the subjects needed to distinguish between different apple sizes. In this regard, a question arises: how large was the difference between two adjacent apple sizes? Was it large enough so that after a visual inspection, the participant could easily understand that the apple size = 7 was less rewarding than the apple size = 8? Finally, since the task requires visual inspection of reward stimuli, was the subject vision somehow tested and did it differ between groups?

4) Previous research of the authors (Hauser et al., 2017, 2018, 2019) has associated β receptor blockade with enhanced metacognition, decreased information gathering/increased commitment to an early decision (Hauser et al., 2018) and an arousal (i.e., reward)-induced boost of stimulus processing. In addition, Rogers et al., (2004) suggest that propranolol affects the processing of possible losses in decision-making paradigms, and might also reduce the discrimination between the different levels of possible gains (Rogers et al., 2004). In another study, Sokol-Hessner et al., (2015) also report a loss aversion reduction after propranolol administration. These effects might also change prior information and reset behavioral adaptation to look for new opportunities. In this latter study the authors also report a lack of effect of propranolol on choice consistency, contrary to what the present study reports. How do the current results relate to these previous findings? Of course, it is possible that norepinephrine plays multiple roles, but it appears not exactly parsimonious to imbue it with a different role for each task tested. Are there some commonalities across these effects that could be explained with some common function(s)?

5) Previous studies have shown that propranolol significantly decreased heart rate (e.g. Rogers et al., 2004). Did the authors measure heart rate and can they control for the possibility that peripheral effects of the drug explain the findings? And what was the reason for not collecting pupil diameter data, contrary to the previous research of the authors? Relatedly, in terms of norepinephrine influence and given the distributions of β receptors, could the authors be more explicit about the relation of their work to potential mechanisms (e.g. Goldman-Rakic et al., 1990 or Waterhouse et al., 1982)?

6) One strength of the paper is that the authors compared several computational models. The model selection is presented in Figure 4, and in Figure 4—figure supplement 1 the authors provide additional information regarding the winning model, which accounted best for the largest number of subjects in comparison with two other models, namely the UCB model (with novelty and greedy parameters) and the hybrid model (with novelty and greedy parameters). It would be useful for the reader to get a better sense of the number of subjects whose results favored any given model (i.e. a more exhaustive picture). One could use the same table as the one presented in the Appendix—table 2, with the respective number of subjects for which each model achieved the best performance. In fact, as shown in Figure 4, the winning model does not look very different (at least visually) from other models such as the UCB (with novelty and greedy parameters) or hybrid (with novelty parameter or novelty and greedy parameters) models. As such, it would be important to know whether the conclusion about the e-greedy parameter would hold true if other models with similar performance were tested, e.g. the UCB model (with novelty and greedy parameters) or the hybrid model (with novelty and greedy parameters).

7) Related to this issue, the point of heuristics from a psychological perspective is that they dispense with the need to use full-blown algorithmic calculations. However, in the present models, the heuristics are only added on top of these calculations and the winning model includes Thompson exploration. Stand-alone heuristic models would do the term more justice and one wonders how well a model would fare that includes only tabula rasa exploration and novelty exploration.

8) The simulations provide a nice intuition for understanding choice proportions from different models/strategies (Figure 1E and 1F). However, it would be helpful to provide simulated results for long and short horizons separately. Do the models make different predictions for the two horizons? Additionally, it would be helpful to also show the results from other models (i.e. the proportion of low value bandit chosen by novelty agent). These could be added in the supplement.

[Editors’ note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your article "Human complex exploration strategies are extended via noradrenaline-modulated heuristics" for consideration by eLife. Your revised article has been reviewed by Christian Büchel as the Senior Editor, a Reviewing Editor, and three reviewers. The following individual involved in review of your submission has agreed to reveal their identity: Christopher Warren (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

The authors have been very responsive to the initial reviews and reviewers feel that the paper is much improved. However, a few points remain. Please address the remaining issues raised by reviewer #1 and #2 and submit a revised version of the manuscript.

Reviewer #1:

Thank you for a largely responsive revision. The paper is much improved. A few points remain:

Related to previous point 1, the argument in subsection “Probing the contributions of heuristic exploration strategies” does not seem to be entirely correct. The authors claim that "A second prediction is that choice consistency, across repeated trials, is substantially affected by value-free random exploration." However, consistency can also be affected by the softmax parameter: if β is higher, then choice consistency is also lower. Also, I am a little bit confused about the simulation results in Figure 1—figure supplement 2E,F. Do both models predict that the consistency of selecting the low value bandit is higher than the consistency of selecting the high value bandit? In line with the argument that a higher β also leads to more stochastic choices, I also wonder if that can be the reason why UCB and UCB+𝜖 are not that much different in likelihood.

Regarding previous point 2: Were response time differences between value-free exploration and exploitation trials larger in the long horizon than the short horizon condition (i.e., while there was no main effect of bandit, was there an interaction with horizon or trial within horizon and was there a three-way interaction with drug)? Moreover, the response to the mistake issue is not entirely satisfactory. If participants paid (gradually) less attention in the long horizon, then it would also be expected that they make more mistakes in the long horizon condition only.

Regarding previous point 8, it is great that the authors followed our suggestion to simulate all models in both the short and long horizon. However, these figures (Figure 1—figure supplement 3 to Figure 1—figure supplement 5) seem somewhat confusing. The problem may lie in the parameters selected for simulation. According to Appendix 2—table 7, multiple parameters were varied among different models. But I thought these should be kept mostly consistent, varying only the parameter of interest. For example, shouldn't 𝜂 be kept the same, or even be zero, in the value-free random exploration model to show how choices vary as a function of 𝜖? I think the numbers are selected such that the predictions favor the value-free random exploration model. If, as the authors said, UCB + 𝜖 + 𝜂 is almost as good as Thompson-sampling + 𝜖 + 𝜂, I don’t see how the predictions can be so dramatically different. That is how I interpret the statement that "For simulating the long (versus short) horizon condition, we assumed that not only the key value but also the other exploration strategies increased, as found in our experimental data." In any case, I feel the simulation data are somewhat misleading and need more explanation.

Reviewer #2:

The authors addressed all my comments and made substantial revisions that have strengthened the overall manuscript. Specifically, the new information in Appendix—table 4 with each model's performance, and the additional analyses of the other "(close to best)" models, further strengthen the authors' claim. The authors also clarified the results on heart rate, RT and the PANAS questionnaire, providing additional results and appropriately discussing potential caveats. Further additions in the Discussion address potential mechanisms of propranolol on decision making. The only comment I have relates to the following sentence (in the Discussion): “In particular, the results indicate that under propranolol behaviour is more deterministic and less influenced by “task-irrelevant” distractions. This aligns with theoretical ideas, as well as recent optogenetic evidence (32), that propose noradrenaline infuses noise in a temporally targeted way (31). It also accords with studies implicating noradrenaline in attention shifts (for a review cf. (76)). Other theories of noradrenaline/catecholamine function can link to determinism (64, 65), although the hypothesized direction of effect is different (i.e. noradrenaline increases determinism)." Here, it is unclear to me how the authors define determinism, and how either increasing or decreasing noradrenaline can increase determinism.

Apart from that, the manuscript is well-written and provides an interesting account about the role of neuromodulatory systems on the processes at play during exploration.

eLife. 2021 Jan 4;10:e59907. doi: 10.7554/eLife.59907.sa2

Author response


Summary:

Dubois and colleagues investigate how two modes of exploration – tabula-rasa and novelty-seeking – contribute to human choice behavior. They found that subjects used both tabula-rasa and novelty-seeking heuristics when the task conditions were in favor of exploration. Specifically, participants could, and had to, make more responses in the long-horizon condition, which favored exploration, compared to the short-horizon condition, which favored exploitation. Moreover, the authors provide evidence that blockade of norepinephrine β receptors leads to decreased tabula-rasa exploration and increased choice consistency, whereas blockade of D2/D3 dopamine receptors had little effect.

All reviewers agreed that this paper provides interesting evidence on exploration-exploitation trade-offs and the underlying pharmacological mechanisms. However, reviewers felt that there are a number of major conceptual, methodological and interpretational issues that should be addressed in a revised version of the manuscript.

We thank the editors and reviewers for their positive evaluation of our manuscript and appreciate the helpful suggestions. We have now addressed all raised concerns and conducted substantial additional analyses in light of these comments.

In short, we have conducted analyses of physiological measures and have reanalysed our data with the relevant covariates. We have also analysed, in greater depth, the second- and third-best models in the computational modelling. We have further tested new models, run substantial additional model simulations and behavioural analyses, and analysed reaction times.

Regarding the text, we have clarified the manuscript by replacing the term “tabula-rasa” exploration with “value-free random” exploration and adapting the drug group names according to the reviewers’ suggestions. Moreover, we now provide substantial data simulations to further illustrate the different exploration mechanisms. Lastly, we have expanded the discussion, taking account of all the reviewers’ constructive suggestions. Importantly, all new analyses support our original results, and we believe this further strengthens the paper. We report key new results in the revised manuscript and trust that it meets the rigorous standards of your journal.

Essential revisions:

1) The term "tabula rasa" exploration is slightly misleading and using "random" exploration would be simpler, and clearer. That is, "tabula rasa" has the connotation that both the current "tabula rasa" choice and all future choices will not take into account information obtained before that choice. Random exploration is a better term because it is easy and intuitive to see that random choices can be sprinkled in with choices based on previous information, whereas "tabula rasa" implies wiping previous information away from that point forward. Indeed, previous related work has not termed the random exploration associated with the e-greedy parameter "tabula rasa". Problematic in this regard is that there is another parameter in one or more of the models that reflects random exploration (subsection “Choice rules”, inverse temperature). This may be why the authors opted to call the e-greedy parameter something else. However, this raises the question: what is the difference between the e-greedy parameter and the inverse temperature, mathematically, but more importantly, conceptually? At the very least, it would be important to provide a better explanation of the choice of term (tabula rasa) as well as a thorough explanation of the difference between tabula rasa and random exploration. We also recommend changing the term used, but we are amenable to accepting an argument for keeping it.

We thank the reviewer for raising this relevant point. We apologise for using a potentially misleading term. We chose the term “tabula-rasa” to distinguish it from other forms of stochasticity, such as the modulation of an inverse temperature. We agree that the form of exploration we wish to describe here is a pure form of randomness, one that ignores all available information. However, because “random” exploration has previously been used for inverse temperature related exploration, we refrained from using this term in our original manuscript and instead chose “tabula-rasa” to emphasize the fact that prior beliefs were not taken into account. We agree that tabula-rasa may have a connotation about future use of information, which the reviewer rightly highlights.

On the above basis we think it best to change the terminology. We now refer to “value-free random exploration” in the revised version of the manuscript, as distinct from “value-based random exploration” (as captured by Thompson sampling or softmax temperature). We believe these revised terms adequately reflect the putative computational mechanisms, whilst also highlighting a difference between them.

We apologise if the distinction between these two forms of exploration was not clear in the original manuscript. Mathematically, value-based random exploration is captured by scaling the inverse temperature with the expected values in a softmax algorithm. This means that this form of exploration is still guided by the value of the choice options (hence “value-based”), and it requires an agent to keep track of each expected value that is compared. In contrast, the 𝜖-greedy algorithm ignores choice option values completely on a proportion 𝜖 of trials (hence “value-free”). Importantly, the difference between the inverse temperature and the 𝜖-greedy parameter is that the former requires more cognitive resources. We tabulate a summary of the two strategies:

Exploration strategy   Value-based random exploration                Value-free random exploration
Algorithm              Softmax                                       𝜖-greedy
Equation               P(i) = e^(𝛽V_i) / Σ_x e^(𝛽V_x)                P(i) = 1 − 𝜖 if i is the best action; 𝜖 otherwise
Free parameters        𝑉: bandit’s mean; 𝛽: inverse temperature      𝜖: 𝜖-greedy parameter
Values to compute      𝑛 (number of bandits)                         0

Although both strategies ultimately increase noise in the decision process, their effects differ. Changing the softmax inverse temperature (cf. Figure 1—figure supplement 2A) affects the slope of the sigmoid, whereas changing the 𝜖-greedy parameter affects the compression of the sigmoid (cf. Figure 1—figure supplement 2B). Conceptually, in a softmax (value-based random) exploration mode (cf. Figure 1—figure supplement 2C), because each bandit's expected value is taken into account, an agent injecting noise will still favour the second-best (i.e. medium-value) bandit over one with an even lower value (i.e. the low-value bandit). In contrast, in an 𝜖-greedy (value-free random) exploration mode (cf. Figure 1—figure supplement 2D), bandits are explored equally often irrespective of their expected value. This also has consequences for choice consistency: in value-based random exploration the second-best option is most probably explored (i.e. choice remains somewhat consistent; cf. Figure 1—figure supplement 2E), whereas in value-free random exploration any of the non-optimal options is explored with equal probability (i.e. low consistency; cf. Figure 1—figure supplement 2F). Please also see our response to comment 6, where we demonstrate that our effects remain unchanged even when allowing value-based and value-free random exploration to compete directly.
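To make this difference concrete, the following minimal sketch (in Python) contrasts the two choice rules. The bandit values and parameter settings are purely illustrative, and the uniform choice over all bandits inside the 𝜖 branch is our simplifying assumption rather than the exact task implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax_choice(values, beta):
        # Value-based random exploration: noise scales with expected values,
        # so the second-best bandit is still favoured over worse ones.
        p = np.exp(beta * np.asarray(values))
        p /= p.sum()
        return int(rng.choice(len(values), p=p))

    def epsilon_greedy_choice(values, epsilon):
        # Value-free random exploration: on a proportion epsilon of trials,
        # a bandit is chosen uniformly, ignoring expected values entirely.
        if rng.random() < epsilon:
            return int(rng.integers(len(values)))
        return int(np.argmax(values))

    values = [7.0, 5.5, 2.0]  # high-, medium-, low-value bandit (illustrative)
    n = 10000
    soft = np.bincount([softmax_choice(values, 1.0) for _ in range(n)], minlength=3) / n
    egreedy = np.bincount([epsilon_greedy_choice(values, 0.2) for _ in range(n)], minlength=3) / n
    print(soft)     # medium bandit chosen far more often than the low bandit
    print(egreedy)  # medium and low bandits chosen equally often (~epsilon/3 each)

Under these assumed settings, the softmax agent almost never selects the low-value bandit, whereas the 𝜖-greedy agent selects the medium- and low-value bandits at the same rate, which is the behavioural signature described above.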

In addition to renaming these exploration strategies in the revised manuscript, we now also provide a more thorough explanation of the difference between the two random exploration strategies and illustrate it in the newly added Figure 1—figure supplement 2.

Introduction: “An alternative strategy, sometimes termed “random” exploration, is to induce stochasticity after value computations in the decision process. […] Of relevance in this context is a view that exploration strategies depend on dissociable neural mechanisms (21). Influences from noradrenaline and dopamine are plausible candidates in this regard based on prior evidence (9, 22).”

2) It is one thing to come up with computational terms and model-based quantities correlating with behavior but a different one to show their psychological meaning. Did the trials with tabula-rasa exploration or novelty exploration differ in terms of response times from the other types of responses? Did participants report that they indeed intended to explore in the tabula-rasa exploration trials? On a related note, how do the authors distinguish random (tabula-rasa) exploration from making a mistake? From how the task was designed, choosing the low value option appears to receive a more natural interpretation as a mistake rather than as exploration because this option was clearly dominated by the other options and remained so within and across trials.

We thank the reviewers for raising this important point, on which we now elaborate in more detail in the manuscript.

The key dissociation between value-free random exploration and simply making a mistake is that the former is sensitive to our task (horizon) condition, which would not be expected for mistakes. This means that pure “mistakes” (i.e. independent of any cognitive process) should be equally distributed across all experimental conditions, whereas value-free exploration is deployed more strategically, increasing exploration over a long horizon.

In our data, we find that value-free exploration increases over the long horizon, i.e. when exploration is more useful (low-value bandit: horizon main effect: F(1, 54)=4.069, p=.049, 𝜂2=.070; 𝜖-greedy parameter: horizon-by-WASI interaction: F(1, 54)=6.08, p=.017, 𝜂2=.101). Importantly, we have since reproduced this horizon effect multiple times across independent studies and cohorts. We found the same horizon effect in children and adolescents (low-value bandit: horizon main effect: F(1, 94)=8.837, p=.004, 𝜂2=.086; 𝜖-greedy parameter: horizon main effect: F(1, 94)=20.63, p<.001, 𝜂2=.180; Dubois et al., 2020, BioRxiv) as well as in healthy adults tested online (unpublished pilot data: low-value bandit: t(61)=-3.621, p=.001, d=.46, 95%CI=[-1.615, -.466]; 𝜖-greedy parameter: t(61)=-3.286, p=.002, d=.417, 95%CI=[-.058, -.014]). These results demonstrate that this form of exploration is modulated by horizon, which would not be the case if these choices were simple mistakes.
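To illustrate this dissociation (a sketch of ours with hypothetical parameter values, not the analysis reported in the manuscript): a horizon-sensitive 𝜖 produces more low-value choices in the long horizon, whereas pure lapses with a constant 𝜖 produce identical rates in both conditions.

    import numpy as np

    rng = np.random.default_rng(1)
    values = [7.0, 5.5, 2.0]  # high-, medium-, low-value bandit (illustrative)

    def low_value_rate(epsilon, n_trials=10000):
        # With probability epsilon choose a bandit uniformly (value-free),
        # otherwise exploit the highest-valued bandit.
        random_trials = rng.random(n_trials) < epsilon
        picks = np.where(random_trials, rng.integers(0, 3, n_trials), np.argmax(values))
        return (picks == 2).mean()  # index 2 = low-value bandit

    # Strategic value-free exploration: epsilon adapts to the horizon
    print(low_value_rate(0.05), low_value_rate(0.20))  # short horizon < long horizon
    # Simple lapses: a constant epsilon yields the same rate in both horizons
    print(low_value_rate(0.10), low_value_rate(0.10))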

Based on the reviewers’ suggestion, we also investigated response times, under the hypothesis that simple mistakes would lead to faster responses. We did not observe any response time differences between low-value bandit trials and trials in which the high-value bandit (i.e. exploitation) or the novel bandit were chosen (bandit main effect: F(1.78, 99.44)=1.634, p=.203, 𝜂2=.028; Figure 3—figure supplement 1). This further speaks against the hypothesis that these choices represent mere mistakes.

Lastly, it is important to note our finding that value-free random exploration is modulated by propranolol. To our knowledge, previous studies using the same drug (Sokol-Hessner et al., 2015; Campbell-Meiklejohn et al., 2011; Rogers et al., 2004; Hauser et al., 2019) did not report any impact on behavioural features that could be interpreted as “mistakes”. For example, our own previous study did not find an effect of propranolol on choice accuracy (Hauser et al., 2019). Instead, in prior studies propranolol impacted a directed cognitive process, similar to what we find here.

In line with previous papers on exploration (e.g. Warren et al., 2017; Wilson et al., 2014; Wu et al., 2018; Stojic et al., 2020), we did not collect subjects’ reports about their intentions, for several reasons. First, we were concerned this could have biased the task, as subjects might have felt compelled to focus on exploration rather than performing the task and earning as much money as possible. This means that exploration might be perceived as a means to satisfy an experimenter’s intentions, rather than as harvesting information for later use. Second, it is unclear whether all forms of exploration invoke a conscious representation (and hence are accessible to self-report). It is possible that the associated heuristics are phylogenetically old and not represented explicitly. Lastly, seeking subject reports would have either extended the task duration substantially or reduced the number of trials, both restrictions we wished to avoid.

In the revised manuscript, we discuss the psychological meaning of these exploration strategies in more detail, with a focus on the dissociation between mistakes and value-free random exploration. We have also added the new response latency analyses.

Discussion: “Value-free random exploration might reflect other influences, such as attentional lapses or impulsive motor responses. […] However, future studies could explore these exploration strategies in more detail including by reference to subjects’ own self-reports.”

Appendix I: “There was no difference in response times between bandits in the repeated-measures ANOVA (bandit main effect: F(1.78, 99.44)=1.634, p=.203, η2=.028; Figure 3—figure supplement 1).”

3) Relatedly, successful performance in the task is based on the ability to discriminate between different reward types and to select the one with the higher value. From the experimental design description, one can see that in order to do so, the subjects needed to distinguish between different apple sizes. In this regard, a question arises: how large was the difference between two adjacent apple sizes? Was it large enough so that after a visual inspection, the participant could easily understand that the apple size = 7 was less rewarding than the apple size = 8? Finally, since the task requires visual inspection of reward stimuli, was the subject vision somehow tested and did it differ between groups?

We agree, this is a relevant point and one we investigated in detail when developing the task. In fact, we originally tested versions with different apple size ranges and based upon this opted for a smaller range of 9 different apple sizes, as our pilots showed that they were easily distinguishable. Moreover, the apples were presented (and remained) next to each other on the screen in the “crate”, so that apple sizes were directly comparable.

Even though we did not assess the vision of our subjects formally, we only recruited subjects who had (self-reported) normal or corrected-to-normal vision. This is the standard procedure for participant recruitment at the Wellcome Centre for Human Neuroimaging. To assess subjects’ understanding of apple sizes and to confirm normal vision, we conducted extensive training prior to the main experiment, in which they had to categorise different apple sizes. This training was successfully completed by all participants. We have now added this information and Figure 1—figure supplement 1.

Materials and methods: “Each distribution was truncated to [2, 10], meaning that rewards with values above or below this interval were excluded, resulting in a total of 9 possible rewards (i.e. 9 different apple sizes; cf. Figure 1—figure supplement 1 for a representation). […] In training, based on three displayed apples of similar size, they were tasked to guess between two options, namely which apple was most likely to come from the same tree and then received feedback about their choice.”

4) Previous research of the authors (Hauser et al., 2017, 2018, 2019) has associated β receptor blockade with enhanced metacognition, decreased information gathering/increased commitment to an early decision (Hauser et al., 2018) and an arousal (i.e., reward)-induced boost of stimulus processing. In addition, Rogers et al., (2004) suggest that propranolol affects the processing of possible losses in decision-making paradigms, and might also reduce the discrimination between the different levels of possible gains (Rogers et al., 2004). In another study, Sokol-Hessner et al., (2015) also report a loss aversion reduction after propranolol administration. These effects might also change prior information and reset behavioral adaptation to look for new opportunities. In this latter study the authors also report a lack of effect of propranolol on choice consistency, contrary to what the present study reports. How do the current results relate to these previous findings? Of course, it is possible that norepinephrine plays multiple roles, but it appears not exactly parsimonious to imbue it with a different role for each task tested. Are there some commonalities across these effects that could be explained with some common function(s)?

We thank the reviewers for raising this interesting point about the overarching function of noradrenaline. As the referee indicates, we previously associated β receptor blockade with enhanced metacognition, decreased information gathering and decreased arousal-induced boosts in stimulus processing (please note that these studies were conducted in different subjects and are thus not directly relatable). Together with our current findings, the overall pattern of results might suggest that propranolol impacts how neural noise affects information processing in the brain, in line with prior theoretical work (Dayan and Yu, 2006). In particular, all of these results show that after administering propranolol, behaviour is more deterministic and less influenced by “task-irrelevant distractions”, an observation that also accords with reports implicating noradrenaline in attention shifting (for a review cf. Trofimova and Robbins, 2016). For example, an arousal-induced boost in incidental memory is abolished after propranolol (Hauser et al., 2019), a finding that aligns well with the suggestion of Dayan and Yu (2006) that noradrenaline can infuse noise into a system in a temporally targeted way. The latter idea also gains support from recent optogenetics-based studies (Tervo et al., 2014) and relates to other theories of noradrenaline/catecholamine function (Servan-Schreiber et al., 1990; Aston-Jones and Cohen, 2005), although an assumption there was that increases in noradrenaline would lead to an increase in determinism.

The studies pointed out by the reviewers (Rogers et al., 2004; Sokol-Hessner et al., 2015) make an interesting point about loss processing, including a demonstration that propranolol attenuates the processing of punishment cues (Rogers et al., 2004) and reduces loss aversion (Sokol-Hessner et al., 2015), suggesting an effect of noradrenaline on prior information in a loss context. As the referee will appreciate, our task was conducted in a reward context, and it is entirely possible that an exploration task in a loss setting would reveal additional interesting results. Our interpretation of the existing data is that propranolol has a minimal effect, if any, on the processing of reward levels. First, the above-mentioned study by Rogers et al. (2004) only found a trend-level result. In addition, unpublished data from our group (Habicht et al., in prep) have not revealed any effect of propranolol on representations of gain and reward magnitudes.

It is interesting to speculate as to why Sokol-Hessner et al. (2015) did not find an effect on consistency; we do not believe that their results call our findings into question. It is important to note that what we refer to as consistency is the number of times subjects made the exact same choice on the exact same trial. We built this into the design of our task by duplicating each trial. In the study by Sokol-Hessner et al. (2015), the authors defined consistency as the softmax temperature parameter in a non-exploration-related context. In line with their findings, we did not observe any drug effect on our prior variance 𝜎0 parameter, the one most closely related to the parameter in the Sokol-Hessner et al. (2015) study.

We now incorporate these points into the revised version of the paper. We have clarified how we measure consistency and how this is different from other studies. We add a paragraph discussing the above papers and speculate on an overarching explanatory framework and how this might relate to the previous theories about the role of noradrenaline.

Discussion: “Noradrenaline blockade by propranolol has been shown previously to enhance metacognition (75), decrease information gathering (59), and attenuate arousal-induced boosts in incidental memory (36). […] Future studies investigating exploration in loss contexts might provide important additional information on these questions.”

Materials and methods: “[Trials] were then duplicated to measure choice consistency, defined as the frequency of making the same choice on identical trials (in contrast to a previous propranolol study where consistency was defined in terms of a value-based exploration parameter (60)).”

5) Previous studies have shown that propranolol significantly decreased heart rate (e.g. Rogers et al., 2004). Did the authors measure heart rate and can they control for the possibility that peripheral effects of the drug explain the findings? And what was the reason for not collecting pupil diameter data, contrary to the previous research of the authors? Relatedly, in terms of norepinephrine influence and given the distributions of β receptors, could the authors be more explicit about the relation of their work to potential mechanisms (e.g. Goldman-Rakic et al., 1990 or Waterhouse et al., 1982)?

We thank the reviewers for raising these points and suggesting these additional analyses. We recorded heart rate and blood pressure (systolic and diastolic) as part of our standard protocol to ensure subjects’ health and safety. These were collected at three time points: at the beginning of the experiment before drug administration (“at arrival”), after drug administration just before playing the task (“pre-task”), and after finishing the experiment (“post-task”). We have now, as suggested, analysed these data.

In line with the known physiological effects of propranolol (Koudas et al., 2019) and previous cognitive studies (e.g. Rogers et al., 2004, Hauser et al., 2019), the propranolol group had a lower post-task heart rate (F(2, 55)=7.249, p=.002, 𝜂2=.209). None of the other measures or time points showed a drug effect (cf. Appendix 2—table 2 for all comparisons).

To further evaluate these effects, we ran a two-way ANOVA with the between-subject factor drug group and the within-subject factor time (all three time points). In this analysis, we found a change over time in all measures across all subjects (heart rate: F(1.74, 95.97)=99.341, p<.001, 𝜂2=.644; systolic pressure: F(2, 110)=8.967, p<.001, 𝜂2=.14; diastolic pressure: F(2, 110)=.874, p=.42, 𝜂2=.016), meaning these measures decreased throughout the experiment. However, they did not differ between groups (drug main effect: heart rate: F(2, 55)=1.84, p=.169, 𝜂2=.063; systolic pressure: F(2, 55)=1.08, p=.347, 𝜂2=.038; diastolic pressure: F(2, 55)=.239, p=.788, 𝜂2=.009; drug-by-time interaction: heart rate: F(3.49, 95.97)=1.928, p=.121, 𝜂2=.066; systolic pressure: F(4, 110)=1.6, p=.179, 𝜂2=.055; diastolic pressure: F(4, 110)=.951, p=.438, 𝜂2=.033).

To ensure a lower heart rate in the post-experiment measurement did not impact our results, we reanalysed all our data adding the post-experiment heart rate as an additional covariate. We found that controlling for this peripheral marker did not alter any of our findings. In particular, we replicated the same drug effects for value-free random exploration, in the behavioural measures: frequency of picking the low-value bandit (main effect of drug: F(2, 52) = 4.014, p=.024, 𝜂2=.134; Pairwise comparisons: placebo vs propranolol: t(40) = 2.923, p=.005, d=.654; amisulpride vs propranolol: t(38) = 2.171, p=.034, d=.496; amisulpride vs placebo: t(38) = -.587, p=.559, d=.133) and choice consistency (F(2, 52) = 5.474, p=.007, 𝜂2=.174; Pairwise comparisons: placebo vs propranolol: t(40) = -3.525, p=.001, d=.788; amisulpride vs propranolol: t(38) = -2.267, p=.026, d=.514; amisulpride vs placebo: t(38) = 1.107, p=.272, d=.251), and also the 𝜖-greedy model parameter (F(2, 52) = 4.493, p=.016, 𝜂2=.147; Pairwise comparisons: placebo vs propranolol: t(40) = 3.177, p=.002, d=.71; amisulpride vs propranolol: t(38) = 2.723, p=.009, d=.626; amisulpride vs placebo: t(38)=.251, p=.802, d=.057). Thus, we believe our findings are unlikely to have arisen from peripheral effects of the drug. We now mention this in the discussion and report these additional analyses in the revised manuscript (cf. Appendix 1, Appendix 2—table 2).
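For illustration, a minimal sketch of this type of covariate-controlled group comparison using the pingouin package (the data frame, column names, and values are purely hypothetical; our actual analysis additionally included IQ and negative affect as covariates):

```python
import pandas as pd
import pingouin as pg

# Hypothetical data: one row per subject, with drug group, the behavioural measure,
# and post-task heart rate as covariate (all values are illustrative)
df = pd.DataFrame({
    "drug":    ["placebo"] * 3 + ["propranolol"] * 3 + ["amisulpride"] * 3,
    "epsilon": [0.14, 0.18, 0.12, 0.05, 0.07, 0.06, 0.13, 0.16, 0.11],
    "hr_post": [68, 72, 70, 55, 58, 60, 69, 71, 67],
})

# One-way ANCOVA: drug effect on the value-free exploration parameter,
# controlling for post-task heart rate (further covariates can be passed as a list)
print(pg.ancova(data=df, dv="epsilon", between="drug", covar="hr_post"))
```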

We thank the reviewers for raising the question about pupillometry, an important topic which we believe is more complex than commonly perceived. We appreciate there is a lot of enthusiasm about pupil size as an indirect measure of noradrenaline function, and indeed animal recordings show a close alignment between locus coeruleus firing and pupil size (e.g. Joshi et al., 2016). However, several issues remain unclear with respect to human studies, including the specificity, directionality and causality of these effects. We discuss this in detail in Hauser et al., (2019), and a summary of these limitations can be found in the recent review by Joshi and Gold, (2020). In short, it is important to highlight that the link between noradrenaline and pupil dilation remains unresolved (Nieuwenhuis et al., 2011). In fact, it has been suggested that noradrenaline does not directly drive pupil dilation, but that instead both are driven by a common input (e.g. Nieuwenhuis et al., 2011). Therefore, even though pupillometry might reflect endogenous fluctuations in noradrenaline, pharmacologically induced changes of noradrenaline levels may have very different effects that remain poorly understood. In fact, our own, and others’, previous studies have found no effect of propranolol on pupil diameter (Koudas et al., 2009; Hauser et al., 2019), but instead significant effects of drugs targeting other neurotransmitter systems (e.g. dopamine; Samuels et al., 2006; Hauser et al., 2019).

An additional reason for not including pupillometry was that it imposes strong restrictions on task presentation (luminance matching, constraints on eye gaze). In fact, our pilot studies showed that the current task was not feasible when applying these restrictions.

Based on these concerns, we decided against using pupillometry. We now discuss this in detail in the revised version of the manuscript, including highlighting limitations and elaborating on future applications.

Regarding β-receptors and potential mechanisms, we thank the reviewers for pointing out those interesting papers. Waterhouse et al. have shown that β-receptors increase synaptic inhibition specifically through inhibitory GABA-mediated transmission (Waterhouse et al., 1984). This is in line with findings from Goldman-Rakic et al., (1990), who found that intermediate layers in prefrontal areas, within which inhibition is favoured (Isaacson et al., 2011), host a high concentration of β-receptors. All of these results suggest that noradrenaline-related task distractibility, and randomness, could reflect inhibitory mechanisms. We discuss this in detail in the revised Discussion. Please also see our response to point 5, where we speculate about the specific receptor effects and their implications for our findings.

Materials and methods: “The groups consisted of 20 subjects each matched (cf. Appendix 2—table 1) for gender and age. To evaluate peripheral drug effects, heart rate, systolic and diastolic blood pressure were collected at three different time-points: “at arrival”, “pre-task” and “post-task”, cf. Appendix 1 for details.”

Results: “An analysis that corrected for physiological effects yielded similar results to the analysis without covariates (cf. Appendix 1).”

Discussion: “Because the effect of pharmacologically induced changes of noradrenaline levels on pupil size remains poorly understood (36, 67), including the fact that previous studies found no effect of propranolol on pupil diameter (36, 68), we opted against using pupillometry in this study. […] However, future studies using drugs that exclusively target peripheral, but not central, noradrenaline receptors (e.g. (82)) are needed to answer this question conclusively.”

Appendix 1: “Heart rate, systolic and diastolic pressure were obtained at three time points: at the beginning of the experiment before drug administration (“at arrival”), after drug administration just before the task (“pre-task”), and after finishing the task and questionnaires (“post-task”). […] When analysing results while now correcting for the post-experiment heart rate (cf. Appendix 2—table 1) in addition to IQ (WASI) and negative affect (PANAS), we obtained similar results. Noradrenaline blockade reduced value-free random exploration as seen in two behavioural signatures, frequency of picking the low-value bandit (F(2, 52) = 4.014, p=.024, 𝜂2=.134; Pairwise comparisons: placebo vs propranolol: t(40) = 2.923, p=.005, d=.654; amisulpride vs propranolol: t(38) = 2.171, p=.034, d=.496; amisulpride vs placebo: t(38) = -.587, p=.559, d=.133) and consistency (F(2, 52) = 5.474, p=.007, 𝜂2=.174; Pairwise comparisons: placebo vs propranolol: t(40) = -3.525, p=.001, d=.788; amisulpride vs propranolol: t(38) = -2.267, p=.026, d=.514; amisulpride vs placebo: t(38) = 1.107, p=.272, d=.251), as well as in a model parameter for value-free random exploration (ϵ: F(2, 52) = 4.493, p=.016, 𝜂2=.147; Pairwise comparisons: placebo vs propranolol: t(40) = 3.177, p=.002, d=.71; amisulpride vs propranolol: t(38) = 2.723, p=.009, d=.626; amisulpride vs placebo: t(38)=.251, p=.802, d=.057).”

6) One strength of the paper is that the authors compared several computational models. The model selection is presented in Figure 4, and in Figure 4—figure supplement 1 the authors provide additional information regarding the winning model, which accounted best for the largest number of subjects in comparison with two other models, namely the UCB model (with novelty and greedy parameters) or hybrid (with novelty and greedy parameters). It would be useful for the reader to get a better sense of the number of subjects whose results favored any given model (i.e. a more exhaustive picture). One could use the same table as the one presented in Appendix—table 2, with the respective number of subjects for which each model achieved the best performance. In fact, as shown in Figure 4, the winning model does not look very different (at least visually) from other models such as the UCB (with novelty and greedy parameters) or hybrid (with novelty parameter or novelty and greedy parameters) models. As such, it would be important to know whether the conclusion about the e-greedy parameter would hold true if other models with similar performance were tested, e.g. the UCB model (with novelty and greedy parameters) or the hybrid model (with novelty and greedy parameters)?

We thank the reviewer for acknowledging our efforts in terms of model selection. As suggested, we now expand on this section and provide additional details.

We now show that the Thompson+𝜖+𝜂 model has the highest subject count when comparing the three best models. When comparing across all models, it is tied with the UCB+𝜖+𝜂 model in subject count (N=20 for each model), but has the highest average likelihood of held-out data, making it the winning model. We now show this in Figure 4—figure supplement 1 and we have extended Appendix—table 4 with each model’s performance.
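To illustrate the logic of this comparison, a brief sketch of combining per-subject head counts with average held-out likelihood (the values and model labels below are invented for illustration, not the study's actual fits):

```python
import numpy as np

# Mean held-out choice likelihood per subject for each candidate model
# (values are illustrative, not the study's actual fits)
heldout = {
    "Thompson+eps+eta": np.array([0.52, 0.48, 0.55, 0.47, 0.50]),
    "UCB+eps+eta":      np.array([0.51, 0.49, 0.53, 0.48, 0.52]),
    "hybrid+eps+eta":   np.array([0.50, 0.47, 0.54, 0.46, 0.49]),
}

names = list(heldout)
scores = np.vstack([heldout[m] for m in names])
best = scores.argmax(axis=0)  # index of the best-fitting model for each subject

for i, name in enumerate(names):
    print(f"{name}: best for {(best == i).sum()} subjects, "
          f"mean held-out likelihood {scores[i].mean():.3f}")
```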

It is important to note that the purpose of the model comparison was to demonstrate the relevance of the two new heuristics in addition to complex exploration strategies. The model comparison, as well as the new subject counts, shows clearly that the winning models all incorporate both the novelty and value-free exploration modules, strongly supporting our key message. The fact that the Thompson, UCB and hybrid models performed relatively similarly may highlight that they make fairly similar predictions in our task, and that the benefits of a particular complex model may be explained by these shared heuristic strategies. We discuss this now in more detail in the revised manuscript.

We thank the reviewers for suggesting these additional analyses of the other (close to best) models. We indeed find a very similar effect (i.e. a reduction of value-free random exploration following propranolol) in the second-place model (UCB+𝜖+𝜂; drug main effect on 𝜖: F(2, 54)=4.503, p=.016, η2=.143) and (almost) in the third-place model (hybrid+𝜖+𝜂; drug main effect on 𝜖: F(2, 54)=3.04, p=.056, η2=.101). We believe that these effects further extend and support our findings; we now discuss them in the revised Discussion and provide detail in the Appendix.

Results: “Interestingly, although the second and third place models made different predictions about the complex exploration strategy, using directed exploration with value-based random exploration (UCB) or a combination of complex strategies (hybrid) respectively, they share the characteristic of benefitting from value-free random and novelty exploration. This highlights that subjects used a mixture of computationally demanding and heuristic exploration strategies. […] Critically, the effect on ϵ was also significant when the complex exploration strategy was a directed exploration with value-based random exploration (second place model) and marginally significant when it was a combination of the above (third place model; cf. Appendix 1).”

Discussion: “Importantly, these heuristics were observed in all best models (first, second and third position) even though each incorporated different exploration strategies. This suggests that the complex models made similar predictions in our task, and demonstrates that value-free random exploration is at play even when accounting for other value-based forms of random exploration (1, 7), whether fixed or uncertainty-driven. […] Importantly, this effect was observed whether the complex exploration was an uncertainty-driven value-based random exploration (winning model), a directed exploration with value-based random exploration (second place model) or a combination of the above (third place model; cf. Appendix 1).”

Appendix 1: “When analysing the fitted parameter values of both the second winning model (UCB + ϵ + η) and the third winning model (hybrid + ϵ + η), similar results were obtained. Thus, the value-free random exploration parameter was reduced following noradrenaline blockade in the second winning model (ϵ: F(2, 54)=4.503, p=.016, 𝜂2=.143; Pairwise comparisons: placebo vs propranolol: t(38)=2.185, p=.033, d=.386; amisulpride vs propranolol: t(40)=1.724, p=.089, d=.501; amisulpride vs placebo: t(40)=-.665, p=.508, d=.151) and was affected at trend-level significance in the third winning model (ϵ: F(2, 54)=3.04, p=.056, 𝜂2=.101).”

7) Related to this issue, the point of heuristics from a psychological perspective is that they dispense with the need to use full-blown algorithmic calculations. However, in the present models, the heuristics are only added on top of these calculations and the winning model includes Thompson exploration. Stand-alone heuristic models would do the term more justice and one wonders how well a model would fare that includes only tabula rasa exploration and novelty exploration.

We thank the reviewers for this suggestion. We based our hypotheses and analyses on recent evidence for complex exploration strategies and used model selection to show the presence of exploration heuristics in addition to these complex strategies. Based on the reviewers’ suggestions, we have added stand-alone heuristic models (value-free random exploration and novelty exploration with no value function computation; cf. Appendix 1).

As can be seen from the results in Appendix 1, these models performed poorly, although better than chance level, while adding value-free random exploration substantially improved their performance. Our results thus highlight that subjects combine complex and heuristic modules in exploration. We believe this is a valuable new insight and we have now added these additional models to the revised manuscript.

Appendix 1: “We also analysed stand-alone heuristic models, in which there is no value computation (value of each bandit i: 𝑉i = 0). The held-out data likelihood for such a heuristic model combined with novelty exploration had a mean of m=0.367 (sd=0.005). The model in which we added value-free random exploration on top of novelty exploration had a mean of m=0.384 (sd=0.006). These models performed poorly, although better than chance level. Importantly, adding value-free random exploration improved performance. This highlights that subjects combine complex and heuristic modules in exploration.”
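For intuition, one possible parameterisation of such a stand-alone heuristic policy is sketched below (the exact formulation in the manuscript may differ; η and ϵ follow the paper's notation, and all numbers are illustrative):

```python
import numpy as np

def heuristic_policy(n_bandits, novel_idx, eta, eps):
    """Choice probabilities for a stand-alone heuristic agent with no value
    computation (V_i = 0 for all bandits): a novelty bonus eta directs choices
    towards the novel option, and eps-greedy noise ignores everything else."""
    base = np.full(n_bandits, 1.0 / n_bandits)  # all values are 0, so 'greedy' is uniform
    if novel_idx is not None:
        base = (1 - eta) * base
        base[novel_idx] += eta                  # novelty exploration
    return (1 - eps) * base + eps / n_bandits   # value-free random exploration

# Three bandits, the third is novel: most probability mass goes to the novel option
print(heuristic_policy(n_bandits=3, novel_idx=2, eta=0.6, eps=0.1))
```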

8) The simulations provide a nice intuition for understanding choice proportions from different models/strategies (Figure 1E and 1F). However, it would be helpful to provide simulated results for long and short horizons separately. Do the models make different predictions for the two horizons? Additionally, it would be helpful to also show the results from other models (i.e. the proportion of low value bandit chosen by novelty agent). These could be added in the supplement.

We now extend the model simulations and agree that they can provide a more detailed understanding. To ensure an intuitive understanding for a general audience, we provide these additional simulations as three additional figure supplements, keeping the original simulation graphs as intuitive illustrations.

The new figures show that the frequency of picking the low-value bandit, as well as choice consistency, is affected specifically by value-free random exploration but not by other exploration strategies (Figure 1—figure supplement 3 and Figure 1—figure supplement 4). Moreover, the frequency of picking the novel bandit is affected mainly by a novelty exploration strategy and to a lesser extent by UCB exploration (Figure 1—figure supplement 5).

Based on the reviewers’ suggestion, we now add simulations specific to the short and long horizons (to simulate the latter, we allowed all other exploration strategies to increase as well; cf. Appendix 2—table 7). Our simulations show that the effects were observed in both the short and the long horizon conditions. We believe that these new simulations provide additional intuitions regarding the two new exploration heuristics, and we mention them in the revised manuscript. Please also see the added simulations and illustrations in Figure 1—figure supplement 3, Figure 1—figure supplement 4, and Figure 1—figure supplement 5, which further expand on our findings.

Results: “Additionally, we simulated the effects of other exploration strategies in short and long horizon conditions (Figure 1—figure supplement 3, Figure 1—figure supplement 4, Figure 1—figure supplement 5). To simulate a long (versus short) horizon condition we increased the overall exploration by increasing other exploration strategies. Details about parameter values can be found in Appendix 2—table 7.”

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Reviewer #1:

Thank you for a largely responsive revision. The paper is much improved. A few points remain:

Related to previous point 1, the argument in subsection “Probing the contributions of heuristic exploration strategies” does not seem to be entirely correct. The authors claim that "A second prediction is that choice consistency, across repeated trials, is substantially affected by value-free random exploration." However, consistency can also be affected by the softmax parameter: if β is higher, then choice consistency is also lower. Also, I am a little bit confused about the simulation results in Figure 1—figure supplement 2E,F. Do both models predict that the consistency of selecting the low-value bandit is higher than the consistency of selecting the high-value bandit? In line with the argument that a higher β also leads to more stochastic choices, I also wonder if that can be the reason why UCB and UCB+ϵ are not that much different in likelihood.

We apologise for a lack of clarity on this point. A key feature of value-free random exploration is that it ignores all information, leading to a completely random selection among the choice options. None of the other exploration strategies shows a similarly strong effect on consistency, because they are still guided by the value of the options and are disposed to choose the most valuable alternative (in the model’s eyes).
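To make this contrast concrete, here is a minimal sketch of the two random exploration rules (values and parameters are illustrative, not fitted):

```python
import numpy as np

def softmax_probs(values, beta):
    """Value-based random exploration: choice noise respects value differences."""
    z = beta * np.asarray(values, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def eps_greedy_probs(values, eps):
    """Value-free random exploration: with probability eps, values are ignored."""
    values = np.asarray(values, dtype=float)
    greedy = (values == values.max()).astype(float)
    greedy /= greedy.sum()
    return (1 - eps) * greedy + eps / len(values)

vals = [10.0, 8.0, 2.0]                 # high-, medium-, low-value bandit (illustrative)
print(softmax_probs(vals, beta=0.5))    # exploration concentrates on the medium bandit
print(eps_greedy_probs(vals, eps=0.3))  # non-best bandits are explored equally often
```

Under the softmax, exploration is funnelled towards the medium-value bandit and the low-value bandit is almost never chosen, whereas under ϵ-greedy all non-best bandits are explored equally often; this is precisely why the frequency of picking the low-value bandit, and consistency across duplicated trials, index value-free random exploration.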

The sentence in subsection “Probing the contributions of heuristic exploration strategies”, referred to by the referee, was meant to highlight a comparison to complex exploration strategies, such as directed exploration, which consistently explores the less known option. We have now revised this passage to better explain what we mean.

Regarding Figure 1—figure supplement 2E,F, we believe that the referee has misunderstood this figure. We would highlight that the simulations we present here refer to the consistency across all bandits (as consistency is the inverse of switching between bandits). The x-axis shows the simulation of two different levels of our parameters (𝛽 and 𝜖, respectively). We agree that this should have been clearer, and we have now revised the figure legend to avoid any misunderstanding by the readership. As the reviewer can see, the 𝛽 parameter also has an effect on consistency, but to a lesser degree, because the softmax still clearly prefers medium-value bandits over low-value bandits (cf. Figure 1—figure supplement 2C,D).

We agree with the reviewer that this parameter may capture some of the randomness in the UCB model, which leads to a more similar performance between the two models considered. We think this is an interesting observation and we address this point in more detail in the revised discussion.

Results (subsection “Probing the contributions of heuristic exploration strategies”): “A second prediction is that choice consistency, across repeated trials, is directly affected by value-free random exploration, in particular by comparison to more deterministic exploration strategies (e.g. directed exploration) that are value-guided and thus will consistently select the most informative and valuable options.”

Discussion: “A second exploration heuristic that also requires minimal computational resources, value-free random exploration, also plays a role in our task. Even though less optimal, its simplicity and neural plausibility renders it a viable strategy. Indeed, we observe an increase in performance in each model after adding 𝜖, supporting the notion that this strategy is a relevant additional human exploration heuristic. Interestingly, the benefit of 𝜖 is somewhat smaller in a simple UCB model (without novelty bonus), which probably arises because value-based random exploration partially captures some of the increased noisiness.”

Figure 1—figure supplement 2 legend: “Comparison of value-based (softmax) and value-free (𝜖-greedy) random exploration. (a) Changing the softmax inverse temperature affects the slope of the sigmoid, while changing the 𝜖-greedy parameter (b) affects the compression of the sigmoid. Conceptually, in a softmax exploration mode, as each bandit's expected value is taken into account, (c) the second best bandit (medium-value bandit) is favoured over one with a lower value (low-value bandit) when injecting noise. In contrast, in an 𝜖-greedy exploration mode, (d) bandits are explored equally often irrespective of their expected value. Both simulations were performed on trials without a novel bandit. When simulating on all trials, we observe that this also has a consequence for choice consistency. (e) Choices are more consistent in a low (versus high) softmax exploration mode (i.e. high and low values of 𝛽 respectively), and similarly (f) choices are more consistent in a low (versus high) 𝜖-greedy exploration mode (i.e. low and high values of 𝜖 respectively). When comparing the overall consistency of the two random exploration strategies, consistency is higher in the value-based mode, reflecting a higher probability of (consistently) exploring the second best option, compared to an equal probability of exploring any non-optimal option (inconsistently) in the value-free mode.”

Regarding previous point 2: Were response time differences between value-free exploration and exploitation trials larger in the long horizon than the short horizon condition (i.e., while there was no main effect of bandit, was there an interaction with horizon or trial within horizon and was there a three-way interaction with drug)? Moreover, the response to the mistake issue is not entirely satisfactory. If participants paid (gradually) less attention in the long horizon, then it would also be expected that they make more mistakes in the long horizon condition only.

We would like to highlight here that our previous analysis only investigated the first choice made in each horizon (in line with all other analyses). As previously presented in Figure 2—figure supplement 1B, there is a substantial decrease in response latencies for the long horizon after the first choice, and this could have confounded the analysis. We have now clarified this. We believe that by focusing on the first choice, our analysis should not be affected by subjects paying ‘gradually less attention’ in the long horizon.

Nevertheless, we have now conducted the three-factor analysis that the reviewer suggested, in which we investigated the response latencies of the first choice using the factors horizon, drug, and bandit. As in our previous findings, we observed no main effect of bandit (F(1.71,92.46)=1.203, p=.3, 𝜂2=.022), horizon (F(1,54)=.71, p=.403, 𝜂2=.013), or drug (F(2,54)=2.299, p=.11, 𝜂2=.078).

Additionally, there was no interaction between any of the factors (drug-by-bandit interaction: F(3.42,92.46)=.431, p=.757, 𝜂2=.016; drug-by-horizon interaction: F(2,54)=.204, p=.816, 𝜂2=.008; bandit-by-horizon interaction: F(1.39,75.01)=.298, p=.662, 𝜂2=.005; drug-by-bandit-by-horizon interaction: F(2.78,75.01)=1.015, p=.387, 𝜂2=.036). We believe this further strengthens our findings and interpretation, and we have now added this analysis to the revised manuscript.

As the reviewer raised an interesting point about the subsequent choices in the long horizon, we have now conducted a new analysis across all long horizon choices using the three factors bandit, drug and sample (/choice). As shown in our previous Figure 2—figure supplement 1B, we observed a strong effect of sample (F(1.54,86.15)=427.047, p<.001, 𝜂2=.884), meaning that the response latencies decrease over time. Interestingly, we also observed an effect of bandit (F(1.61,90.12)=7.137, p=.003, 𝜂2=.113), as well as a bandit-by-sample interaction (F(3.33,186.41)=4.789, p=.002, 𝜂2=.079). No drug effect or other interaction was observed (drug main effect: F(2,56)=.542, p=.585, 𝜂2=.019; drug-by-bandit interaction: F(3.22,90.12)=.525, p=.679, 𝜂2=.018; drug-by-sample interaction: F(3.08,86.15)=1.039, p=.381, 𝜂2=.036; drug-by-bandit-by-sample interaction: F(6.66,186.41)=.645, p=.71, 𝜂2=.023). Analysing the bandit-by-sample effect further (Author response image 1), we found it was driven by faster response times for the high-value (exploitation) bandit at the second choice (high-value bandit vs low-value bandit: t(59)=-5.736, p<.001, d=.917; high-value bandit vs novel bandit: t(59)=-6.24, p<.001, d=.599; bandit main effect: F(1.27,70.88)=27.783, p<.001, 𝜂2=.332) and slower response times for the low-value bandit at the second (low-value bandit vs novel bandit: t(59)=3.756, p<.001, d=.432) and third choices (high-value bandit vs low-value bandit: t(59)=-5.194, p<.001, d=.571; low-value bandit vs novel bandit: t(59)=4.448, p<.001, d=.49; high-value bandit vs novel bandit: t(59)=-1.834, p=.072, d=.09; bandit main effect: F(1.23,68.93)=21.318, p<.001, 𝜂2=.276). We believe this reflects that when subjects decide to exploit for the remainder of the samples, they respond more quickly than in a situation where they continue to explore. The fact that response times for the low-value bandit are slower, rather than faster, than for the other bandits does not support a notion of hasty mistakes. We have now added this observation to Appendix 1.

Author response image 1.

Appendix: “When looking at the first choice in both conditions, no differences were evident in RT in a repeated-measures ANOVA with the between-subject factor drug group and the within-subject factors horizon and bandit (bandit main effect: F(1.71,92.46)=1.203, p=.3, 𝜂2=.022; horizon main effect: F(1,54)=.71, p=.403, 𝜂2=.013; drug main effect: F(2,54)=2.299, p=.11, 𝜂2=.078; drug-by-bandit interaction: F(3.42,92.46)=.431, p=.757, 𝜂2=.016; drug-by-horizon interaction: F(2,54)=.204, p=.816, 𝜂2=.008; bandit-by-horizon interaction: F(1.39,75.01)=.298, p=.662, 𝜂2=.005; drug-by-bandit-by-horizon interaction: F(2.78,75.01)=1.015, p=.387, 𝜂2=.036). In the long horizon, when looking at all six samples, no differences were evident in RT between drug groups in the repeated-measures ANOVA with the between-subject factor drug group and the within-subject factors bandit and sample (drug main effect: F(2,56)=.542, p=.585, 𝜂2=.019). There was an effect of bandit (bandit main effect: F(1.61,90.12)=7.137, p=.003, 𝜂2=.113), of sample (sample main effect: F(1.54,86.15)=427.047, p<.001, 𝜂2=.884) and an interaction between the two (bandit-by-sample interaction: F(3.33,186.41)=4.789, p=.002, 𝜂2=.079; drug-by-bandit interaction: F(3.22,90.12)=.525, p=.679, 𝜂2=.018; drug-by-sample interaction: F(3.08,86.15)=1.039, p=.381, 𝜂2=.036; drug-by-bandit-by-sample interaction: F(6.66,186.41)=.645, p=.71, 𝜂2=.023). Further analysis (not corrected for multiple comparisons) revealed that the interaction between bandit and sample reflected the fact that, when looking at samples individually, there was a bandit main effect in the second sample (bandit main effect: F(1.27,70.88)=27.783, p<.001, 𝜂2=.332; drug main effect: F(2,56)=.201, p=.819, 𝜂2=.007; drug-by-bandit interaction: F(2.53,70.88)=.906, p=.429, 𝜂2=.031) and in the third sample (bandit main effect: F(1.23,68.93)=21.318, p<.001, 𝜂2=.276; drug main effect: F(2,56)=.102, p=.903, 𝜂2=.004; drug-by-bandit interaction: F(2.46,68.93)=.208, p=.855, 𝜂2=.007), but not in the other samples (first sample: drug main effect: F(2,56)=1.108, p=.337, 𝜂2=.038; bandit main effect: F(2,112)=.339, p=.713, 𝜂2=.006; drug-by-bandit interaction: F(4,112)=.414, p=.798, 𝜂2=.015; fourth sample: drug main effect: F(2,56)=.43, p=.652, 𝜂2=.015; bandit main effect: F(1.36,76.22)=1.348, p=.259, 𝜂2=.024; drug-by-bandit interaction: F(2.72,76.22)=.396, p=.737, 𝜂2=.014; fifth sample: drug main effect: F(2,56)=.216, p=.806, 𝜂2=.008; bandit main effect: F(1.25,69.79)=.218, p=.696, 𝜂2=.004; drug-by-bandit interaction: F(2.49,69.79)=.807, p=.474, 𝜂2=.028; sixth sample: drug main effect: F(2,56)=1.026, p=.365, 𝜂2=.035; bandit main effect: F(1.05,58.81)=.614, p=.444, 𝜂2=.011; drug-by-bandit interaction: F(2.1,58.81)=1.216, p=.305, 𝜂2=.042). In the second sample, the high-value bandit was chosen faster (high-value bandit vs low-value bandit: t(59)=-5.736, p<.001, d=.917; high-value bandit vs novel bandit: t(59)=-6.24, p<.001, d=.599) and the low-value bandit was chosen slower (low-value bandit vs novel bandit: t(59)=3.756, p<.001, d=.432). In the third sample, the low-value bandit was chosen slower (high-value bandit vs low-value bandit: t(59)=-5.194, p<.001, d=.571; low-value bandit vs novel bandit: t(59)=4.448, p<.001, d=.49; high-value bandit vs novel bandit: t(59)=-1.834, p=.072, d=.09).”

Regarding previous point 8, it is great that the authors followed our suggestion to simulate all models in both the short and long horizon. However, these figures (Figure 1—figure supplement 3 to Figure 1—figure supplement 5) seem somewhat confusing. The problem may lie in the parameters selected for simulation. According to Appendix 2—table 7, multiple parameters were varied among the different models, but I thought they should be kept mostly consistent, varying only the parameter of interest. For example, shouldn't 𝜂 be kept the same, or even be zero, in the value-free random exploration model to show how choices vary as a function of 𝜖? I think the numbers are selected such that the predictions favor the value-free random exploration model. If, as the authors said, UCB + 𝜖 + 𝜂 is almost as good as Thompson-sampling + 𝜖 + 𝜂, I don't see how the predictions can be so dramatically different. That is how I interpret the statement that "For simulating the long (versus short) horizon condition, we assumed that not only the key value but also the other exploration strategies increased, as found in our experimental data." Anyway, I feel the simulation data is somewhat misleading and needs more explanation.

We thank the reviewer for appreciating our effort and apologise that we were not sufficiently clear in explaining what these simulations illustrate. The key purpose of these figures is to illustrate how different aspects of the models affect bandit choices.

In the original reviews, the reviewer suggested simulating both the short and long horizons, and the effects of low and high exploration for each specific exploration strategy. For the low and high exploration simulations, we indeed kept all parameters identical except the parameter of interest, which we varied (as suggested by this reviewer). This is what we show in the short horizon.

For illustrating the long horizon, we tried to accommodate the fact that multiple exploration strategies are elevated in our subjects (compared to the short horizon). We thus decided to also increase the other exploration strategies, as listed in Appendix 2—table 7 and as correctly observed by this reviewer. However, we agree that this can be confusing and are happy to remove what we label “long horizon” for clarity.

The parameter values for these simulations are well within the range of fitted model parameters. However, because these are primarily to illustrate the effects of the model (rather than align with exact subjects’ behaviours), we have taken somewhat accentuated values that highlight the specific effects of the parameter more clearly. It is important to note that the key purpose of these illustrations is to compare the effect of low and high exploration within each model, rather than comparing the absolute height of the bars. This was not clear enough in the original captions, and we have now entirely revised these captions.

Besides these simulations, we also provided the model simulations of the complete winning model in the original manuscript, shown in Figure 5—figure supplement 1. The reviewer indeed raised a relevant point about this, namely that we did not show the same model fit using the second winning model (UCB). We have now performed this simulation using each participant’s fitted model parameters for the second winning model and have added it as a figure to the manuscript (cf. Figure 5—figure supplement 3). As one can see, both complex models’ simulations make fairly similar predictions for our data, as we had previously mentioned but not shown. We hope this now illustrates our previous point, and we have detailed it in the revised manuscript.

Discussion: “Importantly, these heuristics were observed in all best models (first, second and third position) even though each incorporated different exploration strategies. This suggests that the complex models make similar predictions in our task. This is also observed in our simulations, and demonstrates that value-free random exploration is at play even when accounting for other value-based forms of random exploration (1, 7), whether fixed or uncertainty-driven.”

Figure 1—figure supplement 3 legend: “Simulation illustrations of the effect of high and low exploration on the frequency of picking the low-value bandit using different exploration strategies show that (a) a high (versus low) value-free random exploration increases the selection of the low-value bandit, whereas neither (b) a high (versus low) novelty exploration, (c) a high (versus low) Thompson-sampling exploration nor (d) a high (versus low) UCB exploration affected this frequency. To illustrate the long (versus short) horizon condition, we accommodated the fact that not only key values but also other exploration strategies were enhanced, by increasing multiple exploration strategies, as found in our experimental data (cf. Appendix 2—table 7 for parameter values). Please note that the difference between low and high exploration is critical here, rather than a comparison of the absolute height of the bars between strategies (which is influenced in the models by multiple exploration strategies). For simulations fitting participants’ data, please see Figure 5—figure supplement 1 and Figure 5—figure supplement 3.”

Figure 1—figure supplement 4 legend: “Simulation illustrations of the effect of high and low exploration on choice consistency using different exploration strategies show that (a) a high (versus low) value-free random exploration decreases the proportion of same choices, whereas neither (b) a high (versus low) novelty exploration, (c) a high (versus low) Thompson-sampling exploration nor (d) a high (versus low) UCB exploration affected this measure. To illustrate the long (versus short) horizon condition, we accommodated the fact that not only the key value but also other exploration strategies were enhanced, by increasing multiple exploration strategies, as found in our experimental data (cf. Appendix 2—table 7 for parameter values). Please note that the difference between low and high exploration is critical here, rather than a comparison of the absolute height of the bars between strategies (which is influenced in the models by multiple exploration strategies). For simulations fitting participants’ data, please see Figure 5—figure supplement 1 and Figure 5—figure supplement 3.”

Figure 1—figure supplement 5 legend: “Simulation illustrations of the effect of high and low exploration on the frequency of picking the novel bandit using different exploration strategies show that (a) a high (versus low) value-free random exploration has little effect on the selection of the novel bandit, whereas (b) a high (versus low) novelty exploration increases this frequency. (c) A high (versus low) Thompson-sampling exploration had little effect, and (d) a high (versus low) UCB exploration affected this frequency, but to a lesser extent than novelty exploration. To illustrate the long (versus short) horizon condition, we accommodated the fact that not only the key value but also other exploration strategies were enhanced, by increasing multiple exploration strategies, as found in our experimental data (cf. Appendix 2—table 7 for parameter values). Please note that the difference between low and high exploration is critical here, rather than a comparison of the absolute height of the bars between strategies (which is influenced in the models by multiple exploration strategies). For simulations fitting participants’ data, please see Figure 5—figure supplement 1 and Figure 5—figure supplement 3.”

Reviewer #2:

The authors addressed all my comments and made substantial revisions that have strengthened the overall manuscript. Specifically, the new information in Appendix—table 4 with each model's performance and the additional analyses of the other "(close to best)" models further strengthen the authors' claim. The authors also clarified the results on heart rate, RT and the PANAS questionnaire, providing additional results and discussing potential caveats appropriately. Further additions in the Discussion address potential mechanisms of propranolol on decision making.

We thank this reviewer for a very positive endorsement of our revisions and for acknowledging that our additional analyses have strengthened the paper. We have now addressed the remaining point below.

The only comment I have relates to the sentence that follows (in the Discussion): “In particular, the results indicate that under propranolol behaviour is more deterministic and less influenced by “task-irrelevant” distractions. This aligns with theoretical ideas, as well as recent optogenetic evidence (32), that propose noradrenaline infuses noise in a temporally targeted way (31). It also accords with studies implicating noradrenaline in attention shifts (for a review cf. (76)). Other theories of noradrenaline/catecholamine function can link to determinism (64, 65), although the hypothesized direction of effect is different (i.e. noradrenaline increases determinism)." Here, it is unclear to me how the authors define determinism and how either increasing or decreasing noradrenaline can increase determinism?

Firstly, we apologise for the unclear sentence and confusing wording. We chose “determinism” as the opposite of “stochasticity” (as also captured by our value-free random exploration), but we agree that the term is confusing and have therefore decided to use the latter, clearer terminology.

The directionality of this effect is indeed interesting, and to our understanding not entirely clear.

Theoretical accounts are somewhat contradictory, with a gain-modulation account (Aston-Jones and Cohen, 2005; Servan-Schreiber et al., 1990) suggesting a decrease in stochasticity with increasing noradrenaline function. Other theoretical accounts (Dayan and Yu, 2006), on the other hand, suggest that noradrenaline can induce stochasticity. Our findings, showing a reduction in stochasticity after propranolol, favour the latter. However, several other aspects of noradrenaline function may explain these differing theoretical accounts: they are likely to capture different aspects of the assumed U-shaped noradrenaline functioning curve, and/or they may be relevant in distinct activity modes, such as tonic and phasic firing (cf. Aston-Jones and Cohen, 2005). We have now incorporated this discussion in more detail into the revised manuscript.

Discussion: “In particular, the results indicate that under propranolol behaviour is less stochastic and less influenced by “task-irrelevant” distractions. […] Further studies can shed light on how different modes of activity affect value-free random exploration.”

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Transparent reporting form

    Data Availability Statement

    All necessary resources are publicly available at: https://github.com/MagDub/MFNADA-figures.

