Author manuscript; available in PMC: 2023 May 1.
Published in final edited form as: Wiley Interdiscip Rev Cogn Sci. 2022 Feb 8;13(3):e1589. doi: 10.1002/wcs.1589

Decision neuroscience and neuroeconomics: Recent progress and ongoing challenges

Jeffrey B Dennison 1, Daniel Sazhin 1, David V Smith 1
PMCID: PMC9124684  NIHMSID: NIHMS1773569  PMID: 35137549

Abstract

In the past decade, decision neuroscience and neuroeconomics have produced many new insights into decision making. This review provides an overarching update on how the field has advanced in this period. Although our initial review a decade ago outlined several theoretical, conceptual, methodological, empirical, and practical challenges, only limited progress has been made in resolving them. We summarize significant trends in decision neuroscience through the lens of those challenges and review examples where the field has had significant, direct, and applicable impacts across economics and psychology. First, we review progress on topics including reward learning, explore–exploit decisions, risk and ambiguity, intertemporal choice, and valuation. Next, we assess the impacts of emotion, social rewards, and social context on decision making. We then examine how individual differences shape choices and highlight new developments in the prediction and neuroforecasting of future decisions. Finally, we consider how trends in decision-neuroscience research reflect progress toward resolving past challenges, discuss new applications of recent research, and identify new challenges for the field.

This article is categorized under:

Psychology > Reasoning and Decision Making

Psychology > Emotion and Motivation

Keywords: choice, decision neuroscience, economics, neuroimaging, noninvasive brain stimulation

Graphical Abstract


1 |. INTRODUCTION

Decision neuroscience and neuroeconomics seek to identify the neural processes that underlie decision making (Beck et al., 2008; Glimcher et al., 2009), particularly the subjective value of rewards (Levy & Glimcher, 2012), the uncertainty of different outcomes (Ma & Jazayeri, 2014), and how people make interpersonal decisions (Hackel & Amodio, 2018). Although the terms “neuroeconomics” and “decision neuroscience” have been used interchangeably in the literature, we use the latter term throughout this review for greater clarity and breadth.

The field of decision neuroscience advanced quickly in its early years, identifying many brain regions involved in the valuation of rewards and social decisions. Despite the early advances of decision neuroscience, our initial review in 2010 identified several key questions and challenges that limited the field (Smith & Huettel, 2010). For example, how can we reconcile the frameworks of decision and cognitive neuroscience? What issues may require advancements in the techniques and technology of neuroscience? How can we integrate and link findings from different species and methodologies? Other challenges can even come from outside the laboratory, preventing meaningful impacts on people’s lives. For example, are neuroscientific results convincing enough to meaningfully contribute to economic policy? Taken together, these questions—which are centered on theoretical, methodological, and practical challenges—highlight critical considerations for the interdisciplinary field of decision neuroscience.

This review provides an overarching update on how the field has advanced since our initial review and identifies the varying degrees of progress made toward addressing these challenges. We extend the original Smith and Huettel review with literature from the past decade on reward prediction error, reward anticipation, risk and ambiguity, explore–exploit choices, temporal discounting, emotional and social decision making, value comparison, and recent advances in understanding individual differences. We focus on the new models, theories, methods, and concepts of value-based decision making that have made the greatest impact over the last 10 years. This approach may overemphasize recent trends and methods (e.g., human fMRI); however, it helps us characterize how efforts have been spent and what the incentives are in the field of decision neuroscience. While there have been advances in value-independent and perceptual decision making related to sensory representation (Yeon & Rahnev, 2020), motor decisions, and option selection (Ota et al., 2020; Parvin et al., 2018), we concentrate on choices related to value, social context, and emotion. After reviewing recent trends in decision neuroscience, we conclude with a focus on recent research assessing individual differences, reflecting on decision making as a spectrum of behaviors related to demographic, developmental, and clinical variables. Finally, we consider how these recent trends reflect progress made on various theoretical, conceptual, methodological, empirical, and practical challenges (Huettel, 2010; Smith & Huettel, 2010) in decision neuroscience. We argue that the field has achieved considerable and direct impacts across economics and psychology.

2 |. VALUE: FROM REWARD LEARNING TO VALUE COMPARISON

Much of the early work in decision neuroscience focused on reward learning, the explore–exploit dilemma, risky decisions, temporal discounting, and valuation. Advances before 2010 had identified many of the brain regions involved in these processes. Meta-analyses have since confirmed the robust involvement of regions including the ventral striatum (VS) and the ventromedial prefrontal cortex (vmPFC) during different aspects of decision making, such as valuation (Bartra et al., 2013; Clithero & Rangel, 2014), reward learning (Chase et al., 2015), and reward consumption and anticipation (Diekhof et al., 2012). We will review how the theories and constructs underlying these problems have progressed over the past decade. To help orient readers to the brain regions that are repeatedly mentioned throughout the review, we provide a statistical map describing the regions consistently involved in value-based decision making (Figure 1; see Poldrack et al., 2012 for details).

FIGURE 1.


Brain regions associated with decision making. The map shows the association test from an automated Neurosynth meta-analysis of value-based decision-making studies. Key regions highlighted include the ventral striatum (VS), ventromedial prefrontal cortex (vmPFC), dorsal anterior cingulate cortex (dACC), posterior cingulate cortex (PCC), and anterior insula (aIns)

2.1 |. Reward learning

Reward learning, the process of adjusting behavior based on feedback from positive and negative reinforcement, is a fundamental component of value-based decision making. Simple reward learning involves calculating the difference between expected and observed outcomes. This difference between expected and observed outcome values is encoded through dopamine activity as a reward prediction error (RPE; Cannon & Bseikri, 2004; Watabe-Uchida et al., 2017; Wise, 1980). While reward learning is a well-established discipline, there have recently been several substantial advances. Specifically, three major trends in reward-learning research during the past decade include work on biases in the RPE signal, the role of the hippocampus and episodic memory in reward learning, and the distinction between model-based and model-free learning.
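The expected-versus-observed comparison above can be sketched as a simple delta-rule update (a minimal illustrative sketch, not a model from any particular study cited here; the learning rate and reward stream are arbitrary):

```python
import numpy as np

def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Track an expected value V via reward prediction errors (RPEs).

    On each trial, delta = r - V is the RPE and V moves by alpha * delta.
    """
    v = v0
    values, rpes = [], []
    for r in rewards:
        delta = r - v          # reward prediction error
        v += alpha * delta     # incremental value update
        values.append(v)
        rpes.append(delta)
    return np.array(values), np.array(rpes)

# With a constant reward of 1, V climbs toward 1 and RPEs shrink toward 0.
values, rpes = rescorla_wagner([1.0] * 50)
```

The shrinking RPE mirrors the classic finding that dopamine responses migrate from the reward to its predictors as learning progresses.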

Advances in reward-learning research over the last decade have focused on teasing apart different influences on positive (better than expected) and negative (worse than expected) learning rates. A consistent set of results showing greater positive than negative learning rates led to the interpretation of an optimism bias in reward learning (den Ouden et al., 2013; Niv et al., 2012; Sharot & Garrett, 2016). While there is little consensus as to why positive prediction errors are more heavily weighted, explanations include the influence of reward learning on risk preferences (Niv et al., 2012), exploratory behavior (Garrett & Daw, 2020), and psychological defense mechanisms (Sharot & Garrett, 2016). Others have suggested that the bias for positive prediction errors is driven by processes similar to sensory adaptation (Bavard et al., 2018, 2021). Using simulations, researchers have identified an adaptive role for separate learning rates in optimal learning across environments with high and low long-run average rewards (Cazé & van der Meer, 2013). In other words, when the long-run average of rewards is high, it can be beneficial to underweight negative prediction errors. However, when faced with differentially rewarding environments, humans do not adapt these separate learning rates as the simulations predict (Gershman, 2015). Even so, there may still be a link between the long-run reward in an environment and an optimistic RPE. Tonic dopamine has been thought to encode the long-term average of rewards (Dayan, 2012; Niv et al., 2007), and studies increasing tonic dopamine through administration of levodopa (L-Dopa) have found that it selectively impairs learning from feedback that was worse than expected (Sharot et al., 2012). Other research has also shown that RPEs take into account feedback from unchosen options (Palminteri et al., 2015). Together, these results suggest a complicated relationship between the learned reward environment and an optimistic learning bias.
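The asymmetric-learning-rate account can be illustrated with a small simulation (the parameter values are hypothetical; the point is that a larger rate for positive than negative RPEs settles the value estimate above the true reward rate):

```python
import numpy as np

def asymmetric_learning(rewards, alpha_pos=0.3, alpha_neg=0.1, v0=0.5):
    """Value learning with separate rates for positive and negative RPEs.

    With alpha_pos > alpha_neg, the estimate equilibrates near
    alpha_pos / (alpha_pos + alpha_neg) = 0.75 for a 50% reward rate --
    an 'optimistic' bias above the true mean of 0.5.
    """
    v, traj = v0, []
    for r in rewards:
        delta = r - v
        v += (alpha_pos if delta > 0 else alpha_neg) * delta
        traj.append(v)
    return np.array(traj)

rng = np.random.default_rng(0)
traj = asymmetric_learning(rng.binomial(1, 0.5, size=5000).astype(float))
v_mean = float(traj[-1000:].mean())  # hovers near 0.75, above the true 0.5
```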

Understanding positive versus negative learning rates has been an effective tool to explore how we integrate learned information. Further, examining learning rates for obtained versus forgone outcomes (counterfactual learning; Palminteri et al., 2015) has revealed that optimism bias is not present for counterfactual learning (Chambon et al., 2020), and instead negative learning rates dominate the updating of counterfactual information. These combined results further suggest a general confirmation bias. Exploring traditional cognitive biases within the framework of reward learning has helped to alleviate earlier conceptual challenges through integrating decision and cognitive neuroscience (Doll et al., 2011; Jarcho et al., 2011; Kappes et al., 2020).
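A one-trial sketch of this confirmation-bias pattern weights factual and counterfactual updates by whether the evidence confirms the choice (the parameter names and values here are our own, purely for illustration; cf. Palminteri et al., 2015; Chambon et al., 2020):

```python
def confirmation_update(v_chosen, v_unchosen, r_chosen, r_unchosen,
                        alpha_conf=0.3, alpha_disc=0.1):
    """One-trial value update with a confirmation bias.

    Choice-confirming evidence (positive RPE for the chosen option,
    negative RPE for the forgone option) is weighted by alpha_conf;
    disconfirming evidence gets the smaller alpha_disc.
    """
    d_c = r_chosen - v_chosen      # factual prediction error
    d_u = r_unchosen - v_unchosen  # counterfactual prediction error
    v_chosen += (alpha_conf if d_c > 0 else alpha_disc) * d_c
    v_unchosen += (alpha_disc if d_u > 0 else alpha_conf) * d_u
    return v_chosen, v_unchosen

# Both options pay out, but the chosen option's value grows faster.
vc, vu = confirmation_update(0.5, 0.5, 1.0, 1.0)
```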

To investigate the mechanisms of reward learning in the brain, most reinforcement-learning research has naturally focused on canonical reward-sensitive regions such as the VS, vmPFC, and ventral tegmental area (VTA). However, there have been increased efforts to assess the relationship between reward learning and memory, with a focus on the role of the hippocampus (Kempadoo et al., 2016; Perez & Lodge, 2018). In particular, the hippocampus can alleviate two major problems for common classes of models that rely on RPEs. First, the hippocampus supports reward learning when the outcomes of our decisions are delayed (Tan et al., 2008). Second, the hippocampus may help distinguish rewarding elements when learning from stimuli with many features and generalize how those features may be relevant to future outcomes (Gershman & Daw, 2017).

The hippocampus can help to connect actions and outcomes that are separated in time (e.g., associating study habits you have been developing over a long period with better grades; Gershman & Daw, 2017; Tan et al., 2008). Although learning from immediate rewards is selectively impaired in individuals with striatal damage, they can still learn from delayed feedback, an ability that is selectively impaired in those with medial temporal lobe and hippocampal damage (Foerde et al., 2013). Other work has shown that coordinated activity between the hippocampus, orbitofrontal cortex (OFC), and striatal regions (Miller et al., 2017; Stoianov et al., 2018; F. Wang et al., 2020) may provide a key link between lower-level reward learning and the integration of high-level information in episodic memory to support future decisions.

Furthermore, the hippocampus can support learning when the outcomes associated with a stimulus depend on the combination of features present rather than on each feature independently. For example, though the yellow color of lemons and bananas indicates ripeness in both, we know that one fruit will be sour and the other sweet. In these situations, the hippocampus can compute a sparse representation of the stimulus through a process of pattern separation, and a value can be associated with the computed representation rather than with each individual feature (O’Reilly & McClelland, 1994). Results from fMRI support this account, showing that patterns of activity in the hippocampus help guide reward learning from stimuli with multiple features (Ballard et al., 2019; Niv et al., 2015). Combining fMRI and pharmacological techniques identified functional coupling between the midbrain and the hippocampus that was modulated by dopamine activity (Kahnt & Tobler, 2016). This dopamine-dependent coupling was particularly important for similarity-based processing and for generalizing outcome predictions across stimuli (Kahnt & Tobler, 2016). These results suggest that episodic memory and reinforcement-learning mechanisms work together to make sense of real-world outcomes. Both understanding complex stimuli with multiple features and properly associating delayed rewards are essential parts of real-world learning. Understanding how the hippocampus interacts with valuation and reward learning will be important for properly depicting and modeling these processes.

The third trend in reward-learning research over the past decade has been to identify different kinds of learning, their precise roles in behavior, and whether they are supported by distinct or shared neural structures and functions. In particular, there has been considerable effort to understand model-based and model-free reward learning. Model-free learning relies only on feedback to update an expected reward associated with a stimulus or action and is often associated with habitual learning and Pavlovian conditioning. Model-based learning, in contrast, allows learners to form a mental model of the environment and make predictions and updates based on that model; it is often associated with goal-directed behavior (Huys et al., 2014). While model-free learning focuses on action–outcome associations, adaptive behavior requires humans to understand how states and actions interact to produce an outcome (i.e., model-based learning; Daw et al., 2011; Kurdi et al., 2019). The two-stage reward learning task (Figure 2) has been used ubiquitously to dissociate the weights of model-based and model-free learning (Daw et al., 2011). In this task, an initial decision leads to one of two possible states where a second decision is made. By observing how individuals treat rare and common transitions from the first to the second stage, we can estimate how heavily they rely on each strategy.

FIGURE 2.


General schematic of the two-stage model-based learning task. In the first stage, participants choose one of two gray boxes, each identified by a Tibetan character. Depending on the chosen box, participants transition with different probabilities to a second-stage state, either the red or the blue state. In this example, each box preferentially transitions participants to a particular state (red or blue) with a 70% chance and to the opposite state with the remaining 30% chance. In the second stage, participants choose between two boxes (again identified by Tibetan characters) and either receive a reward or not. Each second-stage box has a different reward probability, which changes throughout the experiment
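Under the 70/30 transition structure described above, the first-stage values of a purely model-based agent can be sketched in a few lines (a simplified illustration; a full agent would also learn the second-stage values from feedback rather than being handed them):

```python
import numpy as np

def model_based_stage1_values(q_stage2, common_prob=0.7):
    """Stage-1 action values for a purely model-based agent on the
    two-stage task (Daw et al., 2011).

    q_stage2: best available value in each second-stage state,
    here [red, blue]. Action 0 commonly leads to red, action 1 to blue,
    following the 70%/30% transition structure in the figure.
    """
    t = np.array([[common_prob, 1 - common_prob],    # action 0 -> (red, blue)
                  [1 - common_prob, common_prob]])   # action 1 -> (red, blue)
    return t @ np.asarray(q_stage2)

# If the red state is currently more valuable, the action that commonly
# leads to red earns the higher stage-1 value.
q1 = model_based_stage1_values([0.8, 0.2])
```

A model-free agent, by contrast, would credit whichever first-stage action preceded reward, regardless of whether the transition was common or rare; that difference is what the stay-probability analysis exploits.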

One potentially major confound in the original design is that apparently model-free behavior can in fact reflect model-based learning over a mistaken model of the task. Recent evaluations suggest that participant misconceptions of the commonly used two-stage task may masquerade as model-free learning, confounding the interpretation of model-based and model-free learning tasks (Feher da Silva & Hare, 2020). Initial alterations to address these shortcomings included adding predictors to the analysis to capture the tendency to repeat correct choices and to map transition reversals from the first-stage choice (Akam et al., 2015). Further, Kool et al. identified five factors that reduce the effectiveness of the two-stage task: low distinguishability of second-stage probabilities, too low a drift rate for second-stage probabilities, a probabilistic (rather than deterministic) transition structure between stages, the possibility of choice in the second stage, and low informativeness of outcomes (Kool et al., 2016). By accounting for these factors, the task better reflects the accuracy–demand tradeoffs of model-based and model-free learning; we invite the reader to review Kool et al. (2016) for a detailed analysis. Overall, these recent results suggest that a large set of algorithms, both model-based and model-free, can mimic model-based actions.

Most learners, however, are thought to use both model-based and model-free learning (Daw et al., 2011; Groman et al., 2019), which suggests that humans must arbitrate between the two approaches. Neuroimaging work suggests that these signals are balanced through the lateral prefrontal cortex, as regions encoding model-free values (e.g., putamen and supplementary motor cortex) can be downregulated during functional coupling with the lateral PFC (Lee et al., 2014). This mechanism may help arbitrate between model-based and model-free strategies by decreasing the influence of model-free learning on future decisions. The lateral PFC has also been shown to be active during arbitration between imitation- and emulation-based social observational learning strategies (Charpentier et al., 2020), mirroring its involvement in model-based and model-free learning. While the lateral PFC seems to be key in both accounts, it is unclear whether these are distinct processes or whether the lateral PFC is a domain-general integrator of learned information. Finally, the successor representation algorithm has been proposed as a way to balance the flexibility of model-based learning and the efficiency of model-free learning without directly arbitrating between the two (Gershman, 2018). Successor representations rely on stored predictions about future states rather than re-computing the value of every decision or using precomputed action values. By storing predictions about future states, successor representations can balance the tradeoffs associated with model-free and model-based learning without requiring either (Momennejad et al., 2017). Still, many neuroscientific findings, such as work suggesting that dopamine encodes a reward prediction error, can be difficult to disambiguate from successor-representation accounts, under which dopamine is thought to signal a similar temporal-difference error (Gershman, 2018). Future work should aim to identify hypotheses that might falsify or disambiguate these classes of models.
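As a sketch of the successor-representation idea, the matrix of discounted future state occupancies can be computed directly when the transition structure is known (an illustration only; in practice the successor matrix is learned by temporal-difference updates rather than by matrix inversion):

```python
import numpy as np

def sr_values(transitions, rewards, gamma=0.5):
    """State values from a successor representation.

    M = (I - gamma * T)^-1 stores discounted expected future state
    occupancies; values are then the linear readout V = M @ r, so a
    change in rewards updates V without re-planning (cf. Gershman, 2018).
    """
    t = np.asarray(transitions, dtype=float)
    m = np.linalg.inv(np.eye(len(t)) - gamma * t)  # successor matrix
    return m @ np.asarray(rewards, dtype=float)

# Two states that deterministically alternate; reward only in state 1.
v = sr_values([[0.0, 1.0], [1.0, 0.0]], [0.0, 1.0])
```

Because the stored occupancies already encode the environment's dynamics, the readout is as cheap as model-free evaluation while remaining sensitive to reward changes, which is the tradeoff the text describes.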

Reward learning has been a significant area of research for much longer than the past decade. However, we have seen concepts and theories broaden to encompass larger domains of human behavior and begun to address the weaknesses of reward learning by investigating the interaction between reward learning and episodic memory. Finally, there has been considerable research into different types of learning, particularly as they relate to model-based versus model-free strategies. Reward learning promises to be an exciting area of research for the foreseeable future as it relates to the theoretical and conceptual challenges of decision neuroscience.

2.2 |. The explore–exploit dilemma

When making sequential choices, we face a decision to exploit our current source of reward or explore new options. These decisions matter in a variety of situations, such as choosing how much to invest in a stock or how a firm should allocate its resources. Researchers study such situations with explore–exploit tasks such as n-armed bandits (Daw et al., 2006) and foraging tasks (Stephens & Krebs, 1986). Research in the preceding decade began investigating the neural correlates of exploration, exploitation, and uncertainty in dynamic situations. This left the field with a major open question: whether the trade-off between exploration and exploitation is a single problem addressed by a unitary mechanism in the brain or a family of problems handled by a larger set of mechanisms (Cohen et al., 2007).

Exploratory behavior has been defined in several ways. One distinction is between directed exploration, which is driven by information seeking and systematically sampling one’s options, and random exploration, which is driven by decision noise (Wilson, Geana, et al., 2014). Further, exploratory mechanisms and their effectiveness are critically affected by uncertainty in the environment, such as the payout probabilities of various choices in an n-armed bandit task (Daw et al., 2006) or the quality of subsequent patches in a foraging task (Stephens & Krebs, 1986). Two major accounts explain explore–exploit behaviors: one posits the interaction of several neural regions (e.g., dACC, dorsal striatum, lateral PFC, and VS; Donoso et al., 2014), and the other a dual system driven by opponent processes in frontoparietal regions (Mansouri et al., 2017). Since the debate between these models remains unresolved, understanding how subcortical areas interact with frontopolar regions in explore–exploit decisions, and how environmental uncertainty mediates this process, may clarify the mechanisms underlying explorative and exploitative choices in dynamic environments.
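The directed/random distinction can be sketched with a simple choice rule (a hypothetical parameterization: an uncertainty bonus stands in for directed exploration, a softmax temperature for random exploration; both parameters are free choices, not estimates from the cited work):

```python
import numpy as np

def choice_probs(means, uncertainties, info_bonus=1.0, temperature=0.5):
    """Softmax choice with an uncertainty bonus.

    info_bonus scales directed exploration (a UCB-style bonus pulling
    choice toward uncertain options); temperature scales random
    exploration (overall decision noise).
    """
    util = np.asarray(means) + info_bonus * np.asarray(uncertainties)
    z = util / temperature
    z -= z.max()                  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# With equal means, the bonus pushes choice toward the uncertain option.
p = choice_probs(means=[0.5, 0.5], uncertainties=[0.1, 0.4])
```

Setting info_bonus to zero leaves only noise-driven (random) exploration, which is one way tasks such as the horizon task tease the two apart behaviorally.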

Newer evidence has suggested that subcortical areas and those implicated in dopaminergic transmission have a more substantial role in explore–exploit behaviors than previously realized (Chakroun et al., 2020), which complicates the understanding of what mechanisms drive these behaviors. Increased dopamine levels in the striatum promoted explorative behavior (Verharen et al., 2019) and influenced participants to leave patches in poor environments earlier in a foraging task (Heron et al., 2020). However, lower dopamine levels only attenuated directed exploration compared to random exploration (Chakroun et al., 2020), suggesting that varying dopamine may have disparate effects depending on the exploration strategy implemented. Further, exploitative compared to explorative decision making elicits greater activation in the ventral tegmental area (VTA; Laureiro-Martínez et al., 2015), suggesting that explore–exploit decisions are associated with reward regions.

Nonetheless, cortical regions also play a clear and major role in explore–exploit decisions, specifically in representing the environmental uncertainty of new options (Badre et al., 2012; Navarro et al., 2016; Tomov et al., 2020). Uncertainty in an environment is reflected in neural responses in several ways, with relative uncertainty in the right rostrolateral PFC driving directed exploration (Badre et al., 2012) and total uncertainty in the right dlPFC driving random exploration (Tomov et al., 2020). When deciding to switch from exploring to exploiting, activation of the vmPFC seems to track three major constructs: uncertainty (Trudel et al., 2020), evidence accumulation in switching decisions (Blanchard & Gershman, 2018), and the decision to leave a given patch to explore another in foraging problems (McGuire & Kable, 2015). Further, a recent transcranial direct current stimulation (tDCS) study found that anodal and cathodal stimulation of the frontopolar cortex led participants to make slower exploratory and faster exploitative decisions, respectively (Raja Beharelle et al., 2015). Another study using the same approach found that these changes in behavior were specific to directed but not random exploration (Zajkowski et al., 2017), indicating a causal role for frontopolar regions in modulating explore–exploit choices. Risk-taking in a sequential decision-making task was also represented in the right lateral PFC (Holper, ten Brincke, et al., 2014), suggesting that assessing risk may be another feature of exploration. Finally, activation of the temporoparietal junction (TPJ), intraparietal sulcus (IPS), and anterior cingulate cortex (ACC) during explorative decision making suggests that explore–exploit decisions may also engage attentional processes and executive function (Laureiro-Martínez et al., 2015). Taken together, these findings reinforce the importance of both frontopolar and subcortical regions in explore–exploit decisions.

Recent research has revealed how subcortical regions contribute to reward learning and how frontopolar responses of environmental uncertainty contribute to explore–exploit decisions. Despite these advances, theoretical models of explore–exploit decisions remain unresolved because subcortical regions play a more substantial role than previously realized, complicating the ability to establish a unitary set of mechanisms that cut across multiple explore–exploit paradigms. This result has led to a more complex representation of explore–exploit choices which integrate both model-based and model-free representation of the environment. Another complicating factor may be the lack of converging evidence over different kinds of tasks (von Helversen et al., 2018) and their apparent lack of ecological validity due to tasks lacking realistic tradeoffs found in the natural world (Mobbs et al., 2018). Another unresolved challenge includes understanding how social context affects explore–exploit decisions, such as how teams initiate or terminate a project. However, current findings together have made significant inroads into understanding explorative and exploitative behavior. Future research into explore–exploit problems has promise for individuals and policymakers to provide valuable insights into how people explore and allocate resources in uncertain and dynamic environments.

2.3 |. Risk and ambiguity

It is often challenging to predict the outcome of a decision. When faced with uncertain outcomes, decisions can be described in the form of a risky gamble. For example, a patient might need to weigh the probability and value of a treatment option. Although there is considerable heterogeneity in risk behavior across decision makers, decision neuroscience has focused on a small number of models to explain these processes. We focus on results from three of these models: prospect theory (Kahneman & Tversky, 1979), expected utility theory (Friedman & Savage, 1948; Knutson & Peterson, 2005), and portfolio theory (Markowitz, 1991; Raggetti et al., 2017). The first model, prospect theory, describes risk aversion using diminishing returns for subjective value, accounts for preference reversals in losses (loss aversion), and weights subjective probabilities to explain how people behave in the face of uncertain outcomes (Barberis, 2013). Applying this theory allows researchers to estimate subjective value at the individual level, with results indicating that the vmPFC, lateral PFC (Holper, Wolf, & Tobler, 2014; Schultz, 2010), and VS track the subjective value of risky prospects (Blankenstein et al., 2017; Levy, 2017; Levy et al., 2010). These processes are likely supported by dopamine action in those regions (Castrellon et al., 2019; Morgado et al., 2015; Soutschek et al., 2020). Studies relating individual differences in neuroanatomy to risk behavior tend to highlight an alternate set of regions, including the right posterior parietal cortex (Gilaie-Dotan et al., 2014) and amygdala (Jung et al., 2018). While the meaning of this discrepancy awaits further study, these differences could indicate potential sources of trait- and state-like risk behavior.

Although loss aversion and probability weighting, like subjective value, seem to be reflected in the striatum and vmPFC, different regions and neurochemistry uniquely contribute to subjective probability. Specifically, subjective probability, the tendency to overweight low probabilities and underweight high probabilities (Kahneman & Tversky, 1979; Tversky & Kahneman, 1992), is modulated by dopaminergic action (Burke et al., 2018; Takahashi et al., 2010) and is reflected in activation of the dlPFC and PCC (Suter et al., 2015; Wu et al., 2011). Neural markers of loss aversion have been reported in the insula and amygdala (Bartra et al., 2013; Canessa et al., 2013, 2017; Sokol-Hessner et al., 2013) and have been related to both norepinephrine (Sokol-Hessner et al., 2015; Takahashi et al., 2013) and dopamine (Chen et al., 2020), suggesting that subjective value arises through the cooperation of multiple independent mechanisms.
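The prospect-theory components discussed above can be sketched with the Tversky and Kahneman (1992) functional forms (the parameter values below are their commonly cited median estimates, used here purely for illustration):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: diminishing sensitivity in both
    domains (alpha < 1) and loss aversion via lam > 1."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities
    and underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Losses loom larger than equivalent gains, and rare events are
# overweighted relative to their objective probability.
loss_looms_larger = abs(pt_value(-10.0)) > pt_value(10.0)
rare_overweighted = pt_weight(0.01) > 0.01
```

Fitting alpha, lam, and gamma per participant is what lets studies like those cited above relate individual-level subjective value to neural activity.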

Although we have emphasized prospect theory as the prevailing approach to risk taking in decision making, alternative models also explain risk behaviors, including expected utility theory and mean–variance frameworks such as portfolio theory. While expected utility theory is simple and explains a wide array of human decisions (Burghart et al., 2013), it does not predict loss aversion or probability weighting. Alternatively, portfolio and other risk theories have focused on the variance and skew of risky outcomes (Markowitz, 1991; Raggetti et al., 2017). Mean–variance and expected utility frameworks have been difficult to disambiguate with behavior alone, and it was hypothesized that neural data might yield better insights into the computations of risk (Tobler & Weber, 2014). However, decision-neuroscience research has so far been unable to distinguish a leading model, as neural signals consistent with both expected utility (Gilaie-Dotan et al., 2014; Levy et al., 2011; Lopez-Guzman et al., 2018) and mean–variance frameworks (Grabenhorst et al., 2019; Holper, Wolf, & Tobler, 2014; Symmonds et al., 2011) have often been observed within the regions thought to encode subjective value, such as the VS, aIns, and PFC. Future work may attempt to reconcile mean–variance and expected utility approaches or find signals that are inconsistent with one of these frameworks.
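To make the contrast concrete, here is a minimal sketch of the two frameworks (the utility curvature and variance-penalty parameters are hypothetical choices, not estimates from the cited studies):

```python
import numpy as np

def expected_utility(outcomes, probs, rho=0.5):
    """Expected utility with a power utility function; rho < 1 gives
    risk aversion through concavity of u(x) = x^rho."""
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.sum(np.asarray(probs) * outcomes ** rho))

def mean_variance(outcomes, probs, b=0.1):
    """Mean-variance utility: expected payoff penalized by outcome
    variance, with b indexing risk attitude."""
    outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
    mu = np.sum(probs * outcomes)
    var = np.sum(probs * (outcomes - mu) ** 2)
    return float(mu - b * var)

# Both frameworks rank a sure 10 above a 50/50 gamble over 0 and 20,
# which is why behavior alone often cannot tell them apart.
sure, gamble = ([10.0], [1.0]), ([0.0, 20.0], [0.5, 0.5])
eu_sure, eu_gamble = expected_utility(*sure), expected_utility(*gamble)
mv_sure, mv_gamble = mean_variance(*sure), mean_variance(*gamble)
```

Because the two models encode risk through different quantities (utility curvature vs. an explicit variance term), neural signals tracking outcome variance per se would favor the mean-variance account.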

While many day-to-day decisions involve risky choices, most do not have explicitly defined probabilities of success. When a person lacks an explicit description of the probability of an outcome, they face a condition known as ambiguity (Ellsberg, 1961). Individuals tend to prefer options where the probability of different outcomes is known (i.e., risk) over options where the probabilities are unknown (i.e., ambiguity), often at the expense of potential rewards (Jia et al., 2020). Although these types of decisions are modeled differently, much research has focused on whether ambiguous decisions are processed similarly to risky decisions or are supported by distinct mechanisms (Hsu et al., 2005; Huettel et al., 2006). Activation unique to ambiguity, as opposed to risk, has been associated with the amygdala, PCC, temporal gyrus, and lateral PFC (I. Levy et al., 2010; Pushkarskaya et al., 2015). Although most individuals are averse to ambiguity, familiarity with a task can greatly decrease this bias (Denison et al., 2018; Grimaldi et al., 2015; Hayden et al., 2010; Lempert et al., 2015). One reason experience may reduce ambiguity aversion is through reframing ambiguous lotteries as compound lotteries. Compound lotteries are two-stage lotteries where the prize of the first lottery is the opportunity to participate in another lottery. For example, an urn might be filled with winning or losing tickets, determined by flipping a coin. Although the average urn will have equal numbers of winning and losing tickets, individuals seem to be sensitive to the statistical distribution of the urn’s composition, often described as the second-order probability (Halevy, 2007; Klibanoff et al., 2009). Increased sampling of an unknown lottery reduces the variance of these second-order distributions and could explain the relationship between experience and ambiguity aversion.
Research using fMRI has shown that the PCC and precuneus seem to track the second-order probabilities necessary to represent these compound lotteries (Bach et al., 2011; Paul et al., 2015). This suggests that individuals do represent the complex second-order probabilities assumed by these models. A strong relationship between behavior in the face of ambiguity and behavior with compound lotteries could provide an avenue for linking choices under risk and ambiguity; however, it remains uncertain whether individual preferences for compound lotteries and ambiguous decisions are in fact strongly related.

2.4 |. Discounting and self-control

A hallmark of typical decision makers is their tendency to act impulsively. Within decision neuroscience, impulsivity has often been assessed by measuring tradeoffs between current rewards and future rewards, referred to as intertemporal choice. Seminal work by George Ainslie demonstrated that people tend to devalue future rewards hyperbolically, discounting steeply over short delays and more shallowly over longer delays (Ainslie, 1975). However, a more recent "as soon as possible" model describes this hyperbolic discounting as relative to the soonest possible reward (Kable & Glimcher, 2010). While there is debate over which model best represents discounting behavior, it remains unclear how variations in self-control best explain intertemporal choices (Scheres et al., 2013). A significant open question about temporal discounting at the end of the preceding decade was whether there is a value signal unique to discounting (Carter et al., 2010). Most of the recent advances in the field of intertemporal choice have been in acquiring a more nuanced understanding of what drives these decisions.
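The two discounting models can be written down in a few lines. The sketch below uses the standard hyperbolic form V = A / (1 + kD) and one common reading of the as-soon-as-possible idea, discounting delays relative to the soonest reward in the choice set; the discount rate k = 0.05 per day and the dollar amounts are illustrative assumptions, not estimates from the cited work.

```python
def hyperbolic(amount, delay, k=0.05):
    """Hyperbolic discounting: V = A / (1 + k * D), with delay D in days."""
    return amount / (1 + k * delay)

def asap(amount, delay, soonest_delay, k=0.05):
    """Discount the delay relative to the soonest reward in the choice set."""
    return amount / (1 + k * (delay - soonest_delay))

# The behavioral signature of hyperbolic discounting is preference reversal:
# $50 now beats $100 in 30 days, but pushing both options 300 days into the
# future flips the preference toward the larger, later reward.
assert hyperbolic(50, 0) > hyperbolic(100, 30)
assert hyperbolic(50, 300) < hyperbolic(100, 330)

# Under the ASAP reading, only the gap to the soonest option matters, so a
# 30-day spread is valued the same whether it starts today or in 300 days.
assert asap(100, 330, 300) == asap(100, 30, 0)
```

The reversal in the first pair of assertions is exactly the pattern that distinguishes hyperbolic from exponential discounting, where preferences would remain stable as both options recede in time.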

New findings have established clearer roles of prefrontal and subcortical regions in regulating discounting decisions, characterizing the ventral striatum (VS) as the driver of impulsivity, the dlPFC as the brakes, and the vmPFC as the central arbiter between the two, with a slight bias toward acting as the brakes. Subcortical structures exert an important role in impulsive decisions, with heightened activity in the dorsal (Hamilton et al., 2020) and ventral striatum (de Water et al., 2017) predicting impulsive choices among adolescents. Framing intertemporal choice problems sequentially highlights opportunity costs and is associated with decreased impulsivity (Magen et al., 2008), which may be due to lower striatal activity in response to immediate rewards and reduced dlPFC activation being needed to facilitate the choice of larger, later rewards (Magen et al., 2014). These results suggest that the valuation process is significantly modulated by projections from reward-related subcortical structures. Further, some recent evidence suggests that intertemporal valuation converges in the vmPFC, consistent with its characterization as a valuation hub in other domains such as explore–exploit dilemmas and empathetic choices described elsewhere in this review. For example, discounting tendencies varied across several discounting tasks for different age groups but remained consistent when reduced to subjective value mapped in the vmPFC (Seaman et al., 2018).

However, recent investigations into the roles of the dlPFC and vmPFC in discounting decisions, as the "brakes" and the "central hub" respectively, have produced mixed findings. One investigation found that increased temporal discounting was linked to greater connectivity between the dlPFC and the vmPFC; however, it was increased activity in the vmPFC during decisions that was associated with a greater tendency to delay rewards (Hare et al., 2014). Stimulation of both the vmPFC (Cho et al., 2015) and dlPFC (He et al., 2016; Shen et al., 2016; Xiong et al., 2019) increased choices of larger, delayed rewards, and disruption of the lateral prefrontal cortex increased impulsive choices (Figner et al., 2010), suggesting that the dlPFC and vmPFC together may serve to counter impulsivity. Surprisingly, another investigation found that stimulating the dlPFC did not increase delayed choices (Kekic et al., 2014). Nonetheless, the overall direction of recent studies suggests that increased dlPFC activation is both necessary and sufficient to influence the delay of gratification and may be interpreted as a counterweight to an impulsive drive, though the vmPFC may also serve a modulating role in impulsivity above and beyond integrating the signal between the VS and the dlPFC. This interpretation is consistent with earlier work suggesting that while the VS was more sensitive to the magnitude of a reward, the dlPFC was more sensitive to the delay of rewards (Ballard & Knutson, 2009), with increased dlPFC activation associated with shorter delays. In sum, the weight of the evidence suggests that the vmPFC remains the central hub of the valuation process, integrating signals from both the dlPFC and VS, though it may also exert some "braking" influence of its own.

The substantial role of the vmPFC in regulating intertemporal choice suggests that this brain region is important for computing the value of current and future rewards. However, integrating research in intertemporal choice, valuation, and emotion suggests that the vmPFC calculates value across many types of decisions, diminishing the likelihood that there is a value signal unique to discounting choices. Taken together, over the past decade researchers have refined the spatial specificity of findings in the vmPFC and dlPFC, characterized their connectivity with reward regions, and developed a causal understanding of these relationships, shedding light on the valuation of current versus future rewards.

2.5 |. Valuation

One of the foundational ideas within decision neuroscience is that choice-related processes about actions and goods reduce to a single common value (Padoa-Schioppa, 2011; Wunderlich et al., 2012). To this end, many fMRI studies have looked for evidence of a brain region or network responsible for a “common currency” value signal. Early work looked for brain regions that encode subjective value independent of a task (reward learning, risk, temporal discounting, etc.) and the type of good (monetary, food, social, information, etc.), with results often converging on the vmPFC (Camille et al., 2011; McNamee et al., 2013; Smith & Huettel, 2010; Winecoff et al., 2013) and VS (Krastev et al., 2016; Tang et al., 2012; Vassena et al., 2014). Although some studies implicated the OFC as a possible source of a common currency (Charpentier et al., 2018; D. J. Levy & Glimcher, 2012; Padoa-Schioppa & Conen, 2017), other accounts suggest that the OFC’s role is more specialized in the comparison process or representing the state space for decisions (Blanchard et al., 2015; Rudebeck & Murray, 2011; Wilson, Takahashi, et al., 2014).

To identify separate value signals, researchers have assessed the roles of the OFC in both social and general value processing, leveraged multivariate pattern approaches, and investigated connectivity between regions, with mixed results. These methods have found patterns of responses that code for multiple values (Kobayashi & Hsu, 2019) and some that code specific reward values (Smith et al., 2016; Wake & Izuma, 2017). A multivariate analysis of the OFC during food-based decisions found that the overall subjective value of a particular food item is represented in patterns of neural activity in both medial and lateral parts of the OFC, even though only the lateral OFC represented the nutritional attributes of the item (Suzuki et al., 2017). These findings suggested that signals from lateral OFC are integrated into medial OFC to compute a common subjective value of the food item. Likewise, the computation of multiple values integrated into a common signal is necessary for social decisions (Ebner et al., 2018; Izuma et al., 2008; Rademacher et al., 2010; Wake & Izuma, 2017). Though these studies face many difficulties, such as identifying a common currency while also accounting for nonvalue variables like attention, affect, salience, and premotor activation (O'Doherty, 2014; Rigney et al., 2018), these results have identified some of the organizing principles describing how these signals are integrated (Suzuki & O'Doherty, 2020). To help overcome these weaknesses, some models represent value as an emergent process that incorporates these variables by integrating value from numerous regions (Hunt & Hayden, 2017). This value integration process compares between attributes of the assets under consideration and assigns attribute salience in order to guide choice behavior (Hunt et al., 2014). Understanding and characterizing the consistent patterns of competition in this hierarchy for real-world stimuli could be a persistent challenge for the field.

Another major vein of research has explored the role of choice on the neural signals of value and preference. While some efforts have tried to separate value from choice (Louie & Glimcher, 2010), increasing evidence suggests that the act of choosing may itself have its own value (Ly et al., 2019). One way this has been studied is by presenting participants with situations that do or do not afford agency. Using this approach, when participants were able to make their own choices in a gambling game, they reported more favorable ratings and showed increased VS activation compared to watching a computer make selections for them (Leotti & Delgado, 2014). Strikingly, given the choice between playing themselves and allowing the computer to play in their place, participants strongly preferred gambles in which they had agency, even when agency carried significant financial costs, with the degree of cost tracked by activation of the vmPFC (Wang & Delgado, 2019). Other studies have reported increased activation across multiple value-related regions for items that were previously chosen during difficult decisions (Jarcho et al., 2011). In particular, activity in the vmPFC, VS, and inferior frontal gyrus was associated with increased preference for chosen items after decisions were made.

Another method to study the value of choice has been to increase the number of choices available to a participant. In these tasks, as the number of choices increased, BOLD responses in the VS and ACC followed an inverted U shape, suggesting choice overload (Reutskaja et al., 2018). This pattern continued when participants were asked to reconsider their choices, initially increasing value for items chosen from a small set but decreasing again for items chosen from larger sets, where vlPFC activation reflected the influence of choice set size on revaluation (Fujiwara et al., 2017). Another study noted increased connectivity between the vlPFC and PCC when individuals had more choices, particularly if they exhibited greater self-reported reward sensitivity (Cho et al., 2016). The combination of results displaying the value of control and choice overload presents an opportunity to understand when and why choice modulates value signals.

Another major recent trend has been the application of models that describe sensory perception, particularly drift-diffusion models (DDM; Ratcliff, 2002; Smith, 2000) and divisive normalization (Heeger, 1992; Louie et al., 2015), to value-based decision making. DDM and divisive normalization both draw parallels between valuation and perceptual cognition and attempt to explain when and why people have trouble discriminating between different values. These models are not mutually exclusive, and work has been done to investigate how their effects may interact (Otto & Vassena, 2020). Although we hope that the following descriptions give the reader an intuition about these two models and why they have been so widely applied, the full extent of their contributions in risky decision making (Ma & Jazayeri, 2014; Peters & D'Esposito, 2020) and reward learning (Bavard et al., 2018; Pedersen et al., 2017) cannot be fully covered here. Over the last decade, both DDM and divisive normalization have been applied in similar areas, including multi-attribute choice (Chang et al., 2019; Fisher, 2017), temporal effects of value (Clay et al., 2017; Zimmermann et al., 2018), and the role of attention in decision making (Gluth et al., 2020; Webb et al., 2020).

Drift-diffusion models describe the valuation process as an accumulation of evidence between alternatives. As relative evidence accumulates for an option, it drives the decision toward a threshold associated with that option; a choice is made when the evidence passes the decision threshold. DDM has been used to account for the effects of timing and attention in the valuation process (Gluth et al., 2020; Krajbich et al., 2012; Mormann et al., 2010). The DDM estimates four parameters: the starting-point bias (a pre-existing preference for one response), the drift rate (the rate of evidence accumulation), the decision threshold (linked to speed/accuracy trade-offs), and the nondecision time (Clay et al., 2017; Lerche & Voss, 2017). fMRI results suggest that evidence accumulation in value-based decisions is reflected in the posterior-medial frontal cortex (Pisauro et al., 2017). Consistent with its role in integrating evidence prior to reaching a decision, this region also exhibits task-dependent coupling with the vmPFC and the striatum, brain areas known to encode the subjective value of decision alternatives (Polanía et al., 2015; van Vugt et al., 2012). Additionally, stimulation of the dlPFC affected the strength of evidence accumulation, demonstrating a causal relationship between activity in these areas and the accumulation process (Maier et al., 2020).
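A minimal simulation makes the four parameters concrete. The sketch below uses the Euler-Maruyama discretization conventional for diffusion processes; the parameter names and values are illustrative assumptions, not estimates from the studies cited above.

```python
import random

def simulate_ddm(drift, threshold, start_bias=0.0, ndt=0.3,
                 noise=1.0, dt=0.001, max_t=5.0, rng=random):
    """Simulate one drift-diffusion trial.

    Evidence starts at start_bias and accumulates at rate `drift` with
    Gaussian noise; a response occurs when it crosses +threshold (option A)
    or -threshold (option B). Returns (choice, reaction_time), where the
    reaction time includes the nondecision time `ndt`.
    """
    x, t = start_bias, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("A" if x > 0 else "B"), t + ndt

# With a positive drift rate favoring option A, A should win most trials.
rng = random.Random(1)
choices = [simulate_ddm(1.0, 1.0, rng=rng)[0] for _ in range(200)]
assert choices.count("A") > 120
```

Raising the threshold in this sketch trades speed for accuracy (slower but more reliable choices), while shifting the starting point biases fast responses toward one option, which is how the model separates the distinct behavioral signatures of its parameters.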

Decision neuroscience has taken advantage of drift-diffusion and other sequential sampling models, extending these processes to examine the role of temporal dynamics (Luzardo et al., 2017), attention (Krajbich & Rangel, 2011), and multiple attributes (Fisher, 2021) in economic decisions. Through exploring the effects of time on decisions, researchers have shown that these processes are remarkably flexible. When given less time to make a decision, individuals experience a decreased decision threshold and increased noise in the slope of their drift process (Milosavljevic et al., 2010). Other results have indicated that the evidence accumulation process can begin at various times in a process guided by attention (Maier et al., 2020). Some drift-diffusion models have explicitly included the influence of attention in decision making, suggesting that evidence accumulation is amplified when certain features are being attended to (Krajbich & Rangel, 2011) and visual attention is crucial when choosing among large sets and familiar stimuli (S. M. Smith & Krajbich, 2019). Incorporating both visual attention and varied temporal dynamics to DDM indicated that shifts in visual gaze under increased time pressure predicted more selfish choices among participants (Teoh et al., 2020). Taken together, DDM has been effectively applied across varying time horizons and in applications assessing the effects of attention on decision making.

However, it has been challenging to extend DDM toward decisions where people consider two or more attributes underlying the binary choices presented to them. For example, a person choosing between two cars may incorporate trade-offs between price, mileage, and horsepower when deciding which car to buy. While attentional DDM tends to model evidence accumulation between binary alternatives (e.g., car one and car two), other models have described the evidence accumulation process as occurring along the attributes (e.g., price, mileage, and horsepower) instead (Bhatia, 2013; Trueblood et al., 2014; Tsetsos et al., 2012). To investigate whether evidence accumulates along attributes rather than alternatives, researchers have introduced additional irrelevant alternatives to classic DDM problems, which reduced choice accuracy or even biased choices (Chau et al., 2020; Kaptein et al., 2016). Such results suggest that there are a variety of contextual distractor effects on decision making (Busemeyer et al., 2019). While one recent experiment was unable to reproduce the distractor effect, instead indicating that high-value distractors attracted greater attention and slower responses (Gluth et al., 2020), a reanalysis of the same dataset employing an alternate model of divisive normalization suggested that high-value distractors biased choices by splitting attention in multi-attribute decisions (Webb et al., 2020). Taken together, DDM has seen somewhat mixed success when integrated with multi-attribute models, though it has been successfully extended to account for attention and temporal dynamics.

Another neural model of value-based decision making is divisive normalization. The divisive normalization model was originally built to describe how the efficient coding of visual information affects perception. This model has since been applied to show how decision values are neurally coded in a given context relative to all of the options considered. Specifically, divisive normalization describes the encoding of an asset's subjective value through a process that re-scales value proportionally to the sum of all values in the current choice set plus some constant (Louie et al., 2015). Activity from neural recordings in the lateral intraparietal cortex of rhesus monkeys was shown to reflect these normalized decision values, as the value-related neural activity for each option was proportional to the values of the other available alternatives (Louie et al., 2011). In addition, context-based fluctuations in value coding within the vmPFC and OFC seem to follow the divisive normalization rule and can explain a variety of irrational behaviors such as the decoy effect (Louie et al., 2011, 2013; Rangel & Clithero, 2012). In sum, many argue that divisive normalization is optimal for the biological coding of decision values across multiple domains (Landry & Webb, 2021; Louie et al., 2015; Steverson et al., 2019; Tsetsos et al., 2016).
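The normalization rule itself is a one-line computation. The sketch below shows the canonical form v_i / (sigma + sum of all v_j) and how adding a high-value alternative compresses coded value differences; the semisaturation constant sigma and the example values are illustrative assumptions.

```python
def divisive_normalization(values, sigma=1.0):
    """Re-scale each option's value by the summed value of the choice set.

    Each v_i maps to v_i / (sigma + sum(values)); sigma is a semisaturation
    constant that keeps the denominator positive and bounds coded values.
    """
    denom = sigma + sum(values)
    return [v / denom for v in values]

# Adding a high-value third option shrinks the coded difference between the
# two original options, one route to context-dependent effects on choice
# such as the decoy effect described in the text.
pair = divisive_normalization([10.0, 8.0])
triple = divisive_normalization([10.0, 8.0, 9.0])
assert (pair[0] - pair[1]) > (triple[0] - triple[1])
```

Because the denominator grows with the choice set, discriminating between close-valued options becomes harder as more (or higher-valued) alternatives are added, which is the behavioral signature the model is used to explain.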

Recent extensions of divisive normalization, like those of DDM, have focused on multi-attribute choice, temporal effects, and the role of attention. In multi-attribute choice, divisive normalization has been linked to the decoy effect and other irrational considerations of nonchosen options (Dumbalska et al., 2020; Soltani et al., 2012). This inclusion of irrelevant alternatives in the choice process has been shown to arise from the simple neural dynamics of normalization circuits, as demonstrated in Caenorhabditis elegans, a model organism whose neural circuitry has been fully described (Cohen et al., 2019). However, normalization occurs not only over concurrent alternatives but also over time. This temporal normalization decreases an item's coded value after recently experiencing high-value items and increases the coded value after experiencing many low-value items (Khaw et al., 2017). Lastly, divisive normalization has been expanded to incorporate the modulation of choice by attentional processes (Reynolds & Heeger, 2009). Some have suggested that while attentional modulation may primarily act on sensory areas, value-related modulation may drive decision-related activity in regions like the lateral intraparietal cortex (Louie et al., 2011). However, demonstrations of how reward affects early visual processing have suggested that value and top-down attention engage overlapping mechanisms of neuronal selection (Stanişor et al., 2013).

The extensions we have described across DDM and divisive normalization help to highlight not only the utility of computational models to describe various cognitive processes but also the underlying mechanisms of attention and value, which have been of considerable interest to decision neuroscience as a whole. Despite advances in understanding the underlying processes of valuation, there needs to be further work to thoroughly disentangle the neural mechanisms affecting choice. Better descriptions of reward processing mechanisms may provide a foundation to understand real-world behaviors. However, despite the critical importance of valuation in the decision-making process, other factors such as emotional and social influences should be considered to understand human decision processes more fully.

3 |. EMOTIONAL AND SOCIAL MODULATORS OF DECISION MAKING

The effects of emotions and social influence on decision making constitute their own perennial topics for researchers interested in decision neuroscience, developing into the fields of affective and social neuroscience, respectively. Both domains made significant inroads into understanding behavior during the 2000s and 2010s but were left with open questions about whether cognition and emotion should remain distinct elements in psychology (Dalgleish et al., 2009) and how to integrate recent findings on the physiological elements responsible for social behavior (Norman et al., 2010). Most of this work has revolved around the interaction of value-related regions such as the VS with socially engaged areas (Figure 3) like the right temporoparietal junction (rTPJ). However, the significant overlap of activation in regions like the vmPFC suggests that there is a degree of integration of social, value, and emotion-related information (Delgado et al., 2016). Nonetheless, recent findings suggest that emotion- and social-related regions such as the ACC and rTPJ can modulate the function of value-related areas like the vmPFC during social and emotional decisions (Lockwood et al., 2015; Morelli et al., 2015; Smith, Lohrenz, et al., 2014; van den Bos et al., 2014). Recent research has made significant inroads into characterizing how neural representations of emotional and social information modulate the function of canonically cognitive regions (Smith et al., 2015). Both conceptual and empirical challenges remain in integrating these models with the broader literature, including the question of the degree to which these factors interact with traditional reward processes.

FIGURE 3.

FIGURE 3

Brain regions associated with social processes. The association test for an automated meta-analysis of social interactions provided by the Neurosynth platform. Key regions highlighted include the ventromedial prefrontal cortex (vmPFC), dorsomedial prefrontal cortex (dmPFC), ventral striatum (VS), and right temporoparietal junction (rTPJ)

3.1 |. Emotion

While emotions and rationality were once widely viewed as conflicting processes, they are now seen as collaborative aspects of real-life decision making, and either may augment or interfere with optimal decision making across decision frames (Li et al., 2017). Many consequential real-life situations involve making decisions in emotional contexts, such as decisions under stress or decisions where one's choices affect others. As a result, decision neuroscience has extensively studied factors such as stress and the experience of empathy for others, as these factors can modulate preferences related to emotional decision making and substantially affect one's choices.

In particular, there has been extensive research on how stress modulates seemingly optimal decision making, with mixed findings. Acute stress impairs the ability to persist in the face of setbacks perceived as uncontrollable (Bhanji et al., 2016), influences people to make riskier, less valuable choices (Wemm & Wulfert, 2017), contributes to participants overexploiting resources (Lenow et al., 2017), and leads people to make fewer model-based decisions when working memory is taxed (Otto et al., 2013). As a result, it has been challenging to isolate patterns in how stress affects behavior, as it sometimes hinders and at other times helps people make more optimal decisions. For example, moderate physiological stress may help participants make more optimal choices, augmenting learning to maximize long-term rewards from experience (Byrne et al., 2019). These stress effects seem to directly affect the core circuitry of reward processing, with regions including the dorsal striatum and OFC showing decreased sensitivity to monetary outcomes in individuals who had just performed a cold pressor task (Porcelli et al., 2012). Taken together, these conflicting results suggest that stress may act as a polarizing variable, eliciting stronger behaviors in somewhat unpredictable ways across several domains of research.

Assessing how stress affects economic measures of individual differences such as risk, uncertainty, and ambiguity aversion has also led to mixed results. Although meta-analytic methods have suggested that stress increases risk-taking behavior (Starcke & Brand, 2016), stress has shown markedly mixed effects on decision making under uncertainty. Some results show only an increased tolerance for ambiguity (FeldmanHall et al., 2016), while others find an overall increase in risk taking without ambiguity mediating this effect (Buckert et al., 2014). Others still have shown that acute physiological stress has no effect on risk attitudes, loss aversion, or choice consistency (Sokol-Hessner et al., 2016). These inconsistent results may reflect methodological variability across studies but also a nonlinear effect of stress on behavior in general (Peifer et al., 2014; Porcelli & Delgado, 2017; Salehi et al., 2010; Schilling et al., 2013). A possible explanation worth investigating is that acute stress may improve choices related to risk, uncertainty, and ambiguity in the short term but degrade them over longer timescales. Taken together, the mixed behavioral results present a puzzle for future researchers seeking to link the effects of stress across various domains of choice.

Although not an emotion itself, empathy, the ability to recognize, understand, and share the thoughts and feelings of another, can substantially impact decision making and ties together both social and emotional influences on choice behavior (Lockwood et al., 2016). Empathy is associated with generosity (Park et al., 2017), which provides both an intuitive link and a quantitative measure of behavior. People who acted generously had slower reaction times (Hutcherson et al., 2015), greater self-reported happiness (Park et al., 2017), and stronger activation in the TPJ, VS (Hutcherson et al., 2015; Park et al., 2017), and vmPFC (Hutcherson et al., 2015). Empathy has been further delineated into affective empathy, described as inferring the experience or feeling of others' emotions, and cognitive empathy, which can be described as mentalizing or taking the perspective of another (Kerr-Gaffney et al., 2019; Zaki et al., 2012). Empathy requires considerable cognitive effort (Cameron et al., 2019), and while people act prosocially, they prefer to exert more effort to benefit themselves (Lockwood et al., 2017), suggesting that people differentiate the value of helping others from the value of selfishly benefiting themselves. Studies attempting to separate affective and cognitive empathy using fMRI have shown that affective empathy was associated with activation in the anterior insula (aIns; Masten et al., 2011), whereas cognitive empathy was associated with activation in the TPJ, suggesting two separate mechanisms for altruistic behavior (Tusche et al., 2016).

There are several theorized mechanisms that seem to drive cognitive and affective empathetic behavior, including mentalizing and emotionally experiencing the pain of others, respectively. One mechanism that may support the mentalizing component of cognitive empathy involves value computations for the self versus others, in the rostral dmPFC for modeled choices and in the vmPFC for choices about to be executed (Nicolle et al., 2012). Further, watching others experience rewards is associated with trait empathy (Lockwood et al., 2015) and is encoded in the OFC and ACC in rhesus monkeys (Chang et al., 2013) and in the ACC (Lockwood et al., 2015), vmPFC, and VS in humans (Morelli et al., 2015). With respect to affective empathy, the driving mechanism may be experiencing the pain of others, which is associated with social pain regions such as the dACC and aIns (Masten et al., 2011). Further, greater gray matter density in the aIns was correlated with higher affective empathy scores (Eres et al., 2015). Finally, the ACC has been robustly associated with various elements of social pain, including self-reported distress and rejection (Rotge et al., 2015; Woo et al., 2014), further reinforcing its importance in the experience of affective empathy.

However, one region whose activation is common to vicarious and personal reward is the vmPFC (Harris et al., 2018; Morelli et al., 2015), suggesting that the value associated with most empathic decisions may be computed in this region. Further, participants with vmPFC lesions gave less money to those who were suffering (Beadle et al., 2018), indicating that the vmPFC may be critical for regulating empathetic behavior. While distinctions are still made between cognitive and affective processes, the involvement of the vmPFC in both systems indicates that a connectivity-based approach across a gradient of activation may be a more accurate representation than a dual system of cognition versus emotion. Future directions include developing a causal understanding of empathetic mechanisms using brain stimulation methods. For example, one unresolved question is whether value computed in the vmPFC moderates affective processes, or whether value is assessed first, with empathetic processes moderating the subsequent value of a decision. Nonetheless, current findings indicate a more complex and robust understanding of both cognitive and affective empathy and their respective neural correlates in both cortical and subcortical regions of the brain.

3.2 |. Social context

Many decisions occur in social situations, which, along with emotional experiences, can substantially modulate our choices. Social reinforcement is a potent reward in its own right (Distefano et al., 2018; Hackel et al., 2017; Wake & Izuma, 2017) and helps to explain human aversion to inequality (Fehr & Schurtenberger, 2018; Tricomi et al., 2010). Moreover, the presence of peers influences adolescents' choices (Powers et al., 2018; Somerville et al., 2019; Van Hoorn et al., 2017), which more generally seems to be the product of modulated social value signals (Fareri et al., 2012). Applying social contexts to decision making adds a crucial dimension to understanding human choices, as social situations such as being in the presence of a peer or negotiating at an auction can substantially affect behavior in ways that seem normatively inconsistent with economic models (Somerville et al., 2019; Van Hoorn et al., 2017). Moreover, while canonical decision-making models (e.g., reward learning) can certainly describe social behavior, violations of social expectations can attenuate these mechanisms (FeldmanHall et al., 2018), suggesting that social processes involve a separate and distinct decision-making system. Assessing how social situations affect behavior and how they are neurally represented can substantively inform how social context modulates choices, such as when people decide to trust others.

One important aspect that may explain social behaviors includes how social rewards are encoded in the brain, with people deriving unique reward value from social interactions (Lin et al., 2012). Studies have shown that peers can enhance impulsivity (O’Brien et al., 2011) and risky choice among adolescents, an effect tied to increased striatal responses to reward (Chein et al., 2011). In contrast, the presence of a mother can have the opposite effect, reducing risk-taking behavior among adolescents through blunted striatal responses and enhanced connectivity between the striatum and vlPFC (Telzer et al., 2015). The VS also responds to social rewards, such as social acceptance (Distefano et al., 2018; Wake & Izuma, 2017), inequity (Tricomi et al., 2010), rewards given to in-group versus out-group members (Hackel et al., 2017), and social comparison (Bault et al., 2011). The VS plays a critical role in integrating social information by coding social context and rewards (Báez-Mendoza & Schultz, 2013). For example, striatal reward value signals are enhanced during rewarding experiences shared with a friend (D. Fareri et al., 2012), when self-disclosing to others (Tamir et al., 2015; Tamir & Mitchell, 2012), and when receiving positive feedback in social interactions (Jones et al., 2011; Simon et al., 2014; Sip et al., 2015). Receipt of social reward modulates connectivity between regions comprising reward circuitry (e.g., striatum and vmPFC) and social brain regions such as the TPJ (Smith, Clithero, et al., 2014; van den Bos et al., 2014).

Socially rewarding situations, such as trusting another person, evoke activation in regions that overlap with the default mode network (DMN), including the vmPFC, PCC, and TPJ (Acikalin et al., 2017; Mars et al., 2012). Building on these findings, recent work has shown enhanced connectivity between the DMN, superior frontal gyrus, and superior parietal lobule when experiencing reciprocated trust from close social others (Fareri et al., 2020). These findings suggest that the DMN helps to integrate signals of the relative importance of positive experiences with close others and strangers. A meta-analysis found that social conformity converges on activation in the VS, dmPFC, and aIns (Wu et al., 2016). Together, these findings suggest that social contexts are inherently rewarding and that social reward circuitry, especially the VS, has robust impacts on social behavior (Bhanji & Delgado, 2014).

Social situations such as negotiations, or factors like trust and dishonesty, often require imagining the thoughts of one's social partner. This act of mentalizing engages the temporal–parietal junction (TPJ), as shown in tasks involving perspective taking (Martin et al., 2020); disrupting the TPJ decreases the perceived harm of hurting others (Young et al., 2010). Further, the TPJ has been associated with the discounting of delayed and prosocial rewards (Soutschek et al., 2016, 2020), is predictive of social actions (Carter et al., 2012), and even tracks the development of social relationships between players during a public goods game (Bault et al., 2015). While many of these findings reflect increased TPJ activation for close social others, there have also been reports of reduced right TPJ activity after negative interactions with a friend (Park & Young, 2020). Overall, the weight of recent research points toward the TPJ playing a strong role in social contexts.

However, while the TPJ plays a strong role in social situations, other regions such as the amygdala, VS, and vmPFC also exert an important modulating influence in social contexts. For example, small self-serving instances of dishonesty can escalate over time, with a corresponding reduction in amygdala response (Garrett et al., 2016). Moreover, a recent finding suggests that the computation of social value in the VS and mPFC, contingent upon social closeness with a partner, drives collaboration in a trust task (Fareri et al., 2015). Nonetheless, while manipulations of strategic games with human and computer opponents often show increased activation across numerous regions, including the vmPFC and PCC (Kätsyri et al., 2013), the TPJ shows the most distinctive social bias (Carter et al., 2012). During these social decisions, the TPJ has been shown to functionally couple with the vmPFC and influence the encoding of subjective value, with reduced TPJ activation after negative interactions associated with smaller decreases in social closeness (Makwana et al., 2015; Strombach et al., 2015). Although many investigations point toward the TPJ as necessary for social decisions, it remains unclear whether this region computes social-specific information or contributes generalized perspective taking that is integrated into other processes during social decisions.

Further investigations are necessary to pinpoint the precise roles of the TPJ and other brain regions in mentalizing and social closeness, and their associations with experiencing trust. Factors such as perceived social distance and the trustworthiness of others may affect how people interpret the fairness of another person's actions and how they act in their own self-interest. In social interactions and bargaining situations, people have strong attitudes toward fairness, which may be explained through inequity aversion and a perceived value of punishing norm violators (Mendes et al., 2018). Rejecting a partner who has acted unfairly in the Ultimatum Game has been associated with increased nucleus accumbens activation (Strobel et al., 2011; White et al., 2013), indicating that choosing to punish can itself be rewarding; such punishment decisions also increase activity in the dmPFC and bilateral aIns (Bellucci et al., 2018; White et al., 2013), forming elements of a punishment network. Moreover, transcranial direct current stimulation of the lateral PFC significantly affected both voluntary and sanction-based compliance in a variant of the Ultimatum Game (Ruff et al., 2013), suggesting that compliance may be driven by fear of punishment.

Another explanation for discrepancies between Dictator and Ultimatum Game choices may lie in variability in participants' strategic reasoning, potentially mediated by emotional intelligence (Sazhin et al., 2020). Furthermore, social norms tend to be more strongly enforced when a third party, rather than oneself, was perceived to be treated unfairly (FeldmanHall et al., 2014), suggesting that variability in individual choices in the Ultimatum Game may reflect factors beyond punishment as a means of enforcing social norms. However, the aIns was found to be involved in decisions associated with harm done to oneself, whereas the amygdala was associated with punishment of others (Stallen et al., 2018). Since unfair offers reliably elicit aIns responses in the Ultimatum Game, the fear of punishment seems to at least partially explain this behavior. Nonetheless, a reliable finding is that cortical regions are recruited in balancing social rewards, with the vmPFC encoding immediate expected rewards in a public goods game as an individual utility while the lateral frontopolar cortex encodes the group value (Park et al., 2019). Taken together, these results indicate that people delicately balance social rewards against threats of punishment in their choices, and recent research has been able to pinpoint the respective neural correlates of these decisional processes.

In summary, recent evidence has converged on several mechanisms to explain decisional processes in emotional and social contexts. One common theme is that situations involving empathy and trust engage the VS and vmPFC. These signals are further modulated by the TPJ, aIns, and ACC, depending on the emotional or social context. A current and future challenge is to assess whether these factors should continue to be viewed simply as modulators of decision making, or whether they should constitute unique parameters in a decision-making model. While these factors have shown robust effects in many studies, it remains unclear how much additional variance they explain when integrated with canonical decision-making models built around reward prediction errors and valuation. Moreover, it remains unclear how strong these effects are outside of the lab. Finally, research into the role of social dynamics in decision making has placed a large emphasis on adolescents, especially in the realm of peer influence. These sample biases, and the reliance on college-aged individuals, heighten the need for understanding individual and demographic differences in decision making. These and future findings could provide a deeper understanding of bargaining behavior, which is relevant for negotiations and auctions, for clinical research into emotional and social disorders, and for better predictions of social behavior.
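To make the question of "modulator versus unique model parameter" concrete, consider a minimal simulation of a Rescorla–Wagner learner in which a hypothetical social-context weight scales the subjective value of rewards shared with a partner. The function names and parameter values here are our own illustrative choices, not estimates drawn from the studies cited above.

```python
# Minimal Rescorla-Wagner learner with a hypothetical social-context
# parameter: rewards experienced with a partner are scaled by w_social.
# All parameter values are illustrative, not estimates from the literature.

def rw_update(value, reward, alpha=0.2, w_social=1.0, social=False):
    """One Rescorla-Wagner update; social trials scale reward by w_social."""
    subjective_reward = reward * (w_social if social else 1.0)
    prediction_error = subjective_reward - value
    return value + alpha * prediction_error

# Identical reward sequence, experienced alone vs. with a friend (w_social > 1).
v_alone, v_social = 0.0, 0.0
for r in [1, 1, 0, 1, 1]:
    v_alone = rw_update(v_alone, r, social=False)
    v_social = rw_update(v_social, r, w_social=1.5, social=True)

print(round(v_alone, 3), round(v_social, 3))  # shared context yields higher value
```

Under this sketch, asking whether social context is a "unique parameter" amounts to asking whether fitting w_social to behavioral data explains variance beyond the baseline learning model.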

4 |. INDIVIDUAL VARIABILITY: AGE DIFFERENCES AND CLINICAL EXTENSIONS

Understanding individual variability in decision making is a central goal of decision neuroscience (Smith & Delgado, 2015; K. S. Wang et al., 2016; Yoon et al., 2012). In particular, the field has made progress toward linking typical neurological development across the lifespan (Samanez-Larkin & Knutson, 2015) with changes in decision making, as well as applications to psychopathology (Baskin-Sommers & Foti, 2015). Lifespan research has focused on topics ranging from the seemingly hasty decisions of adolescence to how decisions may reflect symptoms of mild cognitive impairment in old age (Lempert et al., 2020). In addition to strides made toward characterizing how responses to decision variables change across the lifespan, the field of computational psychiatry (Huys et al., 2016) has helped to incorporate decision neuroscience into contemporary models of psychiatric disorders. In particular, computational psychiatry has focused on formal models of brain function to understand the mechanisms of psychopathology (Friston et al., 2014; Gillan et al., 2015; Montague et al., 2012).

Clinical applications hold promise for the practical utility of decision-neuroscience research. For example, task-dependent connectivity between the striatum and medial prefrontal cortex (MPFC) has been linked to prosocial value (Distefano et al., 2018; Hackel et al., 2017; Sul et al., 2015; Wake & Izuma, 2017). Other work has shown that prosocial decisions may be linked to gender differences in striatal activation and social preferences (Soutschek et al., 2017). Beyond social decision making and social preferences, understanding individual variability can also provide insight into how people perceive advertising campaigns (Venkatraman et al., 2015), with ventral striatal activation being a relatively strong predictor for how different people rate various 30-s TV advertisements. Further, a recent study demonstrated that the effect of price on an individual’s expectations and perceptions of product quality, referred to as the marketing placebo effect, was linked to gray matter density in the striatum and insula (Plassmann & Weber, 2015). A deeper understanding of consumer preferences and behavior is of great interest to marketers and may inform more effective targeting strategies for advertising campaigns.

Although these examples illustrate how decision neuroscience has begun to characterize individual variability in decision making in healthy adults, a wealth of other studies have extended this basic approach to consider variation across the lifespan (Mohr et al., 2010; Van Duijvenvoorde & Crone, 2013). One important area of research relates to adolescent decision making (Bjork & Pardini, 2015; Blakemore & Robbins, 2012; Hartley & Somerville, 2015). For instance, an influential early study suggested that adolescent risky choice is driven by tolerance to ambiguity (Tymula et al., 2012). Risk-taking behavior in adolescents has been linked to heightened reward sensitivity and VS reactivity (Braams et al., 2015; Somerville et al., 2010) and also reduced functional connectivity between the MPFC and vlPFC (Qu et al., 2015). Building on these findings, recent work has shown that increased cognitive control and integration of future-oriented thoughts reduces impulsivity in adolescents (van den Bos et al., 2015). Taken together, these findings have generally supported a model wherein adolescent risk-taking behavior and impulsivity are characterized by an imbalance of prefrontal and subcortical activity (Meyer & Bucci, 2016; Somerville et al., 2011) [but see (Romer et al., 2017) for an alternative perspective].

Studies on adolescent decision making have also extended these findings by considering individual differences in responses to social reward and social context (Steinberg, 2008). For example, one study demonstrated that participants who showed increased VS responses to monetary rewards for their family—a prosocial reward—exhibited decreased risk-taking behavior 1 year later (Telzer et al., 2013). Given that much of adolescent risk-taking occurs in the presence of peers, other work has examined the role of social context in risk-taking behavior. Recent work has demonstrated that individual differences in adolescent risk-taking behavior are linked to peer conflict, with greater risk-taking behavior among adolescents with more peer conflict and low social support (Telzer et al., 2014). Although these studies are beginning to shine a light on the neurocognitive mechanisms that shape risky behavior among adolescents, large-scale longitudinal datasets are needed to examine individual differences in neurodevelopmental trajectories (Foulkes & Blakemore, 2018).

Some decision-making processes change as people age into middle and older adulthood (Mohr et al., 2010). Over the past decade, a host of studies within decision neuroscience have sought to characterize age differences in decision making in the laboratory and the real world (Samanez-Larkin, 2015). These studies have provided critical insights into the cognitive, affective, motivational, and neurobiological factors that shape decision making across older adults (Samanez-Larkin & Knutson, 2015). For example, older adults exhibit reduced striatal responses to reward prediction errors but not reward outcomes (Samanez-Larkin et al., 2014). These differences in reward learning appear to be mediated by individual differences in frontostriatal white matter integrity (Samanez-Larkin et al., 2012) and could also be linked to reduced dopaminergic receptors and transporters (Karrer et al., 2017). In addition, older adults are generally less risk-seeking than younger adults, though there is significant variation across domains and individuals (Josef et al., 2016). Building on these results, a recent study demonstrated that reduced temporal discounting in older adults is associated with richer perception-based details of autobiographical memory, an effect that may be linked to entorhinal cortical thickness (Lempert et al., 2020).
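The temporal-discounting finding above can be made concrete with the standard hyperbolic discounting model, V = A / (1 + kD), in which the discount rate k indexes how steeply a delayed amount A loses subjective value with delay D; reduced discounting corresponds to a smaller k. The k values below are illustrative only, not estimates from the cited study.

```python
# Hyperbolic discounting: V = A / (1 + k * D), where A is the amount,
# D is the delay, and k indexes how steeply value falls with delay.
# The k values below are illustrative only.

def discounted_value(amount, delay, k):
    return amount / (1.0 + k * delay)

# A steep discounter (higher k) devalues the same delayed reward more.
v_steep = discounted_value(100, delay=30, k=0.05)    # 100 / 2.5 = 40.0
v_shallow = discounted_value(100, delay=30, k=0.01)  # 100 / 1.3 ~ 76.9

print(round(v_steep, 1), round(v_shallow, 1))
```

In this framing, the age difference reported by Lempert et al. (2020) corresponds to older adults with richer autobiographical memory behaving more like the low-k agent.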

Importantly, individual differences across older adults have been linked to maladaptive decision making and negative real-world outcomes. For example, individual differences in striatal responses have been associated with suboptimal decisions in the Iowa Gambling Task (Halfmann et al., 2015) and biases toward accepting lotteries with a small chance of a large win (i.e., positively skewed; Seaman et al., 2017). Other studies have sought to extend these findings beyond the laboratory by examining real-world financial decision making. For example, one study demonstrated that financial literacy is associated with increased white matter integrity between temporal–parietal brain regions (Han, Boyle, Arfanakis, et al., 2016). In addition, other work has examined risk factors for financial exploitation. These studies have implicated a host of individual differences that contribute to risk for financial exploitation, including enhanced emotional arousal (Kircanski et al., 2018), reduced short-term memory, and positive affect (Ebner et al., 2018). Financial exploitation in older adults has also been linked to mild cognitive impairment (Han, Boyle, James, et al., 2016), reduced gray matter in hippocampal and temporal regions (Han, Boyle, Yu, et al., 2016), and altered functional connectivity with the default-mode and executive control networks (Spreng et al., 2017). Taken together, these observations show that individual differences across older adults can have important real-world consequences that can be studied through the lens of decision neuroscience.

Beyond age-related differences in decision making, a host of other studies have related variability in neural responses to reward and decision making to psychopathology more broadly (Zald & Treadway, 2017), including borderline personality disorder (Hallquist et al., 2018), substance use (Luijten et al., 2017; Mackey et al., 2019), and ADHD (Sonuga-Barke & Fairchild, 2012). For example, a recent meta-analysis demonstrated that major depressive disorder is characterized by distinct abnormalities in reward processing: blunted VS responses and heightened OFC responses to reward (Ng et al., 2019). Blunted striatal responses to reward are also a hallmark of apathy in patients with schizophrenia (Kirschner et al., 2016), which may be tied to accompanying reward learning abnormalities (Butler et al., 2020). Other insights from decision neuroscience have been used to understand additional maladaptive behaviors (Diehl et al., 2018). For example, a recent study demonstrated that attempted suicide is linked to impaired value comparison during the choice process (Dombrovski et al., 2019). Taken together, these studies illustrate how insights from decision neuroscience can be applied to individual differences in health outcomes and maladaptive behaviors.

Characterizing individual differences in decision making and reward processing across the lifespan and health outcomes has the potential to lead to improved public policy and clinical interventions. Notably, decision-neuroscience research informed a recent US Supreme Court ruling regarding the criminal culpability of juveniles, resulting in relaxed sentencing guidelines and the elimination of the death penalty for most cases involving minors (Cohen & Casey, 2014; Steinberg, 2013). In a similar vein, understanding how cognitive changes can result in vulnerability to financial exploitation among older adults can lead to improved public policies that mitigate risks to these populations (Spreng et al., 2016; Wood & Lichtenberg, 2017). Realizing these potential applications will require continued progress on understanding individual differences more broadly in decision neuroscience (Dubois & Adolphs, 2016; Lebreton et al., 2019).

5 |. BEYOND INDIVIDUAL DIFFERENCES: PREDICTION AND NEUROFORECASTING

Characterizing individual variability in brain–behavior relationships establishes an important foundation for efforts that seek to predict individual choices and forecast group behavior. Over the past decade, studies within decision neuroscience have shown that valuation systems including the VMPFC and VS can predict future choices and preferences, even in the absence of conscious awareness (Tusche et al., 2010) and overt choice (Levy et al., 2011; Smith et al., 2010; Smith, Clithero, et al., 2014). In addition, other studies have shown that VS responses associated with reward anticipation predict future problematic drug use in adolescents (Büchel et al., 2017) and also relapse among individuals with substance use disorder (MacNiven et al., 2018). Taken together, these studies illustrate how findings within decision neuroscience can be used to predict future preferences and maladaptive behavior (Sazhin et al., 2020).

Yet, an open and important question relates to whether neuroscientific tools can access "hidden information" beyond traditional behavioral measures that enables forecasting of aggregate behavior, thereby integrating individual choices to predict effects on larger groups or whole economies (Ariely & Berns, 2010; Karmarkar & Yoon, 2016; Smidts et al., 2014). This approach—known as "neuroforecasting" (Knutson & Genevsky, 2018) or "brain as predictor" (Berkman & Falk, 2013)—has become increasingly popular since our original review (Smith & Huettel, 2010). In a seminal fMRI study, researchers asked whether MPFC and NAcc responses to unfamiliar music clips were related to the eventual cultural popularity of the music, as indexed by subsequent sales of albums including those songs over 3 years. Strikingly, the researchers found that aggregate sales (i.e., total sales across the population) were associated with activation in the NAcc but not the vmPFC or behavioral ratings of liking (Berns & Moore, 2012). This study provided the first evidence that neural measures can forecast aggregate preferences and behavior above and beyond conventional behavioral measures.

Similar neuroforecasting results have been obtained across a range of other contexts. For example, MPFC responses to persuasive messages have been shown to predict the success of public health campaigns (Falk et al., 2012, 2016). Other studies have shown that NAcc responses to advertisements predict eventual success beyond traditional behavioral measures (Kühn et al., 2016; Venkatraman et al., 2015). Although these studies provide insights into which advertisements might subsequently be successful, it is still essential for advertisers to determine which advertisements will be seen and attended to among the myriad of competing stimuli. To examine this issue, a recent study used a neuroforecasting approach to assess aggregate viewing frequency and duration. Importantly, the researchers found that increased NAcc and decreased aIns activation forecasted time allocation (view frequency and duration) in an internet attention market (Tong et al., 2020). Building on these findings, a related line of neuroforecasting studies has extended these advertising and video engagement findings to the domain of microlending and crowdfunding campaigns on the internet. Notably, NAcc responses to loan appeals forecast the aggregate success of those appeals (Genevsky & Knutson, 2015). Subsequent work on micro-lending and crowdfunding demonstrated that NAcc responses to funding appeals forecast future funding success, above and beyond choices and affect ratings (Genevsky et al., 2017).
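The core analytic move in these studies can be sketched simply: average a brain signal over the scanned sample for each stimulus, then test whether that average tracks the stimulus's market-level outcome better than averaged behavioral ratings do. The numbers below are fabricated purely for illustration; they are not data from the cited studies.

```python
# Toy neuroforecasting sketch: does sample-averaged NAcc activity track an
# aggregate outcome (e.g., market-level sales) better than averaged ratings?
# All numbers are fabricated for illustration; this is not real data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One value per stimulus (e.g., per song): sample-averaged brain signal,
# sample-averaged liking rating, and the aggregate market outcome.
nacc = [0.1, 0.4, 0.2, 0.8, 0.5]
ratings = [3.0, 2.5, 3.5, 3.0, 2.8]
sales = [10, 35, 18, 70, 44]

print(round(pearson_r(nacc, sales), 2), round(pearson_r(ratings, sales), 2))
```

In this toy example the brain signal correlates strongly with the aggregate outcome while the ratings do not, mirroring the pattern reported by Berns and Moore (2012).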

Other studies have successfully used EEG as a tool for neuroforecasting (Telpaz et al., 2015; reviewed in Hakim & Levy, 2019). Although EEG cannot easily detect signals from the VS (Cohen et al., 2011; Seeber et al., 2019), this method is cheaper and more widely available than fMRI, potentially making it more applicable in a consumer context (Hakim & Levy, 2019). Neuroforecasting studies using EEG have shown that increased inter-subject correlations (i.e., the degree to which participants respond similarly) while viewing relevant stimuli (e.g., advertisements, movie trailers) are predictive of subsequent aggregate preferences for those stimuli (Barnett & Cerf, 2017; Dmochowski et al., 2014). Likewise, other EEG work has found that beta and gamma oscillations can forecast the commercial success of movies, above and beyond traditional preference measures (Boksem & Smidts, 2015).
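Inter-subject correlation (ISC) itself is straightforward to compute: correlate each pair of subjects' response time courses to the same stimulus and average across pairs. The toy time courses below are invented for illustration and are far shorter than real EEG recordings.

```python
# Inter-subject correlation (ISC) sketch: average pairwise correlation of
# subjects' response time courses while viewing the same stimulus. Toy data.
from itertools import combinations
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def isc(timecourses):
    pairs = list(combinations(timecourses, 2))
    return sum(pearson_r(a, b) for a, b in pairs) / len(pairs)

# Three subjects watching an "engaging" stimulus (similar responses) ...
engaging = [[1, 3, 2, 5, 4], [2, 3, 2, 6, 5], [1, 4, 3, 5, 5]]
# ... versus a "dull" one (idiosyncratic responses).
dull = [[1, 3, 2, 5, 4], [5, 1, 4, 2, 3], [2, 5, 1, 3, 2]]

print(round(isc(engaging), 2), round(isc(dull), 2))
```

The neuroforecasting logic is then to treat each stimulus's ISC as a single feature predicting that stimulus's aggregate popularity.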

Although in principle neural data should not improve predictions of aggregate behavior when samples are sufficiently large and representative, neuroforecasting has provided powerful insight into aggregate choice behavior using the modest samples common in human-subjects research and in applied marketing efforts built around small focus groups (Knutson & Genevsky, 2018). Taken together, these examples illustrate how neuroscientific tools such as EEG and fMRI can provide access to "hidden information" that enables forecasting of aggregate behavior, sometimes more effectively than conventional behavioral measures. Yet, whether and how neural data can provide predictive information above and beyond behavioral measures in other contexts and scenarios remains to be seen. For example, recent work has examined the role of emotion regulation in neuroforecasting: researchers measured the extent to which a neural pattern indicative of emotion regulation was engaged while participants viewed antismoking messages, and found that expression of this pattern moderated the extent to which the vmPFC and amygdala forecasted aggregate behavior (Doré et al., 2019). Neuroforecasting efforts could also potentially be improved by capitalizing on distributed information across brain regions (Kragel et al., 2018). Although these recent advances signal great promise for future neuroforecasting, we believe future work should go beyond traditional behavioral measures tied to choice, effort, or ratings (Lopez-Persem et al., 2017). For instance, ratings that incur a cost may provide a more honest metric of appeal in the population as a whole (Tchernichovski et al., 2019); similarly, latent variables derived from computational models may provide more insight into the preferences of the general population (Clithero, 2018a, 2018b).
While neuroforecasting may prove to be a powerful tool for predicting consumer choice, it also opens several ethical considerations regarding the proper role of fMRI in these types of applications (for further consideration of ethical issues, see “Practical Challenges” subsection).

6 |. ADDRESSING CHALLENGES: RECENT PROGRESS AND FUTURE CONSIDERATIONS

After reviewing the most influential recent topics in decision neuroscience, we can assess where the field has made progress on the challenges originally posed, where progress has been limited, and what new challenges are now on the horizon. These challenges raise important questions for the field and center on barriers to the development of theory, the linking of concepts, the growth of methods, and the practical application of decision neuroscience. What have been the most successful links between decision neuroscience and other scientific disciplines? How have models of decision making changed, and how have the methods of decision neuroscience contributed to our understanding? Does the increased descriptive power of the neuroscience of value and decision making translate into practically usable policies, either personal or organizational? This broad review of the advances and interests of the past 10 years speaks to how well the field has risen to meet the theoretical, conceptual, methodological, empirical, and practical challenges described at the beginning of the preceding decade.

6.1 |. Conceptual and theoretical challenges

Over the last decade, decision neuroscience has developed theories and models of value and decision making that help to resolve past theoretical challenges, while reaching across other fields and domains to make considerable progress on various conceptual challenges. Here we review some of this progress and discuss the barriers that decision neuroscientists still contend with.

One notable theoretical challenge of the past decade was deconstructing the dual-systems mindset, in which one system is slow but logical and the other efficient but hasty (Huettel, 2010). This mindset has declined in popularity in favor of more multi-dimensional approaches that better reflect the neural mechanisms of choice, at the expense of the simplicity of dual systems (Melnikoff & Bargh, 2018). Yet, vestiges of dual-process theories remain. For example, the distinction between model-based and model-free learning is often still presented as opponent processes; however, it is well recognized that this characterization is an artificial heuristic because many processes rely on both (Etkin et al., 2016). Additionally, research in the last decade has proposed numerous mechanisms for integrating and balancing multiple value signals, such as uncertainty-based arbitration (Lee et al., 2014) and judgment-specific connectivity that ensures dominance of currently task-relevant value signals (Weber et al., 2018), which further extend the dual-process approach to model-based and model-free learning. The field of decision neuroscience may be at a critical juncture, with some researchers arguing that dual-system models still hold utility in their broad application if not their empirical descriptiveness (Grayot, 2020). Progress on other theoretical challenges, such as understanding the "description-experience" gap, has been less clear. For example, two canonical effects related to this gap appear to be at odds with each other: loss aversion during risky decision making entails overweighting negative outcomes, whereas optimism bias during reward learning entails underweighting negative outcomes (Canessa et al., 2013; Lefebvre et al., 2017). Attempts to understand the description-experience gap have been limited (Garcia et al., 2021; Heilbronner & Hayden, 2016; Wulff et al., 2018).
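The optimism-bias side of this tension is often formalized with asymmetric learning rates, where positive prediction errors update value more strongly than negative ones. The sketch below uses our own illustrative parameter values, not fits from the cited studies, to show how such a learner systematically overvalues an option with a 50/50 reward schedule.

```python
# Optimism bias as asymmetric learning rates: positive prediction errors
# update value more strongly than negative ones (alpha_pos > alpha_neg).
# Parameter values are illustrative only.

def asymmetric_update(value, reward, alpha_pos=0.4, alpha_neg=0.1):
    pe = reward - value
    alpha = alpha_pos if pe > 0 else alpha_neg
    return value + alpha * pe

# A 50/50 reward schedule: an optimistic learner overvalues the option.
outcomes = [1, 0] * 20
v = 0.0
for r in outcomes:
    v = asymmetric_update(v, r)

print(round(v, 2))  # settles well above the true mean of 0.5
```

Reconciling this underweighting of negative outcomes during learning with the overweighting implied by loss aversion in described gambles remains exactly the kind of open question the description-experience gap poses.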

Beyond addressing theoretical challenges, there has been considerable progress toward reconciling conceptual differences between decision neuroscience and cognitive neuroscience. Even for a fundamentally interdisciplinary field, decision neuroscience has made many inroads across sub-disciplines (Levallois et al., 2012). Recent work in clinical psychology has especially incorporated decision-neuroscience research, including reward learning, risk aversion, and delay discounting, toward understanding mental disorders (Huys et al., 2016; Robson et al., 2020; Zald & Treadway, 2017). Likewise, neuroeconomic methods have been used to understand human development and growth across the lifespan (Samanez-Larkin & Knutson, 2015; Sharp, 2012; Van Duijvenvoorde & Crone, 2013). As mentioned earlier, the increase in risky behavior usually associated with adolescence seems to be driven mostly by attitudes toward ambiguity rather than risk per se, creating an important bridge between these sub-disciplines (Blankenstein & Crone, 2016; Tymula et al., 2012; van den Bos & Hertwig, 2017). Although many concepts at the interface of decision making and other fields of psychology remain to be explored, the field's previous success in collaborating with other disciplines is reassuring for future progress.

6.2 |. Methodological challenges

Decision neuroscience provides notable insights through its sophisticated methods but is also challenged by their complexity and inherent limitations. Recent research continues to employ a diverse set of methods across human and animal studies. Although animal studies remain essential for characterizing the neuronal mechanisms that underlie choice, human studies using fMRI and EEG have become more prevalent than they were a decade ago (e.g., through an increased focus on individual differences). In addition to these methods, functional near-infrared spectroscopy (fNIRS), which measures brain function via changes in oxygenated and deoxygenated hemoglobin (León-Carrión & León-Domínguez, 2012), offers portability, ease of application, and low purchase and operating costs that make it an attractive method (Scholkmann et al., 2014). Within decision neuroscience, fNIRS has been used to study branding (Krampe et al., 2018), risk-taking behavior (Cazzell et al., 2012; Holper, ten Brincke, et al., 2014; Holper, Wolf, & Tobler, 2014), and purchasing behavior (Cakir et al., 2018).

Yet, as discussed in our original review (Smith & Huettel, 2010), it is challenging to integrate results across different methods because they measure different aspects of brain functioning (i.e., hemodynamic and electrophysiological) at different temporal (milliseconds to seconds) and spatial (single cells to millimeters) scales. While these differences continue to make it challenging to integrate results across methods, the field has made some limited progress in this important area. For example, multimodal EEG-fMRI studies have found that reward-related electrocortical responses (i.e., the feedback negativity) are correlated with VS and MPFC responses to reward consumption (i.e., win > loss; Carlson et al., 2011). Similar results have been observed with the P300 event-related potential and reward anticipation (Pfabigan et al., 2014). Other studies have begun using fMRI in monkeys, which has helped pinpoint discrepancies between the roles of VLPFC and OFC in value-based learning paradigms (Fouragnan et al., 2019; Kaskan et al., 2016).

A second methodological challenge relates to reconciling results across species. Our original review noted key differences between human and animal studies (Smith & Huettel, 2010). For example, studies using nonhuman primates use liquid rewards, and their tasks involve choices whose options are presented symbolically and learned over thousands of trials (Wallis, 2012). This approach creates a significant challenge for decision neuroscience because monkeys can learn value information only through experience (Garcia et al., 2021). As noted by Garcia and colleagues, this methodological difference may explain why monkeys only rarely exhibit risk preferences similar to those of humans (Garcia et al., 2021). There have been efforts to use fMRI in animals to make results more comparable to human studies (Fouragnan et al., 2019; Kaskan et al., 2016). However, fMRI measurements of BOLD responses in macaque vmPFC have shown a negative relationship between activation and subjective value, an effect opposite to what is seen in humans (Papageorgiou et al., 2017). In addition, some types of studies within decision neuroscience are currently limited to humans, because individual-difference studies require large samples and neuroforecasting requires aggregate out-of-sample behavior to forecast. Between confounded results and training difficulties, integrating animal and human findings remains a challenge.

A third methodological challenge is moving from correlational to causal models of decision making and reward processing. Much of the work in decision neuroscience relies on measuring neural responses and then essentially correlating those responses with variables linked to decision making and reward processing. Although animal studies can go beyond correlational approaches using techniques that directly manipulate neuronal activity (e.g., via microstimulation; Doi et al., 2020), analogous studies using invasive brain stimulation in humans are rare and limited to patient samples (Ramayya et al., 2014). Recently, however, noninvasive brain stimulation techniques, such as transcranial magnetic stimulation (TMS) and transcranial electrical stimulation (TES), have become more prevalent in human studies (Polania et al., 2018). These studies can help develop causal models of decision making and reward processing by selectively modulating activity in regions implicated in a given process and assessing changes in behavior. For example, TMS applied to the TPJ has been shown to alter social interactions in strategic situations (Hill et al., 2017). In addition, TES—which can be applied via direct or alternating currents—can also be used to modulate decision making and reward processing. For instance, transcranial alternating current stimulation applied over frontoparietal regions selectively disrupts value-based decision making (Polanía et al., 2015) and reversal learning (Wischnewski et al., 2016, 2020). Yet, a key problem for noninvasive brain stimulation in humans is modulating deeper, subcortical brain regions such as the striatum and midbrain.
While some studies have suggested that these subcortical structures can be stimulated by targeting their connections with prefrontal cortex (Chib et al., 2013), recent technological developments have indicated that subcortical regions can be targeted directly using transcranial focused ultrasound stimulation (Folloni et al., 2019). Taken together, these observations highlight how decision neuroscience is making progress toward developing causal models of decision making and reward processing.

In addition to electrical and magnetic brain stimulation, recent work has also used other approaches to noninvasively modulate brain function and alter decision making. Many human studies have applied pharmacological methods to make causal claims about the effects of specific neurotransmitters (Rogers, 2011). For example, manipulating dopamine levels promotes model-based over model-free choice (Wunderlich et al., 2012), helps restore reward learning deficits associated with old age (Chowdhury et al., 2013), and increases the choice of options involving risky gains but not losses (Rutledge et al., 2015). Pharmacological techniques have also been used to manipulate serotonin (den Ouden et al., 2013) and norepinephrine (Lempert et al., 2017). While pharmacological interventions have demonstrated the causal roles of specific neurotransmitters, they have been less popular than imaging techniques, likely due to a stronger interest in identifying the structures involved in decision making.

Another key form of neural manipulation has come in the form of neurofeedback, often performed with either fMRI or EEG. Neurofeedback studies measure a neural signal (often the BOLD response for fMRI or band-limited scalp potentials for EEG) and present it back to a subject in real time, for example as a bar that moves up or down. The subject is instructed to learn to increase or decrease this signal, thereby manipulating their own brain activity. This technique has been used to reduce the fear associated with a conditioned stimulus by having participants increase patterns of activation related to the conditioned stimulus unpaired with any outcomes (Koizumi et al., 2016). Others have shown that neurofeedback, as a supplement to cognitive behavioral therapy, can have a positive impact on the implementation of therapy skills measured up to 4 weeks later (MacDuffie et al., 2018). Neurofeedback has thus seen some success as a neuroscience method. As open-source tools for neurofeedback (MacInnes et al., 2020) are developed and shared with the greater community, we can expect further adoption and refinement of neurofeedback methods.
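The measure-and-display loop at the heart of such designs can be sketched minimally. The helper name, scaling factor, and simulated signal below are illustrative assumptions for exposition, not drawn from any particular neurofeedback package:

```python
import random

def feedback_bar(signal, baseline, scale=10.0):
    """Map a neural signal to a bar height in [0, 1] relative to a baseline."""
    return min(1.0, max(0.0, 0.5 + (signal - baseline) / scale))

# Closed-loop sketch: on each acquisition (one fMRI TR, or one EEG window),
# read a noisy region-of-interest value, update a running baseline, and
# compute the bar height that would be rendered to the participant.
random.seed(0)
baseline, n = 0.0, 0
bars = []
for tr in range(20):
    roi_value = random.gauss(0.5, 1.0)      # stand-in for a real-time ROI mean
    n += 1
    baseline += (roi_value - baseline) / n  # incremental running mean
    bars.append(feedback_bar(roi_value, baseline))
```

In a real experiment the participant would see each bar and attempt to drive it up or down; the experimental logic reduces to exactly this signal-to-display mapping, updated once per acquisition.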

A final methodological challenge is centered on the interpretation and aggregation of neuroimaging results. Over the past decade, methods and approaches within the neuroimaging community have significantly improved and have become increasingly transparent (Poldrack et al., 2017). Yet, recent work has called attention to two critical issues. First, the flexibility of neuroimaging analyses affords a near-limitless array of analytical decisions, including different software packages (Bowring et al., 2019) and preprocessing pipelines (Carp, 2012). This problem contributes to variability in reported results (i.e., thresholded statistical maps) across analyses of a single neuroimaging dataset, despite substantial agreement in unthresholded statistical maps (Botvinik-Nezer et al., 2020). To address this problem, researchers in decision neuroscience are encouraged to submit preregistrations (Gorgolewski & Poldrack, 2016; Nosek et al., 2018), share thresholded and unthresholded statistical maps (Gorgolewski et al., 2015; Smith & Delgado, 2017), and use standardized preprocessing and analysis pipelines to help maximize computational reproducibility (Esteban et al., 2019; Gorgolewski et al., 2017). Second, efforts to characterize individual differences are hampered by small sample sizes (Button et al., 2013; Poldrack & Gorgolewski, 2017; Yarkoni, 2009). Although sample sizes have been steadily increasing in neuroimaging (Poldrack et al., 2017), efforts to openly share standardized neuroimaging data will afford an opportunity to examine novel questions related to individual differences (Gorgolewski et al., 2016; Poldrack et al., 2013; Poldrack & Gorgolewski, 2017). Nevertheless, characterizing individual differences is also negatively impacted by the poor reliability of neuroimaging signals (Vul et al., 2009). 
Although a recent study demonstrated that some univariate task signals have poor test–retest reliability (Elliott et al., 2020), other work has demonstrated that NAcc responses linked to reward anticipation have much greater test–retest reliability (Wu et al., 2014). Moreover, moving beyond univariate analyses and examining multivariate pattern-based information in neuroimaging data can maximize effect sizes (Reddan et al., 2017) and yield significant boosts in test–retest reliability (Kragel et al., 2020). Taken together, these observations are encouraging: while the neuroimaging community has identified key problems that hinder interpretation and aggregation of results, they have also developed improved practices that have the potential to significantly strengthen the next decade of neuroimaging results within decision neuroscience and beyond.

6.3 |. Practical and empirical challenges

One major challenge for the interdisciplinary fields of decision neuroscience and neuroeconomics is applying empirical findings in a way that generalizes outside of the laboratory and into other disciplines such as economics. Research tackling aspects of this challenge has been conducted over the past decade, and recently more findings and experiments have started to be used outside of the laboratory. One way to make research applicable is to generate biologically plausible models through the technique of mechanistic convergence (Clithero et al., 2008). This method takes behavior that is hard to understand through simple observation, correlates it with brain activity, and uses these findings to inform a deeper understanding of the observed behavior (Clithero et al., 2008). Moreover, relating decision-neuroscience concepts to ethological sources may also ground research in ways that are more directly applicable to real-world behavior (Mobbs et al., 2018). Further, neuroforecasting research shows that results from inside an fMRI scanner can predict aggregate behaviors outside of the lab (Knutson & Genevsky, 2018). Where research shows promise for application outside of the laboratory, evidence-based practices should be used to follow up with implementation studies that forecast future behavior (Knutson & Genevsky, 2018) and to scale up these results to determine whether they are robust in the real world (Bauer et al., 2015). In sum, using these methods, decision neuroscience has made numerous inroads over the past decade toward applicability in economics, clinical psychology, and other domains.

At the end of the preceding decade, there was a strong sentiment that neural data were a distraction to the field of economics and that behavior alone was both necessary and sufficient to confirm or falsify economic theory (Gul & Pesendorfer, 2008). However, decision neuroscience has proven applicable to economics by reexamining models of behavior to ultimately make better predictions. For example, mechanistic convergence was used to understand overbidding behavior, which was revealed to correlate with loss signals in the brain and which increased when an auction was reframed as a loss (Delgado et al., 2008). Using similar techniques, researchers have used subjective value as a utility signal to predict behavior (Smith, Lohrenz, et al., 2014), to solve the free-rider problem (Krajbich et al., 2009), and to identify speculative traders (Smith, Clithero, et al., 2014). Such methods provide more grounded explanations for asset bubbles and seemingly irrational financial decisions among managers (Frydman & Camerer, 2016).

Another means of meeting the practical challenge of applying decision neuroscience to economics is to consider the constraints imposed by neurobiology, which has led to predictions about choice behavior from biologically plausible models. Even well-known decision-making biases such as risk aversion (Khaw et al., 2021), the overweighting of small probabilities (Steiner & Stewart, 2016), and decoy effects (Woodford, 2020) have been linked to perceptual or cognitive limitations introduced by noise in the neural processing of information. Some of these processes have been linked to the role of attention. Different formalizations have shown that attention can help trade off satisficing against information accumulation (Gossner et al., 2021) and that attention reflecting an optimal accumulation strategy can reproduce several choice biases found in the literature (Callaway et al., 2021).

Biological constraints have also been incorporated into economic models by considering the need for efficient neural encoding in subjective valuation (Polania et al., 2018), explaining inconsistent economic choices and preference reversals through the naturally skewed distribution of neural activity (Kurtz-David et al., 2019). One approach to studying valuation uses divisive normalization models and drift diffusion models (DDMs), which are inherently built on principles of biological plausibility (Gao & Vasconcelos, 2009; van Ravenzwaaij et al., 2012). DDMs can make improved out-of-sample choice predictions (Clithero, 2018a) and can serve as an optimal model for value-based choices (Tajima et al., 2016). Further, divisive normalization models of multi-alternative decisions can account for violations of independence of irrelevant alternatives and other irrational behaviors (Tajima et al., 2019). Finally, the random utility model (Walker & Ben-Akiva, 2002), a common model from economics, has been reexamined by including decision times; this reexamination showed that a random utility model can be rebuilt starting from a bounded class of accumulation models such as the DDM (Webb, 2019). In sum, these examples highlight how decision neuroscience and neuroeconomics have met the practical challenge of using insights and models derived from biologically plausible neural models to inform economic theory.
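The core idea of the DDM is simple to state computationally: noisy evidence about the value difference between two options accumulates until it hits a decision bound, jointly producing a choice and a reaction time. The following is a minimal textbook-style simulation, with all parameter values chosen only for illustration:

```python
import random

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Simulate one drift-diffusion trial; return (choice, reaction_time).

    choice is +1 (upper bound) or -1 (lower bound); drift reflects the
    value difference between the two options.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0, 1)
        t += dt
    return (1 if x > 0 else -1), t

# With a strong positive drift (a clear value difference), the upper
# bound is chosen on most trials and responses tend to be fast.
random.seed(1)
trials = [simulate_ddm(drift=2.0) for _ in range(200)]
p_upper = sum(1 for choice, _ in trials if choice == 1) / len(trials)
```

Varying `drift` with the options' subjective values, and `threshold` with response caution, reproduces the coupled choice-probability and reaction-time patterns that make the DDM useful for out-of-sample prediction.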

Advances in decision neuroscience have been applied to other fields such as clinical psychology and medical decision making (Ferrer et al., 2015; Zikmund-Fisher et al., 2010), showing how human heuristics and biases undermine optimal health outcomes. For example, patients with slow-growing and low-risk cancer may be better off tracking the progress of their condition; however, many instead elect for risky surgeries and radiotherapy with substantial side effects that can decrease their quality of life (Reyna et al., 2015). Future research has great promise to disambiguate clinical pathologies, leading to more targeted interventions. For example, the neural signatures associated with downregulating negative emotions through reappraisal techniques were associated with increased choices of healthy foods (Maier & Hare, 2020; Morawetz et al., 2020), which may inform approaches to treating obesity. By assessing differences in neural reward sensitivity, researchers have been able to predict vulnerability to either motivational anhedonia or approach-related hypomanic/manic symptoms, which have been difficult to differentiate behaviorally (Nusslock & Alloy, 2017). Such findings may then allow for interventions like noninvasive brain stimulation, which may be able to ameliorate underlying neurological and psychiatric diseases (Polania et al., 2018). Taken together, decision-neuroscience research has matured to the point that findings are beginning to be applied outside of the laboratory.

While decision neuroscience has made significant inroads in demonstrating its real-world utility, the current landscape poses practical challenges for ensuring that these insights are interpreted and implemented for the public good. For example, while neuroscience is becoming better known and referenced in the media, the overwhelming focus of media articles is on “brain optimization” and “brain boosting,” rather than recent peer-reviewed neuroscience work (O’Connor et al., 2012). On the other end of the spectrum, neuroscience results can artificially create the impression that recent insights are immediately ready for policy applications (Weisberg et al., 2015). This disconnect in the public’s awareness of neuroscience research can lead to limited success and greater difficulty in recognizing and applying relevant results. Further, while neuroscience has blossomed into a variety of subdomains, such as neuropolitics (Schreiber, 2011), neuroethics (Levy, 2007), neuromarketing (Hammou et al., 2013), neuroaesthetics (Chatterjee & Vartanian, 2014), and neurofinance (Miendlarzewska et al., 2019), it remains unclear whether neuroscience has advanced far enough to provide practical insights in these disciplines. For example, the defense team in a murder case attempted to use fMRI as mitigating evidence (Hughes, 2010); however, it is debatable to what degree physiological abnormalities can or should be used as evidence for a lack of culpability. Moreover, fields such as consumer neuroscience can incentivize research of dubious public benefit, for example using EEG to assess the efficacy of TV messages for political purposes (Vecchiato et al., 2010). At the same time, drawing on similar types of research, governments have moved to create “Nudge Units” aiming to increase election participation and charitable giving and to reduce medical prescription errors.
As researchers, we should question what kind of guardrails could be established to ensure that decision-neuroscience research is applied responsibly and ethically. Finally, as decision neuroscience expands and can better predict individual and group behavior, application of this research will have greater practical and ethical implications (Stanton et al., 2017) for researchers, participants, policymakers, and society as a whole.

6.4 |. Future challenges

While significant progress has been made, the number of interests and topics studied through a decision-neuroscience lens has kept pace with that progress. To accommodate this growing breadth of theories and concepts, we identify two broadly applicable frameworks for resolving future challenges, rather than identifying challenges specific to any one topic or theory (e.g., describing the neural mechanisms of self-control; Huettel, 2010). Specifically, we focus on the axiomatic approach to science and David Marr’s levels of analysis (Marr & Poggio, 1976), how these frameworks can be applied to identify challenges, and why the field may consider adopting them.

The axiomatic approach to science defines the most basic assumptions that underlie a model or theory and rigorously tests those assumptions. Assessing and testing axioms yields testable qualitative predictions that must hold for a theory to be plausible. Although testing these axioms can turn decision models into more tractable lines of research, this approach has met only limited adoption (Caplin & Dean, 2008; Rutledge et al., 2010). Pairing the axiomatic approach with BOLD activation (Rutledge et al., 2010) and with voltammetric measurement of reward-evoked dopamine (Hart et al., 2014) has confirmed that VS dopamine can encode RPE signals. However, Rutledge et al. also falsified the possibility of RPE encoding within multiple other regions, including the vmPFC, even though meta-analyses show that correlative approaches consistently report activation in these regions as reward prediction errors (Fouragnan et al., 2018; Garrison et al., 2013). Concentrating on the few axioms of RPE encoding that are violated in the PFC during learning could help us understand why this signal seems so prevalent and what it is doing during reward learning. The axiomatic approach can also help reduce concerns around falsifiability, one of the major touchstones of the scientific method (Hempel, 1970; Popper, 1979). Although falsifying neuroeconomic models and theories is not a strictly new challenge, many studies within the decision and cognitive neuroscience literature are not well designed to falsify a particular model. For example, evidence for common currency often attempts to verify that regions like the vmPFC or VS encode multiple varieties of reward. However, showing that two rewards are not encoded within the same regions requires accepting the null hypothesis, which is problematic in its own right (Edgell, 1995; Frick, 1995; Nichols et al., 2005) [but see (Keysers et al., 2020) for a Bayesian solution to this problem].
Identifying and testing the axioms on which decision processes rely can create an explicit set of goals and open problems by which to measure the success and progress of decision neuroscience.
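To make the axiomatic logic concrete, consider a candidate RPE signal generated by a simple Rescorla–Wagner learner. This is a deliberately simplified sketch: the actual axioms in Caplin & Dean (2008) and Rutledge et al. (2010) are stated over lotteries, and the helper below is hypothetical, but the flavor is the same: the theory implies ordinal properties the signal must satisfy, and each check is a point at which the theory could be falsified.

```python
def rw_update(value, outcome, alpha=0.1):
    """Rescorla-Wagner step; returns (prediction_error, new_value)."""
    pe = outcome - value
    return pe, value + alpha * pe

# Axiom-style checks on the candidate RPE signal:
# (1) holding expectation fixed, the signal must increase with the outcome;
pe_low, _ = rw_update(value=0.5, outcome=0.0)
pe_high, _ = rw_update(value=0.5, outcome=1.0)
assert pe_high > pe_low

# (2) holding outcome fixed, the signal must decrease as expectation rises.
pe_naive, _ = rw_update(value=0.0, outcome=1.0)
pe_trained, _ = rw_update(value=0.9, outcome=1.0)
assert pe_naive > pe_trained
```

A neural signal that fails either ordering, as vmPFC activity did for some axioms in Rutledge et al., cannot be an RPE in this sense, whatever its correlation with model-derived prediction errors.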

Another framework for resolving future challenges is David Marr’s levels of analysis (Marr & Poggio, 1976). This framework suggests that the goals (computations), rules (algorithms), and physical instantiations (implementations) of a behavior must all be described to truly understand that behavior. The three levels, characterized as computational, algorithmic, and implementational, represent different orders of description. The framework has become increasingly popular, used to argue for the importance of behavioral work in neuroscience (Krakauer et al., 2017), to identify specialized social processes (Lockwood et al., 2020), and to explain the success of reward-learning models (Niv & Langdon, 2016). Each level can present a different challenge across the topics of decision neuroscience. For instance, one goal of most decision processes can be described as obtaining rewards and avoiding punishments. While some work has posited that rewards are defined through their ability to bring about or predict homeostatic balance (Sescousse et al., 2013), other work has shown that information can be considered valuable or punishing even when it cannot be acted upon (Levy, 2018). This apparent conflict in behavior represents just one of the challenges we can identify by using this framework.

7 |. CONCLUSIONS

To judge how the field has met the challenges posed in our original assessment, we have reviewed some of the most compelling research produced within decision neuroscience over the past decade. Recent advances have made significant strides in resolving past challenges. First, with connectivity-based approaches, the dual-systems mindset has been deconstructed, or at least challenged, in many popular models such as temporal discounting, empathetic decisions, and model-based/model-free learning; choice is instead viewed through a gradient of activation, mediated by valuation processing in the vmPFC. Recent trends to incorporate a variety of individual differences, such as a larger diversity of age groups and clinical factors, have greatly increased the generalizability of neuroscience findings and established composite factors. Finally, these advances have inched the field toward better generalizability outside of the laboratory. However, significant challenges remain in distinguishing forms of uncertainty, which may be resolved through new behavioral and modeling approaches, such as exploring ambiguity through the lens of compound lotteries. Further challenges remain in determining the neural basis of meta-decision processes, such as assessing cognitive switching over trends in time.

Early work often focused on how human decision making veers from normative and prescriptive models. However, the complexity of human decisions and their underlying implementations suggests that human processes are not always shortcuts or mistakes, but rather mechanisms that grow to fit human needs. The social world humans inhabit makes it necessary to consider how we should weigh the value of social approval or fairness against options that seem suboptimal from a narrowly self-interested view. Moreover, as our understanding of choice becomes more nuanced, it will allow us to explore ever more complex forms of decision making. For example, many studies of risk, uncertainty, and explore–exploit decisions assume a fairly static decision environment. Studies that track how trends in information affect decisions made over time may provide more ecologically valid measures of behavior and help us understand how choices are made in more dynamic contexts. Further, by observing variations in decision making and reward processing, we have been able to identify how these processes can become maladaptive and to consider potential policies and interventions to remediate these negative effects. Finally, findings within decision neuroscience can be used to predict future preferences and maladaptive behavior and to inform targeted interventions (Sazhin et al., 2020). Overall, the interdisciplinary nature of the field and methodological advances across both animal and human work have been a boon to researchers attempting to understand the neurobiological underpinnings of choice. Despite decision neuroscience’s maturation as a field, many challenges remain unresolved, along with new challenges that have recently developed.

Early hopes were that understanding the biological foundations of choice would allow neuroscience to inform specific economic policy and choice theory (Camerer, 2008; Schultz, 2008). As the field has grown, however, it has played a larger role in understanding maladaptive behaviors more broadly. Future research will continue to chip away at the theoretical, conceptual, and methodological challenges that arise. Yet the continual advancement of scientific insight must be met by advances in applications that encourage human flourishing. To achieve this end, decision neuroscience should ensure that its research is reproducible and falsifiable and that it augments our ability to scale up applications outside the lab.

ACKNOWLEDGMENTS

We thank Dominic Fareri, Karen Shen, Fatima Umar, and Lindsey Tepfer for their helpful comments and feedback on an earlier version of this manuscript. DVS was a Research Fellow of the Public Policy Lab at Temple University during the preparation of this manuscript (2019–2020 academic year). We note that this work has been deposited on PsyArxiv as a preprint.

Funding information

This work was supported, in part, by grants from the National Institutes of Health (R21-MH113917, R03-DA046733, and RF1-AG067011 to David V. Smith).

Footnotes

CONFLICT OF INTEREST

The authors have declared no conflicts of interest for this article.

DATA AVAILABILITY STATEMENT

There is no data associated with this review article.

REFERENCES

  1. Acikalin MY, Gorgolewski KJ, & Poldrack RA (2017). A coordinate-based meta-analysis of overlaps in regional specialization and functional connectivity across subjective value and default mode networks. Frontiers in Neuroscience, 11, 259–261. 10.3389/fnins.2017.00001
  2. Ainslie G (1975). Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin, 82(4), 463–496. 10.1037/h0076860
  3. Akam T, Costa R, & Dayan P (2015). Simple plans or sophisticated habits? State, transition and learning interactions in the two-step task. PLoS Computational Biology, 11(12), e1004648. 10.1371/journal.pcbi.1004648
  4. Ariely D, & Berns GS (2010). Neuromarketing: The hope and hype of neuroimaging in business. Nature Reviews Neuroscience, 11(4), 284–292.
  5. Bach DR, Hulme O, Penny WD, & Dolan RJ (2011). The known unknowns: Neural representation of second-order uncertainty, and ambiguity. Journal of Neuroscience, 31(13), 4811–4820. 10.1523/JNEUROSCI.1452-10.2011
  6. Badre D, Doll BB, Long NM, & Frank MJ (2012). Rostrolateral prefrontal cortex and individual differences in uncertainty-driven exploration. Neuron, 73(3), 595–607. 10.1016/j.neuron.2011.12.025
  7. Báez-Mendoza R, & Schultz W (2013). The role of the striatum in social behavior. Frontiers in Neuroscience, 7, 233. 10.3389/fnins.2013.00233
  8. Ballard IC, Wagner AD, & McClure SM (2019). Hippocampal pattern separation supports reinforcement learning. Nature Communications, 10(1), 1073. 10.1038/s41467-019-08998-1
  9. Ballard K, & Knutson B (2009). Dissociable neural representations of future reward magnitude and delay during temporal discounting. NeuroImage, 45(1), 143–150. 10.1016/j.neuroimage.2008.11.004
  10. Barberis NC (2013). Thirty years of Prospect theory in economics: A review and assessment. Journal of Economic Perspectives, 27(1), 173–196. 10.1257/jep.27.1.173
  11. Barnett SB, & Cerf M (2017). A ticket for your thoughts: Method for predicting content recall and sales using neural similarity of moviegoers. Journal of Consumer Research, 44(1), 160–181.
  12. Bartra O, McGuire JT, & Kable JW (2013). The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. NeuroImage, 76, 412–427. 10.1016/j.neuroimage.2013.02.063
  13. Baskin-Sommers AR, & Foti D (2015). Abnormal reward functioning across substance use disorders and major depressive disorder: Considering reward as a transdiagnostic mechanism. International Journal of Psychophysiology, 98(2), 227–239. 10.1016/j.ijpsycho.2015.01.011
  14. Bauer MS, Damschroder L, Hagedorn H, Smith J, & Kilbourne AM (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3(1), 32. 10.1186/s40359-015-0089-9
  15. Bault N, Joffily M, Rustichini A, & Coricelli G (2011). Medial prefrontal cortex and striatum mediate the influence of social comparison on the decision process. Proceedings of the National Academy of Sciences of the United States of America, 108(38), 16044–16049. 10.1073/pnas.1100892108
  16. Bault N, Pelloux B, Fahrenfort JJ, Ridderinkhof KR, & van Winden F (2015). Neural dynamics of social tie formation in economic decision-making. Social Cognitive and Affective Neuroscience, 10(6), 877–884. 10.1093/scan/nsu138
  17. Bavard S, Lebreton M, Khamassi M, Coricelli G, & Palminteri S (2018). Reference-point centering and range-adaptation enhance human reinforcement learning at the cost of irrational preferences. Nature Communications, 9(1), 4503. 10.1038/s41467-018-06781-2
  18. Bavard S, Rustichini A, & Palminteri S (2021). Two sides of the same coin: Beneficial and detrimental consequences of range adaptation in human reinforcement learning. Science Advances, 7(14), eabe0340. 10.1126/sciadv.abe0340
  19. Beadle JN, Paradiso S, & Tranel D (2018). Ventromedial prefrontal cortex is critical for helping others who are suffering. Frontiers in Neurology, 9, 288. 10.3389/fneur.2018.00288
  20. Beck JM, Ma WJ, Kiani R, Hanks T, Churchland AK, Roitman J, Shadlen MN, Latham PE, & Pouget A (2008). Probabilistic population codes for Bayesian decision making. Neuron, 60(6), 1142–1152. 10.1016/j.neuron.2008.09.021
  21. Bellucci G, Feng C, Camilleri J, Eickhoff SB, & Krueger F (2018). The role of the anterior insula in social norm compliance and enforcement: Evidence from coordinate-based and functional connectivity meta-analyses. Neuroscience & Biobehavioral Reviews, 92, 378–389. 10.1016/j.neubiorev.2018.06.024
  22. Berkman ET, & Falk EB (2013). Beyond brain mapping: Using neural measures to predict real-world outcomes. Current Directions in Psychological Science, 22(1), 45–50.
  23. Berns GS, & Moore SE (2012). A neural predictor of cultural popularity. Journal of Consumer Psychology, 22(1), 154–160.
  24. Bhanji JP, & Delgado MR (2014). The social brain and reward: Social information processing in the human striatum. WIREs Cognitive Science, 5(1), 61–73. 10.1002/wcs.1266
  25. Bhanji JP, Kim ES, & Delgado MR (2016). Perceived control alters the effect of acute stress on persistence. Journal of Experimental Psychology. General, 145(3), 356–365. 10.1037/xge0000137
  26. Bhatia S (2013). Associations and the accumulation of preference. Psychological Review, 120(3), 522–543. 10.1037/a0032457
  27. Bjork JM, & Pardini DA (2015). Who are those “risk-taking adolescents”? Individual differences in developmental neuroimaging research. Developmental Cognitive Neuroscience, 11, 56–64. 10.1016/j.dcn.2014.07.008
  28. Blakemore S-J, & Robbins TW (2012). Decision-making in the adolescent brain. Nature Neuroscience, 15(9), 1184–1191.
  29. Blanchard TC, & Gershman SJ (2018). Pure correlates of exploration and exploitation in the human brain. Cognitive, Affective, & Behavioral Neuroscience, 18(1), 117–126. 10.3758/s13415-017-0556-2
  30. Blanchard TC, Hayden BY, & Bromberg-Martin ES (2015). Orbitofrontal cortex uses distinct codes for different choice attributes in decisions motivated by curiosity. Neuron, 85(3), 602–614. 10.1016/j.neuron.2014.12.050
  31. Blankenstein NE, Crone EA, van den Bos W, & van Duijvenvoorde ACK (2016). Dealing with uncertainty: Testing risk- and ambiguity-attitude across adolescence. Developmental Neuropsychology, 41(1–2), 77–92. 10.1080/87565641.2016.1158265
  32. Blankenstein NE, Peper JS, Crone EA, & van Duijvenvoorde ACK (2017). Neural mechanisms underlying risk and ambiguity attitudes. Journal of Cognitive Neuroscience, 29(11), 1845–1859. 10.1162/jocn_a_01162
  33. Boksem MA, & Smidts A (2015). Brain responses to movie trailers predict individual preferences for movies and their population-wide commercial success. Journal of Marketing Research, 52(4), 482–492.
  34. Botvinik-Nezer R, Holzmeister F, Camerer CF, Dreber A, Huber J, Johannesson M, Kirchler M, Iwanir R, Mumford JA, Adcock RA, Avesani P, Baczkowski BM, Bajracharya A, Bakst L, Ball S, Barilari M, Bault N, Beaton D, Beitner J, … Schonberg T (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582(7810), 84–88. 10.1038/s41586-020-2314-9
  35. Bowring A, Maumet C, & Nichols TE (2019). Exploring the impact of analysis software on task fMRI results. Human Brain Mapping, 40(11), 3362–3384. 10.1002/hbm.24603
  36. Braams BR, van Duijvenvoorde ACK, Peper JS, & Crone EA (2015). Longitudinal changes in adolescent risk-taking: A comprehensive study of neural responses to rewards, pubertal development, and risk-taking behavior. The Journal of Neuroscience, 35(18), 7226–7238. 10.1523/jneurosci.4764-14.2015
  37. Büchel C, Peters J, Banaschewski T, Bokde ALW, Bromberg U, Conrod PJ, Flor H, Papadopoulos D, Garavan H, Gowland P, Heinz A, Walter H, Ittermann B, Mann K, Martinot J-L, Paillère-Martinot M-L, Nees F, Paus T, Pausova Z, … IMAGEN Consortium (2017). Blunted ventral striatal responses to anticipated rewards foreshadow problematic drug use in novelty-seeking adolescents. Nature Communications, 8, 14140. 10.1038/ncomms14140
  38. Buckert M, Schwieren C, Kudielka BM, & Fiebach CJ (2014). Acute stress affects risk taking but not ambiguity aversion. Frontiers in Neuroscience, 8, 82. 10.3389/fnins.2014.00082
  39. Burghart DR, Glimcher PW, & Lazzaro SC (2013). An expected utility maximizer walks into a bar…. Journal of Risk and Uncertainty, 46(3), 215–246. 10.1007/s11166-013-9167-7
  40. Burke CJ, Soutschek A, Weber S, Raja Beharelle A, Fehr E, Haker H, & Tobler PN (2018). Dopamine receptor-specific contributions to the computation of value. Neuropsychopharmacology, 43(6), 1415–1424. 10.1038/npp.2017.302
  41. Busemeyer JR, Gluth S, Rieskamp J, & Turner BM (2019). Cognitive and neural bases of multi-attribute, multi-alternative, value-based decisions. Trends in Cognitive Sciences, 23(3), 251–263. 10.1016/j.tics.2018.12.003
  42. Butler PD, Hoptman MJ, Smith DV, Ermel JA, Calderone DJ, Lee SH, & Barch DM (2020). Grant report on social reward learning in schizophrenia. Journal of Psychiatry and Brain Science, 5.
  43. Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, & Munafò MR (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376.
  44. Byrne KA, Cornwall AC, & Worthy DA (2019). Acute stress improves long-term reward maximization in decision-making under uncertainty. Brain and Cognition, 133, 84–93. 10.1016/j.bandc.2019.02.005
  45. Cakir MP, Çakar T, Girisken Y, & Yurdakul D (2018). An investigation of the neural correlates of purchase behavior through fNIRS. European Journal of Marketing, 52(1/2), 224–243.
  46. Callaway F, Rangel A, & Griffiths TL (2021). Fixation patterns in simple choice reflect optimal information sampling. PLoS Computational Biology, 17(3), e1008863. 10.1371/journal.pcbi.1008863
  47. Camerer CF (2008). The potential of neuroeconomics. Economics and Philosophy, 24(3), 369–379. 10.1017/S0266267108002022
  48. Cameron CD, Hutcherson CA, Ferguson AM, Scheffer JA, Hadjiandreou E, & Inzlicht M (2019). Empathy is hard work: People choose to avoid empathy because of its cognitive costs. Journal of Experimental Psychology. General, 148(6), 962–976. 10.1037/xge0000595
  49. Camille N, Griffiths CA, Vo K, Fellows LK, & Kable JW (2011). Ventromedial frontal lobe damage disrupts value maximization in humans. Journal of Neuroscience, 31(20), 7527–7532. 10.1523/JNEUROSCI.6527-10.2011
  50. Canessa N, Crespi C, Baud-Bovy G, Dodich A, Falini A, Antonellis G, & Cappa SF (2017). Neural markers of loss aversion in resting-state brain activity. NeuroImage, 146, 257–265. 10.1016/j.neuroimage.2016.11.050
  51. Canessa N, Crespi C, Motterlini M, Baud-Bovy G, Chierchia G, Pantaleo G, Tettamanti M, & Cappa SF (2013). The functional and structural neural basis of individual differences in loss aversion. Journal of Neuroscience, 33(36), 14307–14317. 10.1523/JNEUROSCI.0497-13.2013
  52. Cannon CM, & Bseikri MR (2004). Is dopamine required for natural reward? Physiology & Behavior, 81(5), 741–748. 10.1016/j.physbeh.2004.04.020
  53. Caplin A, & Dean M (2008). Axiomatic methods, dopamine and reward prediction error. Current Opinion in Neurobiology, 18(2), 197–202. 10.1016/j.conb.2008.07.007
  54. Carlson JM, Foti D, Mujica-Parodi LR, Harmon-Jones E, & Hajcak G (2011). Ventral striatal and medial prefrontal BOLD activation is correlated with reward-related electrocortical activity: A combined ERP and fMRI study. NeuroImage, 57(4), 1608–1616. 10.1016/j.neuroimage.2011.05.037
  55. Carp J (2012). On the plurality of (methodological) worlds: Estimating the analytic flexibility of fMRI experiments. Frontiers in Neuroscience, 6, 149.
  56. Carter RM, Bowling DL, Reeck C, & Huettel SA (2012). A distinct role of the temporal-parietal junction in predicting socially guided decisions. Science (New York, N.Y.), 337(6090), 109–111. 10.1126/science.1219681
  57. Carter RM, Meyer JR, & Huettel SA (2010). Functional neuroimaging of intertemporal choice models: A review. Journal of Neuroscience, Psychology, and Economics, 3(1), 27–45. 10.1037/a0018046
  58. Castrellon JJ, Young JS, Dang LC, Cowan RL, Zald DH, & Samanez-Larkin GR (2019). Mesolimbic dopamine D2 receptors and neural representations of subjective value. Scientific Reports, 9(1), 20229. 10.1038/s41598-019-56858-1
  59. Cazé RD, & van der Meer MAA (2013). Adaptive properties of differential learning rates for positive and negative outcomes. Biological Cybernetics, 107(6), 711–719. 10.1007/s00422-013-0571-5
  60. Cazzell M, Li L, Lin Z-J, Patel SJ, & Liu H (2012). Comparison of neural correlates of risk decision making between genders: An exploratory fNIRS study of the balloon analogue risk task (BART). NeuroImage, 62(3), 1896–1911.
  61. Chakroun K, Mathar D, Wiehler A, Ganzer F, & Peters J (2020). Dopaminergic modulation of the exploration/exploitation trade-off in human decision-making. eLife, 9, e51260. 10.7554/eLife.51260
  62. Chambon V, Théro H, Vidal M, Vandendriessche H, Haggard P, & Palminteri S (2020). Information about action outcomes differentially affects learning from self-determined versus imposed choices. Nature Human Behaviour, 4, 1067–1079. 10.1038/s41562-020-0919-5
  63. Chang LW, Gershman SJ, & Cikara M (2019). Comparing value coding models of context-dependence in social choice. Journal of Experimental Social Psychology, 85, 103847. 10.1016/j.jesp.2019.103847
  64. Chang SWC, Gariépy J-F, & Platt ML (2013). Neuronal reference frames for social decisions in primate frontal cortex. Nature Neuroscience, 16(2), 243–250. 10.1038/nn.3287
  65. Charpentier CJ, Bromberg-Martin ES, & Sharot T (2018). Valuation of knowledge and ignorance in mesolimbic reward circuitry. Proceedings of the National Academy of Sciences of the United States of America, 115(31), E7255–E7264. 10.1073/pnas.1800547115
  66. Charpentier CJ, Iigaya K, & O’Doherty JP (2020). A neuro-computational account of arbitration between choice imitation and goal emulation during human observational learning. Neuron, 106(4), 687–699.e7. 10.1016/j.neuron.2020.02.028
  67. Chase HW, Kumar P, Eickhoff SB, & Dombrovski AY (2015). Reinforcement learning models and their neural correlates: An activation likelihood estimation meta-analysis. Cognitive, Affective, & Behavioral Neuroscience, 15(2), 435–459. 10.3758/s13415-015-0338-7
  68. Chatterjee A, & Vartanian O (2014). Neuroaesthetics. Trends in Cognitive Sciences, 18(7), 370–375. 10.1016/j.tics.2014.03.003
  69. Chau BK, Law C-K, Lopez-Persem A, Klein-Flügge MC, & Rushworth MF (2020). Consistent patterns of distractor effects during decision making. eLife, 9, e53850. 10.7554/eLife.53850
  70. Chein J, Albert D, O’Brien L, Uckert K, & Steinberg L (2011). Peers increase adolescent risk taking by enhancing activity in the brain’s reward circuitry. Developmental Science, 14(2), F1–F10. 10.1111/j.1467-7687.2010.01035.x
  71. Chen X, Voets S, Jenkinson N, & Galea JM (2020). Dopamine-dependent loss aversion during effort-based decision-making. Journal of Neuroscience, 40(3), 661–670. 10.1523/JNEUROSCI.1760-19.2019
  72. Chib VS, Yun K, Takahashi H, & Shimojo S (2013). Noninvasive remote activation of the ventral midbrain by transcranial direct current stimulation of prefrontal cortex. Translational Psychiatry, 3, e268. 10.1038/tp.2013.44
  73. Cho C, Smith DV, & Delgado MR (2016). Reward sensitivity enhances ventrolateral prefrontal cortex activation during free choice. Frontiers in Neuroscience, 10, 529. 10.3389/fnins.2016.00529
  74. Cho SS, Koshimori Y, Aminian K, Obeso I, Rusjan P, Lang AE, Daskalakis ZJ, Houle S, & Strafella AP (2015). Investing in the future: Stimulation of the medial prefrontal cortex reduces discounting of delayed rewards. Neuropsychopharmacology, 40(3), 546–553. 10.1038/npp.2014.211
  75. Chowdhury R, Guitart-Masip M, Lambert C, Dayan P, Huys Q, Düzel E, & Dolan RJ (2013). Dopamine restores reward prediction errors in old age. Nature Neuroscience, 16(5), 648–653. 10.1038/nn.3364
  76. Clay SN, Clithero JA, Harris AM, & Reed CL (2017). Loss aversion reflects information accumulation, not bias: A drift-diffusion model study. Frontiers in Psychology, 8, 1708. 10.3389/fpsyg.2017.01708
  77. Clithero JA (2018a). Improving out-of-sample predictions using response times and a model of the decision process. Journal of Economic Behavior & Organization, 148, 344–375.
  78. Clithero JA (2018b). Response times in economics: Looking through the lens of sequential sampling models. Journal of Economic Psychology, 69, 61–86.
  79. Clithero JA, & Rangel A (2014). Informatic parcellation of the network involved in the computation of subjective value. Social Cognitive and Affective Neuroscience, 9(9), 1289–1302. 10.1093/scan/nst106
  80. Clithero JA, Tankersley D, & Huettel SA (2008). Foundations of Neuroeconomics: From philosophy to practice. PLoS Biology, 6(11), e298. 10.1371/journal.pbio.0060298
  81. Cohen AO, & Casey B (2014). Rewiring juvenile justice: The intersection of developmental neuroscience and legal policy. Trends in Cognitive Sciences, 18(2), 63–65.
  82. Cohen D, Teichman G, Volovich M, Zeevi Y, Elbaum L, Madar A, Louie K, Levy DJ, & Rechavi O (2019). Bounded rationality in C. elegans is explained by circuit-specific normalization in chemosensory pathways. Nature Communications, 10(1), 3692. 10.1038/s41467-019-11715-7
  83. Cohen JD, McClure SM, & Yu AJ (2007). Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philosophical Transactions of the Royal Society, B: Biological Sciences, 362(1481), 933–942. 10.1098/rstb.2007.2098
  84. Cohen MX, Cavanagh JF, & Slagter HA (2011). Event-related potential activity in the basal ganglia differentiates rewards from non-rewards: Temporospatial principal components analysis and source localization of the feedback negativity: Commentary. Human Brain Mapping, 32(12), 2270–2271.
  85. Dalgleish T, Dunn BD, & Mobbs D (2009). Affective neuroscience: Past, present, and future. Emotion Review, 1(4), 355–368. 10.1177/1754073909338307
  86. Daw ND, Gershman SJ, Seymour B, Dayan P, & Dolan RJ (2011). Model-based influences on humans’ choices and striatal prediction errors. Neuron, 69(6), 1204–1215. 10.1016/j.neuron.2011.02.027
  87. Daw ND, O’Doherty JP, Dayan P, Seymour B, & Dolan RJ (2006). Cortical substrates for exploratory decisions in humans. Nature, 441(7095), 876–879. 10.1038/nature04766
  88. Dayan P (2012). Instrumental vigour in punishment and reward. European Journal of Neuroscience, 35(7), 1152–1168. 10.1111/j.1460-9568.2012.08026.x
  89. de Water E, Mies GW, Figner B, Yoncheva Y, van den Bos W, Castellanos FX, Cillessen AHN, & Scheres A (2017). Neural mechanisms of individual differences in temporal discounting of monetary and primary rewards in adolescents. NeuroImage, 153, 198–210. 10.1016/j.neuroimage.2017.04.013
  90. Delgado MR, Beer JS, Fellows LK, Huettel SA, Platt ML, Quirk GJ, & Schiller D (2016). Viewpoints: Dialogues on the functional role of the ventromedial prefrontal cortex. Nature Neuroscience, 19(12), 1545–1552. 10.1038/nn.4438
  91. Delgado MR, Schotter A, Ozbay EY, & Phelps EA (2008). Understanding overbidding: Using the neural circuitry of reward to design economic auctions. Science, 321(5897), 1849–1852. 10.1126/science.1158860
  92. den Ouden HEM, Daw ND, Fernandez G, Elshout JA, Rijpkema M, Hoogman M, Franke B, & Cools R (2013). Dissociable effects of dopamine and serotonin on reversal learning. Neuron, 80(4), 1090–1100. 10.1016/j.neuron.2013.08.030
  93. Denison RN, Adler WT, Carrasco M, & Ma WJ (2018). Humans incorporate attention-dependent uncertainty into perceptual decisions and confidence. Proceedings of the National Academy of Sciences of the United States of America, 115(43), 11090–11095. 10.1073/pnas.1717720115
  94. Diehl MM, Lempert KM, Parr AC, Ballard I, Steele VR, & Smith DV (2018). Toward an integrative perspective on the neural mechanisms underlying persistent maladaptive behaviors. The European Journal of Neuroscience, 48(3), 1870–1883.
  95. Diekhof EK, Kaps L, Falkai P, & Gruber O (2012). The role of the human ventral striatum and the medial orbitofrontal cortex in the representation of reward magnitude—An activation likelihood estimation meta-analysis of neuroimaging studies of passive reward expectancy and outcome processing. Neuropsychologia, 50(7), 1252–1266. 10.1016/j.neuropsychologia.2012.02.007
  96. Distefano A, Jackson F, Levinson AR, Infantolino ZP, Jarcho JM, & Nelson BD (2018). A comparison of the electrocortical response to monetary and social reward. Social Cognitive and Affective Neuroscience, 13(3), 247–255. 10.1093/scan/nsy006
  97. Dmochowski JP, Bezdek MA, Abelson BP, Johnson JS, Schumacher EH, & Parra LC (2014). Audience preferences are predicted by temporal reliability of neural processing. Nature Communications, 5(1), 1–9.
  98. Doi T, Fan Y, Gold JI, & Ding L (2020). The caudate nucleus contributes causally to decisions that balance reward and uncertain visual information. eLife, 9, e56694. 10.7554/eLife.56694
  99. Doll BB, Hutchison KE, & Frank MJ (2011). Dopaminergic genes predict individual differences in susceptibility to confirmation bias. The Journal of Neuroscience, 31(16), 6188–6198. 10.1523/JNEUROSCI.6486-10.2011
  100. Dombrovski AY, Hallquist MN, Brown VM, Wilson J, & Szanto K (2019). Value-based choice, contingency learning, and suicidal behavior in mid-and late-life depression. Biological Psychiatry, 85(6), 506–516.
  101. Donoso M, Collins AGE, & Koechlin E (2014). Foundations of human reasoning in the prefrontal cortex. Science, 344(6191), 1481–1486. 10.1126/science.1252254
  102. Doré BP, Tompson SH, O’Donnell MB, An LC, Strecher V, & Falk EB (2019). Neural mechanisms of emotion regulation moderate the predictive value of affective and value-related brain responses to persuasive messages. The Journal of Neuroscience, 39(7), 1293–1300. 10.1523/jneurosci.1651-18.2018
  103. Dubois J, & Adolphs R (2016). Building a science of individual differences from fMRI. Trends in Cognitive Sciences, 20(6), 425–443. 10.1016/j.tics.2016.03.014
  104. Dumbalska T, Li V, Tsetsos K, & Summerfield C (2020). A map of decoy influence in human multialternative choice. Proceedings of the National Academy of Sciences of the United States of America, 117(40), 25169–25178. 10.1073/pnas.2005058117
  105. Ebner NC, Ellis DM, Lin T, Rocha HA, Yang H, Dommaraju S, Soliman A, Woodard DL, Turner GR, Spreng RN, & Oliveira DS (2018). Uncovering susceptibility risk to online deception in aging. The Journals of Gerontology: Series B, 75(3), 522–533. 10.1093/geronb/gby036
  106. Edgell SE (1995). Commentary on “Accepting the null hypothesis”. Memory & Cognition, 23(4), 525. 10.3758/BF03197252
  107. Elliott ML, Knodt AR, Ireland D, Morris ML, Poulton R, Ramrakha S, Sison ML, Moffitt TE, Caspi A, & Hariri AR (2020). What is the test-retest reliability of common task-functional MRI measures? New empirical evidence and a meta-analysis. Psychological Science, 31(7), 792–806.
  108. Ellsberg D (1961). Risk, ambiguity, and the Savage axioms. The Quarterly Journal of Economics, 75(4), 643–669. 10.2307/1884324
  109. Eres R, Decety J, Louis WR, & Molenberghs P (2015). Individual differences in local gray matter density are associated with differences in affective and cognitive empathy. NeuroImage, 117, 305–310. 10.1016/j.neuroimage.2015.05.038
  110. Esteban O, Markiewicz CJ, Blair RW, Moodie CA, Isik AI, Erramuzpe A, Kent JD, Goncalves M, DuPre E, & Snyder M (2019). fMRIPrep: A robust preprocessing pipeline for functional MRI. Nature Methods, 16(1), 111–116.
  111. Etkin A, Büchel C, & Gross JJ (2016). Emotion regulation involves both model-based and model-free processes. Nature Reviews. Neuroscience, 17(8), 532. 10.1038/nrn.2016.79
  112. Falk EB, Berkman ET, & Lieberman MD (2012). From neural responses to population behavior: Neural focus group predicts population-level media effects. Psychological Science, 23(5), 439–445.
  113. Falk EB, O’Donnell MB, Tompson S, Gonzalez R, Dal Cin S, Strecher V, Cummings KM, & An L (2016). Functional brain imaging predicts public health campaign success. Social Cognitive and Affective Neuroscience, 11(2), 204–214.
  114. Fareri D, Niznikiewicz M, Lee V, & Delgado M (2012). Social network modulation of reward-related signals. The Journal of Neuroscience, 32, 9045–9052. 10.1523/JNEUROSCI.0610-12.2012
  115. Fareri DS, Chang LJ, & Delgado MR (2015). Computational substrates of social value in interpersonal collaboration. Journal of Neuroscience, 35(21), 8170–8180. 10.1523/JNEUROSCI.4775-14.2015
  116. Fareri DS, Smith DV, & Delgado MR (2020). The influence of relationship closeness on default-mode network connectivity during social interactions. Social Cognitive and Affective Neuroscience, 15(3), 261–271. 10.1093/scan/nsaa031
  117. Feher da Silva C, & Hare TA (2020). Humans primarily use model-based inference in the two-stage task. Nature Human Behaviour, 4, 1053–1066. 10.1038/s41562-020-0905-y
  118. Fehr E, & Schurtenberger I (2018). Normative foundations of human cooperation. Nature Human Behaviour, 2(7), 458–468. 10.1038/s41562-018-0385-5
  119. FeldmanHall O, Dunsmoor JE, Tompary A, Hunter LE, Todorov A, & Phelps EA (2018). Stimulus generalization as a mechanism for learning to trust. Proceedings of the National Academy of Sciences of the United States of America, 115(7), E1690–E1697. 10.1073/pnas.1715227115
  120. FeldmanHall O, Glimcher P, Baker AL, & Phelps EA (2016). Emotion and decision-making under uncertainty: Physiological arousal predicts increased gambling during ambiguity but not risk. Journal of Experimental Psychology. General, 145(10), 1255–1262. 10.1037/xge0000205
  121. FeldmanHall O, Sokol-Hessner P, Van Bavel JJ, & Phelps EA (2014). Fairness violations elicit greater punishment on behalf of another than for oneself. Nature Communications, 5(1), 5306. 10.1038/ncomms6306
  122. Ferrer RA, McDonald PG, & Barrett LF (2015). Affective science perspectives on cancer control: Strategically crafting a mutually beneficial research agenda. Perspectives on Psychological Science, 10(3), 328–345. 10.1177/1745691615576755
  123. Figner B, Knoch D, Johnson EJ, Krosch AR, Lisanby SH, Fehr E, & Weber EU (2010). Lateral prefrontal cortex and self-control in intertemporal choice. Nature Neuroscience, 13(5), 538–539. 10.1038/nn.2516
  124. Fisher G (2017). An attentional drift diffusion model over binary-attribute choice. Cognition, 168, 34–45. 10.1016/j.cognition.2017.06.007
  125. Fisher G (2021). A multiattribute attentional drift diffusion model. Organizational Behavior and Human Decision Processes, 165, 167–182. 10.1016/j.obhdp.2021.04.004
  126. Foerde K, Race E, Verfaellie M, & Shohamy D (2013). A role for the medial temporal lobe in feedback-driven learning: Evidence from amnesia. Journal of Neuroscience, 33(13), 5698–5704. 10.1523/JNEUROSCI.5217-12.2013
  127. Folloni D, Verhagen L, Mars RB, Fouragnan E, Constans C, Aubry J-F, Rushworth MF, & Sallet J (2019). Manipulation of subcortical and deep cortical activity in the primate brain using transcranial focused ultrasound stimulation. Neuron, 101(6), 1109–1116.e5.
  128. Foulkes L, & Blakemore S-J (2018). Studying individual differences in human adolescent brain development. Nature Neuroscience, 21(3), 315–323. 10.1038/s41593-018-0078-4
  129. Fouragnan E, Retzler C, & Philiastides MG (2018). Separate neural representations of prediction error valence and surprise: Evidence from an fMRI meta-analysis. Human Brain Mapping, 39(7), 2887–2906. 10.1002/hbm.24047
  130. Fouragnan EF, Chau BKH, Folloni D, Kolling N, Verhagen L, Klein-Flügge M, Tankelevitch L, Papageorgiou GK, Aubry J-F, Sallet J, & Rushworth MFS (2019). The macaque anterior cingulate cortex translates counterfactual choice value into actual behavioral change. Nature Neuroscience, 22(5), 797–808. 10.1038/s41593-019-0375-6
  131. Frick RW (1995). Accepting the null hypothesis. Memory & Cognition, 23(1), 132–138. 10.3758/BF03210562
  132. Friedman M, & Savage LJ (1948). The utility analysis of choices involving risk. Journal of Political Economy, 56(4), 279–304. 10.1086/256692
  133. Friston KJ, Stephan KE, Montague R, & Dolan RJ (2014). Computational psychiatry: The brain as a phantastic organ. The Lancet Psychiatry, 1(2), 148–158. 10.1016/S2215-0366(14)70275-5
  134. Frydman C, & Camerer CF (2016). The psychology and neuroscience of financial decision making. Trends in Cognitive Sciences, 20(9), 661–675. 10.1016/j.tics.2016.07.003
  135. Fujiwara J, Usui N, Eifuku S, Iijima T, Taira M, Tsutsui K-I, & Tobler PN (2017). Ventrolateral prefrontal cortex updates chosen value according to choice set size. Journal of Cognitive Neuroscience, 30(3), 307–318. 10.1162/jocn_a_01207
  136. Gao D, & Vasconcelos N (2009). Decision-theoretic saliency: Computational principles, biological plausibility, and implications for neurophysiology and psychophysics. Neural Computation, 21(1), 239–271. 10.1162/neco.2009.11-06-391
  137. Garcia B, Cerrotti F, & Palminteri S (2021). The description–experience gap: A challenge for the neuroeconomics of decision-making under uncertainty. Philosophical Transactions of the Royal Society, B: Biological Sciences, 376(1819), 20190665. 10.1098/rstb.2019.0665
  138. Garrett N, & Daw ND (2020). Biased belief updating and suboptimal choice in foraging decisions. Nature Communications, 11(1), 3417. 10.1038/s41467-020-16964-5
  139. Garrett N, Lazzaro SC, Ariely D, & Sharot T (2016). The brain adapts to dishonesty. Nature Neuroscience, 19(12), 1727–1732. 10.1038/nn.4426
  140. Garrison J, Erdeniz B, & Done J (2013). Prediction error in reinforcement learning: A meta-analysis of neuroimaging studies. Neuroscience & Biobehavioral Reviews, 37(7), 1297–1310. 10.1016/j.neubiorev.2013.03.023
  141. Genevsky A, & Knutson B (2015). Neural affective mechanisms predict market-level microlending. Psychological Science, 26(9), 1411–1422. 10.1177/0956797615588467
  142. Genevsky A, Yoon C, & Knutson B (2017). When brain beats behavior: Neuroforecasting crowdfunding outcomes. The Journal of Neuroscience, 37(36), 8625–8634. 10.1523/JNEUROSCI.1633-16.2017
  143. Gershman SJ (2015). Do learning rates adapt to the distribution of rewards? Psychonomic Bulletin & Review, 22(5), 1320–1327. 10.3758/s13423-014-0790-3
  144. Gershman SJ (2018). The successor representation: Its computational logic and neural substrates. Journal of Neuroscience, 38(33), 7193–7200. 10.1523/JNEUROSCI.0151-18.2018
  145. Gershman SJ, & Daw ND (2017). Reinforcement learning and episodic memory in humans and animals: An integrative framework. Annual Review of Psychology, 68, 101–128. 10.1146/annurev-psych-122414-033625
  146. Gilaie-Dotan S, Tymula A, Cooper N, Kable JW, Glimcher PW, & Levy I (2014). Neuroanatomy predicts individual risk attitudes. Journal of Neuroscience, 34(37), 12394–12401. 10.1523/JNEUROSCI.1600-14.2014
  147. Gillan CM, Otto AR, Phelps EA, & Daw ND (2015). Model-based learning protects against forming habits. Cognitive, Affective, & Behavioral Neuroscience, 15(3), 523–536. 10.3758/s13415-015-0347-6
  148. Glimcher P, Camerer C, Fehr E, & Poldrack R (2009). Neuroeconomics: Decision making and the brain. Elsevier Academic Press.
  149. Gluth S, Kern N, Kortmann M, & Vitali CL (2020). Value-based attention but not divisive normalization influences decisions with multiple alternatives. Nature Human Behaviour, 4(6), 634–645. 10.1038/s41562-020-0822-0
  150. Gorgolewski KJ, Alfaro-Almagro F, Auer T, Bellec P, Capotă M, Chakravarty MM, Churchill NW, Cohen AL, Craddock RC, & Devenyi GA (2017). BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLoS Computational Biology, 13(3), e1005209.
  151. Gorgolewski KJ, Auer T, Calhoun VD, Craddock RC, Das S, Duff EP, Flandin G, Ghosh SS, Glatard T, & Halchenko YO (2016). The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data, 3(1), 1–9.
  152. Gorgolewski KJ, & Poldrack RA (2016). A practical guide for improving transparency and reproducibility in neuroimaging research. PLoS Biology, 14(7), e1002506.
  153. Gorgolewski KJ, Varoquaux G, Rivera G, Schwarz Y, Ghosh SS, Maumet C, Sochat VV, Nichols TE, Poldrack RA, & Poline J-B (2015). NeuroVault.org: A web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Frontiers in Neuroinformatics, 9, 8.
  154. Gossner O, Steiner J, & Stewart C (2021). Attention please! Econometrica, 89(4), 1717–1751. 10.3982/ECTA17173
  155. Grabenhorst F, Tsutsui K-I, Kobayashi S, & Schultz W (2019). Primate prefrontal neurons signal economic risk derived from the statistics of recent reward experience. eLife, 8, e44838. 10.7554/eLife.44838
  156. Grayot JD (2020). Dual process theories in behavioral economics and Neuroeconomics: A critical review. Review of Philosophy and Psychology, 11(1), 105–136. 10.1007/s13164-019-00446-9
  157. Grimaldi P, Lau H, & Basso MA (2015). There are things that we know that we know, and there are things that we do not know we do not know: Confidence in decision-making. Neuroscience & Biobehavioral Reviews, 55, 88–97. 10.1016/j.neubiorev.2015.04.006
  158. Groman SM, Massi B, Mathias SR, Lee D, & Taylor JR (2019). Model-free and model-based influences in addiction-related behaviors. Biological Psychiatry, 85(11), 936–945. 10.1016/j.biopsych.2018.12.017
  159. Gul F, & Pesendorfer W (2008). The Case for Mindless Economics. In The Foundations of Positive and Normative Economics: A Hand Book. Oxford University Press. [Google Scholar]
  160. Hackel LM, & Amodio DM (2018). Computational neuroscience approaches to social cognition. Current Opinion in Psychology, 24, 92–97. 10.1016/j.copsyc.2018.09.001
  161. Hackel LM, Zaki J, & Van Bavel JJ (2017). Social identity shapes social valuation: Evidence from prosocial behavior and vicarious reward. Social Cognitive and Affective Neuroscience, 12(8), 1219–1228. 10.1093/scan/nsx045
  162. Hakim A, & Levy DJ (2019). A gateway to consumers’ minds: Achievements, caveats, and prospects of electroencephalography-based prediction in neuromarketing. Wiley Interdisciplinary Reviews: Cognitive Science, 10(2), e1485.
  163. Halevy Y (2007). Ellsberg revisited: An experimental study. Econometrica, 75(2), 503–536. 10.1111/j.1468-0262.2006.00755.x
  164. Halfmann K, Hedgcock W, Kable J, & Denburg NL (2015). Individual differences in the neural signature of subjective value among older adults. Social Cognitive and Affective Neuroscience, 11(7), 1111–1120. 10.1093/scan/nsv078
  165. Hallquist MN, Hall NT, Schreiber AM, & Dombrovski AY (2018). Interpersonal dysfunction in borderline personality: A decision neuroscience perspective. Current Opinion in Psychology, 21, 94–104. 10.1016/j.copsyc.2017.09.011
  166. Hamilton KR, Smith JF, Gonçalves SF, Nketia JA, Tasheuras ON, Yoon M, Rubia K, Chirles TJ, Lejuez CW, & Shackman AJ (2020). Striatal bases of temporal discounting in early adolescents. Neuropsychologia, 144, 107492. 10.1016/j.neuropsychologia.2020.107492
  167. Hammou K, Galib M, & Melloul J (2013). The contributions of neuromarketing in marketing research. Journal of Management Research, 5, 20. 10.5296/jmr.v5i4.4023
  168. Han SD, Boyle PA, Arfanakis K, Fleischman D, Yu L, James BD, & Bennett DA (2016). Financial literacy is associated with white matter integrity in old age. NeuroImage, 130, 223–229. 10.1016/j.neuroimage.2016.02.030
  169. Han SD, Boyle PA, James BD, Yu L, & Bennett DA (2016). Mild cognitive impairment and susceptibility to scams in old age. Journal of Alzheimer’s Disease, 49, 845–851. 10.3233/JAD-150442
  170. Han SD, Boyle PA, Yu L, Arfanakis K, James BD, Fleischman DA, & Bennett DA (2016). Grey matter correlates of susceptibility to scams in community-dwelling older adults. Brain Imaging and Behavior, 10(2), 524–532.
  171. Hare TA, Hakimi S, & Rangel A (2014). Activity in dlPFC and its effective connectivity to vmPFC are associated with temporal discounting. Frontiers in Neuroscience, 8, 50. 10.3389/fnins.2014.00050
  172. Harris A, Clithero JA, & Hutcherson CA (2018). Accounting for taste: A multi-attribute neurocomputational model explains the neural dynamics of choices for self and others. Journal of Neuroscience, 38(37), 7952–7968. 10.1523/JNEUROSCI.3327-17.2018
  173. Hart AS, Rutledge RB, Glimcher PW, & Phillips PEM (2014). Phasic dopamine release in the rat nucleus accumbens symmetrically encodes a reward prediction error term. Journal of Neuroscience, 34(3), 698–704. 10.1523/JNEUROSCI.2489-13.2014
  174. Hartley CA, & Somerville LH (2015). The neuroscience of adolescent decision-making. Current Opinion in Behavioral Sciences, 5, 108–115. 10.1016/j.cobeha.2015.09.004
  175. Hayden B, Heilbronner S, & Platt M (2010). Ambiguity aversion in rhesus macaques. Frontiers in Neuroscience, 4, 166. 10.3389/fnins.2010.00166
  176. He Q, Chen M, Chen C, Xue G, Feng T, & Bechara A (2016). Anodal stimulation of the left DLPFC increases IGT scores and decreases delay discounting rate in healthy males. Frontiers in Psychology, 7, 1421. 10.3389/fpsyg.2016.01421
  177. Heeger DJ (1992). Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9(2), 181–197. 10.1017/s0952523800009640
  178. Heilbronner SR, & Hayden BY (2016). The description-experience gap in risky choice in nonhuman primates. Psychonomic Bulletin & Review, 23(2), 593–600. 10.3758/s13423-015-0924-2
  179. Hempel CG (1970). On the “standard conception” of scientific theories. In Radner M & Winoker S (Eds.), Analyses of theories and methods of physics and psychology (Vol. 4, pp. 142–163). University of Minnesota Press. 10.5749/j.ctttsvns.6
  180. Heron CL, Kolling N, Plant O, Kienast A, Janska R, Ang Y-S, Fallon S, Husain M, & Apps MAJ (2020). Dopamine modulates dynamic decision-making during foraging. Journal of Neuroscience, 40(27), 5273–5282. 10.1523/JNEUROSCI.2586-19.2020
  181. Hill CA, Suzuki S, Polania R, Moisa M, O’Doherty JP, & Ruff CC (2017). A causal account of the brain network computations underlying strategic social behavior. Nature Neuroscience, 20(8), 1142–1149.
  182. Holper L, ten Brincke RHW, Wolf M, & Murphy RO (2014). fNIRS derived hemodynamic signals and electrodermal responses in a sequential risk-taking task. Brain Research, 1557, 141–154. 10.1016/j.brainres.2014.02.013
  183. Holper L, Wolf M, & Tobler PN (2014). Comparison of functional near-infrared spectroscopy and electrodermal activity in assessing objective versus subjective risk during risky financial decisions. NeuroImage, 84, 833–842. 10.1016/j.neuroimage.2013.09.047
  184. Hsu M, Bhatt M, Adolphs R, Tranel D, & Camerer CF (2005). Neural systems responding to degrees of uncertainty in human decision-making. Science, 310(5754), 1680–1683. 10.1126/science.1115327
  185. Huettel SA (2010). Ten challenges for decision neuroscience. Frontiers in Neuroscience, 4, 171. 10.3389/fnins.2010.00171
  186. Huettel SA, Stowe CJ, Gordon EM, Warner BT, & Platt ML (2006). Neural signatures of economic preferences for risk and ambiguity. Neuron, 49(5), 765–775. 10.1016/j.neuron.2006.01.024
  187. Hughes V (2010). Science in court: Head case. Nature, 464(7287), 340–342. 10.1038/464340a
  188. Hunt LT, Dolan RJ, & Behrens TE (2014). Hierarchical competitions subserving multi-attribute choice. Nature Neuroscience, 17(11), 1613–1622. 10.1038/nn.3836
  189. Hunt LT, & Hayden BY (2017). A distributed, hierarchical and recurrent framework for reward-based choice. Nature Reviews Neuroscience, 18(3), 172–182. 10.1038/nrn.2017.7
  190. Hutcherson CA, Bushong B, & Rangel A (2015). A neurocomputational model of altruistic choice and its implications. Neuron, 87(2), 451–462. 10.1016/j.neuron.2015.06.031
  191. Huys QJM, Cruickshank A, & Seriès P (2014). Reward-based learning, model-based and model-free. In Jaeger D & Jung R (Eds.), Encyclopedia of Computational Neuroscience. Springer. 10.1007/978-1-4614-7320-6_674-1
  192. Huys QJM, Maia TV, & Frank MJ (2016). Computational psychiatry as a bridge from neuroscience to clinical applications. Nature Neuroscience, 19(3), 404–413. 10.1038/nn.4238
  193. Izuma K, Saito DN, & Sadato N (2008). Processing of social and monetary rewards in the human striatum. Neuron, 58(2), 284–294. 10.1016/j.neuron.2008.03.020
  194. Jarcho JM, Berkman ET, & Lieberman MD (2011). The neural basis of rationalization: Cognitive dissonance reduction during decision-making. Social Cognitive and Affective Neuroscience, 6(4), 460–467. 10.1093/scan/nsq054
  195. Jia R, Furlong E, Gao S, Santos LR, & Levy I (2020). Learning about the Ellsberg paradox reduces, but does not abolish, ambiguity aversion. PLoS One, 15(3), e0228782. 10.1371/journal.pone.0228782
  196. Jones RM, Somerville LH, Li J, Ruberry EJ, Libby V, Glover G, Voss HU, Ballon DJ, & Casey BJ (2011). Behavioral and neural properties of social reinforcement learning. Journal of Neuroscience, 31(37), 13039–13045. 10.1523/JNEUROSCI.2972-11.2011
  197. Josef AK, Richter D, Samanez-Larkin GR, Wagner GG, Hertwig R, & Mata R (2016). Stability and change in risk-taking propensity across the adult life span. Journal of Personality and Social Psychology, 111(3), 430–450. 10.1037/pspp0000090
  198. Jung WH, Lee S, Lerman C, & Kable JW (2018). Amygdala functional and structural connectivity predicts individual risk tolerance. Neuron, 98(2), 394–404.e4. 10.1016/j.neuron.2018.03.019
  199. Kable JW, & Glimcher PW (2010). An “as soon as possible” effect in human intertemporal decision making: Behavioral evidence and neural mechanisms. Journal of Neurophysiology, 103(5), 2513–2531. 10.1152/jn.00177.2009
  200. Kahneman D, & Tversky A (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291. 10.2307/1914185
  201. Kahnt T, & Tobler PN (2016). Dopamine regulates stimulus generalization in the human hippocampus. eLife, 5, e12678. 10.7554/eLife.12678
  202. Kappes A, Harvey AH, Lohrenz T, Montague PR, & Sharot T (2020). Confirmation bias in the utilization of others’ opinion strength. Nature Neuroscience, 23(1), 130–137. 10.1038/s41593-019-0549-2
  203. Kaptein MC, Van Emden R, & Iannuzzi D (2016). Tracking the decoy: Maximizing the decoy effect through sequential experimentation. Palgrave Communications, 2(1), 1–9. 10.1057/palcomms.2016.82
  204. Karmarkar UR, & Yoon C (2016). Consumer neuroscience: Advances in understanding consumer psychology. Current Opinion in Psychology, 10, 160–165. 10.1016/j.copsyc.2016.01.010
  205. Karrer TM, Josef AK, Mata R, Morris ED, & Samanez-Larkin GR (2017). Reduced dopamine receptors and transporters but not synthesis capacity in normal aging adults: A meta-analysis. Neurobiology of Aging, 57, 36–46. 10.1016/j.neurobiolaging.2017.05.006
  206. Kaskan PM, Costa VD, Eaton HP, Zemskova JA, Mitz AR, Leopold DA, Ungerleider LG, & Murray EA (2016). Learned value shapes responses to objects in frontal and ventral stream networks in macaque monkeys. Cerebral Cortex, 27(5), 2739–2757. 10.1093/cercor/bhw113
  207. Kätsyri J, Hari R, Ravaja N, & Nummenmaa L (2013). The opponent matters: Elevated fMRI reward responses to winning against a human versus a computer opponent during interactive video game playing. Cerebral Cortex, 23(12), 2829–2839. 10.1093/cercor/bhs259
  208. Kekic M, McClelland J, Campbell I, Nestler S, Rubia K, David AS, & Schmidt U (2014). The effects of prefrontal cortex transcranial direct current stimulation (tDCS) on food craving and temporal discounting in women with frequent food cravings. Appetite, 78, 55–62. 10.1016/j.appet.2014.03.010
  209. Kempadoo KA, Mosharov EV, Choi SJ, Sulzer D, & Kandel ER (2016). Dopamine release from the locus coeruleus to the dorsal hippocampus promotes spatial learning and memory. Proceedings of the National Academy of Sciences of the United States of America, 113(51), 14835–14840. 10.1073/pnas.1616515114
  210. Kerr-Gaffney J, Harrison A, & Tchanturia K (2019). Cognitive and affective empathy in eating disorders: A systematic review and meta-analysis. Frontiers in Psychiatry, 10, 102. 10.3389/fpsyt.2019.00102
  211. Keysers C, Gazzola V, & Wagenmakers E-J (2020). Using Bayes factor hypothesis testing in neuroscience to establish evidence of absence. Nature Neuroscience, 23(7), 788–799. 10.1038/s41593-020-0660-4
  212. Khaw MW, Glimcher PW, & Louie K (2017). Normalized value coding explains dynamic adaptation in the human valuation process. Proceedings of the National Academy of Sciences of the United States of America, 114(48), 12696–12701. 10.1073/pnas.1715293114
  213. Khaw MW, Li Z, & Woodford M (2021). Cognitive imprecision and small-stakes risk aversion. The Review of Economic Studies, 88(4), 1979–2013. 10.1093/restud/rdaa044
  214. Kircanski K, Notthoff N, DeLiema M, Samanez-Larkin GR, Shadel D, Mottola G, Carstensen LL, & Gotlib IH (2018). Emotional arousal may increase susceptibility to fraud in older and younger adults. Psychology and Aging, 33(2), 325–337. 10.1037/pag0000228
  215. Kirschner M, Hager OM, Bischof M, Hartmann MN, Kluge A, Seifritz E, Tobler PN, & Kaiser S (2016). Ventral striatal hypoactivation is associated with apathy but not diminished expression in patients with schizophrenia. Journal of Psychiatry & Neuroscience, 41(3), 152–161. 10.1503/jpn.140383
  216. Klibanoff P, Marinacci M, & Mukerji S (2009). Recursive smooth ambiguity preferences. Journal of Economic Theory, 144(3), 930–976. 10.1016/j.jet.2008.10.007
  217. Knutson B, & Genevsky A (2018). Neuroforecasting aggregate choice. Current Directions in Psychological Science, 27(2), 110–115. 10.1177/0963721417737877
  218. Knutson B, & Peterson R (2005). Neurally reconstructing expected utility. Games and Economic Behavior, 52(2), 305–315. 10.1016/j.geb.2005.01.002
  219. Kobayashi K, & Hsu M (2019). Common neural code for reward and information value. Proceedings of the National Academy of Sciences of the United States of America, 116(26), 13061–13066. 10.1073/pnas.1820145116
  220. Koizumi A, Amano K, Cortese A, Shibata K, Yoshida W, Seymour B, Kawato M, & Lau H (2016). Fear reduction without fear through reinforcement of neural activity that bypasses conscious exposure. Nature Human Behaviour, 1(1), 1–7. 10.1038/s41562-016-0006
  221. Kool W, Cushman FA, & Gershman SJ (2016). When does model-based control pay off? PLoS Computational Biology, 12(8), e1005090. 10.1371/journal.pcbi.1005090
  222. Kragel P, Han X, Kraynak T, Gianaros PJ, & Wager T (2020). fMRI can be highly reliable, but it depends on what you measure. Psychological Science, 32(4), 622–626.
  223. Kragel PA, Koban L, Barrett LF, & Wager TD (2018). Representation, pattern information, and brain signatures: From neurons to neuroimaging. Neuron, 99(2), 257–273. 10.1016/j.neuron.2018.06.009
  224. Krajbich I, Camerer C, Ledyard J, & Rangel A (2009). Using neural measures of economic value to solve the public goods free-rider problem. Science, 326(5952), 596–599. 10.1126/science.1177302
  225. Krajbich I, Lu D, Camerer C, & Rangel A (2012). The attentional drift-diffusion model extends to simple purchasing decisions. Frontiers in Psychology, 3, 193. 10.3389/fpsyg.2012.00193
  226. Krajbich I, & Rangel A (2011). Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proceedings of the National Academy of Sciences of the United States of America, 108(33), 13852–13857. 10.1073/pnas.1101328108
  227. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, & Poeppel D (2017). Neuroscience needs behavior: Correcting a reductionist bias. Neuron, 93(3), 480–490. 10.1016/j.neuron.2016.12.041
  228. Krampe C, Gier NR, & Kenning P (2018). The application of mobile fNIRS in marketing research—Detecting the “first-choice-brand” effect. Frontiers in Human Neuroscience, 12, 433.
  229. Krastev S, McGuire JT, McNeney D, Kable JW, Stolle D, Gidengil E, & Fellows LK (2016). Do political and economic choices rely on common neural substrates? A systematic review of the emerging neuropolitics literature. Frontiers in Psychology, 7, 264. 10.3389/fpsyg.2016.00264
  230. Kühn S, Strelow E, & Gallinat J (2016). Multiple “buy buttons” in the brain: Forecasting chocolate sales at point-of-sale based on functional brain activation using fMRI. NeuroImage, 136, 122–128. 10.1016/j.neuroimage.2016.05.021
  231. Kurdi B, Gershman SJ, & Banaji MR (2019). Model-free and model-based learning processes in the updating of explicit and implicit evaluations. Proceedings of the National Academy of Sciences of the United States of America, 116(13), 6035–6044. 10.1073/pnas.1820238116
  232. Kurtz-David V, Persitz D, Webb R, & Levy DJ (2019). The neural computation of inconsistent choice behavior. Nature Communications, 10(1), 1583. 10.1038/s41467-019-09343-2
  233. Landry P, & Webb R (2021). Pairwise normalization: A neuroeconomic theory of multi-attribute choice. Journal of Economic Theory, 193, 105221. 10.1016/j.jet.2021.105221
  234. Laureiro-Martínez D, Brusoni S, Canessa N, & Zollo M (2015). Understanding the exploration-exploitation dilemma: An fMRI study of attention control and decision-making performance. Strategic Management Journal, 36(3), 319–338. 10.1002/smj.2221
  235. Lebreton M, Bavard S, Daunizeau J, & Palminteri S (2019). Assessing inter-individual differences with task-related functional neuroimaging. Nature Human Behaviour, 3(9), 897–905. 10.1038/s41562-019-0681-8
  236. Lee SW, Shimojo S, & O’Doherty JP (2014). Neural computations underlying arbitration between model-based and model-free learning. Neuron, 81(3), 687–699. 10.1016/j.neuron.2013.11.028
  237. Lefebvre G, Lebreton M, Meyniel F, Bourgeois-Gironde S, & Palminteri S (2017). Behavioural and neural characterization of optimistic reinforcement learning. Nature Human Behaviour, 1(4), 1–9. 10.1038/s41562-017-0067
  238. Lempert KM, Chen YL, & Fleming SM (2015). Relating pupil dilation and metacognitive confidence during auditory decision-making. PLoS One, 10(5), e0126588. 10.1371/journal.pone.0126588
  239. Lempert KM, Lackovic SF, Tobe RH, Glimcher PW, & Phelps EA (2017). Propranolol reduces reference-dependence in intertemporal choice. Social Cognitive and Affective Neuroscience, 12(9), 1394–1401. 10.1093/scan/nsx081
  240. Lempert KM, MacNear KA, Wolk DA, & Kable JW (2020). Links between autobiographical memory richness and temporal discounting in older adults. Scientific Reports, 10(1), 6431. 10.1038/s41598-020-63373-1
  241. Lenow JK, Constantino SM, Daw ND, & Phelps EA (2017). Chronic and acute stress promote overexploitation in serial decision making. The Journal of Neuroscience, 37(23), 5681–5689. 10.1523/JNEUROSCI.3618-16.2017
  242. León-Carrión J, & León-Domínguez U (2012). Functional near-infrared spectroscopy (fNIRS): Principles and neuroscientific applications. In Bright P (Ed.), Neuroimaging methods (pp. 48–74). IntechOpen.
  243. Leotti LA, & Delgado MR (2014). The value of exercising control over monetary gains and losses. Psychological Science, 25(2), 596–604. 10.1177/0956797613514589
  244. Lerche V, & Voss A (2017). Retest reliability of the parameters of the Ratcliff diffusion model. Psychological Research, 81(3), 629–652. 10.1007/s00426-016-0770-5
  245. Levallois C, Clithero JA, Wouters P, Smidts A, & Huettel SA (2012). Translating upwards: Linking the neural and social sciences via neuroeconomics. Nature Reviews Neuroscience, 13(11), 789–797. 10.1038/nrn3354
  246. Levy DJ, & Glimcher PW (2012). The root of all value: A neural common currency for choice. Current Opinion in Neurobiology, 22(6), 1027–1038. 10.1016/j.conb.2012.06.001
  247. Levy I (2017). Neuroanatomical substrates for risk behavior. The Neuroscientist, 23(3), 275–286. 10.1177/1073858416672414
  248. Levy I (2018). Information utility in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 115(31), 7846–7848. 10.1073/pnas.1809865115
  249. Levy I, Lazzaro SC, Rutledge RB, & Glimcher PW (2011). Choice from non-choice: Predicting consumer preferences from blood oxygenation level-dependent signals obtained during passive viewing. Journal of Neuroscience, 31(1), 118–125.
  250. Levy I, Snell J, Nelson AJ, Rustichini A, & Glimcher PW (2010). Neural representation of subjective value under risk and ambiguity. Journal of Neurophysiology, 103(2), 1036–1047. 10.1152/jn.00853.2009
  251. Levy N (2007). Neuroethics: Challenges for the 21st century. Cambridge University Press.
  252. Li R, Smith DV, Clithero JA, Venkatraman V, Carter RM, & Huettel SA (2017). Reason’s enemy is not emotion: Engagement of cognitive control networks explains biases in gain/loss framing. The Journal of Neuroscience, 37(13), 3588–3598. 10.1523/JNEUROSCI.3486-16.2017
  253. Lin A, Adolphs R, & Rangel A (2012). Social and monetary reward learning engage overlapping neural substrates. Social Cognitive and Affective Neuroscience, 7(3), 274–281. 10.1093/scan/nsr006
  254. Lockwood PL, Apps MAJ, & Chang SWC (2020). Is there a ‘social’ brain? Implementations and algorithms. Trends in Cognitive Sciences, 24(10), 802–813. 10.1016/j.tics.2020.06.011
  255. Lockwood PL, Apps MAJ, Roiser JP, & Viding E (2015). Encoding of vicarious reward prediction in anterior cingulate cortex and relationship with trait empathy. The Journal of Neuroscience, 35(40), 13720–13727. 10.1523/JNEUROSCI.1703-15.2015
  256. Lockwood PL, Apps MAJ, Valton V, Viding E, & Roiser JP (2016). Neurocomputational mechanisms of prosocial learning and links to empathy. Proceedings of the National Academy of Sciences of the United States of America, 113(35), 9763–9768. 10.1073/pnas.1603198113
  257. Lockwood PL, Hamonet M, Zhang SH, Ratnavel A, Salmony FU, Husain M, & Apps MAJ (2017). Prosocial apathy for helping others when effort is required. Nature Human Behaviour, 1(7), 0131. 10.1038/s41562-017-0131
  258. Lopez-Guzman S, Konova AB, Louie K, & Glimcher PW (2018). Risk preferences impose a hidden distortion on measures of choice impulsivity. PLoS One, 13(1), e0191357. 10.1371/journal.pone.0191357
  259. Lopez-Persem A, Rigoux L, Bourgeois-Gironde S, Daunizeau J, & Pessiglione M (2017). Choose, rate or squeeze: Comparison of economic value functions elicited by different behavioral tasks. PLoS Computational Biology, 13(11), e1005848. 10.1371/journal.pcbi.1005848
  260. Louie K, & Glimcher PW (2010). Separating value from choice: Delay discounting activity in the lateral intraparietal area. The Journal of Neuroscience, 30(16), 5498–5507. 10.1523/JNEUROSCI.5742-09.2010
  261. Louie K, Glimcher PW, & Webb R (2015). Adaptive neural coding: From biological to behavioral decision-making. Current Opinion in Behavioral Sciences, 5, 91–99. 10.1016/j.cobeha.2015.08.008
  262. Louie K, Grattan LE, & Glimcher PW (2011). Reward value-based gain control: Divisive normalization in parietal cortex. The Journal of Neuroscience, 31(29), 10627–10639. 10.1523/JNEUROSCI.1237-11.2011
  263. Louie K, Khaw MW, & Glimcher PW (2013). Normalization is a general neural mechanism for context-dependent decision making. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 6139–6144. 10.1073/pnas.1217854110
  264. Luijten M, Schellekens AF, Kühn S, Machielse MWJ, & Sescousse G (2017). Disruption of reward processing in addiction: An image-based meta-analysis of functional magnetic resonance imaging studies. JAMA Psychiatry, 74(4), 387–398. 10.1001/jamapsychiatry.2016.3084
  265. Luzardo A, Rivest F, Alonso E, & Ludvig EA (2017). A drift–diffusion model of interval timing in the peak procedure. Journal of Mathematical Psychology, 77, 111–123. 10.1016/j.jmp.2016.10.002
  266. Ly V, Wang KS, Bhanji J, & Delgado MR (2019). A reward-based framework of perceived control. Frontiers in Neuroscience, 13. 10.3389/fnins.2019.00065
  267. Ma WJ, & Jazayeri M (2014). Neural coding of uncertainty and probability. Annual Review of Neuroscience, 37, 205–220. 10.1146/annurev-neuro-071013-014017
  268. MacDuffie KE, MacInnes J, Dickerson KC, Eddington KM, Strauman TJ, & Adcock RA (2018). Single session real-time fMRI neurofeedback has a lasting impact on cognitive behavioral therapy strategies. NeuroImage: Clinical, 19, 868–875. 10.1016/j.nicl.2018.06.009
  269. MacInnes JJ, Adcock RA, Stocco A, Prat CS, Rao RPN, & Dickerson KC (2020). Pyneal: Open source real-time fMRI software. Frontiers in Neuroscience, 14, 900. 10.3389/fnins.2020.00900
  270. Mackey S, Allgaier N, Chaarani B, Spechler P, Orr C, Bunn J, Allen NB, Alia-Klein N, Batalla A, & Blaine S (2019). Mega-analysis of gray matter volume in substance dependence: General and substance-specific regional effects. American Journal of Psychiatry, 176(2), 119–128.
  271. MacNiven KH, Jensen ELS, Borg N, Padula CB, Humphreys K, & Knutson B (2018). Association of neural responses to drug cues with subsequent relapse to stimulant use. JAMA Network Open, 1(8), e186466. 10.1001/jamanetworkopen.2018.6466
  272. Magen E, Dweck CS, & Gross JJ (2008). The hidden zero effect: Representing a single choice as an extended sequence reduces impulsive choice. Psychological Science, 19(7), 648–649. 10.1111/j.1467-9280.2008.02137.x
  273. Magen E, Kim B, Dweck CS, Gross JJ, & McClure SM (2014). Behavioral and neural correlates of increased self-control in the absence of increased willpower. Proceedings of the National Academy of Sciences of the United States of America, 111(27), 9786–9791. 10.1073/pnas.1408991111
  274. Maier SU, & Hare TA (2020). BOLD activity during emotion reappraisal positively correlates with dietary self-control success. Social Cognitive and Affective Neuroscience, 1–22. 10.1093/scan/nsaa097
  275. Maier SU, Raja Beharelle A, Polanía R, Ruff CC, & Hare TA (2020). Dissociable mechanisms govern when and how strongly reward attributes affect decisions. Nature Human Behaviour, 4(9), 949–963. 10.1038/s41562-020-0893-y
  276. Makwana A, Grön G, Fehr E, & Hare TA (2015). A neural mechanism of strategic social choice under sanction-induced norm compliance. eNeuro, 2(3), ENEURO.0066-14.2015. 10.1523/ENEURO.0066-14.2015
  277. Mansouri FA, Koechlin E, Rosa MGP, & Buckley MJ (2017). Managing competing goals—A key role for the frontopolar cortex. Nature Reviews Neuroscience, 18(11), 645–657. 10.1038/nrn.2017.111
  278. Markowitz HM (1991). Foundations of portfolio theory. The Journal of Finance, 46(2), 469–477. 10.2307/2328831
  279. Marr D, & Poggio T (1976). From understanding computation to understanding neural circuitry. MIT Libraries. Retrieved from https://dspace.mit.edu/handle/1721.1/5782
  280. Mars RB, Neubert F-X, Noonan MP, Sallet J, Toni I, & Rushworth MFS (2012). On the relationship between the “default mode network” and the “social brain”. Frontiers in Human Neuroscience, 6, 189. 10.3389/fnhum.2012.00189 [DOI] [PMC free article] [PubMed] [Google Scholar]
  281. Martin AK, Kessler K, Cooke S, Huang J, & Meinzer M (2020). The right Temporoparietal junction is causally associated with embodied perspective-taking. Journal of Neuroscience, 40(15), 3089–3095. 10.1523/JNEUROSCI.2637-19.2020 [DOI] [PMC free article] [PubMed] [Google Scholar]
  282. Masten CL, Morelli SA, & Eisenberger NI (2011). An fMRI investigation of empathy for ‘social pain’ and subsequent prosocial behavior. NeuroImage, 55(1), 381–388. 10.1016/j.neuroimage.2010.11.060 [DOI] [PubMed] [Google Scholar]
  283. McGuire JT, & Kable JW (2015). Medial prefrontal cortical activity reflects dynamic re-evaluation during voluntary persistence. Nature Neuroscience, 18(5), 760–766. 10.1038/nn.3994 [DOI] [PMC free article] [PubMed] [Google Scholar]
  284. McNamee D, Rangel A, & O’Doherty JP (2013). Category-dependent and category-independent goal-value codes in human ventromedial prefrontal cortex. Nature Neuroscience, 16(4), 479–485. 10.1038/nn.3337 [DOI] [PMC free article] [PubMed] [Google Scholar]
  285. Melnikoff DE, & Bargh JA (2018). The mythical number two. Trends in Cognitive Sciences, 22(4), 280–293. 10.1016/j.tics.2018.02.001 [DOI] [PubMed] [Google Scholar]
  286. Mendes N, Steinbeis N, Bueno-Guerra N, Call J, & Singer T (2018). Preschool children and chimpanzees incur costs to watch punishment of antisocial others. Nature Human Behaviour, 2(1), 45–51. 10.1038/s41562-017-0264-5 [DOI] [PubMed] [Google Scholar]
287. Meyer HC, & Bucci DJ (2016). Imbalanced activity in the orbitofrontal cortex and nucleus accumbens impairs behavioral inhibition. Current Biology, 26(20), 2834–2839. 10.1016/j.cub.2016.08.034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  288. Miendlarzewska EA, Kometer M, & Preuschoff K (2019). Neurofinance. Organizational Research Methods, 22(1), 196–222. 10.1177/1094428117730891 [DOI] [Google Scholar]
  289. Miller KJ, Botvinick MM, & Brody CD (2017). Dorsal hippocampus contributes to model-based planning. Nature Neuroscience, 20(9), 1269–1276. 10.1038/nn.4613 [DOI] [PMC free article] [PubMed] [Google Scholar]
290. Milosavljevic M, Malmaud J, Huth A, Koch C, & Rangel A (2010). The drift diffusion model can account for value-based choice response times under high and low time pressure. Judgment and Decision Making, 5(6), 437–449. [Google Scholar]
291. Mobbs D, Trimmer PC, Blumstein DT, & Dayan P (2018). Foraging for foundations in decision neuroscience: Insights from ethology. Nature Reviews Neuroscience, 19(7), 419–427. 10.1038/s41583-018-0010-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  292. Mohr PN, Li S-C, & Heekeren HR (2010). Neuroeconomics and aging: Neuromodulation of economic decision making in old age. Neuroscience & Biobehavioral Reviews, 34(5), 678–688. [DOI] [PubMed] [Google Scholar]
  293. Momennejad I, Russek EM, Cheong JH, Botvinick MM, Daw ND, & Gershman SJ (2017). The successor representation in human reinforcement learning. Nature Human Behaviour, 1(9), 680–692. 10.1038/s41562-017-0180-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  294. Montague PR, Dolan RJ, Friston KJ, & Dayan P (2012). Computational psychiatry. Trends in Cognitive Sciences, 16(1), 72–80. 10.1016/j.tics.2011.11.018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  295. Morawetz C, Steyrl D, Berboth S, Heekeren HR, & Bode S (2020). Emotion regulation modulates dietary decision-making via activity in the prefrontal-striatal valuation system. Cerebral Cortex, 30(11), 5731–5749. 10.1093/cercor/bhaa147 [DOI] [PMC free article] [PubMed] [Google Scholar]
  296. Morelli SA, Sacchet MD, & Zaki J (2015). Common and distinct neural correlates of personal and vicarious reward: A quantitative meta-analysis. NeuroImage, 112, 244–253. 10.1016/j.neuroimage.2014.12.056 [DOI] [PMC free article] [PubMed] [Google Scholar]
  297. Morgado P, Marques F, Ribeiro B, Leite-Almeida H, Pêgo JM, Rodrigues AJ, Dalla C, Kokras N, Sousa N, & Cerqueira JJ (2015). Stress induced risk-aversion is reverted by D2/D3 agonist in the rat. European Neuropsychopharmacology, 25(10), 1744–1752. 10.1016/j.euroneuro.2015.07.003 [DOI] [PubMed] [Google Scholar]
298. Mormann MM, Malmaud J, Huth A, Koch C, & Rangel A (2010). The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure (SSRN Scholarly Paper ID 1901533). Judgment and Decision Making, 5(6), 437–449. 10.2139/ssrn.1901533 [DOI] [Google Scholar]
  299. Navarro DJ, Newell BR, & Schulze C (2016). Learning and choosing in an uncertain world: An investigation of the explore–exploit dilemma in static and dynamic environments. Cognitive Psychology, 85, 43–77. 10.1016/j.cogpsych.2016.01.001 [DOI] [PubMed] [Google Scholar]
  300. Ng TH, Alloy LB, & Smith DV (2019). Meta-analysis of reward processing in major depressive disorder reveals distinct abnormalities within the reward circuit. Translational Psychiatry, 9(1), 293. 10.1038/s41398-019-0644-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  301. Nichols T, Brett M, Andersson J, Wager T, & Poline J-B (2005). Valid conjunction inference with the minimum statistic. NeuroImage, 25(3), 653–660. 10.1016/j.neuroimage.2004.12.005 [DOI] [PubMed] [Google Scholar]
302. Nicolle A, Klein-Flügge MC, Hunt LT, Vlaev I, Dolan RJ, & Behrens TEJ (2012). An agent-independent axis for executed and modeled choice in medial prefrontal cortex. Neuron, 75(6), 1114–1121. 10.1016/j.neuron.2012.07.023 [DOI] [PMC free article] [PubMed] [Google Scholar]
  303. Niv Y, Daniel R, Geana A, Gershman SJ, Leong YC, Radulescu A, & Wilson RC (2015). Reinforcement learning in multi-dimensional environments relies on attention mechanisms. Journal of Neuroscience, 35(21), 8145–8157. 10.1523/JNEUROSCI.2978-14.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  304. Niv Y, Daw ND, Joel D, & Dayan P (2007). Tonic dopamine: Opportunity costs and the control of response vigor. Psychopharmacology, 191(3), 507–520. 10.1007/s00213-006-0502-4 [DOI] [PubMed] [Google Scholar]
  305. Niv Y, Edlund JA, Dayan P, & O’Doherty JP (2012). Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain. Journal of Neuroscience, 32(2), 551–562. 10.1523/JNEUROSCI.5498-10.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  306. Niv Y, & Langdon A (2016). Reinforcement learning with Marr. Current Opinion in Behavioral Sciences, 11, 67–73. 10.1016/j.cobeha.2016.04.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
307. Norman GJ, Cacioppo JT, & Berntson GG (2010). Social neuroscience. WIREs Cognitive Science, 1(1), 60–68. 10.1002/wcs.29 [DOI] [PubMed] [Google Scholar]
  308. Nosek BA, Ebersole CR, DeHaven AC, & Mellor DT (2018). The preregistration revolution. Proceedings of the National Academy of Sciences of the United States of America, 115(11), 2600–2606. [DOI] [PMC free article] [PubMed] [Google Scholar]
  309. Nusslock R, & Alloy LB (2017). Reward processing and mood-related symptoms: An RDoC and translational neuroscience perspective. Journal of Affective Disorders, 216, 3–16. 10.1016/j.jad.2017.02.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  310. O’Brien L, Albert D, Chein J, & Steinberg L (2011). Adolescents prefer more immediate rewards when in the presence of their peers. Journal of Research on Adolescence, 21(4), 747–753. 10.1111/j.1532-7795.2011.00738.x [DOI] [Google Scholar]
  311. O’Connor C, Rees G, & Joffe H (2012). Neuroscience in the public sphere. Neuron, 74(2), 220–226. 10.1016/j.neuron.2012.04.004 [DOI] [PubMed] [Google Scholar]
  312. O’Doherty JP (2014). The problem with value. Neuroscience & Biobehavioral Reviews, 43, 259–268. 10.1016/j.neubiorev.2014.03.027 [DOI] [PMC free article] [PubMed] [Google Scholar]
  313. O’Reilly RC, & McClelland JL (1994). Hippocampal conjunctive encoding, storage, and recall: Avoiding a trade-off. Hippocampus, 4(6), 661–682. 10.1002/hipo.450040605 [DOI] [PubMed] [Google Scholar]
  314. Ota K, Tanae M, Ishii K, & Takiyama K (2020). Optimizing motor decision-making through competition with opponents. Scientific Reports, 10(1), 950. 10.1038/s41598-019-56659-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  315. Otto AR, Raio CM, Chiang A, Phelps EA, & Daw ND (2013). Working-memory capacity protects model-based learning from stress. Proceedings of the National Academy of Sciences of the United States of America, 110(52), 20941–20946. 10.1073/pnas.1312011110 [DOI] [PMC free article] [PubMed] [Google Scholar]
316. Otto AR, & Vassena E (2020). It's all relative: Reward-induced cognitive control modulation depends on context. Journal of Experimental Psychology: General, 150, 306–313. 10.1037/xge0000842 [DOI] [PubMed] [Google Scholar]
  317. Padoa-Schioppa C (2011). Neurobiology of economic choice: A good-based model. Annual Review of Neuroscience, 34(1), 333–359. 10.1146/annurev-neuro-061010-113648 [DOI] [PMC free article] [PubMed] [Google Scholar]
  318. Padoa-Schioppa C, & Conen KE (2017). Orbitofrontal cortex: A neural circuit for economic decisions. Neuron, 96(4), 736–754. 10.1016/j.neuron.2017.09.031 [DOI] [PMC free article] [PubMed] [Google Scholar]
  319. Palminteri S, Khamassi M, Joffily M, & Coricelli G (2015). Contextual modulation of value signals in reward and punishment learning. Nature Communications, 6(1), 8096. 10.1038/ncomms9096 [DOI] [PMC free article] [PubMed] [Google Scholar]
  320. Papageorgiou GK, Sallet J, Wittmann MK, Chau BKH, Schüffelgen U, Buckley MJ, & Rushworth MFS (2017). Inverted activity patterns in ventromedial prefrontal cortex during value-guided decision-making in a less-is-more task. Nature Communications, 8(1), 1886. 10.1038/s41467-017-01833-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  321. Park B, & Young L (2020). An association between biased impression updating and relationship facilitation: A behavioral and fMRI investigation. Journal of Experimental Social Psychology, 87, 103916. 10.1016/j.jesp.2019.103916 [DOI] [PMC free article] [PubMed] [Google Scholar]
  322. Park SA, Sestito M, Boorman ED, & Dreher J-C (2019). Neural computations underlying strategic social decision-making in groups. Nature Communications, 10, 5287. 10.1038/s41467-019-12937-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  323. Park SQ, Kahnt T, Dogan A, Strang S, Fehr E, & Tobler PN (2017). A neural link between generosity and happiness. Nature Communications, 8, 15964. 10.1038/ncomms15964 [DOI] [PMC free article] [PubMed] [Google Scholar]
  324. Parvin DE, McDougle SD, Taylor JA, & Ivry RB (2018). Credit assignment in a motor decision making task is influenced by agency and not sensory prediction errors. Journal of Neuroscience, 38(19), 4521–4530. 10.1523/JNEUROSCI.3601-17.2018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  325. Paul EJ, Smith JD, Valentin VV, Turner BO, Barbey AK, & Ashby FG (2015). Neural networks underlying the metacognitive uncertainty response. Cortex, 71, 306–322. 10.1016/j.cortex.2015.07.028 [DOI] [PMC free article] [PubMed] [Google Scholar]
  326. Pedersen ML, Frank MJ, & Biele G (2017). The drift diffusion model as the choice rule in reinforcement learning. Psychonomic Bulletin & Review, 24(4), 1234–1251. 10.3758/s13423-016-1199-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  327. Peifer C, Schulz A, Schächinger H, Baumann N, & Antoni CH (2014). The relation of flow-experience and physiological arousal under stress—Can u shape it? Journal of Experimental Social Psychology, 53, 62–69. 10.1016/j.jesp.2014.01.009 [DOI] [Google Scholar]
328. Perez SM, & Lodge DJ (2018). Convergent inputs from the hippocampus and thalamus to the nucleus accumbens regulate dopamine neuron activity. The Journal of Neuroscience, 38(50), 10607–10618. 10.1523/JNEUROSCI.2629-16.2018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  329. Peters J, & D’Esposito M (2020). The drift diffusion model as the choice rule in inter-temporal and risky choice: A case study in medial orbitofrontal cortex lesion patients and controls. PLoS Computational Biology, 16(4), e1007615. 10.1371/journal.pcbi.1007615 [DOI] [PMC free article] [PubMed] [Google Scholar]
  330. Pfabigan DM, Seidel E-M, Sladky R, Hahn A, Paul K, Grahl A, Küblböck M, Kraus C, Hummer A, & Kranz GS (2014). P300 amplitude variation is related to ventral striatum BOLD response during gain and loss anticipation: An EEG and fMRI experiment. NeuroImage, 96, 12–21. [DOI] [PMC free article] [PubMed] [Google Scholar]
  331. Pisauro MA, Fouragnan E, Retzler C, & Philiastides MG (2017). Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI. Nature Communications, 8(1), 15808. 10.1038/ncomms15808 [DOI] [PMC free article] [PubMed] [Google Scholar]
  332. Plassmann H, & Weber B (2015). Individual differences in marketing placebo effects: Evidence from brain imaging and behavioral experiments. Journal of Marketing Research, 52(4), 493–510. 10.1509/jmr.13.0613 [DOI] [Google Scholar]
  333. Polanía R, Moisa M, Opitz A, Grueschow M, & Ruff CC (2015). The precision of value-based choices depends causally on frontoparietal phase coupling. Nature Communications, 6(1), 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  334. Polania R, Nitsche MA, & Ruff CC (2018). Studying and modifying brain function with non-invasive brain stimulation. Nature Neuroscience, 21(2), 174–187. [DOI] [PubMed] [Google Scholar]
335. Poldrack RA, Baker CI, Durnez J, Gorgolewski KJ, Matthews PM, Munafo MR, Nichols TE, Poline JB, Vul E, & Yarkoni T (2017). Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18(2), 115–126. 10.1038/nrn.2016.167 [DOI] [PMC free article] [PubMed] [Google Scholar]
  336. Poldrack RA, Barch DM, Mitchell J, Wager T, Wagner AD, Devlin JT, Cumba C, Koyejo O, & Milham M (2013). Toward open sharing of task-based fMRI data: The OpenfMRI project. Frontiers in Neuroinformatics, 7, 12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  337. Poldrack RA, & Gorgolewski KJ (2017). OpenfMRI: Open sharing of task fMRI data. NeuroImage, 144, 259–261. [DOI] [PMC free article] [PubMed] [Google Scholar]
  338. Poldrack RA, Mumford JA, Schonberg T, Kalar D, Barman B, & Yarkoni T (2012). Discovering relations between mind, brain, and mental disorders using topic mapping. PLoS Computational Biology, 8(10), e1002707. 10.1371/journal.pcbi.1002707 [DOI] [PMC free article] [PubMed] [Google Scholar]
  339. Popper K (1979). The two fundamental problems of the theory of knowledge. Routledge. 10.4324/9780203371107 [DOI] [Google Scholar]
  340. Porcelli AJ, & Delgado MR (2017). Stress and decision making: Effects on valuation, learning, and risk-taking. Current Opinion in Behavioral Sciences, 14, 33–39. 10.1016/j.cobeha.2016.11.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  341. Porcelli AJ, Lewis AH, & Delgado MR (2012). Acute stress influences neural circuits of reward processing. Frontiers in Neuroscience, 6, 157. 10.3389/fnins.2012.00157 [DOI] [PMC free article] [PubMed] [Google Scholar]
342. Powers KE, Yaffe G, Hartley CA, Davidow JY, Kober H, & Somerville LH (2018). Consequences for peers differentially bias computations about risk across development. Journal of Experimental Psychology: General, 147(5), 671–682. 10.1037/xge0000389 [DOI] [PubMed] [Google Scholar]
  343. Pushkarskaya H, Smithson M, Joseph JE, Corbly C, & Levy I (2015). Neural correlates of decision-making under ambiguity and conflict. Frontiers in Behavioral Neuroscience, 9, 325. 10.3389/fnbeh.2015.00325 [DOI] [PMC free article] [PubMed] [Google Scholar]
  344. Qu Y, Galvan A, Fuligni AJ, Lieberman MD, & Telzer EH (2015). Longitudinal changes in prefrontal cortex activation underlie declines in adolescent risk taking. The Journal of Neuroscience, 35(32), 11308–11314. 10.1523/jneurosci.1553-15.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  345. Rademacher L, Krach S, Kohls G, Irmak A, Gründer G, & Spreckelmeyer KN (2010). Dissociation of neural networks for anticipation and consumption of monetary and social rewards. NeuroImage, 49(4), 3276–3285. 10.1016/j.neuroimage.2009.10.089 [DOI] [PubMed] [Google Scholar]
  346. Raggetti G, Ceravolo MG, Fattobene L, & Di Dio C (2017). Neural correlates of direct access trading in a real stock market: An fMRI investigation. Frontiers in Neuroscience, 11, 536. 10.3389/fnins.2017.00536 [DOI] [PMC free article] [PubMed] [Google Scholar]
  347. Raja Beharelle A, Polanía R, Hare TA, & Ruff CC (2015). Transcranial stimulation over frontopolar cortex elucidates the choice attributes and neural mechanisms used to resolve exploration-exploitation trade-offs. The Journal of Neuroscience, 35(43), 14544–14556. 10.1523/JNEUROSCI.2322-15.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
348. Ramayya AG, Misra A, Baltuch GH, & Kahana MJ (2014). Microstimulation of the human substantia nigra alters reinforcement learning. The Journal of Neuroscience, 34(20), 6887–6895. 10.1523/jneurosci.5445-13.2014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  349. Rangel A, & Clithero JA (2012). Value normalization in decision making: Theory and evidence. Current Opinion in Neurobiology, 22(6), 970–981. 10.1016/j.conb.2012.07.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  350. Ratcliff R (2002). A diffusion model account of response time and accuracy in a brightness discrimination task: Fitting real data and failing to fit fake but plausible data. Psychonomic Bulletin & Review, 9(2), 278–291. 10.3758/BF03196283 [DOI] [PubMed] [Google Scholar]
  351. Reddan MC, Lindquist MA, & Wager TD (2017). Effect size estimation in neuroimaging. JAMA Psychiatry, 74(3), 207–208. 10.1001/jamapsychiatry.2016.3356 [DOI] [PubMed] [Google Scholar]
  352. Reutskaja E, Lindner A, Nagel R, Andersen RA, & Camerer CF (2018). Choice overload reduces neural signatures of choice set value in dorsal striatum and anterior cingulate cortex. Nature Human Behaviour, 2(12), 925–935. 10.1038/s41562-018-0440-2 [DOI] [PubMed] [Google Scholar]
  353. Reyna VF, Nelson WL, Han PK, & Pignone MP (2015). Decision making and cancer. The American Psychologist, 70(2), 105–118. 10.1037/a0036834 [DOI] [PMC free article] [PubMed] [Google Scholar]
  354. Reynolds JH, & Heeger DJ (2009). The normalization model of attention. Neuron, 61(2), 168–185. 10.1016/j.neuron.2009.01.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  355. Rigney AE, Koski JE, & Beer JS (2018). The functional role of ventral anterior cingulate cortex in social evaluation: Disentangling valence from subjectively rewarding opportunities. Social Cognitive and Affective Neuroscience, 13(1), 14–21. 10.1093/scan/nsx132 [DOI] [PMC free article] [PubMed] [Google Scholar]
  356. Robson SE, Repetto L, Gountouna V-E, & Nicodemus KK (2020). A review of neuroeconomic gameplay in psychiatric disorders. Molecular Psychiatry, 25(1), 67–81. 10.1038/s41380-019-0405-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  357. Rogers RD (2011). The roles of dopamine and serotonin in decision making: Evidence from pharmacological experiments in humans. Neuropsychopharmacology, 36(1), 114–132. 10.1038/npp.2010.165 [DOI] [PMC free article] [PubMed] [Google Scholar]
  358. Romer D, Reyna VF, & Satterthwaite TD (2017). Beyond stereotypes of adolescent risk taking: Placing the adolescent brain in developmental context. Developmental Cognitive Neuroscience, 27, 19–34. 10.1016/j.dcn.2017.07.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  359. Rotge J-Y, Lemogne C, Hinfray S, Huguet P, Grynszpan O, Tartour E, George N, & Fossati P (2015). A meta-analysis of the anterior cingulate contribution to social pain. Social Cognitive and Affective Neuroscience, 10(1), 19–27. 10.1093/scan/nsu110 [DOI] [PMC free article] [PubMed] [Google Scholar]
  360. Rudebeck PH, & Murray EA (2011). Balkanizing the primate orbitofrontal cortex: Distinct subregions for comparing and contrasting values. Annals of the New York Academy of Sciences, 1239, 1–13. 10.1111/j.1749-6632.2011.06267.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  361. Ruff CC, Ugazio G, & Fehr E (2013). Changing social norm compliance with noninvasive brain stimulation. Science, 342(6157), 482–484. 10.1126/science.1241399 [DOI] [PubMed] [Google Scholar]
  362. Rutledge RB, Dean M, Caplin A, & Glimcher PW (2010). Testing the reward prediction error hypothesis with an axiomatic model. The Journal of Neuroscience, 30(40), 13525–13536. 10.1523/JNEUROSCI.1747-10.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  363. Rutledge RB, Skandali N, Dayan P, & Dolan RJ (2015). Dopaminergic modulation of decision making and subjective well-being. Journal of Neuroscience, 35(27), 9811–9822. 10.1523/JNEUROSCI.0702-15.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  364. Salehi B, Cordero MI, & Sandi C (2010). Learning under stress: The inverted-U-shape function revisited. Learning & Memory, 17(10), 522–530. 10.1101/lm.1914110 [DOI] [PubMed] [Google Scholar]
  365. Samanez-Larkin GR (2015). Chapter 3—Decision neuroscience and aging. In Hess TM, Strough J, & Löckenhoff CE (Eds.), Aging and decision making (pp. 41–60). Academic Press. 10.1016/B978-0-12-417148-0.00003-0 [DOI] [Google Scholar]
  366. Samanez-Larkin GR, & Knutson B (2015). Decision making in the ageing brain: Changes in affective and motivational circuits. Nature Reviews Neuroscience, 16(5), 278–289. 10.1038/nrn3917 [DOI] [PMC free article] [PubMed] [Google Scholar]
  367. Samanez-Larkin GR, Levens SM, Perry LM, Dougherty RF, & Knutson B (2012). Frontostriatal white matter integrity mediates adult age differences in probabilistic reward learning. Journal of Neuroscience, 32(15), 5333–5337. [DOI] [PMC free article] [PubMed] [Google Scholar]
  368. Samanez-Larkin GR, Worthy DA, Mata R, McClure SM, & Knutson B (2014). Adult age differences in frontostriatal representation of prediction error but not reward outcome. Cognitive, Affective, & Behavioral Neuroscience, 14(2), 672–682. 10.3758/s13415-014-0297-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
369. Sazhin D, Frazier A, Haynes CR, Johnston C, Chat IK-Y, Dennison JB, Bart C, McCloskey M, Chein J, Fareri DS, Alloy LB, Jarcho J, & Smith DV (2020). The role of social reward and corticostriatal connectivity in substance use. Journal of Psychiatry and Brain Science, 5(5), 1–33. 10.20900/jpbs.20200024 [DOI] [PMC free article] [PubMed] [Google Scholar]
  370. Scheres A, de Water E, & Mies GW (2013). The neural correlates of temporal reward discounting. WIREs Cognitive Science, 4(5), 523–545. 10.1002/wcs.1246 [DOI] [PubMed] [Google Scholar]
  371. Schilling TM, Kölsch M, Larra MF, Zech CM, Blumenthal TD, Frings C, & Schächinger H (2013). For whom the bell (curve) tolls: Cortisol rapidly affects memory retrieval by an inverted U-shaped dose–response relationship. Psychoneuroendocrinology, 38(9), 1565–1572. 10.1016/j.psyneuen.2013.01.001 [DOI] [PubMed] [Google Scholar]
  372. Scholkmann F, Kleiser S, Metz AJ, Zimmermann R, Mata Pavia J, Wolf U, & Wolf M (2014). A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. NeuroImage, 85, 6–27. 10.1016/j.neuroimage.2013.05.004 [DOI] [PubMed] [Google Scholar]
373. Schreiber D (2011). From scan to neuropolitics. In Man is by nature a political animal. University of Chicago Press. 10.7208/chicago/9780226319117.001.0001/upso-9780226319094-chapter-11 [DOI] [Google Scholar]
374. Schultz W (2008). Introduction. Neuroeconomics: The promise and the profit. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1511), 3767–3769. 10.1098/rstb.2008.0153 [DOI] [PMC free article] [PubMed] [Google Scholar]
  375. Schultz W (2010). Subjective neuronal coding of reward: Temporal value discounting and risk. European Journal of Neuroscience, 31(12), 2124–2135. 10.1111/j.1460-9568.2010.07282.x [DOI] [PubMed] [Google Scholar]
  376. Seaman KL, Brooks N, Karrer TM, Castrellon JJ, Perkins SF, Dang LC, Hsu M, Zald DH, & Samanez-Larkin GR (2018). Subjective value representations during effort, probability and time discounting across adulthood. Social Cognitive and Affective Neuroscience, 13(5), 449–459. 10.1093/scan/nsy021 [DOI] [PMC free article] [PubMed] [Google Scholar]
  377. Seaman KL, Leong JK, Wu CC, Knutson B, & Samanez-Larkin GR (2017). Individual differences in skewed financial risk-taking across the adult life span. Cognitive, Affective, & Behavioral Neuroscience, 17(6), 1232–1241. 10.3758/s13415-017-0545-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  378. Seeber M, Cantonas L-M, Hoevels M, Sesia T, Visser-Vandewalle V, & Michel CM (2019). Subcortical electrophysiological activity is detectable with high-density EEG source imaging. Nature Communications, 10(1), 753. 10.1038/s41467-019-08725-w [DOI] [PMC free article] [PubMed] [Google Scholar]
  379. Sescousse G, Caldú X, Segura B, & Dreher J-C (2013). Processing of primary and secondary rewards: A quantitative meta-analysis and review of human functional neuroimaging studies. Neuroscience & Biobehavioral Reviews, 37(4), 681–696. 10.1016/j.neubiorev.2013.02.002 [DOI] [PubMed] [Google Scholar]
  380. Sharot T, & Garrett N (2016). Forming beliefs: Why valence matters. Trends in Cognitive Sciences, 20(1), 25–33. 10.1016/j.tics.2015.11.002 [DOI] [PubMed] [Google Scholar]
  381. Sharot T, Guitart-Masip M, Korn CW, Chowdhury R, & Dolan RJ (2012). How dopamine enhances an optimism bias in humans. Current Biology, 22(16), 1477–1481. 10.1016/j.cub.2012.05.053 [DOI] [PMC free article] [PubMed] [Google Scholar]
  382. Sharp C (2012). The use of Neuroeconomic games to examine social decision making in child and adolescent externalizing disorders. Current Directions in Psychological Science, 21(3), 183–188. 10.1177/0963721412444726 [DOI] [Google Scholar]
  383. Shen B, Yin Y, Wang J, Zhou X, McClure SM, & Li J (2016). High-definition tDCS alters impulsivity in a baseline-dependent manner. NeuroImage, 143, 343–352. 10.1016/j.neuroimage.2016.09.006 [DOI] [PubMed] [Google Scholar]
  384. Simon D, Becker MPI, Mothes-Lasch M, Miltner WHR, & Straube T (2014). Effects of social context on feedback-related activity in the human ventral striatum. NeuroImage, 99, 1–6. 10.1016/j.neuroimage.2014.05.071 [DOI] [PubMed] [Google Scholar]
  385. Sip KE, Smith DV, Porcelli AJ, Kar K, & Delgado MR (2015). Social closeness and feedback modulate susceptibility to the framing effect. Social Neuroscience, 10(1), 35–45. 10.1080/17470919.2014.944316 [DOI] [PMC free article] [PubMed] [Google Scholar]
  386. Smidts A, Hsu M, Sanfey AG, Boksem MA, Ebstein RB, Huettel SA, Kable JW, Karmarkar UR, Kitayama S, & Knutson B (2014). Advancing consumer neuroscience. Marketing Letters, 25(3), 257–267. [Google Scholar]
  387. Smith A, Lohrenz T, King J, Montague PR, & Camerer CF (2014). Irrational exuberance and neural crash warning signals during endogenous experimental market bubbles. Proceedings of the National Academy of Sciences of the United States of America, 111(29), 10503–10508. 10.1073/pnas.1318416111 [DOI] [PMC free article] [PubMed] [Google Scholar]
  388. Smith DV, Clithero JA, Boltuck SE, & Huettel SA (2014). Functional connectivity with ventromedial prefrontal cortex reflects subjective value for social rewards. Social Cognitive and Affective Neuroscience, 9(12), 2017–2025. 10.1093/scan/nsu005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  389. Smith DV, & Delgado MR (2015). Reward processing. In Toga AW (Ed.), Brain Mapping (pp. 361–366). Academic Press. 10.1016/B978-0-12-397025-1.00255-4 [DOI] [Google Scholar]
  390. Smith DV, & Delgado MR (2017). Meta-analysis of psychophysiological interactions: Revisiting cluster-level thresholding and sample sizes. Human Brain Mapping, 38(1), 588–591. 10.1002/hbm.23354 [DOI] [PMC free article] [PubMed] [Google Scholar]
  391. Smith DV, Hayden BY, Truong T-K, Song AW, Platt ML, & Huettel SA (2010). Distinct value signals in anterior and posterior ventromedial prefrontal cortex. The Journal of Neuroscience, 30(7), 2490–2495. 10.1523/JNEUROSCI.3319-09.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  392. Smith DV, & Huettel SA (2010). Decision neuroscience: Neuroeconomics. Wiley Interdisciplinary Reviews: Cognitive Science, 1(6), 854–871. 10.1002/wcs.73 [DOI] [PMC free article] [PubMed] [Google Scholar]
393. Smith DV, Rigney AE, & Delgado MR (2016). Distinct reward properties are encoded via corticostriatal interactions. Scientific Reports, 6(1), 1–12. 10.1038/srep20093 [DOI] [PMC free article] [PubMed] [Google Scholar]
  394. Smith DV, Sip KE, & Delgado MR (2015). Functional connectivity with distinct neural networks tracks fluctuations in gain/loss framing susceptibility. Human Brain Mapping, 36(7), 2743–2755. 10.1002/hbm.22804 [DOI] [PMC free article] [PubMed] [Google Scholar]
  395. Smith PL (2000). Stochastic dynamic models of response time and accuracy: A foundational primer. Journal of Mathematical Psychology, 44(3), 408–463. 10.1006/jmps.1999.1260 [DOI] [PubMed] [Google Scholar]
  396. Smith SM, & Krajbich I (2019). Gaze amplifies value in decision making. Psychological Science, 30(1), 116–128. 10.1177/0956797618810521 [DOI] [PubMed] [Google Scholar]
  397. Sokol-Hessner P, Camerer CF, & Phelps EA (2013). Emotion regulation reduces loss aversion and decreases amygdala responses to losses. Social Cognitive and Affective Neuroscience, 8(3), 341–350. 10.1093/scan/nss002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  398. Sokol-Hessner P, Lackovic SF, Tobe RH, Camerer CF, Leventhal BL, & Phelps EA (2015). Determinants of propranolol’s selective effect on loss aversion. Psychological Science, 26(7), 1123–1130. 10.1177/0956797615582026 [DOI] [PMC free article] [PubMed] [Google Scholar]
  399. Sokol-Hessner P, Raio CM, Gottesman SP, Lackovic SF, & Phelps EA (2016). Acute stress does not affect risky monetary decision-making. Neurobiology of Stress, 5, 19–25. 10.1016/j.ynstr.2016.10.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  400. Soltani A, Martino BD, & Camerer C (2012). A range-normalization model of context-dependent choice: A new model and evidence. PLoS Computational Biology, 8(7), e1002607. 10.1371/journal.pcbi.1002607 [DOI] [PMC free article] [PubMed] [Google Scholar]
  401. Somerville LH, Haddara N, Sasse SF, Skwara AC, Moran JM, & Figner B (2019). Dissecting “peer presence” and “decisions” to deepen understanding of peer influence on adolescent risky choice. Child Development, 90(6), 2086–2103. 10.1111/cdev.13081 [DOI] [PubMed] [Google Scholar]
  402. Somerville LH, Hare T, & Casey BJ (2011). Frontostriatal maturation predicts cognitive control failure to appetitive cues in adolescents. Journal of Cognitive Neuroscience, 23(9), 2123–2134. 10.1162/jocn.2010.21572 [DOI] [PMC free article] [PubMed] [Google Scholar]
  403. Somerville LH, Jones RM, & Casey BJ (2010). A time of change: Behavioral and neural correlates of adolescent sensitivity to appetitive and aversive environmental cues. Brain and Cognition, 72(1), 124–133. 10.1016/j.bandc.2009.07.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  404. Sonuga-Barke EJS, & Fairchild G (2012). Neuroeconomics of attention-deficit/hyperactivity disorder: Differential influences of medial, dorsal, and ventral prefrontal brain networks on suboptimal decision making? Biological Psychiatry, 72(2), 126–133. 10.1016/j.biopsych.2012.04.004 [DOI] [PubMed] [Google Scholar]
  405. Soutschek A, Burke CJ, Raja Beharelle A, Schreiber R, Weber SC, Karipidis II, ten Velden J, Weber B, Haker H, Kalenscher T, & Tobler PN (2017). The dopaminergic reward system underpins gender differences in social preferences. Nature Human Behaviour, 1(11), 819–827. 10.1038/s41562-017-0226-y [DOI] [PubMed] [Google Scholar]
  406. Soutschek A, Gvozdanovic G, Kozak R, Duvvuri S, de Martinis N, Harel B, Gray DL, Fehr E, Jetter A, & Tobler PN (2020). Dopaminergic D1 receptor stimulation affects effort and risk preferences. Biological Psychiatry, 87(7), 678–685. 10.1016/j.biopsych.2019.09.002 [DOI] [PubMed] [Google Scholar]
  407. Soutschek A, Ruff CC, Strombach T, Kalenscher T, & Tobler PN (2016). Brain stimulation reveals crucial role of overcoming self-centeredness in self-control. Science Advances, 2(10), e1600992. 10.1126/sciadv.1600992 [DOI] [PMC free article] [PubMed] [Google Scholar]
  408. Spreng RN, Cassidy BN, Darboh BS, DuPre E, Lockrow AW, Setton R, & Turner GR (2017). Financial exploitation is associated with structural and functional brain differences in healthy older adults. The Journals of Gerontology: Series A, 72(10), 1365–1368. 10.1093/gerona/glx051 [DOI] [PMC free article] [PubMed] [Google Scholar]
  409. Spreng RN, Karlawish J, & Marson DC (2016). Cognitive, social, and neural determinants of diminished decision-making and financial exploitation risk in aging and dementia: A review and new model. Journal of Elder Abuse & Neglect, 28(4–5), 320–344. [DOI] [PMC free article] [PubMed] [Google Scholar]
  410. Stallen M, Rossi F, Heijne A, Smidts A, De Dreu CKW, & Sanfey AG (2018). Neurobiological mechanisms of responding to injustice. Journal of Neuroscience, 38(12), 2944–2954. 10.1523/JNEUROSCI.1242-17.2018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  411. Stănişor L, van der Togt C, Pennartz CMA, & Roelfsema PR (2013). A unified selection signal for attention and reward in primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 110(22), 9136–9141. 10.1073/pnas.1300117110 [DOI] [PMC free article] [PubMed] [Google Scholar]
  412. Stanton SJ, Sinnott-Armstrong W, & Huettel SA (2017). Neuromarketing: Ethical implications of its use and potential misuse. Journal of Business Ethics, 144(4), 799–811. [Google Scholar]
  413. Starcke K, & Brand M (2016). Effects of stress on decisions under uncertainty: A meta-analysis. Psychological Bulletin, 142(9), 909–933. 10.1037/bul0000060 [DOI] [PubMed] [Google Scholar]
  414. Steinberg L (2008). A social neuroscience perspective on adolescent risk-taking. Developmental Review, 28(1), 78–106. 10.1016/j.dr.2007.08.002 [DOI] [PMC free article] [PubMed] [Google Scholar]
  415. Steinberg L (2013). The influence of neuroscience on US supreme court decisions about adolescents’ criminal culpability. Nature Reviews Neuroscience, 14(7), 513–518. [DOI] [PubMed] [Google Scholar]
  416. Steiner J, & Stewart C (2016). Perceiving prospects properly. American Economic Review, 106(7), 1601–1631. 10.1257/aer.20141141 [DOI] [Google Scholar]
  417. Stephens DW, & Krebs JR (1986). Foraging theory. Princeton University Press. [Google Scholar]
  418. Steverson K, Brandenburger A, & Glimcher P (2019). Choice-theoretic foundations of the divisive normalization model. Journal of Economic Behavior & Organization, 164, 148–165. 10.1016/j.jebo.2019.05.026 [DOI] [PMC free article] [PubMed] [Google Scholar]
  419. Stoianov IP, Pennartz CMA, Lansink CS, & Pezzulo G (2018). Model-based spatial navigation in the hippocampus-ventral striatum circuit: A computational analysis. PLoS Computational Biology, 14(9), e1006316. 10.1371/journal.pcbi.1006316 [DOI] [PMC free article] [PubMed] [Google Scholar]
  420. Strobel A, Zimmermann J, Schmitz A, Reuter M, Lis S, Windmann S, & Kirsch P (2011). Beyond revenge: Neural and genetic bases of altruistic punishment. NeuroImage, 54(1), 671–680. 10.1016/j.neuroimage.2010.07.051 [DOI] [PubMed] [Google Scholar]
  421. Strombach T, Weber B, Hangebrauk Z, Kenning P, Karipidis II, Tobler PN, & Kalenscher T (2015). Social discounting involves modulation of neural value signals by temporoparietal junction. Proceedings of the National Academy of Sciences of the United States of America, 112(5), 1619–1624. 10.1073/pnas.1414715112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  422. Sul S, Tobler PN, Hein G, Leiberg S, Jung D, Fehr E, & Kim H (2015). Spatial gradient in value representation along the medial prefrontal cortex reflects individual differences in prosociality. Proceedings of the National Academy of Sciences of the United States of America, 112(25), 7851–7856. 10.1073/pnas.1423895112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  423. Suter RS, Pachur T, Hertwig R, Endestad T, & Biele G (2015). The neural basis of risky choice with affective outcomes. PLoS One, 10(4), e0122475. 10.1371/journal.pone.0122475 [DOI] [PMC free article] [PubMed] [Google Scholar]
  424. Suzuki S, Cross L, & O’Doherty JP (2017). Elucidating the underlying components of food valuation in the human orbitofrontal cortex. Nature Neuroscience, 20(12), 1780–1786. 10.1038/s41593-017-0008-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  425. Suzuki S, & O’Doherty JP (2020). Breaking human social decision making into multiple components and then putting them together again. Cortex, 127, 221–230. 10.1016/j.cortex.2020.02.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  426. Symmonds M, Wright ND, Bach DR, & Dolan RJ (2011). Deconstructing risk: Separable encoding of variance and skewness in the brain. NeuroImage, 58(4), 1139–1149. 10.1016/j.neuroimage.2011.06.087 [DOI] [PMC free article] [PubMed] [Google Scholar]
  427. Tajima S, Drugowitsch J, Patel N, & Pouget A (2019). Optimal policy for multi-alternative decisions. Nature Neuroscience, 22(9), 1503–1511. 10.1038/s41593-019-0453-9 [DOI] [PubMed] [Google Scholar]
  428. Tajima S, Drugowitsch J, & Pouget A (2016). Optimal policy for value-based decision-making. Nature Communications, 7, 12400. 10.1038/ncomms12400 [DOI] [PMC free article] [PubMed] [Google Scholar]
  429. Takahashi H, Fujie S, Camerer C, Arakawa R, Takano H, Kodaka F, Matsui H, Ideno T, Okubo S, Takemura K, Yamada M, Eguchi Y, Murai T, Okubo Y, Kato M, Ito H, & Suhara T (2013). Norepinephrine in the brain is associated with aversion to financial loss. Molecular Psychiatry, 18(1), 3–4. 10.1038/mp.2012.7 [DOI] [PubMed] [Google Scholar]
  430. Takahashi H, Matsui H, Camerer C, Takano H, Kodaka F, Ideno T, Okubo S, Takemura K, Arakawa R, Eguchi Y, Murai T, Okubo Y, Kato M, Ito H, & Suhara T (2010). Dopamine D1 receptors and nonlinear probability weighting in risky choice. Journal of Neuroscience, 30(49), 16567–16572. 10.1523/JNEUROSCI.3933-10.2010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  431. Tamir DI, & Mitchell JP (2012). Disclosing information about the self is intrinsically rewarding. Proceedings of the National Academy of Sciences of the United States of America, 109(21), 8038–8043. 10.1073/pnas.1202129109 [DOI] [PMC free article] [PubMed] [Google Scholar]
  432. Tamir DI, Zaki J, & Mitchell JP (2015). Informing others is associated with behavioral and neural signatures of value. Journal of Experimental Psychology. General, 144(6), 1114–1123. 10.1037/xge0000122 [DOI] [PubMed] [Google Scholar]
  433. Tan A-H, Lu N, & Xiao D (2008). Integrating temporal difference methods and self-organizing neural networks for reinforcement learning with delayed evaluative feedback. IEEE Transactions on Neural Networks, 19(2), 230–244. 10.1109/TNN.2007.905839 [DOI] [PubMed] [Google Scholar]
  434. Tang DW, Fellows LK, Small DM, & Dagher A (2012). Food and drug cues activate similar brain regions: A meta-analysis of functional MRI studies. Physiology & Behavior, 106(3), 317–324. 10.1016/j.physbeh.2012.03.009 [DOI] [PubMed] [Google Scholar]
  435. Tchernichovski O, Parra LC, Fimiarz D, Lotem A, & Conley D (2019). Crowd wisdom enhanced by costly signaling in a virtual rating system. Proceedings of the National Academy of Sciences of the United States of America, 116(15), 7256–7265. 10.1073/pnas.1817392116 [DOI] [PMC free article] [PubMed] [Google Scholar]
  436. Telpaz A, Webb R, & Levy DJ (2015). Using EEG to predict consumers' future choices. Journal of Marketing Research, 52(4), 511–529. 10.1509/jmr.13.0564 [DOI] [Google Scholar]
  437. Telzer EH, Fuligni AJ, Lieberman MD, & Galván A (2013). Ventral striatum activation to prosocial rewards predicts longitudinal declines in adolescent risk taking. Developmental Cognitive Neuroscience, 3, 45–52. 10.1016/j.dcn.2012.08.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  438. Telzer EH, Fuligni AJ, Lieberman MD, Miernicki ME, & Galvan A (2014). The quality of adolescents’ peer relationships modulates neural sensitivity to risk taking. Social Cognitive and Affective Neuroscience, 10(3), 389–398. 10.1093/scan/nsu064 [DOI] [PMC free article] [PubMed] [Google Scholar]
  439. Telzer EH, Ichien NT, & Qu Y (2015). Mothers know best: Redirecting adolescent reward sensitivity toward safe behavior during risk taking. Social Cognitive and Affective Neuroscience, 10(10), 1383–1391. 10.1093/scan/nsv026 [DOI] [PMC free article] [PubMed] [Google Scholar]
  440. Teoh YY, Yao Z, Cunningham WA, & Hutcherson CA (2020). Attentional priorities drive effects of time pressure on altruistic choice. Nature Communications, 11(1), 3534. 10.1038/s41467-020-17326-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  441. Tobler PN, & Weber EU (2014). Chapter 9—Valuation for risky and uncertain choices. In Glimcher PW & Fehr E (Eds.), Neuroeconomics (2nd ed., pp. 149–172). Academic Press. 10.1016/B978-0-12-416008-8.00009-7 [DOI] [Google Scholar]
  442. Tomov MS, Truong VQ, Hundia RA, & Gershman SJ (2020). Dissociable neural correlates of uncertainty underlie different exploration strategies. Nature Communications, 11(1), 2371. 10.1038/s41467-020-15766-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  443. Tong LC, Acikalin MY, Genevsky A, Shiv B, & Knutson B (2020). Brain activity forecasts video engagement in an internet attention market. Proceedings of the National Academy of Sciences of the United States of America, 117(12), 6936–6941. 10.1073/pnas.1905178117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  444. Tricomi E, Rangel A, Camerer CF, & O’Doherty JP (2010). Neural evidence for inequality-averse social preferences. Nature, 463(7284), 1089–1091. 10.1038/nature08785 [DOI] [PubMed] [Google Scholar]
  445. Trudel N, Scholl J, Klein-Flügge MC, Fouragnan E, Tankelevitch L, Wittmann MK, & Rushworth MFS (2020). Polarity of uncertainty representation during exploration and exploitation in ventromedial prefrontal cortex. Nature Human Behaviour, 83–98. 10.1038/s41562-020-0929-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  446. Trueblood JS, Brown SD, & Heathcote A (2014). The multiattribute linear ballistic accumulator model of context effects in multialternative choice. Psychological Review, 121(2), 179–205. 10.1037/a0036137 [DOI] [PubMed] [Google Scholar]
  447. Tsetsos K, Chater N, & Usher M (2012). Salience driven value integration explains decision biases and preference reversal. Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659–9664. 10.1073/pnas.1119569109 [DOI] [PMC free article] [PubMed] [Google Scholar]
  448. Tsetsos K, Moran R, Moreland J, Chater N, Usher M, & Summerfield C (2016). Economic irrationality is optimal during noisy decision making. Proceedings of the National Academy of Sciences of the United States of America, 113(11), 3102–3107. 10.1073/pnas.1519157113 [DOI] [PMC free article] [PubMed] [Google Scholar]
  449. Tusche A, Böckler A, Kanske P, Trautwein F-M, & Singer T (2016). Decoding the charitable brain: Empathy, perspective taking, and attention shifts differentially predict altruistic giving. The Journal of Neuroscience, 36(17), 4719–4732. 10.1523/JNEUROSCI.3392-15.2016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  450. Tusche A, Bode S, & Haynes J-D (2010). Neural responses to unattended products predict later consumer choices. Journal of Neuroscience, 30(23), 8024–8031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  451. Tversky A, & Kahneman D (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297–323. 10.1007/BF00122574 [DOI] [Google Scholar]
  452. Tymula A, Belmaker LAR, Roy AK, Ruderman L, Manson K, Glimcher PW, & Levy I (2012). Adolescents’ risk-taking behavior is driven by tolerance to ambiguity. Proceedings of the National Academy of Sciences of the United States of America, 109(42), 17135–17140. 10.1073/pnas.1207144109 [DOI] [PMC free article] [PubMed] [Google Scholar]
  453. van den Bos W, & Hertwig R (2017). Adolescents display distinctive tolerance to ambiguity and to uncertainty during risky decision making. Scientific Reports, 7(1), 40962. 10.1038/srep40962 [DOI] [PMC free article] [PubMed] [Google Scholar]
  454. van den Bos W, Rodriguez CA, Schweitzer JB, & McClure SM (2015). Adolescent impatience decreases with increased frontostriatal connectivity. Proceedings of the National Academy of Sciences of the United States of America, 112(29), E3765–E3774. 10.1073/pnas.1423095112 [DOI] [PMC free article] [PubMed] [Google Scholar]
  455. van den Bos W, Vahl P, Güroğlu B, van Nunspeet F, Colins O, Markus M, Rombouts SARB, van der Wee N, Vermeiren R, & Crone EA (2014). Neural correlates of social decision-making in severely antisocial adolescents. Social Cognitive and Affective Neuroscience, 9(12), 2059–2066. 10.1093/scan/nsu003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  456. Van Duijvenvoorde ACK, & Crone EA (2013). The teenage brain: A neuroeconomic approach to adolescent decision making. Current Directions in Psychological Science, 22(2), 108–113. 10.1177/0963721413475446 [DOI] [Google Scholar]
  457. Van Hoorn J, Crone EA, & Van Leijenhorst L (2017). Hanging out with the right crowd: Peer influence on risk-taking behavior in adolescence. Journal of Research on Adolescence, 27(1), 189–200. 10.1111/jora.12265 [DOI] [PubMed] [Google Scholar]
  458. van Ravenzwaaij D, van der Maas HLJ, & Wagenmakers E-J (2012). Optimal decision making in neural inhibition models. Psychological Review, 119(1), 201–215. 10.1037/a0026275 [DOI] [PubMed] [Google Scholar]
  459. van Vugt MK, Simen P, Nystrom LE, Holmes P, & Cohen JD (2012). EEG oscillations reveal neural correlates of evidence accumulation. Frontiers in Neuroscience, 6, 106. 10.3389/fnins.2012.00106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  460. Vassena E, Silvetti M, Boehler CN, Achten E, Fias W, & Verguts T (2014). Overlapping neural systems represent cognitive effort and reward anticipation. PLoS One, 9(3), e91008. 10.1371/journal.pone.0091008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  461. Vecchiato G, Astolfi L, Tabarrini A, Salinari S, Mattia D, Cincotti F, Bianchi L, Sorrentino D, Aloise F, Soranzo R, & Babiloni F (2010). EEG analysis of the brain activity during the observation of commercial, political, or public service announcements. Computational Intelligence and Neuroscience, 2010, 985867. 10.1155/2010/985867 [DOI] [PMC free article] [PubMed] [Google Scholar]
  462. Venkatraman V, Dimoka A, Pavlou PA, Vo K, Hampton W, Bollinger B, Hershfield HE, Ishihara M, & Winer RS (2015). Predicting advertising success beyond traditional measures: New insights from neurophysiological methods and market response modeling. Journal of Marketing Research, 52(4), 436–452. 10.1509/jmr.13.0593 [DOI] [Google Scholar]
  463. Verharen JPH, Adan RAH, & Vanderschuren LJMJ (2019). Differential contributions of striatal dopamine D1 and D2 receptors to component processes of value-based decision making. Neuropsychopharmacology, 44(13), 2195–2204. 10.1038/s41386-019-0454-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  464. von Helversen B, Mata R, Samanez-Larkin GR, & Wilke A (2018). Foraging, exploration, or search? On the (lack of) convergent validity between three behavioral paradigms. Evolutionary Behavioral Sciences, 12(3), 152–162. 10.1037/ebs0000121 [DOI] [Google Scholar]
  465. Vul E, Harris C, Winkielman P, & Pashler H (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4(3), 274–290. [DOI] [PubMed] [Google Scholar]
  466. Wake SJ, & Izuma K (2017). A common neural code for social and monetary rewards in the human striatum. Social Cognitive and Affective Neuroscience, 12(10), 1558–1564. 10.1093/scan/nsx092 [DOI] [PMC free article] [PubMed] [Google Scholar]
  467. Walker J, & Ben-Akiva M (2002). Generalized random utility model. Mathematical Social Sciences, 43(3), 303–343. 10.1016/S0165-4896(02)00023-9 [DOI] [Google Scholar]
  468. Wallis JD (2012). Cross-species studies of orbitofrontal cortex and value-based decision-making. Nature Neuroscience, 15(1), 13–19. 10.1038/nn.2956 [DOI] [PMC free article] [PubMed] [Google Scholar]
  469. Wang F, Schoenbaum G, & Kahnt T (2020). Interactions between human orbitofrontal cortex and hippocampus support model-based inference. PLoS Biology, 18(1), e3000578. 10.1371/journal.pbio.3000578 [DOI] [PMC free article] [PubMed] [Google Scholar]
  470. Wang KS, & Delgado MR (2019). Corticostriatal circuits encode the subjective value of perceived control. Cerebral Cortex, 29(12), 5049–5060. 10.1093/cercor/bhz045 [DOI] [PMC free article] [PubMed] [Google Scholar]
  471. Wang KS, Smith DV, & Delgado MR (2016). Using fMRI to study reward processing in humans: Past, present, and future. Journal of Neurophysiology, 115(3), 1664–1678. 10.1152/jn.00333.2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  472. Watabe-Uchida M, Eshel N, & Uchida N (2017). Neural circuitry of reward prediction error. Annual Review of Neuroscience, 40(1), 373–394. 10.1146/annurev-neuro-072116-031109 [DOI] [PMC free article] [PubMed] [Google Scholar]
  473. Webb R (2019). The (neural) dynamics of stochastic choice. Management Science, 65(1), 230–255. 10.1287/mnsc.2017.2931 [DOI] [Google Scholar]
  474. Webb R, Glimcher PW, & Louie K (2020). Divisive normalization does influence decisions with multiple alternatives. Nature Human Behaviour, 4(11), 1118–1120. 10.1038/s41562-020-00941-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  475. Weber SC, Kahnt T, Quednow BB, & Tobler PN (2018). Frontostriatal pathways gate processing of behaviorally relevant reward dimensions. PLoS Biology, 16(10), e2005722. 10.1371/journal.pbio.2005722 [DOI] [PMC free article] [PubMed] [Google Scholar]
  476. Weisberg DS, Taylor JCV, & Hopkins EJ (2015). Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, 10(5), 429–441. [Google Scholar]
  477. Wemm SE, & Wulfert E (2017). Effects of acute stress on decision making. Applied Psychophysiology and Biofeedback, 42(1), 1–12. 10.1007/s10484-016-9347-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  478. White SF, Brislin SJ, Sinclair S, & Blair JR (2013). Punishing unfairness: Rewarding or the organization of a reactively aggressive response? Human Brain Mapping, 35(5), 2137–2147. 10.1002/hbm.22316 [DOI] [PMC free article] [PubMed] [Google Scholar]
  479. Wilson RC, Geana A, White JM, Ludvig EA, & Cohen JD (2014). Humans use directed and random exploration to solve the explore-exploit dilemma. Journal of Experimental Psychology. General, 143(6), 2074–2081. 10.1037/a0038199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  480. Wilson RC, Takahashi YK, Schoenbaum G, & Niv Y (2014). Orbitofrontal cortex as a cognitive map of task space. Neuron, 81(2), 267–279. 10.1016/j.neuron.2013.11.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  481. Winecoff A, Clithero JA, Carter RM, Bergman SR, Wang L, & Huettel SA (2013). Ventromedial prefrontal cortex encodes emotional value. Journal of Neuroscience, 33(27), 11032–11039. 10.1523/JNEUROSCI.4317-12.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
  482. Wischnewski M, Joergensen ML, Compen B, & Schutter DJLG (2020). Frontal beta transcranial alternating current stimulation improves reversal learning. Cerebral Cortex, 30, 3286–3295. 10.1093/cercor/bhz309 [DOI] [PMC free article] [PubMed] [Google Scholar]
  483. Wischnewski M, Zerr P, & Schutter DJ (2016). Effects of theta transcranial alternating current stimulation over the frontal cortex on reversal learning. Brain Stimulation, 9(5), 705–711. [DOI] [PubMed] [Google Scholar]
  484. Wise RA (1980). The dopamine synapse and the notion of ‘pleasure centers’ in the brain. Trends in Neurosciences, 3(4), 91–95. 10.1016/0166-2236(80)90035-1 [DOI] [Google Scholar]
  485. Woo C-W, Koban L, Kross E, Lindquist MA, Banich MT, Ruzic L, Andrews-Hanna JR, & Wager TD (2014). Separate neural representations for physical pain and social rejection. Nature Communications, 5(1), 5380. 10.1038/ncomms6380 [DOI] [PMC free article] [PubMed] [Google Scholar]
  486. Wood S, & Lichtenberg PA (2017). Financial capacity and financial exploitation of older adults: Research findings, policy recommendations and clinical implications. Clinical Gerontologist, 40(1), 3–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  487. Woodford M (2020). Modeling imprecision in perception, valuation, and choice. Annual Review of Economics, 12(1), 579–601. 10.1146/annurev-economics-102819-040518 [DOI] [Google Scholar]
  488. Wu CC, Samanez-Larkin GR, Katovich K, & Knutson B (2014). Affective traits link to reliable neural markers of incentive anticipation. NeuroImage, 84, 279–289. [DOI] [PMC free article] [PubMed] [Google Scholar]
  489. Wu H, Luo Y, & Feng C (2016). Neural signatures of social conformity: A coordinate-based activation likelihood estimation meta-analysis of functional brain imaging studies. Neuroscience and Biobehavioral Reviews, 71, 101–111. 10.1016/j.neubiorev.2016.08.038 [DOI] [PubMed] [Google Scholar]
  490. Wu S-W, Delgado MR, & Maloney LT (2011). The neural correlates of subjective utility of monetary outcome and probability weight in economic and in motor decision under risk. Journal of Neuroscience, 31(24), 8822–8831. 10.1523/JNEUROSCI.0540-11.2011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  491. Wulff DU, Mergenthaler-Canseco M, & Hertwig R (2018). A meta-analytic review of two modes of learning and the description-experience gap. Psychological Bulletin, 144(2), 140–176. 10.1037/bul0000115 [DOI] [PubMed] [Google Scholar]
  492. Wunderlich K, Smittenaar P, & Dolan RJ (2012). Dopamine enhances model-based over model-free choice behavior. Neuron, 75(3), 418–424. 10.1016/j.neuron.2012.03.042 [DOI] [PMC free article] [PubMed] [Google Scholar]
  493. Xiong G, Li X, Dong Z, Cai S, Huang J, & Li Q (2019). Modulating activity in the prefrontal cortex changes intertemporal choice for loss: A transcranial direct current stimulation study. Frontiers in Human Neuroscience, 13, 167. 10.3389/fnhum.2019.00167 [DOI] [PMC free article] [PubMed] [Google Scholar]
  494. Yarkoni T (2009). Big correlations in little studies: Inflated fMRI correlations reflect low statistical power—Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4(3), 294–298. [DOI] [PubMed] [Google Scholar]
  495. Yeon J, & Rahnev D (2020). The suboptimality of perceptual decision making with multiple alternatives. Nature Communications, 11(1), 3857. 10.1038/s41467-020-17661-z [DOI] [PMC free article] [PubMed] [Google Scholar]
  496. Yoon C, Gonzalez R, Bechara A, Berns GS, Dagher AA, Dubé L, Huettel SA, Kable JW, Liberzon I, Plassmann H, Smidts A, & Spence C (2012). Decision neuroscience and consumer decision making. Marketing Letters, 23(2), 473–485. 10.1007/s11002-012-9188-z [DOI] [Google Scholar]
  497. Young L, Camprodon JA, Hauser M, Pascual-Leone A, & Saxe R (2010). Disruption of the right temporoparietal junction with transcranial magnetic stimulation reduces the role of beliefs in moral judgments. Proceedings of the National Academy of Sciences of the United States of America, 107(15), 6753–6758. 10.1073/pnas.0914826107 [DOI] [PMC free article] [PubMed] [Google Scholar]
  498. Zajkowski WK, Kossut M, & Wilson RC (2017). A causal role for right frontopolar cortex in directed, but not random, exploration. eLife, 6, e27430. 10.7554/eLife.27430 [DOI] [PMC free article] [PubMed] [Google Scholar]
  499. Zaki J, & Ochsner KN (2012). The neuroscience of empathy: Progress, pitfalls and promise. Nature Neuroscience, 15(5), 675–680. 10.1038/nn.3085 [DOI] [PubMed] [Google Scholar]
  500. Zald DH, & Treadway MT (2017). Reward processing, neuroeconomics, and psychopathology. Annual Review of Clinical Psychology, 13(1), 471–495. 10.1146/annurev-clinpsy-032816-044957 [DOI] [PMC free article] [PubMed] [Google Scholar]
  501. Zikmund-Fisher BJ, Couper MP, Singer E, Levin CA, Fowler FJ, Ziniel S, Ubel PA, & Fagerlin A (2010). The DECISIONS study: A nationwide survey of United States adults regarding 9 common medical decisions. Medical Decision Making, 30(5 Suppl), 20–34. 10.1177/0272989X09353792 [DOI] [PubMed] [Google Scholar]
  502. Zimmermann J, Glimcher PW, & Louie K (2018). Multiple timescales of normalized value coding underlie adaptive choice behavior. Nature Communications, 9(1), 3206. 10.1038/s41467-018-05507-8 [DOI] [PMC free article] [PubMed] [Google Scholar]

Data Availability Statement

There is no data associated with this review article.
