Author manuscript; available in PMC: 2013 Jan 1.
Published in final edited form as: Psychopharmacology (Berl). 2011 Oct 7;219(2):363–375. doi: 10.1007/s00213-011-2520-0

Effects of α-2A adrenergic receptor agonist on time and risk preference in primates

Soyoun Kim 1, Irina Bobeica 1, Nao J Gamo 1, Amy F T Arnsten 1, Daeyeol Lee 1
PMCID: PMC3269972  NIHMSID: NIHMS351589  PMID: 21979441

Abstract

Rationale

Subjective values of actions are influenced by the uncertainty and immediacy of expected rewards. Multiple brain areas, including the prefrontal cortex and basal ganglia, are implicated in selecting actions according to their subjective values. Alterations in these neural circuits therefore might contribute to symptoms of impulsive choice behaviors in disorders such as substance abuse and attention-deficit hyperactivity disorder (ADHD). In particular, the α-2A noradrenergic system is known to have a key influence on prefrontal cortical circuits, and medications that stimulate this receptor are currently in use for the treatment of ADHD.

Objective

We tested whether the preference of rhesus monkeys for delayed and uncertain reward is influenced by the α-2A adrenergic receptor agonist, guanfacine.

Methods

In each trial, the animal chose between a small, certain and immediate reward and another larger, more delayed reward. In half of the trials, the larger reward was certain, whereas in the remaining trials, the larger reward was uncertain.

Results

Guanfacine increased the tendency for the animals to choose the larger and more delayed reward, but only when it was certain. By applying an econometric model to the animals' choice behavior, we found that guanfacine selectively reduced their time preference, increasing their choice of delayed, larger rewards, without significantly affecting their risk preference.

Conclusions

In combination with previous findings that guanfacine improves the efficiency of working memory and other prefrontal functions, these results suggest that impulsive choice behaviors may also be ameliorated by strengthening prefrontal functions.

Keywords: temporal discounting, intertemporal choice, reward, decision making, neuroeconomics, prefrontal cortex, gambling, impulsivity, guanfacine, ADHD

Introduction

For humans as well as animals, the likelihood of choosing a particular action is closely related to the quality, quantity, and other statistical properties of the reward expected from that action. In particular, a large number of studies in psychology and behavioral economics have characterized how the uncertainty and immediacy of an expected reward affect preference in humans and animals (see Loewenstein et al. 2003; Glimcher et al. 2009). Decision makers trying to maximize the total amount of reward resulting from their behaviors should maximize the expected value of reward and choose their actions independently of reward delays. However, this simple principle of maximizing the expected value of reward poorly describes actual human and animal behaviors. For example, humans often behave as if the same change in reward matters less as the absolute amount of reward increases (von Neumann and Morgenstern 1944; Bernoulli 1954). In addition, according to prospect theory, the subjective value of a reward is weighted not by its probability, but by a non-linear transformation of that probability, often known as the weighting function (Kahneman and Tversky 1979). For example, people tend to over-weight rewards with small probabilities and under-weight rewards that are almost certain. Moreover, people and animals often forgo a large reward when it is delayed, and instead choose a smaller but more immediate reward (Frederick et al. 2002; Green and Myerson 2004; Kalenscher and Pennartz 2008; Hwang et al. 2009).

A tendency to seek or avoid risky rewards and to prefer more immediate rewards may be adaptive in some circumstances, for example, when an animal requires a certain minimum amount of food before a particular deadline (Stephens and Krebs 1986). However, multiple psychiatric disorders are associated with an excessive tendency to choose risky or immediate rewards (Paulus 2007). For example, drug addicts, pathological gamblers, and patients with schizophrenia tend to discount the value of delayed rewards more steeply than control subjects (Madden et al. 1997; Vuchinich and Simpson 1998; Mitchell 1999; Kirby and Petry 2004; Reynolds 2006; Heerey et al. 2007). Similarly, patients with ADHD show steeper discounting functions, which correlate with symptoms of impulsivity and hyperactivity (Schweitzer and Sulzer-Azaroff 1995; Scheres et al. 2010). In addition, sleep deprivation biases healthy human decision makers to prefer risky gains, suggesting that the arousal systems modulate decision-making circuits (Venkatraman et al. 2011). All of these conditions are associated with impaired prefrontal cortical function (Durmer and Dinges 2005; Arnsten 2010).

Although the neural mechanisms responsible for choosing among uncertain and delayed rewards remain poorly characterized, previous studies in both humans and animals have implicated fronto-cortico-striatal networks (Cardinal 2006; Kim et al. 2009). In particular, neuroimaging studies in human subjects as well as single-neuron recording studies in non-human primates have found that the activity in the prefrontal cortex and striatum is influenced by multiple reward parameters and is often correlated with the subjective value of reward chosen by the subject (Cardinal et al. 2001; McClure et al. 2004; Kable and Glimcher 2007; Kim et al. 2008; Pine et al. 2009; Luhmann et al. 2008; Cai et al. 2011). Moreover, individual neurons in these regions of the brain also encode the difference in subjective values of reward expected from alternative actions, suggesting that the prefrontal cortex and striatum play an important role in selecting a particular action from alternative actions (Barraclough et al. 2004; Samejima et al. 2005; Kim et al. 2008; Lau and Glimcher 2008; Cai et al. 2011). Indeed, lesions of the prefrontal cortex produce a steeper delay discounting function in both humans (Sellitto et al. 2010) and rodents (Mobini et al. 2002; Kheramin et al. 2003; Rudebeck et al. 2006; but see Winstanley et al. 2004).

The network of brain areas involved in decision making are diffusely innervated by multiple neurochemical systems, including dopamine, norepinephrine, and serotonin (Doya 2008). Phasic activity of midbrain dopamine neurons encodes multiple signals, including reward prediction errors (Schultz 1998) and saliency of sensory stimuli (Matsumoto and Hikosaka 2009). In addition, it has been proposed that norepinephrine might play a role in optimizing the learning rate according to the amount of uncertainty in the animal’s environment (Aston-Jones and Cohen 2005). The integrity of prefrontal cortical function is markedly affected by the neuro-modulatory arousal systems (Robbins and Arnsten 2009). Therefore, disruption in prefrontal functions caused by abnormal neuromodulation may contribute to the suboptimal patterns of decision making associated with many psychiatric illnesses. There is general appreciation for the important role of dopamine actions in the prefrontal cortex. In addition, noradrenergic stimulation of post-synaptic α-2A receptors also improves prefrontal network physiology and cognitive performance during a working memory task (Wang et al. 2007). Moreover, many ADHD medications, including methylphenidate, atomoxetine and guanfacine, improve working memory performance via indirect or direct stimulation of α-2A receptors (Arnsten et al. 1988; Franowicz et al. 2002; Gamo et al. 2010). Thus, the current study examined the effects of the noradrenergic α-2A receptor agonist, guanfacine, on decision making in monkeys.

Method

Animal preparation

Two male rhesus monkeys (J and M, 10 and 6 years old, BW=13 and 11 kg, respectively) were used. During the experiment, the animal’s head was fixed using a metal halo and a set of 4 titanium screws that were attached to the animal’s skull during an aseptic surgery (Kim et al. 2008). Animals performed the behavioral task described below while they were seated in a primate chair and faced a computer screen placed approximately 57 cm away from their eyes. The animal’s gaze was monitored using a video-based eye-tracking system at a sampling rate of 225 Hz (ET-49, Thomas Recording, Giessen, Germany). All procedures were approved by the Institutional Animal Care and Use Committee at Yale University and conformed to the Public Health Service Policy on Humane Care and Use of Laboratory Animals and the NIH Guide for the Care and Use of Laboratory Animals.

Behavioral task: intertemporal gambling task

Each trial began when the animal fixated on a white square (1°×1°) at the center of a computer screen (Fig. 1). After the animal maintained its fixation at the white square for 1 s (fore-period), two circular targets (diameter = 1°) were presented along the horizontal meridian, while the animal was required to continue its central fixation for another 1 s (cue period). At the end of the cue period, the central fixation target was extinguished, and the animal was required to shift its gaze towards one of the two targets within 1 s.

Fig. 1.

Fig. 1

Inter-temporal gambling task. Top, certain trials; bottom, uncertain trials. The animal was rewarded for correct performance in all certain trials, but only in 50% of uncertain trials when the animal chose the large reward.

There were two different types of trials. In “certain” trials (Fig. 1 top), one of the peripheral targets (TL) was red and always delivered a large reward (0.4 ml of apple juice) at the end of the trial when it was chosen by the animal, and the other target (TS) was green and always delivered a small reward (0.3 ml). In “uncertain” trials (Fig. 1 bottom), one of the peripheral targets was blue and delivered a large reward (0.4 ml) with the probability of 0.5 and no reward in the remaining trials, and the other target was green and always delivered a small reward (0.3 ml) as in the certain trials. In both types of trials, the delay between the time when the animal chose a peripheral target and the time of reward delivery was indicated by the number of small yellow disks surrounding each peripheral target (1.5 s/disk). This is referred to as reward delay, and was 0 or 3 s for the small reward, and 0, 3, 6, or 9 s for the large reward in both certain and uncertain trials. The inter-trial interval was 1 s after the animal chose the large reward, but the difference between the delays for small and large rewards was added to it after the animal chose the small reward, so that the onset of the next trial was not influenced by the animal’s choice. Within a block of 32 trials, each of 8 different combinations of reward delays was presented pseudo-randomly, twice for certain trials and twice for uncertain trials with the positions of small and large reward targets counter-balanced. For each daily session, the animals were allowed to perform the task until they were no longer willing to do so. The minimum number of blocks tested in a given daily session was 31 and 10 for monkeys J and M, respectively, and on average, monkeys J and M completed 1,344±193 and 646±190 trials/day (mean ± SD). The average duration of daily sessions was 219.5 ± 33.1 and 122.4 ± 18.6 minutes (mean ± SD) for monkeys J and M, respectively.
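The equalization of trial onset described above can be sketched as follows. This is an illustrative reconstruction, not the authors' task code; it assumes the large-reward delay is at least as long as the small-reward delay, so the added interval is non-negative.

```python
def inter_trial_interval(chose_small, delay_small, delay_large):
    """Inter-trial interval (s) that makes the next trial's onset
    independent of the animal's choice (illustrative sketch).

    After a large-reward choice the ITI is a fixed 1 s; after a
    small-reward choice, the difference between the large- and
    small-reward delays is added to the base ITI.
    """
    base_iti = 1.0
    if chose_small:
        # Assumes delay_large >= delay_small (the case that creates a
        # conflict between reward size and immediacy in this design).
        return base_iti + (delay_large - delay_small)
    return base_iti

# Time from choice to the next trial onset is the same either way:
print(3 + inter_trial_interval(True, 3, 9))   # 10.0
print(9 + inter_trial_interval(False, 3, 9))  # 10.0
```

Because the onset of the next trial does not depend on the choice, the animal cannot shorten the session by always taking the immediate reward, which would otherwise confound time preference with overall reward rate.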

Pharmacology

On each testing day, each animal was injected intramuscularly with either guanfacine (0.2 mg/kg) dissolved in sterile saline (0.1 ml/kg) or just saline, 2 hours before the testing started. The dose of guanfacine used in this study was chosen because it has been previously shown to improve working memory performance in young adult rhesus monkeys (Franowicz and Arnsten 1998). This is also close to the dose range used to treat ADHD (Sallee et al. 2009). Each animal was tested for 5 weeks from Monday through Friday, and guanfacine was given once a week on Tuesday, Wednesday, or Thursday. Monkey J was tested during 5 consecutive weeks. Monkey M was first tested for 3 consecutive weeks, and for 2 additional weeks approximately 5 months later (Table 1). During the interim period, it was not tested in any other experiments. Monkey M was not tested on Monday during the first week, and on Friday of the 5th week. Accordingly, the total number of testing sessions was 25 and 23 for monkeys J and M, respectively.

Table 1.

Daily schedules for behavioral testing. S, saline; GFC, guanfacine; NT, not tested.

Monkey Week (Monday's date) Mon Tue Wed Thu Fri
J May 17, 2010 S GFC S S S
May 24, 2010 S S GFC S S
May 31, 2010 S S S GFC S
June 7, 2010 S GFC S S S
June 14, 2010 S S GFC S S
M January 12, 2009 S GFC S S NT
January 19, 2009 S S GFC S S
January 26, 2009 S S S GFC S
June 22, 2009 NT S GFC S S
June 29, 2009 S S S GFC S

Data analysis

We first computed the proportion of trials in which the animal chose the smaller reward, separately for certain and uncertain trials in each session. To test whether guanfacine affected the animal’s preference for delayed or uncertain rewards, we compared the probabilities of choosing the smaller reward for the sessions in which the animal was given guanfacine with the average of the same probabilities estimated for the sessions immediately before and after the sessions with guanfacine. The statistical significance of the difference between these two conditions was determined with a paired t-test (p<0.05). Whether the effect of guanfacine on the preference for the immediate reward was significantly different for certain and uncertain trials was tested using a repeated measures analysis of variance (ANOVA) with drug and uncertainty as two main factors.

We also quantified the animal’s preference for delayed and uncertain rewards using a discounted expected value (DEV) model. In this model, it is assumed that the log odds of choosing the small reward, p(S|Ω), are given by the difference in the discounted expected values of the small and large rewards. The variable Ω refers to the probability (P), magnitude (M), and delay (D) of reward. Namely, denoting the discounted expected value of target x as DEV(P_x, M_x, D_x),

logit p(S|Ω) = log[p(S|Ω) / {1 − p(S|Ω)}] = β {DEV(P_S, M_S, D_S) − DEV(P_L, M_L, D_L)},

where β refers to the inverse temperature related to the randomness of the animal’s choice. This is equivalent to the softmax rule, which is given by the following.

p(S|Ω) = σ[β {DEV(P_S, M_S, D_S) − DEV(P_L, M_L, D_L)}],

where the function σ[z] = {1 + exp(−z)}^−1 corresponds to the logistic transformation.

In the present study, the DEV for reward x is given by its magnitude multiplied by a weighting function of its probability and a discount function of its delay. Formally,

DEV(P_x, M_x, D_x) = M_x W(P_x) F(D_x),

where W(P) and F(D) refer to the weighting function and discount function, respectively. We set W(1)=1, and W(0.5) = w, which was estimated from the data. We also used the hyperbolic discount function for F, namely, F(D) = 1/(1+k D). The parameter k determines how steeply the value of delayed reward decreases over time, and the half-life of the reward value corresponds to the inverse of k. All model parameters (β, w, and k) were estimated separately for each session, using a maximum likelihood procedure (Kim et al. 2008; Hwang et al. 2009). Statistical significance for the difference between the average value of each parameter for the sessions with guanfacine and saline was tested using a paired t-test.
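The DEV model and its likelihood can be written in a few lines of Python. This is an illustrative reconstruction from the equations above, not the authors' analysis code; the helper names (`dev`, `neg_log_likelihood`) are hypothetical, and reward magnitudes are fixed at the 0.3 and 0.4 ml used in the task.

```python
import math

def dev(magnitude, prob, delay, w, k):
    """Discounted expected value: M * W(P) * F(D), with W(1) = 1,
    W(0.5) = w (a free parameter), and hyperbolic F(D) = 1 / (1 + k*D)."""
    weight = 1.0 if prob == 1.0 else w
    return magnitude * weight / (1.0 + k * delay)

def neg_log_likelihood(params, trials):
    """Negative log likelihood of the animal's choices under the DEV model.
    Each trial is (chose_small, p_large, delay_small, delay_large)."""
    beta, w, k = params
    nll = 0.0
    for chose_small, p_large, d_small, d_large in trials:
        value_diff = dev(0.3, 1.0, d_small, w, k) - dev(0.4, p_large, d_large, w, k)
        p_small = 1.0 / (1.0 + math.exp(-beta * value_diff))  # logistic choice rule
        p_choice = p_small if chose_small else 1.0 - p_small
        nll -= math.log(max(p_choice, 1e-12))  # guard against log(0)
    return nll

# The three parameters (beta, w, k) would then be estimated per session by
# minimizing neg_log_likelihood, e.g. with scipy.optimize.minimize.
```

Note that with β = 0 the model predicts indifferent (50/50) choices regardless of the values, so each trial contributes ln 2 to the negative log likelihood; larger β makes choices increasingly deterministic.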

In addition, we have also fit a series of modified DEV models in which a subset of model parameters were estimated separately for the sessions with saline and guanfacine. In this analysis, the null model (model 0) corresponds to the hypothesis that guanfacine has no effect on any of the model parameters, and therefore includes only 3 model parameters (w, k, and β). The remaining models correspond to a series of hypotheses that guanfacine influences different subsets of model parameters. This analysis was applied to the entire data set obtained from each animal. The performance of different models was then compared using the Bayesian information criterion (BIC), which is defined as the following.

BIC = −2 ln L + m ln N,

where L denotes the likelihood of the animal’s choices for a particular model, m the number of model parameters, and N the number of trials.
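As a quick illustration of how the BIC trades off fit against complexity (a sketch; the numbers below are hypothetical, not from the paper):

```python
import math

def bic(log_likelihood, n_params, n_trials):
    """BIC = -2 ln L + m ln N; lower values indicate a better model."""
    return -2.0 * log_likelihood + n_params * math.log(n_trials)

# A model with one extra parameter must raise ln L by more than
# 0.5 * ln N to achieve a lower (better) BIC.
null_bic = bic(-9000.0, 3, 30000)  # 3-parameter null model
alt_bic = bic(-8990.0, 4, 30000)   # ln L improved by 10 > 0.5*ln(30000) ≈ 5.2
```

Because N here is the number of trials per animal (tens of thousands), the ln N penalty is substantial, making the BIC a conservative criterion for adding condition-specific parameters.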

Finally, to control for any systematic changes in the animals’ performance during the course of testing days in a given week, we applied the following regression model to each of the model parameters after subtracting the weekly mean value of each parameter,

y = a_0 Mon + a_1 Tue + a_2 Wed + a_3 Thu + a_4 Fri + a_5 GFC,

where Mon through Fri correspond to a set of dummy variables (e.g., Mon = 1 for sessions on Monday, and 0 otherwise), and GFC is a dummy variable for the sessions with guanfacine (GFC = 1 for the sessions with guanfacine, and 0 otherwise). Statistical significance for the effect of guanfacine was given by a t-test on its regression coefficient (null hypothesis: a_5 = 0, p<0.05).
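The day-of-week regression can be sketched with ordinary least squares on dummy variables. The two-week schedule and the coefficient values below are hypothetical; note that the GFC effect is identifiable only because the guanfacine day varies across weeks, as it did in the actual schedule.

```python
import numpy as np

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def design_row(day, guanfacine):
    """One session's regressors: five day-of-week dummies plus the GFC dummy."""
    return [1.0 if day == d else 0.0 for d in DAYS] + [1.0 if guanfacine else 0.0]

# Two hypothetical weeks: guanfacine on Tuesday, then on Thursday.
sessions = [(d, d == "Tue") for d in DAYS] + [(d, d == "Thu") for d in DAYS]
X = np.array([design_row(day, gfc) for day, gfc in sessions])

# Simulate weekly-demeaned parameter values from known coefficients,
# then recover them by least squares.
true_coef = np.array([0.01, 0.02, -0.01, 0.00, 0.03, -0.05])  # a_0..a_4, a_5 (GFC)
y = X @ true_coef
est_coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# est_coef[5] estimates the guanfacine effect a_5
```

In this noise-free sketch the recovery is exact; with real session data the t-test on est_coef[5] would assess whether the guanfacine effect survives after removing any systematic day-of-week trends.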

Results

Effects of reward delay and certainty

During the intertemporal gambling task used in the present study, the animals faced a choice between small and large rewards while their delays were systematically manipulated. The animals always received their chosen reward when they chose the small reward or when they chose the large reward in certain trials. By contrast, the reward was delivered stochastically with a probability of 0.5 when the animals chose the large reward in uncertain trials. Therefore, the expected value of the large reward in uncertain trials was reduced by half compared to that of the large reward in certain trials. To the extent that the animal’s choice was influenced by the expected value of the reward available from each choice, the animals were expected to choose the small reward more frequently in uncertain trials, since other aspects of reward, such as delay and magnitude, were matched for certain and uncertain trials. Indeed, we found that the animals chose the small reward much more frequently in uncertain trials (Fig. 2A). Collapsed across all sessions and reward delays, regardless of whether the animals received guanfacine or saline, the percentage of trials in which the animals chose the small reward was 53.2% and 44.0% in certain trials for monkeys J and M. This percentage increased significantly to 66.4% and 75.9% in uncertain trials for monkeys J and M (z-test, p<10^−100). In particular, when the reward delays were equal for small and large rewards (0 or 3 s), the frequency of choosing the small reward was very low in both animals during certain trials (5.2% and 5.4% for monkeys J and M, respectively), but significantly increased when the large reward was uncertain (29.4% and 62.6% for monkeys J and M; z-test, p<10^−100, for both animals).

Fig. 2.

Fig. 2

Effects of reward delay and certainty. A. Overall probability of choosing the small-reward target, p(TS), in certain (C) and uncertain (UC) trials. B. Probability of choosing small reward is plotted as a function of delay for the small reward (black, 0 s; gray, 3 s) and large reward (abscissa), as well as the certainty of the large reward (filled, certain; empty, uncertain). Lines correspond to the predictions of the discounted expected value model (continuous, certain; dashed, uncertain). TS, small-reward target; TL, large-reward target.

Consistent with the findings from previous studies on intertemporal choice in monkeys, we also found that the animal’s preference for a particular reward decreased as its delay increased (Fig. 2B). For example, when both small reward and large reward were available immediately without any delays, animals almost never chose the small reward in certain trials (6.0% and 4.4% of the trials for monkeys J and M, respectively). By contrast, when the delay for the large reward increased to 9 s, both animals chose the small reward in the majority of the trials (99.8% and 97.1% of the trials for monkeys J and M, respectively). Animals also chose the small reward significantly more frequently when the delay for the large uncertain reward was 9 s (99.9% and 99.7% for monkeys J and M), compared to when both rewards were available without any delays in uncertain trials (28.2% and 67.3% for monkeys J and M).

Effects of guanfacine on choice between immediate and delayed rewards

To test whether and how the animal’s preference for delayed and uncertain reward was differentially affected by guanfacine, we compared the animal’s choice behavior during the sessions in which the animal received guanfacine to the behavior during the control sessions in which the animal received saline injections. To minimize the possibility of confounding the effect of guanfacine with any systematic changes in the animal’s performance throughout the week, we used the sessions one day before and one day after the animal received guanfacine as the control conditions in the following analyses. However, none of the effects described below changed qualitatively when we included the results from all sessions without guanfacine in the control data.

The average number of trials completed by the animals in each daily session did not vary significantly according to whether the animals received guanfacine or saline (paired t-test, p>0.2). This finding suggests that guanfacine did not have a major effect on the animal’s motivation to perform the task. Nevertheless, in one of the animals (monkey J), guanfacine significantly increased the proportion of trials that were aborted as a result of breaking the fixation on the central target prematurely before the end of the cue period (Table 2). The task used in the present study was not a reaction time task, since the animal was required to produce its behavioral response only after a fixed 1-s cue period. Therefore, the reaction times, defined as the interval between central fixation offset and saccade onset, were relatively short. However, the average reaction times were not significantly affected by the administration of guanfacine (Table 2). Importantly, guanfacine did not produce any dramatic effects on how the animal’s choice was affected by the delay and uncertainty of expected reward. Similar to the choice behavior during the control condition, the animals were more likely to choose the smaller reward when it was immediately available without any delay, as the delay of the large reward increased, and when the large reward became uncertain (Fig. 3B and 3D).

Table 2.

Effects of guanfacine on fixation errors and reaction times (mean ± SEM).

Monkeys Error rates (%) Reaction time (ms)
J control 13.0 ± 0.8 114.1 ± 1.9
GFC 26.8 ± 3.1 * 119.8 ± 3.8
M control 22.1 ± 3.9 119.9 ± 3.5
GFC 20.3 ± 8.4 118.1 ± 6.5
*, p<0.05 (paired t-test).

Fig. 3.

Fig. 3

Effects of guanfacine on the probability of choosing small reward. A and C. The probability of choosing the small-reward target, p(TS), in control (Con) and guanfacine (GFC) conditions, shown separately for certain and uncertain trials in monkeys J (A) and M (C). B and D. Probability of choosing the small reward during the control sessions and those with guanfacine (GFC) is plotted as a function of reward delays. Same format as in Fig. 2.

Nevertheless, closer examination revealed that guanfacine produced robust and consistent effects on the animal’s choice behavior during the intertemporal gambling task. For example, both animals were less likely to choose the small reward in certain trials when treated with guanfacine than during the control conditions immediately before and after the session with guanfacine injection. The overall percentage of certain trials in which the animals chose the small reward was 55.1% and 46.2% for monkeys J and M during the control sessions, respectively, and decreased significantly to 47.9% and 39.5% with guanfacine (paired t-test, p<0.001). By contrast, the proportion of uncertain trials in which the animals chose the small reward did not change significantly when the animals were treated with guanfacine (Fig. 3A and 3C). During the control condition, the percentage of uncertain trials in which the animals chose the small reward was 66.6% and 77.3% for monkeys J and M, whereas the corresponding values were 66.1% and 73.5% with guanfacine (paired t-test, p>0.1). The effect of guanfacine on the preference for the small reward was significantly larger in certain trials than in uncertain trials (guanfacine × certainty interaction in 2-way repeated measures ANOVA, p<0.05 for both animals).

We further examined the effect of guanfacine on the animal’s choice in certain trials by plotting the probability of choosing the small reward as a function of delay for small and large rewards (Fig. 3). The results show that the probability of choosing the small reward was affected only slightly when the delays for the small and large rewards were the same. For example, guanfacine had little effect on the animal’s choice in certain trials when both the small and large rewards were delivered without any delays. When both rewards were available without any delays during uncertain trials, monkeys J and M chose the small reward in 26.8% and 69.3% of the trials with saline, respectively, and these percentages increased to 35.2% and 73.6% with guanfacine. However, this difference was statistically significant only in monkey J (z-test, p<0.05). By contrast, guanfacine tended to have a bigger effect when there was a conflict between the animal’s preferences for the large reward and the more immediate reward. For example, when the delays for the small and large rewards were 3 s and 9 s, monkey J chose the small reward in 94.2% and 73.8% of the trials with saline and guanfacine, respectively, while monkey M chose the small reward in 83.9% and 67.4% of the trials with saline and guanfacine (z-test, p<10^−5 for both cases). In other words, the animals were more likely to wait for 9 s and receive the larger reward instead of a smaller and more immediate reward when they were treated with guanfacine.

Effects of guanfacine on time and risk preference

Presumably, the animal’s choice in certain trials was largely determined by how steeply the animals discounted the value of reward according to its delay, namely, the animal’s time preference. By contrast, the choice in uncertain trials would be influenced not only by the animal’s time preference, but also by the animal’s risk preference, namely, how the subjective value of reward was affected by its uncertainty. The fact that guanfacine reduced the animal’s tendency to choose the small reward in certain trials suggests that guanfacine might reduce the extent to which the subjective value of the large reward decreased with its delay. By contrast, it is more difficult to understand intuitively the effect of guanfacine on the animal’s time preference and/or risk preference in uncertain trials, since the effects of reward delay and uncertainty on the animal’s choice are influenced by non-linearities in the discount function, weighting function, and logistic transformation relating values and choices. For example, it is difficult to determine whether any changes in the probability of choosing a small immediate reward during uncertain trials arise from changes in time preference, risk preference, or both. Therefore, to test quantitatively how guanfacine influenced the animal’s time and risk preference, we applied the discounted expected value (DEV) model described in the Methods. In this model, the value of the k parameter determines how steeply the subjective value of reward decreases with its delay. The half-life of the reward value corresponds to the inverse of k, and therefore, a small value of k indicates that the animal highly valued the delayed reward. We also modeled the animal’s risk preference with the parameter w, which is the subjective weight given to the uncertain reward. If the animal maximizes the expected value, w should be 0.5, since this is the true probability of the large reward in uncertain trials. Accordingly, w larger (smaller) than 0.5 indicates that the animal is risk-seeking (risk-averse).

We found that the value of k parameter decreased significantly when the animal was treated with guanfacine (Fig. 4). For the daily sessions immediately before and after the session in which the animal received guanfacine, the average value of k parameter was 0.207 and 0.127 for monkeys J and M, whereas this value decreased to 0.146 and 0.101, respectively, for the sessions with guanfacine. This difference was significant for both animals (paired t-test, p<0.005). Moreover, the value of k parameter was not significantly different for the two sessions that preceded and followed the administration of guanfacine (paired t-test, p>0.6), suggesting that the effect of guanfacine did not persist for multiple days. The effect of guanfacine on the k parameter was significant, even when we included the data from all the sessions without guanfacine (p<0.05). In addition, the effect of guanfacine on the value of k parameter was still significant even after we controlled for any potential confounding effects associated with daily changes in the animal’s behaviors using a regression model (see Methods). In addition, the value of k parameter was significantly smaller even when the hyperbolic discount function was fit separately to the data collected in certain trials (paired t-test, p<0.01).

Fig. 4.

Fig. 4

Effects of guanfacine on the parameters of discounted expected value model. Left, the value of k parameter estimated separately for each week. Right, the value of w parameter. GFC, sessions with guanfacine; Con-I, sessions immediately preceding and following the sessions with guanfacine; Con-II, all control sessions. Dotted horizontal lines correspond to the average values in each condition. *, p<0.05; **, p<0.01 (paired t-test).

By contrast, guanfacine did not produce any significant changes in the animal’s risk preference during the intertemporal gambling task. The average values of the w parameter during the control sessions, immediately preceding and following the day with the guanfacine injection, were 0.816 and 0.710 for monkeys J and M. Thus, both animals were strongly risk-seeking during the intertemporal gambling task used in this study. During the sessions in which the animal received guanfacine, the corresponding values were 0.805 and 0.722, which were not significantly different from the values in the control conditions for either animal (paired t-test, p>0.1). The effect of guanfacine was not significant, even when we included the results from all the sessions without guanfacine. We also tested the effect of guanfacine on the value of the β parameter, and found inconsistent effects for the two animals. For monkey J, the value of β decreased significantly from 4.71 for the control condition to 3.71 for the guanfacine condition (paired t-test, p<0.01). For monkey M, the value of β also decreased from 4.32 to 3.86, but this difference was not statistically significant (p>0.25). In addition, the value of β parameter was not significantly correlated with the value of k or w parameter (z-test, p>0.05; Table 3).

Table 3.

Correlation between model parameters.

      monkey J            monkey M
      k        w          k        w
w    −0.134    -         −0.261    -
β    −0.183   0.337      −0.009   0.084

None was statistically significant at the significance level of p=0.05.

We have also tested whether the rate of fixation errors and reaction times (Table 2) were correlated with the parameters of the DEV model estimated for individual sessions. For monkey M, neither the average error rate nor the reaction time was significantly correlated with any of the model parameters. For monkey J, both the error rate and the reaction time were significantly and negatively correlated with w and β parameters (r= −0.45 and −0.46, for w and β, respectively; p<0.05), but not with k parameter (r= −0.39, p>0.05). Therefore, although guanfacine significantly increased the rate of fixation errors in one animal, this effect is unlikely to account for its effect on delay discounting.

To test further whether the effect of guanfacine was specific to the k parameter, namely, delay discounting, we calculated the Bayesian information criterion (BIC) for a series of nested discounted expected utility models (Table 4). Compared to the null model, in which the animal’s choices were fit with the same parameters for the guanfacine and control conditions, the model with separate k parameters for these two conditions accounted for the animal’s choice behavior better. This was true even when separate k parameters for the control and guanfacine conditions were introduced in addition to separate parameters for w and/or β. For both animals, the data were best fit by the model that included separate parameters for both k and β (Table 4).

Table 4.

Effects of guanfacine on model parameters: summary of model comparison.

model   m   parameters                            BIC
                                                  monkey J    monkey M
0       3   k, w, β                               17,893.5    9,560.9
1       4   kCON, kGFC, w, β                      17,686.0    9,444.4
2       4   k, wCON, wGFC, β                      17,901.0    9,564.4
3       4   k, w, βCON, βGFC                      17,903.9    9,570.5
4       5   kCON, kGFC, wCON, wGFC, β             17,678.9    9,447.4
5       5   kCON, kGFC, w, βCON, βGFC             17,677.3*   9,420.8*
6       5   k, wCON, wGFC, βCON, βGFC             17,730.9    9,461.6
7       6   kCON, kGFC, wCON, wGFC, βCON, βGFC    17,687.5    9,438.4

m, number of free parameters; BIC, Bayesian information criterion. Asterisks mark the best (lowest-BIC) model for each animal. kCON (kGFC), wCON (wGFC), and βCON (βGFC) refer to the model parameters specific to the control (guanfacine) sessions.
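The BIC values compared in Table 4 trade goodness of fit against the number of free parameters m, so a model with a separate parameter for the guanfacine condition is preferred only if its fit improves enough to offset the penalty. A minimal sketch, using hypothetical log-likelihood and trial counts for illustration:

```python
import numpy as np

def bic(log_likelihood, m, n_trials):
    """Bayesian information criterion: lower is better.

    m is the number of free parameters; n_trials is the number of choices
    used to fit the model.
    """
    return -2.0 * log_likelihood + m * np.log(n_trials)

# Hypothetical values: a model with separate k parameters for control and
# guanfacine conditions (4 parameters) vs. the 3-parameter null model.
ll_null, ll_split = -8940.0, -8820.0
n = 20000
assert bic(ll_split, 4, n) < bic(ll_null, 3, n)  # split-k model wins here
```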

In summary, these results demonstrated that guanfacine diminished the effect of reward delay. In addition, the results of the econometric model showed that the animal’s risk preference was not affected by guanfacine. Nevertheless, guanfacine might have a subtle effect on the animal’s risk preference, since the probability of choosing the small and immediate reward was significantly affected by guanfacine only when the large reward was certain. Guanfacine might also increase the randomness of the animal’s choice by making it less dependent on the difference in the expected discounted utilities of rewards from the two alternative targets.

Discussion

State-dependent decision making with uncertainty and delay

Primates can integrate many different types of information during decision making to evaluate the overall desirability of the outcomes expected from alternative actions. In most cases, the animal’s choices are well accounted for by decision-making models assuming that all relevant information about the outcomes from each option is incorporated into a single scalar quantity, often referred to as subjective value or utility (Padoa-Schioppa 2011). For example, expected utility theory (von Neumann and Morgenstern 1944) and prospect theory (Kahneman and Tversky 1979) describe how the uncertainty and magnitude of reward can be combined, whereas discounted utility theory provides a framework in which the temporal delay of reward can be factored in (Frederick et al. 2002). However, these economic decision-making models are designed to account for choices made in a particular state, and are therefore limited in explaining how humans and animals may change their preferences as their metabolic states or environmental conditions fluctuate (Doya 2008). For example, in behavioral ecology, an energy budget refers to the difference between the animal’s energy requirements and its current reserve. When the energy budget is negative, namely, when the animal is in danger of starvation, the animal might take more risks and prefer a more variable reward option than when its energy budget is positive (Caraco 1980; Stephens 1981). This is referred to as the expected energy-budget rule (Stephens and Krebs 1986), and some studies have indeed found that the animal’s risk preference changes with its energy budget (Caraco et al. 1980; Pietras et al. 2003b). While empirical results have not always been consistent with the predictions of this rule (Bateson and Kacelnik 1997; Kacelnik and Bateson 1997), the animal’s metabolic state is likely to influence multiple aspects of decision making, including risk preference (Schuck-Paim et al. 2004; Nevai et al. 2007).
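The expected energy-budget rule described above can be stated as a simple decision rule. The following sketch uses illustrative numbers and ignores the sequential nature of real foraging decisions:

```python
def prefers_variable(requirement, constant_gain, variable_gains):
    """Expected energy-budget rule (Stephens 1981), as a sketch.

    An animal needing `requirement` energy units to survive should choose
    the variable option only when the constant option cannot meet the need
    (negative energy budget): risk seeking is then the only route to
    survival. All values are illustrative; real models track the animal's
    state over repeated choices.
    """
    if constant_gain >= requirement:
        return False  # positive budget: the safe option suffices
    # Negative budget: gamble only if some variable outcome could suffice.
    return any(g >= requirement for g in variable_gains)

# A forager needing 10 units: safe option yields 8, risky yields 4 or 12.
assert prefers_variable(10, 8, [4, 12]) is True
assert prefers_variable(10, 12, [4, 12]) is False
```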

The results of the present study suggest that the level of catecholamines, especially norepinephrine, may influence how reward delays are incorporated into the animal’s choices. This might therefore be a mechanism by which changes in the animal’s metabolic and other physiological conditions influence its time preference. In addition, we found that guanfacine increased the animal’s tendency to choose the delayed reward only when it was certain. This might suggest that the effects of guanfacine on the animal’s preferences for delayed and uncertain rewards were opposite and therefore canceled each other out when delayed rewards were uncertain. Thus, it is possible that guanfacine reduces the animal’s risk preference. By contrast, the results of the econometric modeling showed that the effect of guanfacine was specific to the animal’s time preference. It is difficult to resolve this discrepancy in the present study, since we did not vary the probability of uncertain rewards. The effect of guanfacine on the animal’s risk preference therefore warrants further investigation.

The subjective value of a delayed reward and delay discounting are closely related to risk preference, since future events are almost always uncertain. There is both theoretical and empirical support for the notion that delay discounting is due to the intrinsic uncertainty of delayed outcomes (Benzion et al. 1989; Keren and Roelofsma 1995; Anderhub et al. 2003). Nevertheless, factors other than the uncertainty of delayed rewards, such as the animal’s metabolic needs, are likely to influence the preference for delayed rewards. Indeed, during intertemporal choice, brain regions often referred to as the default network, which may be involved in the prospection of future events, increase their activation (Luhmann et al. 2008; Weber and Huettel 2008), suggesting that information about uncertain and delayed rewards may be handled differently in the brain. The results from the present study also suggest that the preferences for uncertain and delayed rewards might be mediated by at least partially separate neural mechanisms.

Neuromodulators in the prefrontal cortex and decision making

In order to select an action leading to the most desirable outcome, many different types of information need to be considered, and conflicts between different attributes of possible outcomes, such as certainty and immediacy, must often be resolved. Therefore, it is not surprising that different properties of expected rewards and aversive outcomes influence neural activity in many different brain areas, including both cortical and subcortical structures (Rangel et al. 2008; Kable and Glimcher 2009; Bromberg-Martin et al. 2010). Projections from many of these areas, such as the basal ganglia and posterior parietal cortex, converge on multiple areas of the prefrontal cortex, which can then distribute this information to premotor areas. Therefore, the prefrontal cortex may play a pivotal role in decision making (Lee et al. 2007; Wallis and Kennerley 2010). For example, the orbitofrontal cortex may evaluate the subjective values of various objects (Padoa-Schioppa 2011), whereas the dorsolateral prefrontal cortex may be more involved in selecting the most appropriate actions and monitoring their outcomes (Watanabe 1996; Barraclough et al. 2004; Seo and Lee 2009; Kim et al. 2009).

Traditionally, the function of the prefrontal cortex has been studied extensively using a variety of working memory tasks (Goldman-Rakic 1995). Its functions related to working memory depend on appropriate levels of catecholamines, such as dopamine and norepinephrine (Brozoski et al. 1979; Arnsten and Goldman-Rakic 1985; Robbins and Arnsten 2009; Arnsten 2010). Blocking dopamine D1 receptors or norepinephrine α-2A receptors impairs spatial working memory (Sawaguchi and Goldman-Rakic 1991; Li and Mei 1994). In addition, activation of D1 receptors influences persistent activity in the prefrontal cortex according to an inverted U-shaped dose-response function (Williams and Goldman-Rakic 1995; Vijayraghavan et al. 2007). Similarly, activation of α-2A receptors strengthens the recurrent connectivity in the prefrontal cortex, and thereby enhances the persistent activity of neurons that encode the information held in working memory (Wang et al. 2007, 2011).

Given that both reward-related signals and sensitivity to manipulation of dopamine and norepinephrine receptors are widespread in the prefrontal cortex, it is perhaps not surprising that drugs related to these neuromodulators often alter the animal’s preferences for risky and delayed rewards. Medications that increase both norepinephrine and dopamine in the prefrontal cortex can reduce the tendency to choose immediate rewards during intertemporal choice. For example, systemic administration of stimulants, such as amphetamine and methylphenidate, increases norepinephrine and dopamine release in the rat prefrontal cortex (Berridge et al. 2006) and increases the tendency to choose a large reward with a longer delay in both humans (de Wit et al. 2002; Pietras et al. 2003a) and rodents (Wade et al. 2000). Similarly, the selective norepinephrine uptake inhibitor, atomoxetine, which increases both dopamine and norepinephrine in rat prefrontal cortex (Bymaster et al. 2002), reduces the rat’s preference for immediate reward (Robinson et al. 2008). Nevertheless, the respective roles of norepinephrine vs. dopamine in mediating these behavioral effects are not yet fully understood. The level of dopamine rises in the rodent prefrontal cortex during intertemporal choice (Winstanley et al. 2006), yet depletion of dopamine in the orbitofrontal cortex reduces the animal’s preference for immediate reward (Kheramin et al. 2004). Thus, some aspects of dopamine actions in prefrontal cortex may be detrimental to impulse control. Alpha-2 agonists such as guanfacine can diminish stress-induced dopamine release in the rodent prefrontal cortex (Morrow et al. 2004), and can also mimic norepinephrine’s enhancing actions in prefrontal cortex (Ramos et al. 2006). Either of these actions may be helpful depending on arousal conditions.

Implications for the treatment of neuropsychiatric disorders

A strong preference for immediate reward that is harmful for the long-term wellbeing of the decision maker is a key problem in a number of psychiatric disorders, including substance abuse and ADHD (Schweitzer and Sulzer-Azaroff 1995; Madden et al. 1997; Vuchinich and Simpson 1998; Mitchell 1999; Kirby and Petry 2004; Reynolds 2006; Heerey et al. 2007; Scheres et al. 2010). Thus, drug treatments that enhance the ability to inhibit impulsive decision making may be especially useful in treating such symptoms. The results of the present study suggest that some of guanfacine’s therapeutic effects in treating ADHD may arise from helping patients reduce impulsive behavioral choices. These results also suggest that guanfacine may be helpful in ameliorating other conditions associated with impaired decision-making. For example, exposure to uncontrollable stress can weaken prefrontal regulation of behavior (Arnsten 2009), and increase the use of addictive substances (Sinha 2001), including smoking. Smokers who are trying to quit the habit are less able to resist reaching for a cigarette if they have just listened to a script describing the stressors in their own lives (McKee et al. 2011). Thus, an optimal neurochemical environment in the prefrontal cortex may strengthen top-down regulation of behavioral choices, and facilitate attainment of long-term goals.

Acknowledgments

This study was supported by the National Institutes of Health (RL1 DA024855). Dr. Arnsten and Yale University receive royalties from Shire Pharmaceuticals from the sales of extended release guanfacine (Intuniv™) for the treatment of pediatric ADHD and related disorders (royalties are not received for sales of immediate release guanfacine, which is approved for use in adults). Dr. Arnsten consults and engages in teaching for Shire, and receives research funding from Shire for the study of catecholamine mechanisms in prefrontal cortex. All the procedures used in this study comply with the current laws in the USA.

References

  1. Anderhub V, Güth W, Gneezy U, Sonsino D. On the interaction of risk and time preferences: an experimental study. German Econ Rev. 2003;2:239–253. doi: 10.1111/1468-0475.00036. [DOI] [Google Scholar]
  2. Arnsten AFT. Stress signalling pathways that impair prefrontal cortex structure and function. Nat Rev Neurosci. 2009;10:410–422. doi: 10.1038/nrn2648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Arnsten AFT. The use of α-2A adrenergic agonists for the treatment of attention-deficit/hyperactivity disorder. Expert Rev Neurother. 2010;10:1595–1605. doi: 10.1586/ern.10.133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Arnsten AF, Cai JX, Goldman-Rakic PS. The alpha-2 adrenergic agonist guanfacine improves memory in aged monkeys without sedative or hypotensive side effects: evidence for alpha-2 receptor subtypes. J Neurosci. 1988;8:4287–4298. doi: 10.1523/JNEUROSCI.08-11-04287.1988. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Arnsten AF, Goldman-Rakic PS. α2-adrenergic mechanisms in prefrontal cortex associated with cognitive decline in aged nonhuman primates. Science. 1985;230:1273–1276. doi: 10.1126/science.2999977. [DOI] [PubMed] [Google Scholar]
  6. Aston-Jones G, Cohen JD. An integrative theory of locus coeruleus-norepinephrine function: adaptive gain and optimal performance. Annu Rev Neurosci. 2005;28:403–450. doi: 10.1146/annurev.neuro.28.061604.135709. [DOI] [PubMed] [Google Scholar]
  7. Barraclough DJ, Conroy ML, Lee D. Prefrontal cortex and decision making in a mixed-strategy game. Nat Neurosci. 2004;7:404–410. doi: 10.1038/nn1209. [DOI] [PubMed] [Google Scholar]
  8. Bateson M, Kacelnik A. Starlings’ preferences for predictable and unpredictable delays to food. Anim Behav. 1997;53:1129–1142. doi: 10.1006/anbe.1996.0388. [DOI] [PubMed] [Google Scholar]
  9. Benzion U, Rapoport A, Yagil J. Discount rates inferred from decisions: an experimental study. Manag Sci. 1989;35:270–284. [Google Scholar]
  10. Bernoulli D. Exposition of a new theory on the measurement of risk. Econometrica. 1954;22:23–36. [Google Scholar]
  11. Berridge CW, Devilbiss DM, Andrzejewski ME, Arnsten AFT, Kelley AE, Schmeichel B, Hamilton C, Spencer RC. Methylphenidate preferentially increases catecholamine neurotransmission within the prefrontal cortex at low doses that enhance cognitive function. Biol Psychiatry. 2006;60:1111–1120. doi: 10.1016/j.biopsych.2006.04.022. [DOI] [PubMed] [Google Scholar]
  12. Bromberg-Martin ES, Matsumoto M, Hikosaka O. Dopamine in motivational control: rewarding, aversive, and alerting. Neuron. 2010;68:815–834. doi: 10.1016/j.neuron.2010.11.022. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Brozoski TJ, Brown RM, Rosvold HE, Goldman PS. Cognitive deficit caused by regional depletion of dopamine in prefrontal cortex of rhesus monkey. Science. 1979;205:929–932. doi: 10.1126/science.112679. [DOI] [PubMed] [Google Scholar]
  14. Bymaster FP, Katner JS, Nelson DL, Hemrick-Luecke SK, Threlkeld PG, Heiligenstein JH, Morin SM, Gehlert DR, Perry KW. Atomoxetine increases extracellular levels of norepinephrine and dopamine in prefrontal cortex of rat: a potential mechanism for efficacy in attention deficit/hyperactivity disorder. Neuropsychopharmacology. 2002;27:699–711. doi: 10.1016/S0893-133X(02)00346-9. [DOI] [PubMed] [Google Scholar]
  15. Cai X, Kim S, Lee D. Heterogeneous coding of temporally discounted values in the dorsal and ventral striatum during intertemporal choice. Neuron. 2011;69:170–182. doi: 10.1038/nn2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Caraco T. On foraging time allocation in a stochastic environment. Ecology. 1980;61:119–128. [Google Scholar]
  17. Caraco T, Martindale S, Whittam TS. An empirical demonstration of risk-sensitive foraging preferences. Anim Behav. 1980;28:820–830. [Google Scholar]
  18. Cardinal RN. Neural systems implicated in delayed and probabilistic reinforcement. Neural Netw. 2006;19:1277–1301. doi: 10.1016/j.neunet.2006.03.004. [DOI] [PubMed] [Google Scholar]
  19. Cardinal RN, Pennicott DR, Sugathapala CL, Robbins TW, Everitt BJ. Impulsive choice induced in rats by lesions of the nucleus accumbens core. Science. 2001;292:2499–2501. doi: 10.1126/science.1060818. [DOI] [PubMed] [Google Scholar]
  20. de Wit H, Enggasser JL, Richards JB. Acute administration of d-amphetamine decreases impulsivity in healthy volunteers. Neuropsychopharmacology. 2002;27:813–825. doi: 10.1016/S0893-133X(02)00343-3. [DOI] [PubMed] [Google Scholar]
  21. Doya K. Modulators of decision making. Nat Neurosci. 2008;11:410–416. doi: 10.1038/nn2077. [DOI] [PubMed] [Google Scholar]
  22. Durmer JS, Dinges DF. Neurocognitive consequences of sleep deprivation. Semin Neurol. 2005;25:117–129. doi: 10.1055/s-2005-867080. [DOI] [PubMed] [Google Scholar]
  23. Franowicz JS, Arnsten AFT. The α-2a noradrenergic agonist, guanfacine, improves delayed response performance in young adult rhesus monkeys. Psychopharmacology. 1998;136:8–14. doi: 10.1007/s002130050533. [DOI] [PubMed] [Google Scholar]
  24. Franowicz JS, Kessler LE, Borja CMD, Kobilka BK, Limbird LE, Arnsten AFT. Mutation of the α2A-adrenoceptor impairs working memory performance and annuls cognitive enhancement by guanfacine. J Neurosci. 2002;22:8771–8777. doi: 10.1523/JNEUROSCI.22-19-08771.2002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Frederick S, Loewenstein G, O’Donoghue T. Time discounting and time preference: a critical review. J Econ Lit. 2002;40:351–401. doi: 10.1257/002205102320161311. [DOI] [Google Scholar]
  26. Gamo NJ, Wang M, Arnsten AF. Methylphenidate and atomoxetine enhance prefrontal function through α2-adrenergic and dopamine D1 receptors. J Am Acad Child Adolesc Psychiatry. 2010;49:1011–1023. doi: 10.1016/j.jaac.2010.06.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Glimcher PW, Camerer CF, Fehr E, Poldrack RA. Neuroeconomics: decision making and the brain. Elsevier; London: 2009. [Google Scholar]
  28. Goldman-Rakic PS. Cellular basis of working memory. Neuron. 1995;14:477–485. doi: 10.1016/0896-6273(95)90304-6. [DOI] [PubMed] [Google Scholar]
  29. Green L, Myerson J. A discounting framework for choice with delayed and probabilistic rewards. Psychol Bull. 2004;130:769–792. doi: 10.1037/0033-2909.130.5.769. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Heerey EA, Robinson BM, McMahon RP, Gold JM. Delay discounting in schizophrenia. Cogn Neuropsychiatry. 2007;12:213–221. doi: 10.1080/13546800601005900. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Hwang J, Kim S, Lee D. Temporal discounting and inter-temporal choice in rhesus monkeys. Front Behav Neurosci. 2009;3:9. doi: 10.3389/neuro.08.009.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nat Neurosci. 2007;10:1625–1633. doi: 10.1038/nn2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Kable JW, Glimcher PW. The neurobiology of decision: consensus and controversy. Neuron. 2009;63:733–745. doi: 10.1016/j.neuron.2009.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Kacelnik A, Bateson M. Risk-sensitivity: crossroads for theories of decision-making. Trends Cogn Sci. 1997;1:304–309. doi: 10.1016/S1364-6613(97)01093-0. [DOI] [PubMed] [Google Scholar]
  35. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica. 1979;47:263–291. [Google Scholar]
  36. Kalenscher T, Pennartz CMA. Is a bird in the hand worth two in the future? The neuroeconomics of intertemporal decision-making. Prog Neurobiol. 2008;84:284–315. doi: 10.1016/j.pneurobio.2007.11.004. [DOI] [PubMed] [Google Scholar]
  37. Keren G, Roelofsma P. Immediacy and certainty in intertemporal choice. Organ Behav Hum Decis Process. 1995;63:287–297. [Google Scholar]
  38. Kheramin S, Body S, Ho MY, Velázquez-Martinez DN, Bradshaw CM, Szabadi E, Deakin JFW, Anderson IM. Role of the orbital prefrontal cortex in choice between delayed and uncertain reinforcers: a quantitative analysis. Behav Processes. 2003;64:239–250. doi: 10.1016/S0376-6357(03)00142-6. [DOI] [PubMed] [Google Scholar]
  39. Kheramin S, Body S, Ho MY, Velázquez-Martinez DN, Bradshaw CM, Szabadi E, Deakin JFW, Anderson IM. Effects of orbital prefrontal cortex dopamine depletion on inter-temporal choice: a quantitative analysis. Psychopharmacology. 2004;175:206–214. doi: 10.1007/s00213-004-1813-y. [DOI] [PubMed] [Google Scholar]
  40. Kim S, Hwang J, Lee D. Prefrontal coding of temporally discounted values during intertemporal choice. Neuron. 2008;59:161–172. doi: 10.1016/j.neuron.2008.05.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Kim S, Hwang J, Seo H, Lee D. Valuation of uncertain and delayed rewards in primate prefrontal cortex. Neural Netw. 2009;22:294–304. doi: 10.1016/j.neunet.2009.03.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Kirby KN, Petry NM. Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls. Addiction. 2004;99:461–471. doi: 10.1111/j.1360-0443.2003.00669.x. [DOI] [PubMed] [Google Scholar]
  43. Lau B, Glimcher PW. Value representations in the primate striatum during matching behavior. Neuron. 2008;58:451–463. doi: 10.1016/j.neuron.2008.02.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Lee D, Rushworth MFS, Walton ME, Watanabe M, Sakagami M. Functional specialization of the primate frontal cortex. J Neurosci. 2007;27:8170–8173. doi: 10.1523/JNEUROSCI.1561-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Li BM, Mei ZT. Delayed response deficit induced by local injection of the alpha-2 adrenergic antagonist yohimbine into the dorsolateral prefrontal cortex in young adult monkeys. Behav Neural Biol. 1994;62:134–139. doi: 10.1016/s0163-1047(05)80034-2. [DOI] [PubMed] [Google Scholar]
  46. Loewenstein G, Read D, Baumeister R. Time and decision. Russell Sage Foundation; New York: 2003. [Google Scholar]
  47. Luhmann CC, Chun MM, Yi DJ, Lee D, Wang XJ. Neural dissociation of delay and uncertainty in intertemporal choice. J Neurosci. 2008;28:14459–14466. doi: 10.1523/JNEUROSCI.5058-08.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Madden GJ, Petry NM, Badger GJ, Bickel WK. Impulsive and self-control choices in opioid-dependent patients and non-drug-using control patients: drug and monetary rewards. Exp Clin Psychopharmacol. 1997;5:256–262. doi: 10.1037//1064-1297.5.3.256. [DOI] [PubMed] [Google Scholar]
  49. Matsumoto M, Hikosaka O. Two types of dopamine neuron distinctly convey positive and negative motivational signals. Nature. 2009;459:837–841. doi: 10.1038/nature08028. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. McClure SM, Laibson DI, Loewenstein G, Cohen JD. Separate neural systems value immediate and delayed monetary rewards. Science. 2004;306:503–507. doi: 10.1126/science.1100907. [DOI] [PubMed] [Google Scholar]
  51. McKee SA, Sinha R, Weinberger AH, Sofuoglu M, Harrison ELR, Lavery M, Wanzer J. Stress decreases the ability to resist smoking and potentiates smoking intensity and reward. J Psychopharmacol. 2011;25:490–502. doi: 10.1177/0269881110376694. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Mitchell SH. Measures of impulsivity in cigarette smokers and non-smokers. Psychopharmacology. 1999;146:455–464. doi: 10.1007/pl00005491. [DOI] [PubMed] [Google Scholar]
  53. Mobini S, Body S, Ho MY, Bradshaw CM, Szabadi E, Deakin JFW, Anderson IM. Effects of lesions of the orbitofrontal cortex on sensitivity to delayed and probabilistic reinforcement. Psychopharmacology. 2002;160:290–298. doi: 10.1007/s00213-001-0983-0. [DOI] [PubMed] [Google Scholar]
  54. Morrow BA, George TP, Roth RH. Noradrenergic α-2 agonists have anxiolytic-like actions on stress-related behavior and mesoprefrontal dopamine biochemistry. Brain Res. 2004;1027:173–178. doi: 10.1016/j.brainres.2004.08.057. [DOI] [PubMed] [Google Scholar]
  55. Nevai AL, Waite TA, Passino KM. State-dependent choice and ecological rationality. J Theor Biol. 2007;247:471–479. doi: 10.1016/j.jtbi.2007.03.029. [DOI] [PubMed] [Google Scholar]
  56. Padoa-Schioppa C. Neurobiology of economic choice: a good-based model. Ann Rev Neurosci. 2011;34:333–359. doi: 10.1146/annurev-neuro-061010-113648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Paulus MP. Decision-making dysfunctions in psychiatry - altered homeostatic processing? Science. 2007;318:602–606. doi: 10.1126/science.1142997. [DOI] [PubMed] [Google Scholar]
  58. Pietras CJ, Cherek DR, Lane SD, Tcheremissine OV, Steinberg JL. Effects of methylphenidate on impulsive choice in adult humans. Psychopharmacology. 2003a;170:390–398. doi: 10.1007/s00213-003-1547-2. [DOI] [PubMed] [Google Scholar]
  59. Pietras CJ, Locey ML, Hackenberg TD. Human risky choice under temporal constraints: tests of an energy-budget model. J Exp Anal Behav. 2003b;80:59–75. doi: 10.1901/jeab.2003.80-59. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Pine A, Seymour B, Roiser JP, Bossaerts P, Friston KJ, Curran HV, Dolan RJ. Encoding of marginal utility across time in the human brain. J Neurosci. 2009;29:9575–9581. doi: 10.1523/JNEUROSCI.1126-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Ramos BP, Stark D, Verduzco L, van Dyck CH, Arnsten AFT. α2A-adrenoceptor stimulation improves prefrontal cortical regulation of behavior through inhibition of cAMP signaling in aging animals. Learn Mem. 2006;13:770–776. doi: 10.1101/lm.298006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  62. Rangel A, Camerer C, Montague PR. A framework for studying the neurobiology of value-based decision making. Nat Rev Neurosci. 2008;9:545–556. doi: 10.1038/nrn2357. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Reynolds B. Review of delay-discounting research with humans: relations to drug use and gambling. Behav Pharmacol. 2006;17:651–667. doi: 10.1097/FBP.0b013e3280115f99. [DOI] [PubMed] [Google Scholar]
  64. Robbins TW, Arnsten AF. The neuropsychopharmacology of fronto-executive function: monoaminergic modulation. Annu Rev Neurosci. 2009;32:267–287. doi: 10.1146/annurev.neuro.051508.135535. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Robinson ESJ, Eagle DM, Mar AC, Bari A, Banerjee G, Jiang X, Dalley JW, Robbins TW. Similar effects of the selective noradrenaline reuptake inhibitor atomoxetine on three distinct forms of impulsivity in the rat. Neuropsychopharmacology. 2008;33:1028–1037. doi: 10.1038/sj.npp.1301487. [DOI] [PubMed] [Google Scholar]
  66. Rudebeck PH, Walton ME, Smyth AN, Bannerman DM, Rushworth MFS. Separate neural pathways process different decision costs. Nat Neurosci. 2006;9:1161–1168. doi: 10.1038/nn1756. [DOI] [PubMed] [Google Scholar]
  67. Sallee F, McGough J, Wigal T, Donahue J, Lyne A, Biederman J For the SPD503 Study Group. Guanfacine extended release in children and adolescents with attention-deficit/hyperactivity disorder: a placebo-controlled trial. J Am Acad Child Adolesc Psychiatry. 2009;48:155–165. doi: 10.1097/CHI.0b013e318191769e. [DOI] [PubMed] [Google Scholar]
  68. Samejima K, Ueda Y, Doya K, Kimura M. Representation of action-specific reward values in the striatum. Science. 2005;310:1337–1340. doi: 10.1126/science.1115270. [DOI] [PubMed] [Google Scholar]
  69. Sawaguchi T, Goldman-Rakic PS. D1 dopamine receptors in prefrontal cortex: involvement in working memory. Science. 1991;251:947–950. doi: 10.1126/science.1825731. [DOI] [PubMed] [Google Scholar]
  70. Scheres A, Tontsch C, Thoeny AL, Kaczkurkin A. Temporal reward discounting in attention-deficit/hyperactivity disorder: the contribution of symptom domains, reward magnitude, and session length. Biol Psychiatry. 2010;67:641–648. doi: 10.1016/j.biopsych.2009.10.033. [DOI] [PubMed] [Google Scholar]
  71. Schuck-Paim C, Pompilio L, Kacelnik A. State-dependent decisions cause apparent violations of rationality in animal choice. PLoS Biol. 2004;2:e402. doi: 10.1371/journal.pbio.0020402. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Schultz W. Predictive reward signal of dopamine neurons. J Neurophysiol. 1998;80:1–27. doi: 10.1152/jn.1998.80.1.1. [DOI] [PubMed] [Google Scholar]
  73. Schweitzer J, Sulzer-Azaroff B. Self-control in boys with attention deficit hyperactivity disorder: effects of added stimulation and time. J Child Psychol Psychiatry. 1995;36:671–686. doi: 10.1111/j.1469-7610.1995.tb02321.x. [DOI] [PubMed] [Google Scholar]
  74. Sellitto M, Ciaramelli E, di Pellegrino G. Myopic discounting of future rewards after medial orbitofrontal damage in humans. J Neurosci. 2010;30:16429–16436. doi: 10.1523/jneurosci.2516-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Seo H, Lee D. Behavioral and neural changes after gains and losses of conditioned reinforcers. J Neurosci. 2009;29:3627–3641. doi: 10.1523/JNEUROSCI.4726-08.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Sinha R. How does stress increase risk of drug abuse and relapse? Psychopharmacology. 2001;158:343–359. doi: 10.1007/s002130100917. [DOI] [PubMed] [Google Scholar]
  77. Stephens DW. The logic of risk-sensitive foraging preferences. Anim Behav. 1981;29:628–629. [Google Scholar]
  78. Stephens DW, Krebs JR. Foraging theory. Princeton: Princeton Univ Press; 1986. [Google Scholar]
  79. Venkatraman V, Huettel SA, Chuah LYM, Payne JW, Chee MWL. Sleep deprivation biases the neural mechanisms underlying economic preferences. J Neurosci. 2011;31:3712–3718. doi: 10.1523/JNEUROSCI.4407-10.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Vijayraghavan S, Wang M, Birnbaum SG, Williams GV, Arnsten AFT. Inverted-U dopamine D1 receptor actions on prefrontal neurons engaged in working memory. Nat Neurosci. 2007;10:376–384. doi: 10.1038/nn1846. [DOI] [PubMed] [Google Scholar]
  81. von Neumann J, Morgenstern O. The theory of games and economic behavior. Princeton Univ Press; Princeton: 1944. [Google Scholar]
  82. Vuchinich RE, Simpson CA. Hyperbolic temporal discounting in social drinkers and problem drinkers. Exp Clin Psychopharmacol. 1998;6:292–305. doi: 10.1037//1064-1297.6.3.292. [DOI] [PubMed] [Google Scholar]
  83. Wade TR, de Wit H, Richards JB. Effects of dopaminergic drugs on delayed reward as a measure of impulsive behavior in rats. Psychopharmacology. 2000;150:90–101. doi: 10.1007/s002130000402. [DOI] [PubMed] [Google Scholar]
  84. Wallis JD, Kennerley SW. Heterogeneous reward signals in the prefrontal cortex. Curr Opin Neurobiol. 2010;20:191–198. doi: 10.1016/j.conb.2010.02.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  85. Wang M, Gamo NJ, Yang Y, Jin LE, Wang XJ, Laubach M, Mazer JA, Lee D, Arnsten AFT. Neuronal basis of age-related working memory decline. Nature. 2011;476:210–213. doi: 10.1038/nature10243. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Wang M, Ramos BP, Paspalas C, Shu Y, Simen A, Duque A, Vijayraghavan S, Brennan A, Nou E, Mazer JA, McCormick DA, Arnsten AFT. α2A-adrenoceptors strengthen working memory networks by inhibiting cAMP-HCN channel signaling in prefrontal cortex. Cell. 2007;129:397–410. doi: 10.1016/j.cell.2007.03.015. [DOI] [PubMed] [Google Scholar]
  87. Watanabe M. Reward expectancy in primate prefrontal neurons. Nature. 1996;382:629–632. doi: 10.1038/382629a0. [DOI] [PubMed] [Google Scholar]
  88. Weber BJ, Huettel SA. The neural substrates of probabilistic and intertemporal decision making. Brain Res. 2008;1234:104–115. doi: 10.1016/j.brainres.2008.07.105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Williams GV, Goldman-Rakic PS. Modulation of memory fields by dopamine D1 receptors in prefrontal cortex. Nature. 1995;376:572–575. doi: 10.1038/376572a0. [DOI] [PubMed] [Google Scholar]
  90. Winstanley CA, Theobald DEH, Cardinal RN, Robbins TW. Contrasting roles of basolateral amygdala and orbitofrontal cortex in impulsive choice. J Neurosci. 2004;24:4718–4722. doi: 10.1523/JNEUROSCI.5606-03.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Winstanley CA, Theobald DEH, Dalley JW, Cardinal RN, Robbins TW. Double dissociation between serotonergic and dopaminergic modulation of medial prefrontal and orbitofrontal cortex during a test of impulsive choice. Cereb Cortex. 2006;16:106–114. doi: 10.1093/cercor/bhi088. [DOI] [PubMed] [Google Scholar]
