Journal of the Experimental Analysis of Behavior
2011 Nov;96(3):363–385. doi: 10.1901/jeab.2011.96-363

A Mechanism for Reducing Delay Discounting by Altering Temporal Attention

Peter T Radu 1, Richard Yi 2, Warren K Bickel 3, James J Gross 1, Samuel M McClure 1
PMCID: PMC3213002  PMID: 22084496

Abstract

Rewards that are not immediately available are discounted compared to rewards that are immediately available. The more a person discounts a delayed reward, the more likely that person is to have a range of behavioral problems, including clinical disorders. This latter observation has motivated the search for interventions that reduce discounting. One surprisingly simple method to reduce discounting is an “explicit-zero” reframing that states default or null outcomes. Reframing a classical discounting choice as “something now but nothing later” versus “nothing now but more later” decreases discount rates. However, it is not clear how this “explicit-zero” framing intervention works. The present studies delineate and test two possible mechanisms to explain the phenomenon. One mechanism proposes that the explicit-zero framing creates the impression of an improving sequence, thereby enhancing the present value of the delayed reward. A second possible mechanism posits an increase in attention allocation to temporally distant reward representations. In four experiments, we distinguish between these two hypothesized mechanisms and conclude that the temporal attention hypothesis is superior for explaining our results. We propose a model of temporal attention whereby framing affects intertemporal preferences by modifying present bias.

Keywords: delay discounting, hidden-zero effect, temporal attention, reward sequences, priming, humans


We frequently face choices between outcomes that may be realized at different points in time. For example, we may use vacation time to take a needed day off work, but saving those hours would afford the luxury of an extended future vacation. When faced with such choices—those that pit an immediate reward against larger alternatives available after some delay—we often opt for immediate gratification, even when doing so conflicts with our longer-term interests. Procrastination is a clear example: despite a self-professed preference for the timely completion of a task, we often postpone it in favor of immediate distraction. Temptation hints at another common problem; sometimes, despite the full expectation of later regret, momentary hedonism beckons us to indulge in one more glass of wine or one more piece of cake. In general, we are prone to disregarding our longer-term interests in favor of immediate gratification, often despite our better intentions (Ainslie & Haslam, 1992; Rachlin, 2000).

Formally, these behaviors may be quantified by expressing how subjective value depends on delay. Mazur (1987) developed the highly successful hyperbolic discount function, expressed as

$$V_i = \frac{r_i}{1 + k\,t_i} \tag{1}$$

for rewards indexed by i with magnitudes and time to delivery given by ri and ti, respectively. Discount rates (k) from this function are frequently used as a measure of impulsivity (Ainslie, 1975); high discount rates correspond to a greater preference for immediacy. Critically, this function allows for the phenomenon of time-dependent preference reversals. If a larger, delayed reward is preferred to a smaller but more proximate reward when both are delayed, then preference reversals arise when the smaller, proximate reward becomes immediately available due to the passage of the intervening time (Ainslie & Haslam, 1992). To illustrate, we may prefer to skip dessert when the prospect is remote but impulsively succumb when the cake is brought to the table.
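The preference reversal permitted by Equation 1 can be made concrete with a short sketch. It evaluates a $5.00-versus-$8.20 choice with and without a common front-end delay; the discount rate and the 30-day delay are hypothetical values chosen only to produce a reversal, not parameters from the present studies.

```python
def hyperbolic_value(r, t, k):
    """Present value of a reward of magnitude r delivered after delay t
    (Mazur's hyperbolic discount function, Equation 1), discount rate k."""
    return r / (1.0 + k * t)

k = 0.05  # hypothetical discount rate (per day)

# Smaller-proximate (SP) vs. larger-distant (LD): $5.00 now vs. $8.20 in 26 days.
v_sp_now = hyperbolic_value(5.00, 0, k)    # 5.00
v_ld_now = hyperbolic_value(8.20, 26, k)   # about 3.57: the SP reward wins

# Add a common 30-day front-end delay to both options.
v_sp_later = hyperbolic_value(5.00, 30, k)  # 2.00
v_ld_later = hyperbolic_value(8.20, 56, k)  # about 2.16: preference reverses to LD
```

With an exponential discount function the ranking of the two options would never change as the common delay shrinks; the reversal is a signature of the hyperbolic form.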

For most people, selection of the immediate reward is an occasional and largely inconsequential behavior. However, for some, preference for proximate rewards is pathologically persistent. Indeed, a variety of clinical disorders, including substance abuse (Bickel & Marsch, 2001; Kirby, Petry, & Bickel, 1999; Reynolds, 2006), attention-deficit hyperactivity disorder (Barkley, Edwards, Laneri, Fletcher, & Metevia, 2001; Critchfield & Kollins, 2001), obesity (Epstein, Salvy, Carr, Dearing, & Bickel, 2010), and pathological gambling (Petry, 2001; Reynolds, 2006), are associated with a heightened preference for smaller but immediately available rewards. That high discounting characterizes so many disorders suggests it may function as a trans-disease process (Bickel & Mueller, 2009). Consequently, interventions that reduce discounting are of broad clinical interest.

In prescribing methods to ameliorate myopic intertemporal choice, dominant theories to date have largely posited some degree of intrapsychic conflict between immediate, pleasure-seeking impulses and future-oriented, regulatory or inhibitory control mechanisms (Ainslie & Haslam, 1992; Baumeister & Heatherton, 1996; Metcalfe & Mischel, 1999). Shifting preferences to favor long-term outcomes, such a view holds, requires either suppressing or ignoring one's latent desire for the immediate reward or down-regulating its value through cognitive reconstrual (Fujita & Han, 2009; Magen & Gross, 2007; Metcalfe & Mischel, 1999). However, emerging data offer interventions that may effectively preempt the need for such cognitively demanding techniques. For instance, discount rates can be reduced simply by reframing questions to increase the subjective value of larger, distant (LD) outcomes over smaller, proximate (SP) alternatives (Magen, Dweck, & Gross, 2008; Read, Frederick, Orsel, & Rahman, 2005). Little to no traditional “self-control” efforts may be required if the presentation format itself alters reward processing in favor of larger, later outcomes.

One such framing manipulation, which expresses immediate outcomes as “something now but nothing later” and deferred outcomes as “nothing now but more later,” has been labeled the “hidden-zero effect” by Magen et al. (2008). In their experiment, Magen and colleagues contrasted two conditions, whose format and terminology we also employ herein. In the hidden-zero condition, choices take the form, “$5.00 today, OR $8.20 in 26 days.” The explicit-zero condition presents choices as “$5.00 today and $0 in 26 days, OR $0 today and $8.20 in 26 days” and promotes reduced discounting. Loewenstein and Prelec (1993) conducted a similar experiment with nonmonetary rewards (dinners at fancy restaurants versus dinners at home) and arrived at the same conclusion: Explicitly stating default outcomes associated with each choice alters intertemporal preferences.

That reframing of intertemporal options can increase patience is of great potential interest to clinicians and theorists alike. As such, the aim of the present article is to elucidate the psychological mechanism underlying the hidden-zero effect. Below, we present two possible hypotheses, delineating the respective processes by which each would explain the effect of reduced discounting. For ease of reference, we refer throughout to the effect of reduced discounting as the “hidden-zero effect” and the inclusion of zero-dollar outcomes in discount questions as “explicit-zero framing.”

Improving Sequence Hypothesis

In general, people tend to prefer scenarios in which their prospects improve, rather than decline, as time progresses (Loewenstein & Prelec, 1993). This preference can be measured in various ways. For example, people prefer salary profiles that gradually increase with years of job experience (Loewenstein & Sicherman, 1991) and experiences that end on positive rather than sour notes (Ross & Simonson, 1991), independent of cumulative gains. In the case of the hidden-zero effect, the inclusion of null outcomes may similarly create the impression of a sequence. Specifically, “$5 today and $0 in 26 days” is a declining sequence, whereas “$0 today and $8.20 in 26 days” is improving. When discounting options are thus construed as successive events rather than isolated outcomes, the net result is greater relative preference for delayed options. Notably, Equation 1 cannot account for this phenomenon.

With this in mind, the value function in Equation 1 may be amended to include the mere impression of sequences (following Loewenstein & Prelec, 1993):

$$V_i = \frac{r_i}{1 + k\,|t_i|} + I_s\,\gamma\,(r_2 - r_1) \tag{2}$$

(See Appendix 1 for the derivation of this equation.) Here, I_s is an indicator variable that is 1 when each component reward ri is expressed as a sequence (explicit-zero condition) and 0 when it is not (hidden-zero condition). r1 and r2 are the rewards available at times t1 and t2, respectively (the choice is between r1 at time t1 and r2 at time t2, with t1 < t2 and either r1 or r2 equal to $0). We use the absolute value of time to account for the recent finding, addressed below, that k is the same when discounting rewards in the future (t > 0) and in the past (t < 0) (Yi, Gatchalian, & Bickel, 2006). Importantly, the free parameter γ indexes an individual's valuation of sequences: Positive γ implies a preference for improving sequences, whereas negative γ implies a preference for declining sequences. According to the improving sequence hypothesis, positive γ adds overall utility to the delayed option under an explicit-zero discounting frame.
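A minimal numerical sketch of Equation 2, using the $5.00/$8.20 reward pair from the text and the arbitrary parameter values later used for the Figure 1B simulation (k = .04, γ = .2). The function is our own illustration, not the fitting code used in the experiments.

```python
def sequence_value(r, t, r_first, r_second, k, gamma, explicit_zero):
    """Equation 2 (sketch): hyperbolic value of the nonzero component plus,
    under explicit-zero framing (I_s = 1), a sequence term gamma * (r2 - r1),
    where r_first / r_second are the temporally earlier / later amounts."""
    i_s = 1 if explicit_zero else 0
    return r / (1.0 + k * abs(t)) + i_s * gamma * (r_second - r_first)

k, gamma = 0.04, 0.2

# "$5.00 today and $0 in 26 days" (declining) vs.
# "$0 today and $8.20 in 26 days" (improving).
v_sp_hidden = sequence_value(5.00, 0, 5.00, 0.00, k, gamma, False)    # 5.00
v_ld_hidden = sequence_value(8.20, 26, 0.00, 8.20, k, gamma, False)   # about 4.02
v_sp_explicit = sequence_value(5.00, 0, 5.00, 0.00, k, gamma, True)   # 5.00 - 1.00
v_ld_explicit = sequence_value(8.20, 26, 0.00, 8.20, k, gamma, True)  # about 4.02 + 1.64
```

With γ > 0, the sequence term penalizes the declining SP option and rewards the improving LD option, so explicit-zero framing flips the preference toward the delayed reward.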

Temporal Attention Hypothesis

Recent data have challenged traditional conceptions of discounting, suggesting that it may reflect time perspective rather than traditional notions of self-control (Bickel, Kowal, & Gatchalian, 2006; Ebert & Prelec, 2007; Zauberman, Kim, Malkoc, & Bettman, 2009). Consider the case of substance addiction, a powerful example of suboptimal intertemporal choice. One study asked heroin-dependent individuals and matched controls to complete a story beginning, “After awakening, Bill began to think about his future. In general, he expected to…” (Petry, Bickel, & Arnett, 1998). Of interest was not the subject matter of each participant's response, but rather the time frame in which it was set: Whereas controls projected stories an average of 4.7 years into the future, heroin addicts considered futures of only 9 days. With future time perspectives this truncated, it is perhaps no wonder that addicted individuals repeatedly ingest drugs despite the long-term legal, interpersonal, and economic hardships they portend—such delayed consequences may simply fall outside the restricted range of their temporal attention (Bickel et al., 2006).

Following this reasoning, future-minded reward responding may reflect not an exercise in delay of gratification per se, but rather a proficiency in episodic future thinking (cf. Peters & Büchel, 2010). Accumulating data suggest that the ability to self-project into the future in such a fashion relies on a neural system involving prefrontal, parietal, and mediotemporal sites; interestingly, this very network has also been linked to recalling the past (Buckner & Carroll, 2007). Shifting attentional resources from “now” to “not now”—that is, either the future or the past—seems to involve similar neural and psychological processes (Addis, Wong, & Schacter, 2007; Okuda et al., 2003), so much so that past memory and future projection have been called “two sides of the same coin, two mutually complementary aspects of temporal integration” (Fuster, 1989). Indeed, just as the consideration of future outcomes is inconsistent with respect to time—hyperbolic discounting yields steep discounting for the near future and progressively less discounting in the far future—so, too, is the process of memory decay inconsistent across the past. Up to 50% of an event is likely to be forgotten within 20 min, and up to 75% forgotten after 24 hr, yet the remainder decays much more slowly (Baddeley, 1990; Hammersley, 1994). Given this similarity, studying the valuation of past rewards may help elucidate the mechanisms underlying intertemporal choice in general.

Recent work has begun examining current preferences for past rewards in both healthy and clinical populations (Bickel, Yi, Kowal, & Gatchalian, 2008; Dixon & Holton, 2009; Yi et al., 2006). In this paradigm, participants are asked to rate satisfaction for rewards just received (SP past rewards; e.g., “$5 one hour ago”) compared with larger but temporally distant alternatives (LD past rewards; e.g., “$8.20 26 days ago”). Preferences for these choices are well fitted by a hyperbolic function, and notably, past and future reward discount rates (k) are correlated. Furthermore, cigarette smokers discount the past and future symmetrically and, in both cases, more than controls (Bickel et al., 2008).

Traditional explanations, which posit impulsive inabilities to delay gratification, cannot account for hyperbolic past discounting, since regulating tempting impulses is unnecessary for events that have already transpired. We posit that temporal attention contributes to both past and future outcomes and that, in either case, selection of the LD alternative depends in part on increased attention allocation away from “now.” Since natural inclination is attentional myopia for the present (as evidenced by preference reversals in hyperbolic discounting), the addition of a delayed $0 outcome to an immediate reward mitigates its value by placing it in the same timeframe as its delayed, larger alternative. This hypothesis, which we refer to as the temporal attention hypothesis, suggests that explicit-zero framing increases patience by emphasizing the unpleasant distant consequences associated with present responding.

To formalize the temporal attention hypothesis, we begin from the conceptual similarity to two-systems models of temporal discounting (Laibson, 1997; Loewenstein, 1996; McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007; McClure, Laibson, Loewenstein, & Cohen, 2004; Thaler & Shefrin, 1981). We recently proposed a simplified mathematical formulation as a weighted sum of two processes that differ in temporal horizon (McClure et al., 2007):

$$V = (1 - \omega')\,\beta^{|t|}\,r + \omega'\,\delta^{|t|}\,r \tag{3}$$

One system, described by the value function β^|t|, is myopic in temporal focus, sharply discounting the value of rewards that are not available immediately. The other system values rewards at all time points with a moderate discount rate; its value is given by δ^|t| (1 ≥ δ > β ≥ 0). The relative activity of these separate systems determines whether a smaller, proximate or larger, distant reward is selected (McClure et al., 2004, 2007). Crucially, the δ parameter has been correlated with activity in the frontal-parietal network associated with past and future self-projection (Buckner & Carroll, 2007; McClure et al., 2004). We therefore believe that it may reflect the allocation of attention to temporally distant (past or future) scenarios. Finally, the weighting term, ω′, captures the relative impact of each system in the overall valuation. If considering distant time perspectives is hypothesized to depend on δ, then expanding temporal attention amounts to enhancing the relative importance (ω′) of this process by some amount ε:

$$\omega' = \omega + I_s\,\varepsilon \tag{4}$$

As in Equation 2, I_s is an indicator variable that is 1 when zeros are explicit and 0 when they are hidden. Under this formulation, explicit-zero frames alter temporal attention by directly shifting processing in favor of the far-sighted attentional (δ) system.
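The effect of the weight shift in Equations 3 and 4 can be sketched as follows, using the arbitrary parameter values from the Figure 1C simulation (β = .85, δ = .998, ω = .5, ε = .1); the function is illustrative only.

```python
def two_system_value(r, t, beta, delta, omega, eps, explicit_zero):
    """Equations 3-4 (sketch): a weighted sum of a myopic (beta) system and a
    far-sighted (delta) system; explicit-zero framing (I_s = 1) shifts the
    weight omega' = omega + I_s * eps toward the delta system."""
    i_s = 1 if explicit_zero else 0
    w = omega + i_s * eps  # omega'
    t = abs(t)
    return (1.0 - w) * beta**t * r + w * delta**t * r

beta, delta, omega, eps = 0.85, 0.998, 0.5, 0.1

# SP option: $5.00 today (t = 0); LD option: $8.20 in 26 days.
v_sp = two_system_value(5.00, 0, beta, delta, omega, eps, False)
v_ld_hidden = two_system_value(8.20, 26, beta, delta, omega, eps, False)
v_ld_explicit = two_system_value(8.20, 26, beta, delta, omega, eps, True)
# At t = 0, beta**0 == delta**0 == 1, so the immediate reward is worth 5.00
# under either framing, while the delayed reward gains value when zeros are
# explicit.
```

Because the reweighting only matters at nonzero delays, the model predicts that explicit-zero framing selectively raises the subjective value of temporally distant rewards, past or future alike.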

The Present Experiments

The two hypotheses outlined above (the improving sequence hypothesis and the temporal attention hypothesis) can both account for the hidden-zero effect described by Magen and colleagues (2008). Future discounting alone in this format therefore cannot be employed to delineate a precise psychological mechanism, as both hypotheses predict increased selection of LD rewards when choices are framed with explicit zeros. As we detail below, however, the hypotheses make opposite predictions regarding the hidden-zero effect for past temporal discounting. We take advantage of this fact and, in four experiments, examine these hypotheses and their respective behavioral implications in greater detail. We conclude that the temporal attention hypothesis is superior for explaining our results.

In Experiments 1 and 2, we extend the hidden-zero effect to the discounting of past rewards, demonstrate its relatedness to the future hidden-zero effect, and argue that the phenomena cannot be explained by a preference for improving sequences. However, in evaluating the two hypotheses, a potential confound arises from our formulation of Equation 2. Although we specify the absolute value of time in the hyperbolic term, we do not simultaneously require that |t1| < |t2| in the past, but simply that t1 < t2. Essentially, Equation 2 assumes a unidirectional perception of time passage. (For simplicity, we always refer to sequences with respect to unidirectional time passage; therefore, “$0 an hour ago and $8.20 26 days ago” would be a past declining sequence.) While this assumption is consistent with other work in psychology (see Boroditsky, 2001), previous work offers no direct empirical basis for our formulation. Furthermore, our assumption on this point has direct implications for preferences among sequences of rewards in the past: If the absolute value of time were used in the sequence term of Equation 2, then we would expect preferences for improving sequences in the future but declining sequences in the past. In our formulation, preferences are for improving sequences in both the future and past. Our assumption is therefore an empirical question; we test it directly in Experiment 3 and demonstrate preferences for improving sequences in the past. This effect is opposite that of the past hidden-zero effect and thus provides further evidence against the improving sequence hypothesis. Finally, in Experiment 4, we demonstrate that priming the distant past—and thus drawing attention away from the “now”—increases preferences for temporally distant past rewards, providing additional support for the feasibility of the temporal attention hypothesis.

EXPERIMENT 1: THE PAST HIDDEN-ZERO EFFECT

The improving sequence and temporal attention hypotheses suggest different explanations for the reduction in discounting rates observed when questions are framed with explicit zeros. However, these hypotheses cannot be discriminated on the basis of future discounting alone, as both predict an increased preference for larger, distant (LD) over smaller, proximate (SP) rewards (see the “Future” panels of Figures 1B and 1C). To demonstrate this, note the pattern in Figure 1A: both hypotheses predict that adding explicit zeros to a future discounting choice (e.g., “[option i] $5.00 today and $0.00 in 26 days, OR [option ii] $0.00 today and $8.20 in 26 days”) should increase the subjective valuation of the larger, distant (LD) reward (thick line for option ii, an improving sequence) and decrease subjective valuation of the smaller, proximate (SP) reward (broken line for option i, a declining sequence). Therefore, in order to differentiate between the two hypotheses, we extend explicit-zero framings to the discounting of past rewards (Bickel et al., 2008; Yi et al., 2006). The two hypotheses make opposite predictions for past discounting (see Figure 1A and “Past” panels of Figures 1B and 1C), which we describe in turn below.

Fig 1. Hypotheses for Experiments 1 and 2. (A) Predictions of the improving sequence and temporal attention hypotheses for effects of explicit-zero framing on intertemporal reward valuation. (B) Model simulation for the improving sequence hypothesis (Equation 2), starting with arbitrary initial parameter values (k = .04, γ = .2, m = 10) and 100 iterations of the maximum likelihood estimation procedure. This model predicts a decrease in SP choices in the future explicit-zero condition but an increase in SP choices in the past explicit-zero condition. (C) Model simulation for the temporal attention hypothesis (Equation 3), starting with arbitrary initial parameter values (β = .85, δ = .998, ω = .5, ε = .1, m = 10) and 100 iterations of the maximum likelihood estimation procedure. This model predicts a decrease in explicit-zero SP choices in both past and future, with no difference in the size of the effect between past and future.

Improving Sequence Hypothesis

Extending Loewenstein and Prelec's (1993) formulation for valuing sequences (Equation 2) to the past suggests that the effect of explicit-zero framing should be opposite to the effect for future valuation. For example, see Figure 1A, which presents a choice between “(option iii) $5.00 one hour ago and $0.00 26 days ago, OR (iv) $0.00 one hour ago and $8.20 26 days ago.” In both instances the reward from 26 days ago (t1) is the earliest in a sequence of two rewards over time and therefore corresponds to r1 in Equation 2; the reward from one hour ago comes next in time (t2) and thus corresponds to r2.

The lynchpin of this hypothesis is Loewenstein and Prelec's (1993) finding that individuals prefer improving sequences (i.e., γ > 0). According to an objective, linear conception of time, option iii represents an improving temporal sequence, and option iv represents a declining sequence. Equation 2, therefore, predicts that the SP reward (option iii) should preferentially benefit from the explicit-zero framing, since it results in (r2 − r1) > 0, adding overall utility to the option (thick line for option iii). Conversely, the value of the LD reward (option iv) decreases with the explicit-zero framing, as in this case (r2 − r1) < 0 (broken line for option iv). Overall, a preference for improving sequences should increase preference for temporally near outcomes in the past under explicit-zero frames.

Temporal Attention Hypothesis

The temporal attention account of the hidden-zero effect posits reduced present bias with the inclusion of null (zero-dollar) outcomes. Accordingly, this hypothesis predicts that framing past options with explicit zeros will increase preferences for LD options in the past (thick line for option iv in Figure 1A) and decrease preference for SP past options (broken line for option iii). Notably, such a preference would represent a preference for declining sequences in the past, directly contradicting an improving sequence account of the hidden-zero effect.

Method

Participants

Twenty-seven undergraduates were recruited from an introductory psychology course at Stanford University and compensated with course credit. One participant was excluded from all analyses after expressing suspicion of the hypothesis. The remaining 26 participants (18 females, 8 males) ranged in age from 17 to 22 years (M = 18.8 years, SD = 0.88). One individual indicated a prior psychiatric diagnosis and current use of psychiatric medication. None indicated any prior history of substance use disorders.

Materials

All discounting questions were presented in randomized order on a computer, immediately following a set of on-screen instructions. Responses were recorded with keystrokes. The money amounts and their corresponding relative delays were taken directly from the set used by Magen et al. (2008); we altered only the direction of the temporal delays, such that they now projected into the past: SP options were presented as having occurred “one hour ago” (Yi et al., 2006), whereas LD options were presented as having occurred (for example) “26 days ago.” The full set of past discounting options is presented in Appendix 2.

Procedure

After providing consent, participants were led to the computer on which the task was programmed and were told by the experimenter to complete the task at their own pace. Participants were then presented with the following computerized instructions:

“This is a study about money preferences. You will be asked to choose between different sums of money. Please read the following instructions carefully. You may ask the experimenter questions at any time if anything is unclear. Take a moment to get comfortable in your chair. Please also take a moment to gently rest your hands on the keyboard so that your left index finger rests on the ‘F’ key, your right index finger rests on the ‘J’ key, and your thumbs rest on the spacebar. All set?

“For each money choice that follows, please indicate the option you WOULD PREFER TO HAVE RECEIVED, if the money had been available to you at that point in time. Press the ‘F’ key (your left index finger) if you would prefer the option on the left-hand side of the screen. Press the ‘J’ key (your right index finger) if you would prefer the option on the right-hand side of the screen.

“Let's take a moment to practice the task. Remember, press the ‘F’ key if you would prefer the option on the left side of the screen, and press the ‘J’ key if you would prefer the option on the right side of the screen. Ready to practice?”

Participants then made three practice choices (none of which had explicit zeros) to ensure familiarity with the choice-selection procedure. Finally, they read the following before moving on to the test phase of the experiment:

“You are now ready to begin the experiment. Again, for each of the following choices, please select the option you would prefer to have received. Please make your choices as thoughtfully as you can.”

We employed a within-subject design in which all participants completed 15 past hidden-zero questions and 15 past explicit-zero questions. These two blocks of 15 questions were counterbalanced in presentation order across participants, and the order of choices was randomized within each block. Reward pairs were presented individually on the screen; in accordance with common practice, the SP and LD options always appeared on the left and right sides of the screen, respectively.

Model Fitting

A simple way to summarize participant behavior is to count the number of smaller, proximate rewards chosen in each of the hidden- and explicit-zero conditions; these analyses are presented below. Nevertheless, there is a crucial limitation to this approach. Consider the possibility that explicit-zero framing does not lower discount rates per se, but merely increases randomness in participant responding. A decrease in the mean number of SP choices across the hidden- and explicit-zero conditions (toward a 50-50 split in responses) might then be misinterpreted as reduced discounting, when in fact it would merely reflect a higher degree of "noise," or random responding, to explicit-zero frames. Importantly, this possibility would not be detectable from a simple count of binary responses to each question.

Although an F test revealed no difference in the variance of responses across experimental conditions, F(25, 25) = 0.68, p > .34, we addressed this concern more rigorously by fitting Equation 2 to each participant's individual choices (from both conditions combined). We used a maximum likelihood procedure to estimate the free parameters of Equation 2 (k, γ, and m; see Equations 5 and 6 below) for each subject individually. For any value of k and γ, there is an estimated value of the SP option, V1, and of the LD option, V2. We assume that choices follow a softmax decision function in which the probability of selecting the smaller reward is

$$P_1 = \frac{e^{m V_1}}{e^{m V_1} + e^{m V_2}} \tag{5}$$

and

$$P_2 = 1 - P_1 \tag{6}$$

The likelihood of the observed sequence of choices is then given by the product of the probabilities for all choices and depends on k, γ, and m. We used a simplex procedure with 100 randomly selected initial parameter values to maximize this likelihood function. This procedure also allowed us to determine the significance of the fit for each subject using a likelihood ratio test, indicating good model fits relative to an intercept-only (binomial) model. We calculated the likelihood of the binomial model from each subject's observed number of SP choices alone. All subjects' model fits passed this likelihood ratio test: all χ2 (3) > 11.81, all ps < .009.
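The softmax choice rule and the likelihood-ratio comparison against an intercept-only binomial model can be sketched as follows. The subjective values and choices below are invented for illustration, and m is held fixed; the actual analysis estimated k, γ, and m jointly with a simplex search.

```python
import math

def p_small(v1, v2, m):
    """Softmax probability of choosing the SP option (Equation 5, sketch):
    equivalent to exp(m*v1) / (exp(m*v1) + exp(m*v2))."""
    return 1.0 / (1.0 + math.exp(-m * (v1 - v2)))

def log_likelihood(trials, m):
    """Log of the product of per-choice probabilities, taken in log space."""
    ll = 0.0
    for v1, v2, chose_small in trials:
        p = p_small(v1, v2, m)
        ll += math.log(p if chose_small else 1.0 - p)
    return ll

# Hypothetical subjective values (v1 = SP, v2 = LD) and observed choices.
trials = [(5.0, 3.0, True), (5.0, 4.0, True), (2.0, 4.0, False),
          (1.0, 4.0, False), (4.5, 4.0, True), (3.0, 4.0, False)]

ll_model = log_likelihood(trials, m=2.0)

# Intercept-only (binomial) comparison model: a constant choice probability
# equal to the observed proportion of SP choices.
n_small = sum(t[2] for t in trials)
p_hat = n_small / len(trials)
ll_binom = (n_small * math.log(p_hat)
            + (len(trials) - n_small) * math.log(1.0 - p_hat))

lr_stat = 2.0 * (ll_model - ll_binom)  # likelihood-ratio statistic
```

When choices track the value differences, the softmax model fits better than the binomial model, and the likelihood-ratio statistic is positive; in the analyses above this statistic was referred to a chi-square distribution to test goodness of fit.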

Results

We found no effects of demographic variables (age, gender, ethnicity) or psychiatric status on the primary outcome of interest, namely the difference in SP choices between the past hidden- and past explicit-zero conditions (all Fs < 1, all ps > .37). Hence, we do not consider demographics further.

Our aim was to determine whether explicitly stating zero-dollar outcomes alters preferences for past rewards. For the 26 participants retained for analyses, we found fewer SP choices in the past explicit-zero condition (M = 9.04) than in the past hidden-zero condition (M = 9.81), paired t(25) = 2.04, p = .05 (see Figure 2). The difference in choices across conditions did not vary as a function of block presentation order (Wilcoxon rank sum test; W = 75, p > .6). These results demonstrate that the hidden-zero effect indeed extends to past outcomes and contradict the improving sequence hypothesis as expressed in Equation 2.

Fig 2. Behavioral results from Experiment 1: identification of the past hidden-zero effect, with fewer SP choices in the explicit-zero relative to the hidden-zero condition (out of 15 options per condition; *p = .05, paired).

Recall from Equation 2 that the parameter summarizing the behavioral hidden-zero effect (i.e., the difference in discounting across explicit- and hidden-zero conditions) is γ, which captures the impact of improving sequences on valuation (and hence choice). When we fitted Equation 2 to the data, the estimated mean value of γ across participants was negative (M = −0.10, SD = 0.25, range = −0.54 to 0.35) and significantly different from zero, t(25) = −2.07, p < .05, implying a significant framing effect irrespective of response noise.

Discussion

Experiment 1 is the first demonstration of the hidden-zero effect for past discounting: Participants chose the larger, distant outcome more frequently when presented with the explicit-zero frame. We also found initial evidence against the improving sequence hypothesis: In our sample, participants preferred past sequences that declined through time (i.e., γ < 0). Importantly, a preference for declining sequences is normatively correct (assuming nonzero inflation or the possibility of a positive interest rate). However, as discussed, individuals tend to prefer future sequences, and impressions of sequences (i.e., explicit-zero frames), that improve through time. It is therefore important to test for a within-subject relationship between past and future hidden-zero effects: Evidence for simultaneous valuation of past declining sequences and future improving sequences would challenge the improving sequence hypothesis. In Experiment 2, we explore the relationship between the past and future hidden-zero effects and test the alternative temporal attention model.

EXPERIMENT 2: RELATIONSHIP OF PAST AND FUTURE HIDDEN-ZERO EFFECTS

Our next experiment extended the finding from Experiment 1 by replicating the original (future) hidden-zero effect and relating it to the discounting of past hidden- and explicit-zero rewards. We predicted that past and future hidden-zero effects would be positively correlated (in other words, that explicit-zero framing would increase the number of LD choices in both the past and the future). If a strong preference for improving sequences in the future were simultaneously associated with a strong preference for declining sequences in the past, we would have strong evidence against the improving sequence hypothesis.

In this experiment, we assessed our two competing hypotheses by fitting Equations 2 and 3 to the data. To facilitate interpretation of the results, we conducted simulations across past and future choices for the improving sequence hypothesis (Equation 2; Figure 1B) and the temporal attention hypothesis (Equation 3; Figure 1C). We used arbitrary parameter values for the models, as we were interested only in qualitative differences in model predictions. The number of SP choices shown in Figure 1 is the mean number expected for each condition. The improving sequence hypothesis (see Figures 1A and 1B) predicts an increased preference for SP options in the past explicit-zero condition but a decreased preference for SP options in the future explicit-zero condition (with γ > 0; the sign of both differences reverses with γ < 0). The temporal attention hypothesis (Figure 1C) predicts that explicit-zero frames will decrease preference for SP options equally in past and future choices.
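A small simulation, assuming our own sketch implementations of Equations 2 through 4 and the arbitrary Figure 1 parameter values (k = .04, γ = .2; β = .85, δ = .998, ω = .5, ε = .1), reproduces the qualitative divergence just described: the improving sequence model's framing effect reverses sign between future and past, whereas the temporal attention model shifts preference toward LD options in both tenses. A "one hour ago" proximate delay is approximated in days.

```python
def seq_value(r, t, r_first, r_second, explicit, k=0.04, gamma=0.2):
    # Equation 2 sketch: r_first / r_second are the temporally earlier / later
    # amounts in the option; the sequence term applies only when explicit.
    return r / (1.0 + k * abs(t)) + (gamma * (r_second - r_first) if explicit else 0.0)

def att_value(r, t, explicit, beta=0.85, delta=0.998, omega=0.5, eps=0.1):
    # Equations 3-4 sketch: explicit framing shifts weight toward the delta system.
    w = omega + (eps if explicit else 0.0)
    return (1.0 - w) * beta ** abs(t) * r + w * delta ** abs(t) * r

HOUR = 1.0 / 24.0  # "one hour ago", expressed in days

def shift(v_sp, v_ld):
    """Change in the SP option's value advantage when zeros become explicit."""
    adv = lambda explicit: v_sp(explicit) - v_ld(explicit)
    return adv(True) - adv(False)

# Improving sequence model (Equation 2): $5.00 vs. $8.20, future then past.
seq_future = shift(lambda e: seq_value(5.00, 0, 5.00, 0.00, e),
                   lambda e: seq_value(8.20, 26, 0.00, 8.20, e))
seq_past = shift(lambda e: seq_value(5.00, -HOUR, 0.00, 5.00, e),
                 lambda e: seq_value(8.20, -26, 8.20, 0.00, e))

# Temporal attention model (Equations 3-4), same choices.
att_future = shift(lambda e: att_value(5.00, 0, e),
                   lambda e: att_value(8.20, 26, e))
att_past = shift(lambda e: att_value(5.00, -HOUR, e),
                 lambda e: att_value(8.20, -26, e))

# Equation 2: the framing effect has opposite signs in future vs. past;
# Equations 3-4: the SP advantage shrinks in both tenses.
```

This mirrors the patterns in Figures 1B and 1C without the maximum likelihood machinery: only the sign of the framing effect matters for discriminating the hypotheses.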

Method

Participants

Participants were 47 undergraduates in an introductory psychology course at Stanford University who completed the experiment for course credit. These participants (28 females, 19 males) ranged in age from 17 to 22 years (M  =  18.8, SD  =  .95). None indicated a psychiatric diagnosis, although one participant reported currently taking psychiatric medication. None indicated any past history of substance use disorders.

Materials

The past discounting questions were identical to those described in Experiment 1 (see Appendix 2). For the future hidden-zero discounting questions, we used the set from Magen et al. (2008) (the full set is presented in Appendix 2). All money amounts were presented as hypothetical to create procedural symmetry across past and future conditions.

Procedure

Participants adhered to the same protocol and instructions as described in Experiment 1, completing 15 explicit-zero and 15 hidden-zero past discounting questions (blocked, counterbalanced, and randomized in order across participants). They then went on to complete blocks of 15 future hidden-zero and 15 future explicit-zero questions, again counterbalanced and randomized in order across participants. All participants completed past discounting questions first.

Model Fitting

The improving sequence model (Equation 2) was fitted twice (once for past and once for future choices) to each participant's data individually (hidden- and explicit-zero conditions combined), using the maximum likelihood estimation procedure described in Experiment 1. Doing so required dropping participants who made either all SP or all LD choices; this pattern of choices precludes accurate estimation of discounting parameters, suggesting that the task was not appropriately calibrated to determine these participants' true indifference points. We therefore dropped 8 participants from the following analyses.1 All remaining participants' model fits were submitted to a likelihood ratio test for goodness of fit; 1 additional participant was dropped from all analyses for failing to reach significance at the p  =  .05 level in both fits: past, χ2 (3)  =  7.46, p  =  .06; future, χ2 (3)  =  3.80, p  =  .28. Post hoc inspection of this participant's choices using the Kirby scoring procedure (Kirby et al., 1999) revealed seemingly random responding. All other model fits passed [all χ2 (3) > 7.95, all ps < .05]. This left a final sample of 38 participants for all reported model-based analyses.

To fit the temporal attention model (Equation 3), we again used a maximum likelihood process to estimate the free parameters ω, ε, β, and δ, and m (see Equation 7 below) for each participant individually (and separately for past and future). To allow for formal model comparisons, we retained the 38 participants for whom we also fitted Equation 2 (see above). We assumed that choices follow a softmax decision function (Equation 5) with

[Equation 7: graphic not reproduced.]

The likelihood of the observed choices is then given by the product of the probabilities for all choices and depends on ω, ε, β, δ, and m. We used a simplex procedure with 100 randomly selected initial parameter values to maximize this likelihood function. To determine goodness of fit, we submitted the fits to a likelihood ratio test relative to an intercept-only (binomial) model based on the number of SP choices alone. In general, results indicated that the model summarized behavior quite well, all χ2 (4) > 14.6, all ps < .006, for all but 1 participant, for whom the model fitted to past choices was only marginally significant, χ2 (4)  =  8.25, p  =  .083. Nonetheless, we retained this participant for the analyses below in order to directly compare the fits for Equations 2 and 3.
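As a concrete illustration, the fitting procedure described above (a simplex search from many random starting points, followed by a likelihood ratio test against an intercept-only binomial model) can be sketched as follows. This is a schematic reconstruction, not the authors' code: the value function entering the softmax (Equations 3, 5, and 7) is left abstract, and all function names are ours.

```python
# Schematic reconstruction of the fitting pipeline (not the authors' code).
# `neg_log_likelihood` must return the negative log likelihood of one
# participant's choices under the model being fitted; its exact form
# (Equations 3, 5, and 7 in the paper) is left abstract here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

def fit_with_restarts(neg_log_likelihood, n_params, n_restarts=100):
    """Simplex (Nelder-Mead) search from randomly selected initial
    parameter values, keeping the best solution found."""
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(-1.0, 1.0, size=n_params)
        res = minimize(neg_log_likelihood, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

def lr_test_vs_binomial(model_nll, n_sp, n_trials, df):
    """Likelihood ratio test against an intercept-only binomial model
    based on the overall number of SP choices. Assumes 0 < n_sp < n_trials
    (participants choosing all SP or all LD were dropped)."""
    p_hat = n_sp / n_trials
    ll_null = n_sp * np.log(p_hat) + (n_trials - n_sp) * np.log(1.0 - p_hat)
    stat = 2.0 * (-model_nll - ll_null)  # 2 * (ll_model - ll_null)
    return stat, chi2.sf(stat, df)
```

The degrees of freedom follow the paper's tests: a five-parameter model compared against the one-parameter binomial null yields df = 4.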

Results

Demographics

As in Experiment 1, we found no effects of demographics or psychiatric status (all ps > .18) on either the past (all Fs < 1) or future (all Fs < 2) hidden-zero effect, and so we do not consider these variables further.

Behavioral Demonstration of the Hidden Zero Effect

Replicating our result from Experiment 1, we found that the past hidden-zero effect was highly significant, t(46)  =  4.60, p < .00004, paired; participants (N  =  47) made fewer smaller, proximate choices when past options were framed with explicit zeros. There was a difference across conditions for the future choices as well, t(46)  =  3.05, p < .004, paired, replicating the original finding of Magen and colleagues (2008). These findings are presented in Figure 3A. As with Experiment 1, order of block presentation had no effect in either the past, t(42.27)  =  −0.96, p > .3, or future condition, t(43.55)  =  −0.13, p > .8, and the variance in responses was homogeneous across hidden- and explicit-zero conditions: future, F(46,46)  =  1.20, p > .54; past, F(46,46)  =  1.03, p > .91.

Fig 3.

Fig 3

Behavioral results from Experiment 2. (A) Similarity of past and future hidden-zero effect (Expt. 2), resembling the predictions of the temporal attention hypothesis (see Figure 1C) (out of 15 options per condition; ** p < .00004, paired; * p < .004, paired). (B) Hidden-zero effect magnitude: in both past and future, Δ SP choices [(# hidden-zero SP choices) − (# explicit-zero SP choices)] is significantly greater than 0. Error bars  =  standard error of the mean. (C) Correlation between past and future hidden-zero effects, where Δ SP choices  =  (# hidden-zero SP choices) − (# explicit-zero SP choices).

Numerically, we summarized the hidden-zero effect by calculating delta scores for smaller, proximate choices ([number of SP choices, hidden condition] – [number of SP choices, explicit condition]) in both the past and future discounting sets. As shown in Figure 3B, these delta scores did not differ across past and future conditions, t(46)  =  1.56, p > .12, paired. Finally, these delta scores were modestly but positively correlated, r  =  .29, p  =  .05, across the past and future conditions, hinting at a common mechanism that is sensitive to manipulation (see Figure 3C).

Improving Sequence Model Fits

We first assessed the relationship between past and future discounting in general by fitting Equation 1. As distributions of k were skewed in both the past and future, we used a log-transform to normalize the fitted parameter values. Past and future values of k did not differ, t(37)  =  1.77, p > .08, and were significantly correlated, r  =  .65, p < .00001, (see Figure 4A), replicating the finding that the hyperbolic discount factor is related for past and future discounting (Bickel et al., 2008; Yi et al., 2006).
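For reference, Equation 1 is Mazur's (1987) hyperbolic form; a minimal sketch of this value function with |t| (so the same expression covers past and future delays) and of the log-transform applied before correlating k values might look as follows. The numeric k values below are illustrative placeholders, not fitted data.

```python
# Minimal sketch (ours, not the authors' code) of Equation 1: Mazur's (1987)
# hyperbolic discount function, with |t| so that past (t < 0) and future
# (t > 0) delays are treated symmetrically.
import numpy as np

def hyperbolic_value(amount, t, k):
    """Discounted value V = A / (1 + k * |t|)."""
    return amount / (1.0 + k * np.abs(t))

# The distributions of fitted k were skewed, so values were log-transformed
# before computing the past-future correlation. These k values are
# illustrative placeholders, not the study's estimates.
k_past = np.array([0.01, 0.05, 0.20])
k_future = np.array([0.02, 0.04, 0.30])
r = np.corrcoef(np.log(k_past), np.log(k_future))[0, 1]
```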

Fig 4.

Fig 4

Model-fitting results from Experiment 2. (A) Correlation between log-transformed k values in the past and future discounting conditions. (B) Mean estimate of γ for past discounting options only, future options only, and across the entire choice set (both past and future options). Error bars  =  standard error of the mean. (C) Mean parameter estimates from Equation 3 for past and future discounting choices (both hidden- and explicit-zero). Error bars (where appropriate)  =  standard error of the mean. (D) Mean estimates of ε for past discounting options only, future options only, and across the entire choice set (both past and future options). Error bars  =  standard error of the mean.

Figure 4B summarizes the γ fits for each condition. We found negative values of γ (M  =  −0.14, SD  =  0.19, range: −0.66 to 0.15) for past discounting; the mean was significantly different from 0, t(37)  =  −4.33, p  =  .0001. Additionally, we found positive values of γ (M  =  0.08, SD  =  0.20, range: −0.26 to 0.65) for future discounting; this mean was also significantly different from 0, t(37)  =  2.43, p  =  .02. Values of γ were significantly and negatively correlated across the past and future conditions, r  =  −.34, p < .04, converging with the behavioral correlation between the delta scores (see above) and demonstrating a relationship between past and future hidden-zero effects.

Importantly, these results imply that participants preferred declining sequences (γ < 0) in the past while concurrently preferring improving (γ > 0) future sequences. Behaviorally, this is demonstrated by a mirror-symmetric hidden-zero effect (see Figure 3A). We thus expected to find that, when fitting Equation 2 across past and future choices combined, γ would no longer express this behavioral symmetry (as positive future and negative past values of the parameter would cancel to zero). We fitted the model a third time to both choice sets concurrently (pooling all 60 future and past questions); all models passed a likelihood ratio test for goodness of fit relative to a binomial model, all χ2 (3) > 16.95, all ps < .0008. As expected, we found that γ was not significantly different from zero, t(37)  =  0.39, p  =  .7, (see Figure 4B). An estimated value of γ  =  0 implies that there should be no hidden-zero effect, further demonstrating the inadequacy of Equation 2 (and hence the improving sequence hypothesis) in describing the data.

Temporal Attention Model Fits

These model fits contrast with those for the temporal attention hypothesis, which we assessed by fitting Equation 3 to these participants' choices (two separate times, once for the past and once for the future options). We report results from nonparametric tests wherever distributions showed evidence of nonnormality.

Figure 4C compares parameter values across past and future choice sets. Paired-sample tests found no significant differences for β, δ, or the weighting parameter ω (β: t(37)  =  −0.94, p > .35; δ: W  =  274, p > .22; ω: t(37)  =  1.26, p > .21), indicating similarity in the temporal attention domain. In other words, the addition of explicit zeros affects temporal attention (as approximated by the model parameters) similarly for past and future outcomes.2

Figure 4D summarizes the ε fits for each condition. Estimates of ε were positive for both the future (M  =  0.13, SD  =  0.33, range: −0.78 to 0.99) and the past (M  =  0.18, SD  =  0.41, range: −1 to 1), significantly different from zero (future: t(37)  =  2.45, p < .02; past: t(37)  =  2.71, p  =  .01) and not different from one another, t(37)  =  0.68, p > .5, paired. Behaviorally, this reveals a similar effect of explicit-zero framing on both past and future discounting. However, the correlation between past and future ε, though positive, was not significant, r  =  .24, p  =  .14. This may mean that, while attentional allocation mechanisms operate similarly for distant past and future outcomes, the effect is idiosyncratic across individuals.

To determine whether the explicit-zero frame widened temporal attention across all choices (irrespective of past or future), we fitted Equation 3 to both choice sets (past and future combined) concurrently; this result is presented in Figure 4D. All models passed a likelihood ratio test for goodness of fit relative to a binomial model based on SP choices alone, all χ2 (4) > 24.27, all ps < .00008.3 Values of ε (M  =  0.18, SD  =  0.34) were significantly different from 0, t(37)  =  3.29, p < .003, indicating that increases in ε, and the corresponding effects on the relative weights placed on β and δ, widen temporal attention and decrease discounting for explicit-zero frames. Consistent with this, estimates of ε were positively correlated with delta scores for the number of SP choices across conditions in both the past (r  =  .40, p < .02) and future (r  =  .32, p < .05) choice sets. This demonstrates a scaling of ε with behavior: The fewer immediate (past or future) options one chooses in the explicit-zero condition, the more ε is affecting temporal attention.

Model Comparison

One critical difference between the improving sequence and temporal attention models is the number of free parameters. The temporal attention model explains the behavioral results better than does the improving sequence model, but a formal comparison is necessary to determine whether this improvement is significant relative to the increased number of model parameters. To determine this, we collapsed across all participants' data to provide a representative fit for Equations 2 and 3. The observed hidden-zero effect (i.e., the mean difference in SP choices, [#SP hidden − #SP explicit]) across past and future conditions was 1.1 for the 38 participants. For each model, we determined the likelihood of observing a mean effect of this magnitude by running a Monte Carlo simulation with 10,000 iterations, using the mean parameter values from the fits for combined past and future choices reported above. As expected, Equation 3 (Bayesian information criterion [BIC]  =  24.58) accounted for the hidden-zero effect better than Equation 2 (BIC  =  26.54). This indicates that the reduction in discounting promoted by explicit-zero frames is better explained as a shift in temporal attention than as a preference for improving sequences.
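The BIC used for this comparison penalizes each model's log likelihood by its number of free parameters; a minimal sketch follows, with made-up likelihoods (the parameter counts match the degrees of freedom reported above, but the numbers are illustrative, not the paper's fits).

```python
# Sketch of the Bayesian information criterion used for model comparison.
# Likelihood values below are made up for illustration; they are not the
# paper's fitted values.
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """BIC: lower is better; each extra parameter costs ln(n_obs)."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# A richer model wins only if its likelihood gain outweighs the penalty.
bic_eq2_like = bic(log_likelihood=-12.0, n_params=4, n_obs=60)  # simpler model
bic_eq3_like = bic(log_likelihood=-8.0, n_params=5, n_obs=60)   # richer model
```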

Discussion

The results of this experiment demonstrate a relationship between past and future hidden-zero discounting: Explicit-zero framing reduces SP choice preference for both future and past outcomes. Importantly, we found evidence for similarity in the temporal attention domain for both past and future rewards, consistent with recent evidence that consideration of past and future involves overlapping neural systems (Buckner & Carroll, 2007). The similarity of past and future hidden-zero effects suggests that discounting may be reduced through framing effects that preempt the need for cognitively demanding techniques such as temptation inhibition. After all, there is nothing to inhibit in the past (Bickel et al., 2008).

For a sequence account, these findings indicate that if one prefers improving sequences in the future, one also prefers declining sequences in the past. Because Equation 2 predicts a uniform preference for improving sequences across time, in both past and future (see Figure 1B), we conclude that the results do not support the improving sequence hypothesis but rather follow directly from the temporal attention hypothesis. However, a simple alternative explanation (to a change in temporal attention) is that individuals favor declining sequences in the past while concurrently favoring improving sequences in the future. In other words, our finding in Experiments 1 and 2 that γ < 0 in the past may actually reflect inherent preferences.

To see this, note that a central argument for the temporal attention hypothesis is the fact that past and future discounting are mirror-symmetric (Bickel et al., 2008; Yi et al., 2006). To reflect this fact, we amended Mazur's (1987) hyperbolic equation by using |t| in Equation 2. The sequence term, however, does not require that |t1| < |t2|, but simply that t1 < t2. For our formulation of Equation 2, we assumed a linear, unidirectional perception of time passage (see, for example, Boroditsky, 2001): time passage may be mapped onto a number line, with the origin (0) corresponding to “now,” negative numbers corresponding to the past, and positive numbers corresponding to the future. To return to our running example, “26 days ago” (−26, or t1) is indeed less than “now” (0, or t2) under this conception of time passage. This formulation predicts that improving sequences should be preferred in both past and future explicit-zero frames. Experiments 1 and 2 demonstrated that this prediction contradicts actual behavior.

Nevertheless, our stipulation that t1 < t2 remains open to empirical inquiry. Instead of unidirectional time passage, it may be that individuals demonstrate relative perception of past and future time passage (i.e., that time increases positively towards the distant future as well as towards the distant past). This would imply a preference for sequences that improve towards the distant future as well as the distant past, a pattern that would be consistent with the data in Experiment 2. Were this the case, the sequence term in Equation 2 would need to stipulate |t1| < |t2|, making Equation 2 an adequate summary of behavior. The aim of Experiment 3 is to explicitly explore this possibility—that is, we assessed whether individuals actually prefer sequences that improve relative to the distant past.

EXPERIMENT 3: PREFERENCE FOR PAST IMPROVING SEQUENCES

To test participant preferences for sequences of rewards in the past, we extended a paradigm employed by Loewenstein and Sicherman (1991), who asked workers to select between a range of salary profiles that increased, decreased, or remained constant with years of experience on the job. Given this scenario, individuals preferred increasing payment sequences in the future. We employed the same procedure to determine whether people prefer declining sequences in the past. Participants were told to imagine that, 6 years ago, they had won a $150,000 lottery, to be paid in yearly increments over the following 6 years (thus leading up to the present). They then ranked preferences for various payment sequences—some that improved from 6 years ago to last year, some that declined, and one that remained constant. If participants prefer sequences that increase from the distant to the more recent past—but nonetheless prefer the LD option in past explicit-zero discounting—an improving-sequence account can be ruled out as the explanatory mechanism for the hidden-zero effect.

Method

Participants

Participants were 123 introductory psychology students (from Stanford University) and community sample participants (from the San Francisco Bay Area). Thirty-two participants were dropped because an experimenter error in task programming rendered their data unusable. An additional 11 participants were dropped from analyses for providing incomplete data. This left a final sample of 80 participants (42 females, 38 males) who ranged in age from 18 to 52 years (M  =  20.9, SD  =  5.90). Six participants indicated a prior psychiatric diagnosis, 3 indicated that they were currently taking psychiatric medications, and 1 indicated a substance use disorder. As compensation, participants received $5 or course credit.

Materials

We presented participants with a slide on the computer screen containing seven bar graphs, adapted from Loewenstein and Sicherman (1991), that summarized yearly payment sequences over 6 years, leading up to “last year.” Three of these sequences declined towards the recent past (with slopes of varying steepness), three improved towards the recent past (again, with varying slopes), and one remained constant (flat slope). The graphs varied in yearly incremental differences between payment sequences, but each added to a total of $150,000. Each graph's component values were taken from Loewenstein and Sicherman (1991).

Procedure

After providing consent, participants were seated at the computer and told to work through the upcoming instructions and task at their own pace. They then read the following computerized instruction set:

“Imagine that five years ago, you won a lottery and were awarded a total of $150,000 to be paid in yearly increments over the past five years. This money was paid to you in a sequence of yearly increments that varied according to which of the following options you selected.

“On the following slide, you will see a series of graphs, labeled A-G, which depict the payment increments you were provided at the time you won.

“Please rank the following payment sequences according to how satisfied each of them would make you today, with the first being the sequence that makes you the most satisfied today, and the last being the sequence that makes you the least satisfied today.

“PLEASE RANK EVERY GRAPH. In other words, you should make an entry for ALL 7 of the following graphs.”

Once they had finished this task, they went on to complete Experiment 4.

Results

To assess participant preferences, we assigned each graph a within-participant rank score, ranging from 1 (least preferred) to 7 (most preferred), based on each participant's rankings. Using this metric, we were able to summarize how highly participants, on average, tended to rank each of the improving and declining past sequences (see Figure 5A). We report results from nonparametric tests because of the nonnormality of these rank distributions.

Fig 5.

Fig 5

(A) Mean rank score for each graph in Experiment 3 (1  =  lowest ranked; 7  =  highest ranked). (B) Mean aggregate rank score for all the improving and declining sequences in Experiment 3. Values derived from averaging the ranks of all graphs with increasing and decreasing slopes, respectively. (* p < .01, ** p < .001).

We examined whether mean rankings given to improving and declining sequences varied by gender, age, and psychiatric diagnosis. We found that females were slightly more likely than males to rank declining sequences as their least preferred, Kruskal-Wallis rank sum test, χ2 (1, N  =  80)  =  5.41, p  =  .02. As no prior research of which we are aware has reported gender effects on sequence preferences, we had no a priori reason to expect a difference, and so we do not discuss this result further. No other demographic variable influenced the sequence preferences (Kruskal-Wallis rank sum test; all ps > .05).

Results revealed a pattern of overwhelming rank preference for improving sequences. The most sharply improving sequence was ranked higher than the most sharply declining sequence (Wilcoxon-Mann-Whitney test; U  =  2271, p < .001) and the third most sharply declining sequence, U  =  2628, p < .05, and it showed a trend for a higher ranking than the second most sharply declining sequence, U  =  2712.5, p < .10. Furthermore, the second most sharply improving sequence was ranked more highly than all of the declining sequences, all Us < 2450, all ps < .01, as was the third most sharply improving sequence, all Us < 2298, all ps < .003. Finally, when the mean ranks of all three improving sequence graphs and all three declining sequence graphs were averaged into single vectors (see Figure 5B), the improving sequences (M  =  4.38, SD  =  1.73) were ranked more highly than the declining sequences (M  =  3.4, SD  =  1.79), U  =  2297, p < .001.
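Pairwise comparisons of this kind use the standard Wilcoxon-Mann-Whitney rank test; a brief sketch with made-up rank data (ours for illustration, not the study's ratings) is:

```python
# Sketch of a Wilcoxon-Mann-Whitney comparison of rank scores.
# The rank vectors below are hypothetical, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

ranks_improving = np.array([6, 5, 7, 6, 5, 7, 4, 6])  # hypothetical ranks
ranks_declining = np.array([2, 3, 1, 2, 4, 3, 2, 1])  # hypothetical ranks

# Two-sided test of whether the two groups of ranks differ
u_stat, p_value = mannwhitneyu(ranks_improving, ranks_declining,
                               alternative="two-sided")
```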

Interestingly, the uniform sequence received a high average rank (M  =  4.69, SD  =  1.48). Nevertheless, while it was ranked more highly than all of the declining sequences, all Us < 1951, all ps < .00002, it was not ranked differently from the improving sequence graphs, all Us > 2984, all ps > .4. Therefore, while individuals may not distinguish between improving and evenly distributed installments of past payments, they overwhelmingly prefer both to payment sequences that decline. In terms of the improving sequence model (Equation 2), this is clear evidence that γ should not be negative for past rewards.

Discussion

These results provide evidence that individuals prefer past sequences that improve (or, at the least, that do not change) as time progresses from the distant past towards the present. This rules out the hypothesis that declining sequences might be preferred in the past. Instead, individuals prefer to experience larger rewards in the more immediate past, despite the fact that this represents an economically irrational preference (it decreases the time over which large financial gains could have been invested and compounded). Rather, the still-tangible positive emotion associated with having recently received a larger reward (a “warm glow” of reward enjoyment) may drive this pattern of choices (see Ekman & Lundberg, 1971; Elster & Loewenstein, 1992). Notably, the high preference assigned to the uniform sequence is consistent with prior research (Frederick & Loewenstein, 2008), in which allocation framings were shown to increase preferences for equal distributions over time. Since our experiment asked participants to allocate payments from a $150,000 windfall over 6 years, we believe this framing likely accounts for the finding. Nevertheless, the crucial fact remains: Decreasing sequences were not endorsed by our participants.

Altogether, these data demonstrate that a linear, unidirectional conception of time—as is assumed in Equation 2—is justified and that, consequently, an improving-sequence based explanation does not account for the hidden-zero effect. Rather, we suggest that explicit-zero framings operate by biasing participants' attention towards the past and the future. To directly examine the feasibility of this attention-based model, we manipulated temporal attention in Experiment 4 by priming past events and subsequently assessing past discounting.

EXPERIMENT 4: ALTERING PAST DISCOUNTING WITH TEMPORAL PRIMING

Recent experiments by Zauberman and colleagues (2009) have demonstrated that priming of various durations of time decreases hyperbolic discounting. Participants who estimated the amount of time it would take to complete each of a set of activities (such as learning a new language or studying for a difficult exam) requested less money to delay the use of a hypothetical gift certificate than participants who guessed the calorie content of various food items. This demonstrates that certain attentional manipulations can alter discount rates, a finding that converges with the present temporal attention hypothesis.

Importantly, Zauberman and colleagues' (2009) manipulation focused on increasing attention to durations, hence altering how their participants construed the passage of time. In this experiment, we tested a priming manipulation more akin to the hidden-zero effect: namely, drawing attention away from the present to some other specific point in time. We hypothesized that drawing attention to specific events in one's past (and hence increasing ω′), would reduce preference for proximate past rewards. Because of the inherent uncertainty associated with predicting the occurrence of future life events, we focused solely on past outcomes (and past discounting), as uncertainty is known to affect discounting (e.g. Rachlin, Raineri, & Cross, 1991).

Method

Participants

We collected data from 123 participants (the cohort for Experiment 3). We dropped 12 from the following analyses (1 for stated failure to understand the priming manipulation, and 11 for providing incomplete data). This left a final sample of 111 (62 females, 49 males), who ranged in age from 18 to 52 years (M  =  20.4, SD  =  5.09). Of these 111 participants, 8 reported a prior psychiatric diagnosis, 4 reported currently taking psychiatric medication, and 1 reported a prior substance abuse disorder. Participants were compensated with either course credit or a $5 cash payment.

Materials

For the control condition, we borrowed the calorie estimation procedure directly from Zauberman and colleagues (2009). Participants were asked to estimate the calorie content for each of seven food items (one slice of a large one-topping pizza, a bowl of salad, a quarter-pound cheeseburger, one serving of chicken wings, a six-inch turkey sandwich with cheese, six pieces of California roll sushi, and one beef burrito). In the target condition, we presented participants with a series of seven common life events and asked them to indicate how long ago they had experienced them. These events were all presumed to have occurred at some memorable point in their past, but one whose exact time of occurrence would need to be effortfully recalled (“Last time you went to the zoo,” “Your first cavity,” “Last time you did your laundry,” “Last time you were sick and vomited,” “Opening your first bank account,” “Last time you got a haircut,” “First learning to ride a bike”). We opted for a distribution of events that, presumably, would have occurred at both relatively distant and relatively recent times in the past.

The subsequent discounting reward pairs were presented with E-Prime, with the SP past option and the LD past option always appearing on the left and right sides of the screen, respectively. We constructed five past discounting pairs (all with hidden zeros) that spanned a wide range of discount rates. SP options, all available “one hour ago,” ranged from $5.10 to $5.90. LD options (from $5.90 to $9.30) had been available at delays ranging from 4 to 94 days in the past. The full set is provided in Appendix 3.

Procedure

After providing consent, participants were led to the computer and were assigned, in alternating order, to one of two conditions. In the control condition, they were asked to estimate the calorie content for each of seven food items. They read the following computerized instructions:

“In this task, we ask you to consider the typical food items you would consume and estimate how many calories each of the food items would contain.

“Please think about each of the following food items and provide your most accurate estimate of the total number of calories each would contain.

“In your answers, please type in ‘calories’ after each of your estimates.”

In the target condition (temporal priming via time estimation), participants were asked to estimate, as accurately as they could remember, how long ago they had experienced or engaged in each of seven common events (see above). They were presented with the following computerized instructions:

“In this task, we ask you to think about several events and estimate, to the best you can remember, how long ago they happened to you.

“Please think about each of the following events and provide your best estimate of how long ago each of these events happened to you. Try your hardest to remember each event, and give your best estimate of how long ago it occurred.

“ROUND YOUR ANSWERS to the nearest unit of time from the following list: hours, days, weeks, months, or years. For example, one of your answers might be ‘5 years.’”

All target and control condition stimuli appeared one at a time on the screen, and their order of appearance was randomized across participants.

After making entries for the seven estimation prompts, participants then moved on to the critical phase of the experiment. They were presented with the following computerized instructions:

“The following questions are about money preferences. You will be presented with two sums of money, one on the left side of the screen, and one on the right side.

“For each pair of sums, please select the option you prefer more.

“Press the ‘F’ key if you prefer the option on the left-hand side of the screen.

“Press the ‘J’ key if you prefer the option on the right-hand side of the screen.”

All participants made five past discounting selections (all with hidden zeros), the presentation order of which was randomized across participants. Finally, they provided demographic information before being thanked, compensated, and dismissed.

Results

No demographic variable differed by experimental condition, all χ2 (1, N  =  111) < 3.30, all ps > .07, and so we do not consider these variables in our analyses.

Figure 6 summarizes our results. The mean number of SP choices in the time estimation (target) condition (n  =  56) was 2.18 (SD  =  1.72), significantly lower than in the calorie estimation (control) condition (n  =  55), where it was 2.85 (SD  =  1.60), t(109)  =  2.14, p < .04.

Fig 6.

Fig 6

Reduction in the mean number of past smaller, proximate (SP) choices after priming of past events. Error bars  =  standard error of the mean.

Discussion

The results indicate that individuals are less inclined to select temporally proximate past rewards when their attention is drawn away from the present through a priming manipulation. In this experiment, all past rewards were necessarily hypothetical so that no behavioral suppression of or cognitive distraction from tempting impulses was involved as participants made their discounting choices. We posit that the results were a direct result of a broadening of temporal attention following recollection of temporally distant past events; the data thus further support the feasibility of the temporal attention hypothesis.

GENERAL DISCUSSION

These studies examined two candidate hypotheses for the mechanism by which a simple manipulation, the hidden-zero effect, reduces temporal discounting. The extension of the hidden-zero effect to past discounting, in which participants preferred reward sequences that decreased through time, contradicts an account of the phenomenon that relies on preferences for improving sequences of rewards. Importantly, Experiment 3 explicitly rules out mirror-symmetric sequence preferences as an alternative explanation of the effect. Instead, we posit that an increase in temporal attention to distant past and future events is sufficient to reduce discounting. That is, when attention is drawn away from “now”—through either the hidden-zero effect (Experiments 1 and 2) or through priming (Experiment 4)—behavior becomes less focused on immediate gratification. Model fits confirm that the hidden-zero effect, which is correlated across past and future, is better explained by an increased reliance on far-sighted attentional processes through which rewards are valued (Equation 3) than by a preference for time-dependent sequences (Equation 2).

These results converge with recent evidence suggesting that focus on the temporal domain can alter delay discounting (Ebert & Prelec, 2007; Zauberman et al., 2009) and represent a novel approach to the study of intertemporal choice. By demonstrating the hidden-zero effect for discounting of past outcomes, the present results provide further evidence that temporally distant reward values can be enhanced with no concurrent need to “control” a latent desire for the proximate alternative. After all, no behavioral suppression of or cognitive distraction from tempting impulses can be realistically implicated in explaining choices involving past rewards. A more likely explanation is that similar cognitive processes consider both the past and the future (Bickel et al., 2008), often demonstrate bias for immediate over temporally distant outcomes, and are sensitive to attentional manipulations that reduce this bias. Below, we highlight a number of future directions and clinical implications this hypothesis engenders. We focus on the issue of substance abuse, as it is among the most costly public health problems in the United States (Office of National Drug Control Policy, 2004).

First and foremost, it remains to be seen whether temporal attention reallocation can reduce present bias among clinical populations who repeatedly struggle to avoid immediate rewards. Cigarette smokers represent a particularly challenging case. Although up to 80% of smokers report some desire to quit, there is a 78% relapse rate within the first 6 months of cessation (Sigmon, Lamb, & Dallery, 2008), underscoring the fact that dynamic inconsistency in reward preference is a large contributor to smoking relapse (Herrnstein & Prelec, 1992). It will be important to investigate whether explicit-zero frames can reduce preference for immediate rewards in both the future and the past among nicotine (and other substance) addicts. High rates of delay discounting have already been identified as a risk factor for poor smoking abstinence outcome (Dallery & Raiff, 2007; MacKillop & Kahler, 2009; Yoon et al., 2007); an intriguing question is whether the hidden-zero effect predicts cessation outcome at treatment intake, or whether the magnitude of the effect changes as a function of abstinence duration. If so, public health campaigns might successfully employ framing manipulations to heighten focus on the delayed financial consequences associated with purchasing cigarettes. As a majority of smokers are aware of the health risks associated with smoking (Weinstein, Slovic, Waters, & Gibson, 2004) but continue to smoke nonetheless, such an approach may be more salient and impactful in the decision to quit (or even to initiate) smoking.

Additionally, the present results align conceptually with recent interventions developed to reduce discount rates. Specifically, increasing focus on the future and past has been shown to decrease present bias among substance abusers. Smokers in a recent laboratory study, for example, were able to reduce cravings for cigarettes by focusing on the long-term future effects of smoking (Kober, Kross, Mischel, Hart, & Ochsner, 2010). Furthermore, extensive working memory training results in a reduction of discount rates among stimulant users seeking treatment (Bickel, Yi, Landes, Hill, & Baxter, 2010). As working memory recruits the same frontal-parietal network implicated in intertemporal projection (Buckner & Carroll, 2007), this training intervention may reduce discounting in part by targeting the very networks required for far-sighted episodic reflection. Finally, addiction may be considered a disorder of reward-related learning and memory (Hyman, 2007) that results in hyporesponsivity to natural reinforcers and hyperresponsivity to drug stimuli (Volkow, Fowler, Wang, Swanson, & Telang, 2007). As suggested by our results from Experiment 4, then, improving the ability to recall positive past experiences that did not involve drug use may help reduce past discounting rates—and may also have beneficial effects on future reward valuation. Elements of this latter approach are currently employed in traditional cognitive-behavioral therapy for substance abuse (Beck, Wright, Newman, & Liese, 1993), but remain to be tested within a temporal attention framework.

Nevertheless, we caution against the unwavering conclusion that temporal attention to the past and future is identical, such that reallocating attention to the past automatically results in quantitatively equivalent reallocation of attention to the future (and vice versa). Recall that, in our formulation of the temporal attention hypothesis (Equation 3), the ε parameter quantified the attentional shift resulting from explicit-zero framing. Although estimates of ε in the past and future were not significantly different, they failed to correlate at a significant level. This may be a statistical artifact (possibly resulting from a trend for a larger mean hidden-zero effect in the past than in the future), but it may also reflect individual differences in the ease with which one's attention can be drawn to the past or future. Indeed, research on time perspective indicates that there are multiple temporal orientations and that orienting to the future and the past loads on separate factors (Zimbardo & Boyd, 1999). If one is especially apt to consider the future but not as inclined to consider the past, it may be easier to focus attention on the former. Accordingly, the extent to which trait individual differences in time perspective mediate the hidden-zero effect in past and future is an important future research question.

Finally, we posit that temporal attention may be the mechanism by which some existing substance abuse interventions exert their effects. In particular, contingency management (CM) has a conceptually similar structure to the explicit-zero framing that we have investigated (see Higgins, Silverman, & Heil, 2008). Contingency management provides incremental reinforcement (in the form of vouchers or other tangible rewards) that is contingent upon verifiable abstinence from the target substance; as such, it pits “using now and earning no reward later” against “not using now and earning reward later.” Although the technique was initially inspired by evidence that drinking and drug use could be reduced with traditional operant techniques (Higgins & Silverman, 2008), it is conceivable that CM achieves its high success rate by changing temporal attention in people with a pathologically hyperactive “impulsive” system (Bickel et al., 2007). In accordance with this hypothesis, a brief (2 week) contingency management intervention was shown to increase preference for delayed hypothetical money over smaller values of immediately available cigarettes (Yoon, Higgins, Bradstreet, Badger, & Thomas, 2009). Reduced discounting of hypothetical money has also been shown in cigarette smokers undergoing CM treatment (Yi et al., 2008). At present, research into the psychological mechanisms by which CM interventions promote abstinence is lacking. Based on the results in Experiments 1, 2, and 4, we hypothesize that the promise of tangible rewards in the future partially ameliorates the temporal myopia of drug addiction.

Overall, we have developed a novel hypothesis by which delay discounting may be altered through changes in attention. Although this hypothesis is suggested by our studies, it has not been conclusively validated. We have assayed temporal attention only indirectly, through a temporal priming manipulation (Experiment 4). Importantly, we have not attempted to actually measure a correlate of temporal attention (such as implicit behavioral measures or physiological measures such as eye-tracking). This is certainly a crucial next step in advancing this theory, but the methods will need to be developed anew and validated. As such, this remains beyond the scope of the current report.

Temporal attention is an under-investigated mode by which delay discounting may be manipulated. In four experiments, we have demonstrated that temporal attention alters behavior in a phenomenon that had previously been explained by a different mechanism (i.e., improving sequences). The temporal attention model additionally offers a new interpretation of some of the literature on temporal discounting and the clinical disorders with which it is associated. This model should be subjected to further investigation to determine its usefulness in describing other aspects of intertemporal reward valuation. Doing so will enhance the growing arsenal of approaches aimed at helping us resist the tyranny of the immediate.

Acknowledgments

Peter T. Radu, James J. Gross, and Samuel M. McClure: Department of Psychology, Stanford University; Richard Yi: Department of Psychology, University of Maryland; and Warren K. Bickel: Carilion Research Institute, Virginia Tech University.

This research was supported by the John Philip Coghlan Fellowship (SMM), National Institute on Drug Abuse Grants R01DA024080 and R01DA022386, 1UL1RR029884, the Wilbur Mills Chair Endowment, and the Arkansas Biosciences Institute (WKB), National Institute on Drug Abuse Grant R01 DA011692-11 (RY), and National Institutes of Health Grant R01MH76074 (JJG). Thanks especially to members (and friends) of the Decision Neuroscience Laboratory for their immensely helpful feedback and ideas.

Appendix 1

Here we derive Equation 2 from the model of preferences for sequences proposed by Loewenstein and Prelec (1993; LP). We begin with Equation 7 from LP, which separates temporal discounting (in time-dependent weights wt) from sequence preference. The total value of a sequence of rewards, X, is given by

U(X) = \sum_{t=1}^{n} \left( w_t + \sigma d_t \right) u(x_t)

In deriving Equation 2, we substitute the hyperbolic equation (Equation 1) for wt:

U(X) = \sum_{t=1}^{n} \left( \frac{1}{1 + kt} + \sigma d_t \right) u(x_t)

LP define dt as

d_t = t - \frac{n+1}{2}

With sequences of length 2 (n = 2), as in our experiments,

U(X) = \left( \frac{1}{1 + kt_1} - \frac{\sigma}{2} \right) u(x_1) + \left( \frac{1}{1 + kt_2} + \frac{\sigma}{2} \right) u(x_2)

We arrive at Equation 2 by letting γ  =  σ/2 and assuming a linear utility function.
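The two-outcome reduction can be checked numerically. The sketch below assumes our reading of LP's Equation 7 — each outcome weighted by w_t + σd_t, with hyperbolic w_t and d_t = ±1/2 for n = 2 — and verifies that substituting γ = σ/2 leaves the sequence value unchanged; the functional form and parameter values here are illustrative assumptions, not taken from the article:

```python
# Sketch (assumed additive form of LP Eq 7, linear utility): for n = 2,
# the sequence-preference weights d_t = t - (n+1)/2 equal -1/2 and +1/2,
# so the sigma*d_t adjustment reduces to -gamma/+gamma with gamma = sigma/2.

def lp_value(x, t, k, sigma):
    """Value of a length-2 sequence x at times t under the assumed LP form."""
    n = len(x)
    d = [i + 1 - (n + 1) / 2 for i in range(n)]      # -> [-0.5, +0.5] for n = 2
    return sum((1 / (1 + k * ti) + sigma * di) * xi  # hyperbolic weight + sequence term
               for xi, ti, di in zip(x, t, d))

def eq2_value(x, t, k, gamma):
    """Equation 2 form: hyperbolic weights shifted by -gamma (early) and +gamma (late)."""
    return ((1 / (1 + k * t[0]) - gamma) * x[0]
            + (1 / (1 + k * t[1]) + gamma) * x[1])

# Example: "$5.50 now and $0 in 61 days" with assumed k and sigma values.
x, t, k, sigma = (5.50, 0.0), (0.0, 61.0), 0.01, 0.2
assert abs(lp_value(x, t, k, sigma) - eq2_value(x, t, k, sigma / 2)) < 1e-12
```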

Appendix 2

Past discounting options (employed in Experiments 1 and 2):

Explicit-zero condition:

  1. A. $5.50 one hour ago and $0 61 days ago

    B. $0 one hour ago and $7.50 61 days ago

  2. A. $6.90 one hour ago and $0 102 days ago

    B. $0 one hour ago and $8.70 102 days ago

  3. A. $3.30 one hour ago and $0 14 days ago

    B. $0 one hour ago and $8.00 14 days ago

  4. A. $5.40 one hour ago and $0 30 days ago

    B. $0 one hour ago and $8.00 30 days ago

  5. A. $3.10 one hour ago and $0 7 days ago

    B. $0 one hour ago and $8.50 7 days ago

  6. A. $6.70 one hour ago and $0 119 days ago

    B. $0 one hour ago and $7.50 119 days ago

  7. A. $6.00 one hour ago and $0 46 days ago

    B. $0 one hour ago and $8.50 46 days ago

  8. A. $4.30 one hour ago and $0 22 days ago

    B. $0 one hour ago and $7.50 22 days ago

  9. A. $5.00 one hour ago and $0 34 days ago

    B. $0 one hour ago and $7.20 34 days ago

  10. A. $4.90 one hour ago and $0 42 days ago

    B. $0 one hour ago and $5.80 42 days ago

  11. A. $4.50 one hour ago and $0 28 days ago

    B. $0 one hour ago and $7.70 28 days ago

  12. A. $2.00 one hour ago and $0 18 days ago

    B. $0 one hour ago and $8.50 18 days ago

  13. A. $8.00 one hour ago and $0 140 days ago

    B. $0 one hour ago and $8.40 140 days ago

  14. A. $4.70 one hour ago and $0 92 days ago

    B. $0 one hour ago and $5.40 92 days ago

  15. A. $4.10 one hour ago and $0 20 days ago

    B. $0 one hour ago and $7.50 20 days ago

Hidden-zero condition:

  1. A. $5.50 one hour ago

    B. $7.50 61 days ago

  2. A. $6.90 one hour ago

    B. $8.70 102 days ago

  3. A. $3.30 one hour ago

    B. $8.00 14 days ago

  4. A. $5.40 one hour ago

    B. $8.00 30 days ago

  5. A. $3.10 one hour ago

    B. $8.50 7 days ago

  6. A. $6.70 one hour ago

    B. $7.50 119 days ago

  7. A. $6.00 one hour ago

    B. $8.50 46 days ago

  8. A. $4.30 one hour ago

    B. $7.50 22 days ago

  9. A. $5.00 one hour ago

    B. $7.20 34 days ago

  10. A. $4.90 one hour ago

    B. $5.80 42 days ago

  11. A. $4.50 one hour ago

    B. $7.70 28 days ago

  12. A. $2.00 one hour ago

    B. $8.50 18 days ago

  13. A. $8.00 one hour ago

    B. $8.40 140 days ago

  14. A. $4.70 one hour ago

    B. $5.40 92 days ago

  15. A. $4.10 one hour ago

    B. $7.50 20 days ago

Future discounting options (employed in Experiment 2):

Explicit-zero condition:

  1. A. $5.50 today and $0 in 61 days

    B. $0 today and $7.50 in 61 days

  2. A. $6.90 today and $0 in 102 days

    B. $0 today and $8.70 in 102 days

  3. A. $3.30 today and $0 in 14 days

    B. $0 today and $8.00 in 14 days

  4. A. $5.40 today and $0 in 30 days

    B. $0 today and $8.00 in 30 days

  5. A. $3.10 today and $0 in 7 days

    B. $0 today and $8.50 in 7 days

  6. A. $6.70 today and $0 in 119 days

    B. $0 today and $7.50 in 119 days

  7. A. $6.00 today and $0 in 46 days

    B. $0 today and $8.50 in 46 days

  8. A. $4.30 today and $0 in 22 days

    B. $0 today and $7.50 in 22 days

  9. A. $5.00 today and $0 in 34 days

    B. $0 today and $7.20 in 34 days

  10. A. $4.90 today and $0 in 42 days

    B. $0 today and $5.80 in 42 days

  11. A. $4.50 today and $0 in 28 days

    B. $0 today and $7.70 in 28 days

  12. A. $2.00 today and $0 in 18 days

    B. $0 today and $8.50 in 18 days

  13. A. $8.00 today and $0 in 140 days

    B. $0 today and $8.40 in 140 days

  14. A. $4.70 today and $0 in 92 days

    B. $0 today and $5.40 in 92 days

  15. A. $4.10 today and $0 in 20 days

    B. $0 today and $7.50 in 20 days

Hidden-zero condition:

  1. A. $5.50 today

    B. $7.50 in 61 days

  2. A. $6.90 today

    B. $8.70 in 102 days

  3. A. $3.30 today

    B. $8.00 in 14 days

  4. A. $5.40 today

    B. $8.00 in 30 days

  5. A. $3.10 today

    B. $8.50 in 7 days

  6. A. $6.70 today

    B. $7.50 in 119 days

  7. A. $6.00 today

    B. $8.50 in 46 days

  8. A. $4.30 today

    B. $7.50 in 22 days

  9. A. $5.00 today

    B. $7.20 in 34 days

  10. A. $4.90 today

    B. $5.80 in 42 days

  11. A. $4.50 today

    B. $7.70 in 28 days

  12. A. $2.00 today

    B. $8.50 in 18 days

  13. A. $8.00 today

    B. $8.40 in 140 days

  14. A. $4.70 today

    B. $5.40 in 92 days

  15. A. $4.10 today

    B. $7.50 in 20 days
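Each pair above (the past and future sets use the same amounts and delays) implies a discount rate at which a chooser would be indifferent between the two options. Assuming Equation 1 takes the standard hyperbolic form V = A/(1 + kD), setting the two option values equal gives k* = (A_large/A_small − 1)/D. The following sketch computes k* for the 15 items; these values are illustrative and are not reported in the article:

```python
# Indifference discount rate for each hidden-zero pair under hyperbolic
# discounting (assumed form V = A / (1 + k*D)): a chooser with k > k*
# prefers the smaller, proximate amount; with k < k*, the larger, distant one.
pairs = [  # (smaller proximate amount, larger distant amount, delay in days)
    (5.50, 7.50, 61), (6.90, 8.70, 102), (3.30, 8.00, 14),
    (5.40, 8.00, 30), (3.10, 8.50, 7),   (6.70, 7.50, 119),
    (6.00, 8.50, 46), (4.30, 7.50, 22),  (5.00, 7.20, 34),
    (4.90, 5.80, 42), (4.50, 7.70, 28),  (2.00, 8.50, 18),
    (8.00, 8.40, 140), (4.70, 5.40, 92), (4.10, 7.50, 20),
]
ks = [(big / small - 1) / delay for small, big, delay in pairs]
for (small, big, delay), k in zip(pairs, ks):
    print(f"${small:.2f} now vs ${big:.2f} at {delay} d -> k* = {k:.4f}")
```

The items span indifference rates from roughly k* = 0.0004 per day ($8.00 vs. $8.40 at 140 days) to roughly 0.25 per day ($3.10 vs. $8.50 at 7 days), covering a wide range of individual discount rates.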

Appendix 3

Past discounting set used in Experiment 4:

  1. A. $5.10 one hour ago

    B. $6.30 94 days ago

  2. A. $5.70 one hour ago

    B. $6.90 23 days ago

  3. A. $5.30 one hour ago

    B. $5.90 7 days ago

  4. A. $5.90 one hour ago

    B. $8.90 15 days ago

  5. A. $5.40 one hour ago

    B. $9.30 4 days ago

Footnotes

1

Dropping these 8 participants did not affect the behavioral results reported in Experiment 2. If anything, it led to a more conservative analysis; in both the past, t(37)  =  4.40, p < .00008, and future, t(37)  =  2.41, p < .03, conditions, the hidden-zero effect was slightly weaker when excluding these participants.

2

Wilcoxon-Mann-Whitney tests on data from all subjects (N = 47) found similar results for the estimates of β [t(46) = −1.36, p = .18], δ (W = 392, p > .35), and ε [t(46) = −1.14, p > .26]. Estimates of ω, however, did differ slightly across past and future conditions, t(46) = 2.05, p < .05. We note that this result mirrors the behavioral data, in which group differences in SP choices revealed a slightly larger hidden-zero effect in the past condition. Accordingly, we found that ω estimates across all participants were slightly larger in the future (M = .62, SD = .31) than in the past (M = .50, SD = .34), corresponding to a smaller weighting on β (see Equation 3) and hence a smaller effect. Excluding the data of 8 participants who chose either all SP or all LD rewards produced this slight discrepancy.

3

Notably, log-likelihood tests on the model fits of all participants (N = 47) revealed much stronger performance when past and future choices were aggregated; all but one participant passed, all χ2 (4) > 12.01, all ps < .02, with the exception being χ2 (4) = 6.07, p = .2. Overall, then, Equation 3 describes behavior best when past and future choices are aggregated, adding further support to the conclusion that temporal attention is similar for past and future outcomes.
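The footnoted p value can be checked in closed form: for an even number of degrees of freedom, the chi-square survival function has an elementary expression, and for df = 4 it is exp(−x/2)(1 + x/2). A brief sketch (illustrative only, not from the article):

```python
import math

# Chi-square upper-tail probability for df = 4, via the closed form
# P(X > x) = exp(-x/2) * (1 + x/2), valid for even degrees of freedom
# (here df = 4, so the series has two terms).
def chi2_sf_df4(x):
    return math.exp(-x / 2) * (1 + x / 2)

p = chi2_sf_df4(6.07)   # the one non-passing participant's statistic
print(round(p, 2))      # prints 0.19, matching the reported p = .2

# The passing threshold is consistent too: chi2(4) > 12.01 implies p < .02.
assert chi2_sf_df4(12.01) < 0.02
```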

REFERENCES

  1. Addis D.R, Wong A.T, Schacter D.L. Remembering the past and imagining the future: Common and distinct neural substrates during event construction and elaboration. Neuropsychologia. 2007;45:1363–1377. doi: 10.1016/j.neuropsychologia.2006.10.016.
  2. Ainslie G. Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin. 1975;82:463–496. doi: 10.1037/h0076860.
  3. Ainslie G, Haslam N. Self-control. In: Loewenstein G, Elster J, editors. Choice Over Time. New York: Russell Sage; 1992. pp. 177–209.
  4. Baddeley A. Human Memory: Theory and Practice. London: Erlbaum; 1990.
  5. Barkley R.A, Edwards G, Laneri M, Fletcher K, Metevia L. Executive functioning, temporal discounting, and sense of time in adolescents with attention deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD). Journal of Abnormal Child Psychology. 2001;29:541–556. doi: 10.1023/a:1012233310098.
  6. Baumeister R.F, Heatherton T.F. Self-regulation failure: An overview. Psychological Inquiry. 1996;7:1–15.
  7. Beck A.T, Wright F.D, Newman C.F, Liese B.S. Cognitive therapy of substance abuse. New York: Guilford; 1993.
  8. Bickel W.K, Kowal B.P, Gatchalian K.M. Understanding addiction as a pathology of temporal horizon. The Behavior Analyst Today. 2006;7:32–47.
  9. Bickel W.K, Marsch L. Toward a behavioral economic understanding of drug dependence: Delay discounting processes. Addiction. 2001;96:73–86. doi: 10.1046/j.1360-0443.2001.961736.x.
  10. Bickel W.K, Miller M.L, Yi R, Kowal B.P, Lindquist D.M, Pitcock J.A. Behavioral and neuroeconomics of drug addiction: Competing neural systems and temporal discounting processes. Drug and Alcohol Dependence. 2007;90S:S85–S91. doi: 10.1016/j.drugalcdep.2006.09.016.
  11. Bickel W.K, Mueller E.T. Toward the study of trans-disease processes: A novel approach with special reference to the study of co-morbidity. Journal of Dual Diagnosis. 2009;5:131–138. doi: 10.1080/15504260902869147.
  12. Bickel W.K, Yi R, Kowal B.P, Gatchalian K.M. Cigarette smokers discount past and future rewards symmetrically and more than controls: Is discounting a measure of impulsivity? Drug and Alcohol Dependence. 2008;96:256–262. doi: 10.1016/j.drugalcdep.2008.03.009.
  13. Bickel W.K, Yi R, Landes R.D, Hill P.F, Baxter C. Remember the future: Working memory training decreases delay discounting among stimulant addicts. Biological Psychiatry. 2010;69:260–265. doi: 10.1016/j.biopsych.2010.08.017.
  14. Boroditsky L. Does language shape thought? Mandarin and English speakers' conceptions of time. Cognitive Psychology. 2001;43:1–22. doi: 10.1006/cogp.2001.0748.
  15. Buckner R.L, Carroll D.C. Self-projection and the brain. Trends in Cognitive Sciences. 2007;11:49–57. doi: 10.1016/j.tics.2006.11.004.
  16. Critchfield T, Collins S. Temporal discounting: Basic research and the analysis of socially important behavior. Journal of Applied Behavior Analysis. 2001;34:101–122. doi: 10.1901/jaba.2001.34-101.
  17. Dallery J, Raiff B.R. Delay discounting predicts cigarette smoking in a laboratory model of abstinence reinforcement. Psychopharmacology. 2007;190:485–496. doi: 10.1007/s00213-006-0627-5.
  18. Dixon M.R, Holton B. Altering the magnitude of delay discounting by pathological gamblers. Journal of Applied Behavior Analysis. 2009;42:269–275. doi: 10.1901/jaba.2009.42-269.
  19. Ebert J.E, Prelec D. The fragility of time: Time-insensitivity and valuation of the near and far future. Management Science. 2007;53:1423–1438.
  20. Ekman G, Lundberg U. Emotional reaction to past and future events as a function of temporal distance. Acta Psychologica. 1971;35:430–441. doi: 10.1016/0001-6918(71)90002-3.
  21. Elster J, Loewenstein G. Utility from memory and anticipation. In: Loewenstein G, Elster J, editors. Choice over time. New York: Russell Sage; 1992. pp. 213–234.
  22. Epstein L.H, Salvy S.J, Carr K.A, Dearing K.K, Bickel W.K. Food reinforcement, delay discounting and obesity. Physiology & Behavior. 2010;100:438–445. doi: 10.1016/j.physbeh.2010.04.029.
  23. Frederick S, Loewenstein G. Conflicting motives in evaluations of sequences. Journal of Risk and Uncertainty. 2008;37:221–235.
  24. Fujita K, Han H.A. Moving beyond deliberative control of impulses: The effect of construal levels on evaluative associations in self-control conflicts. Psychological Science. 2009;20:799–804. doi: 10.1111/j.1467-9280.2009.02372.x.
  25. Fuster J.M. The Prefrontal Cortex: Anatomy, Physiology, and Neuropsychology of the Frontal Lobe (2nd ed.). New York: Raven Press; 1989.
  26. Hammersley R. A digest of memory phenomena for addiction research. Addiction. 1994;89:283–293. doi: 10.1111/j.1360-0443.1994.tb00890.x.
  27. Herrnstein R.J, Prelec D. A theory of addiction. In: Loewenstein G, Elster J, editors. Choice Over Time. New York: Russell Sage; 1992. pp. 331–360.
  28. Higgins S.T, Silverman K. Introduction. In: Higgins S.T, Silverman K, Heil S.H, editors. Contingency management in substance abuse treatment. New York: Guilford Press; 2008. pp. 1–15.
  29. Higgins S.T, Silverman K, Heil S.H, editors. Contingency management in substance abuse treatment. New York: Guilford Press; 2008.
  30. Hyman S.E. Addiction: A disease of learning and memory. Focus. 2007;5:220–228. doi: 10.1176/appi.ajp.162.8.1414.
  31. Kirby K.N, Petry N.M, Bickel W.K. Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. Journal of Experimental Psychology: General. 1999;128:78–87. doi: 10.1037//0096-3445.128.1.78.
  32. Kober H, Kross E.F, Mischel W, Hart C.L, Ochsner K.N. Regulation of craving by cognitive strategies in cigarette smokers. Drug and Alcohol Dependence. 2010;106:52–55. doi: 10.1016/j.drugalcdep.2009.07.017.
  33. Laibson D. Golden eggs and hyperbolic discounting. Quarterly Journal of Economics. 1997;112:443–477.
  34. Loewenstein G. Out of control: Visceral influences on behavior. Organizational Behavior and Human Decision Processes. 1996;65:272–292.
  35. Loewenstein G.F, Prelec D. Preferences for sequences of outcomes. Psychological Review. 1993;100:91–108.
  36. Loewenstein G, Sicherman N. Do workers prefer increasing wage profiles? Journal of Labor Economics. 1991;9:67–84.
  37. MacKillop J, Kahler C.W. Delayed reward discounting predicts treatment response for heavy drinkers receiving smoking cessation treatment. Drug and Alcohol Dependence. 2009;104:197–203. doi: 10.1016/j.drugalcdep.2009.04.020.
  38. Magen E, Dweck C.S, Gross J.J. The hidden-zero effect: Representing a single choice as an extended sequence reduces impulsive choice. Psychological Science. 2008;19:648–649. doi: 10.1111/j.1467-9280.2008.02137.x.
  39. Magen E, Gross J.J. Harnessing the need for immediate gratification: Cognitive reconstrual modulates the reward value of temptations. Emotion. 2007;7:415–428. doi: 10.1037/1528-3542.7.2.415.
  40. Mazur J.E. An adjusting procedure for studying delayed reinforcement. In: Commons M.L, Mazur J.E, Nevin J.A, Rachlin H, editors. Quantitative Analysis of Behavior: Vol 5. The Effect of Delay and Intervening Events on Reinforcement Value. Hillsdale, NJ: Erlbaum; 1987. pp. 55–73.
  41. McClure S.M, Ericson K.M, Laibson D.I, Loewenstein G, Cohen J.D. Time discounting for primary rewards. Journal of Neuroscience. 2007;27:5796–5804. doi: 10.1523/JNEUROSCI.4246-06.2007.
  42. McClure S.M, Laibson D.I, Loewenstein G, Cohen J.D. Separate neural systems value immediate and delayed monetary rewards. Science. 2004;306:503–507. doi: 10.1126/science.1100907.
  43. Metcalfe J, Mischel W. A hot/cool-system analysis of delay of gratification: Dynamics of willpower. Psychological Review. 1999;106:3–19. doi: 10.1037/0033-295x.106.1.3.
  44. Office of National Drug Control Policy. The economic costs of drug abuse in the United States, 1992–2002 (No. 207303). Washington, DC: Executive Office of the President; 2004.
  45. Okuda J, Fujii T, Ohtake H, Tsukiura T, Tanji K, Suzuki K, …Yamadori A. Thinking of the future and past: The roles of the frontal pole and the medial temporal lobes. NeuroImage. 2003;19:1369–1380. doi: 10.1016/s1053-8119(03)00179-4.
  46. Peters J, Büchel C. Episodic future thinking reduces reward delay discounting through an enhancement of prefrontal-mediotemporal interactions. Neuron. 2010;66:138–148. doi: 10.1016/j.neuron.2010.03.026.
  47. Petry N.M. Pathological gamblers, with and without substance use disorders, discount delayed rewards at high rates. Journal of Abnormal Psychology. 2001;110:482–487. doi: 10.1037//0021-843x.110.3.482.
  48. Petry N.M, Bickel W.K, Arnett M. Shortened time horizons and insensitivity to future consequences in heroin addicts. Addiction. 1998;93:729–738. doi: 10.1046/j.1360-0443.1998.9357298.x.
  49. Rachlin H. The science of self-control. Cambridge, MA: Harvard University Press; 2000.
  50. Rachlin H, Raineri A, Cross D. Subjective probability and delay. Journal of the Experimental Analysis of Behavior. 1991;55:233–244. doi: 10.1901/jeab.1991.55-233.
  51. Read D, Frederick S, Orsel B, Rahman J. Four score and seven years from now: The date/delay effect in temporal discounting. Management Science. 2005;51:1326–1335.
  52. Reynolds B. A review of delay-discounting research with humans: Relations to drug use and gambling. Behavioural Pharmacology. 2006;17:651–667. doi: 10.1097/FBP.0b013e3280115f99.
  53. Ross W.T, Jr, Simonson I. Evaluations of pairs of sequences: A preference for happy endings. Journal of Behavioral Decision Making. 1991;4:273–282.
  54. Sigmon S, Lamb R.J, Dallery J. Tobacco. In: Higgins S.T, Silverman K, Heil S.H, editors. Contingency management in substance abuse treatment. New York: Guilford Press; 2008. pp. 99–119.
  55. Thaler R.H, Shefrin H.M. An economic theory of self-control. The Journal of Political Economy. 1981;89:392–406.
  56. Volkow N.D, Fowler J.S, Wang G.-J, Swanson J.M, Telang F. Dopamine in drug abuse and addiction: Results of imaging studies and treatment implications. Archives of Neurology. 2007;64:1575–1579. doi: 10.1001/archneur.64.11.1575.
  57. Weinstein N.D, Slovic P, Waters E, Gibson G. Public understanding of the illnesses caused by cigarette smoking. Nicotine and Tobacco Research. 2004;6:349–355. doi: 10.1080/14622200410001676459.
  58. Yi R, Gatchalian K, Bickel W.K. Discounting of past outcomes. Experimental and Clinical Psychopharmacology. 2006;14:311–317. doi: 10.1037/1064-1297.14.3.311.
  59. Yi R, Johnson M.W, Giordano L.A, Landes R.D, Badger G.J, Bickel W.K. The effects of reduced cigarette smoking on discounting future rewards: An initial evaluation. The Psychological Record. 2008;58:163–174. doi: 10.1007/bf03395609.
  60. Yoon J.H, Higgins S.T, Bradstreet M.P, Badger G.J, Thomas C.S. Changes in the relative reinforcing effects of cigarette smoking as a function of initial abstinence. Psychopharmacology. 2009;205:305–318. doi: 10.1007/s00213-009-1541-4.
  61. Yoon J.H, Higgins S.T, Heil S.H, Sugarbaker R.J, Thomas C.S, Badger G.J. Delay discounting predicts postpartum relapse to cigarette smoking among pregnant women. Experimental and Clinical Psychopharmacology. 2007;15:176–186. doi: 10.1037/1064-1297.15.2.186.
  62. Zauberman G, Kim B.K, Malkoc S.A, Bettman J.R. Discounting time and time discounting: Subjective time perception and intertemporal preferences. Journal of Marketing Research. 2009;46:543–556.
  63. Zimbardo P.G, Boyd J.N. Putting time in perspective: A valid, reliable, individual-differences metric. Journal of Personality and Social Psychology. 1999;77:1271–1288.
