Author manuscript; available in PMC: 2011 Jun 1.
Published in final edited form as: Neural Comput. 2010 Jun;22(6):1511–1527. doi: 10.1162/neco.2010.08-09-1080

Figure 2.

Behavior of the HDTD model (A) when the discounting factor is not scaled by the estimated reward per trial (eq. 2.4, κ = 0.2), and (B) when the discounting factor is scaled by the estimated reward per trial (eq. 2.6, κ = 0.2, σ = 1). The HDTD model reverses preferences (B) depending on the temporal proximity of two unequal rewards: when a small reward is immediately available (t1), the value function for that reward (solid line) is higher than for a larger delayed reward (dashed line). However, when the distance to both rewards is increased (t2), the preference reverses, and the value function for the larger reward is higher than for the smaller one.
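The preference reversal described in the caption is the signature behavior of hyperbolic discounting. A minimal sketch below illustrates it using the standard hyperbolic form V = r / (1 + κ·d) with κ = 0.2 as in the figure; the specific reward sizes and delays are illustrative assumptions, not values taken from the figure or from eqs. 2.4/2.6.

```python
def hyperbolic_value(reward, delay, kappa=0.2):
    """Standard hyperbolic discounting: V = r / (1 + kappa * delay).

    kappa matches the caption's discounting factor; rewards and delays
    in the demo below are hypothetical, chosen only to show the reversal.
    """
    return reward / (1.0 + kappa * delay)

small, large = 1.0, 2.0

# Small reward immediately available (analogous to t1): small reward wins.
v_small_near = hyperbolic_value(small, delay=0)    # 1.0
v_large_near = hyperbolic_value(large, delay=10)   # 2/3

# Push both rewards 20 steps further away (analogous to t2): preference reverses.
v_small_far = hyperbolic_value(small, delay=20)    # 0.2
v_large_far = hyperbolic_value(large, delay=30)    # ~0.286

print(v_small_near > v_large_near)  # near horizon: smaller-sooner preferred
print(v_large_far > v_small_far)    # far horizon: larger-later preferred
```

The reversal arises because hyperbolic curves fall steeply at short delays but flatten at long ones, so shifting both rewards equally into the future shrinks the small reward's advantage faster than the large reward's value.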