Abstract
In many everyday decisions, people quickly integrate noisy samples of information to form a preference among alternatives that offer uncertain rewards. Here, we investigated this decision process using the Flash Gambling Task (FGT), in which participants made a series of choices between a certain payoff and an uncertain alternative that produced a normal distribution of payoffs. For each choice, participants experienced the distribution of payoffs via rapid samples updated every 50 ms. We show that people can make these rapid decisions from experience and that the decision process is consistent with a sequential sampling process. Results also reveal a dissociation between these preferential decisions and equivalent perceptual decisions where participants had to determine which alternatives contained more dots on average. To account for this dissociation, we developed a sequential sampling rank-dependent utility model, which showed that participants in the FGT attended more to larger potential payoffs than participants in the perceptual task despite being given equivalent information. We discuss the implications of these findings in terms of computational models of preferential choice and a more complete understanding of experience-based decision making.
Imagine you are a day trader deciding whether or not to buy a stock. To make a choice, you watch market data moving along LED ribbons around trading floors or at the bottom of computer monitors, trying to gauge the price trend. The information flows by in real time, so to decide whether the market is trending up or down you have to quickly integrate the information racing by. How does the day trader make this preferential choice, where there is no objectively correct answer and the choice instead rests on some subjective value of the alternatives? One reasonable hypothesis is that the day trader makes a decision using a sequential sampling process (Busemeyer & Townsend, 1993; Usher & McClelland, 2004). During this process, as decision makers deliberate, they sequentially sample payoff information about the possible alternatives – either directly from the alternatives or from their memory of past experience with the alternatives – and integrate that information over time. Once a threshold of evidence is reached, a decision is made accordingly.
Despite the plausibility of this sequential sampling hypothesis, we do not know how well decision makers integrate rapid samples of payoff information, which is a critical assumption of the process. More generally, it is an open question how well a sequential sampling process describes choices when decision makers have to rapidly process payoff information as in the case of our day trader. Most of the empirical work to date on risky decision making has focused on how people choose between monetary lotteries that are presented as static, symbolic descriptions of the payoffs and probabilities (i.e., decisions from description) (Weber et al., 2004). Other studies have used the so-called decisions-from-experience paradigm where participants explicitly sample from the alternatives and receive feedback over an extended number of trials (Hertwig & Erev, 2009). However, both of these decisions are quite different from the choices people make when they must integrate rapidly arriving payoff information.
Understanding how people accumulate payoff information in these situations may also improve our understanding of how people make decisions from description. Some computational models of these decisions assume a latent sequential sampling process drives the choice (e.g., Busemeyer & Townsend, 1993). In these models, when people are presented with a choice between a sure thing and a risky gamble it is assumed that they mentally simulate random samples from the gamble. These mentally simulated samples are then sequentially accumulated to a threshold that in turn determines the choice and response time. These models not only give a good account of overall choice patterns (Busemeyer & Townsend, 1993), but they also appear to describe the dynamics of deliberation as attention switches back and forth between attributes and alternatives (Busemeyer, 1985; Diederich & Busemeyer, 1999; Krajbich et al., 2010, 2012; Milosavljevic et al., 2010). Again, however, a critical question remains as to how and how well decision makers can integrate rapid samples of payoff information.
Indeed work in perceptual decision making suggests that humans and other primates can rapidly integrate changing perceptual information to make a decision and that this process is consistent with a sequential sampling process (Gold & Shadlen, 2007; Palmer et al., 2005; Ratcliff & McKoon, 2008). Thus, not only is the assumption for rapid integration of payoff information plausible, but it seems a similar process is used to make perceptual decisions as has been posited for preferential decisions. This parallelism has, in fact, led to the intriguing hypothesis that preferential and perceptual choice may even use the same or similar cognitive machinery (Busemeyer et al., 2006; Shadlen et al., 2008; Summerfield & Tsetsos, 2012; Symmonds & Dolan, 2012).
To examine whether a sequential sampling process underlies rapid decisions from experience, and to test the idea that perceptual and preferential decisions are based on similar cognitive mechanisms, we developed a novel preferential decision task, which we call the Flash Gambling Task (FGT). The FGT uses dynamic dot stimuli like those in perceptual decision tasks. In the FGT, participants make a series of decisions between a certain and an uncertain alternative, with the payoff amount indicated by the number of dots in the display. Importantly, outcomes from the uncertain alternative are dynamically updated every 50 ms via draws from an unknown payoff distribution. We compared the FGT to its perceptual analogue, where participants were shown the same stimuli and told to identify which option had the higher average number of dots. This comparison allowed us to directly test the hypothesis that people use the same or similar process to make preferential decisions as they use to make perceptual decisions. As we will show, we found both similarities and important differences in the performance on the two tasks. To account for these results, we develop a computational model that incorporates sequential sampling and rank dependent utility theory (Quiggin, 1982; Tversky & Kahneman, 1992; Luce, 2000).
1. Methods
1.1. Participants
Twenty-three students from Michigan State University completed the Flash Gambling Task (FGT) and twenty-five students completed the matched perceptual task. Participants in both tasks were paid $8.00 for participation, which took one hour. They were also paid a bonus based on their choices. In the FGT, this was the reward earned on four randomly selected trials. In the perceptual task, it was $0.005 times the number of times their response was correct. Both tasks were calibrated to have an average bonus of $2.00.
1.2. Flash Gambling Task
Stimuli were generated in MATLAB using MGL (http://justingardner.net/mgl) and displayed on an LCD monitor. Two circular displays of white dots were shown on a black background. Each display had a diameter of 12° of visual angle. One was located 9° to the left of a green central fixation and the other 9° to the right. The location of the certain alternative was fixed for a given participant but counterbalanced across participants.
The certain alternative was filled with randomly placed dots that remained throughout the duration of a trial. The uncertain alternative had a dynamic display of dots that changed every 50 ms (20 Hz). At each update, the number of dots was drawn from a normal distribution, and then shown in random locations in the display. We truncated the normal distribution at ±2 standard deviations to ensure the uncertain alternative always contained at least 30 dots. Participants were told to choose the option they would like to receive a draw from. After making a choice, participants received the payoff from the next draw, which was displayed as numerical feedback (Figure 1).
Figure 1.
Illustration of the stimuli in the flash gambling task.
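For concreteness, the uncertain-alternative display described above can be sketched as follows. This is a minimal illustration in Python (the original stimuli were generated in MATLAB with MGL); the rejection-sampling implementation of the ±2 SD truncation and the example condition values are our assumptions, not the authors' code.

```python
import numpy as np

def sample_uncertain_dots(mean, sd, n_frames, rng=None):
    """Draw one dot count per 50 ms frame from a normal distribution truncated
    at +/- 2 SD, as described for the uncertain alternative in the FGT.
    The rejection-sampling truncation rule here is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.empty(n_frames, dtype=int)
    for t in range(n_frames):
        x = rng.normal(mean, sd)
        while abs(x - mean) > 2 * sd:      # resample until within +/- 2 SD
            x = rng.normal(mean, sd)
        counts[t] = int(round(x))
    return counts

# Example: the +30 offset condition with a 130-dot certain option (uncertain mean 160),
# SD = 50 dots, displayed for 1.5 s (30 frames at 20 Hz).
frames = sample_uncertain_dots(mean=160, sd=50, n_frames=30)
print(frames.min(), frames.max())
```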
1.3. Perceptual Task
The perceptual task was identical to the FGT except participants were told to identify the option that they believed showed the higher average number of dots. Participants received feedback indicating whether or not their response was correct.
1.4. Design & Procedure
Participants in both tasks completed ten blocks of eighty-four trials. In addition to the between-subjects perceptual versus gambling manipulation, we manipulated three factors within subjects: the mean difference in the number of dots between the certain and uncertain alternative, the standard deviation in the number of dots in the uncertain alternative, and the number of dots in the certain alternative. Each of these manipulations occurred between trials within a block.
The manipulation of the mean difference in the number of dots (uncertain offset) allowed us to determine whether participants could discriminate differences between the certain and uncertain alternatives. We used three levels of uncertain offset, with the uncertain option having 30 fewer, an equal number, and 30 more dots than the certain alternative. We manipulated the standard deviation of the uncertain alternative to investigate a possible role of payoff variance on choice (Busemeyer, 1985). The standard deviation in the uncertain alternative was either 15 or 50 dots. Finally, we varied the number of dots in the certain alternative to encourage participants to attend to both the certain and uncertain alternative and treat them as separate options. The certain alternative contained either 130 or 250 dots.
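A sketch of how one such block could be constructed is below. The three within-subject factors are taken from the text, but the assumption that each of the 12 factorial cells repeats 7 times (12 × 7 = 84 trials per block) is ours and is not stated by the authors.

```python
import itertools
import random

# Within-subject factors from the text; the 7 repetitions per cell (12 x 7 = 84
# trials per block) is our assumption, not stated by the authors.
OFFSETS = (-30, 0, 30)          # uncertain mean minus certain mean (dots)
UNCERTAIN_SDS = (15, 50)        # SD of the uncertain alternative (dots)
CERTAIN_DOTS = (130, 250)       # dots in the certain alternative

def build_block(reps_per_cell=7, seed=0):
    cells = itertools.product(OFFSETS, UNCERTAIN_SDS, CERTAIN_DOTS)
    trials = [
        {"certain": c, "uncertain_mean": c + off, "uncertain_sd": sd}
        for off, sd, c in cells
        for _ in range(reps_per_cell)
    ]
    random.Random(seed).shuffle(trials)   # randomize trial order within the block
    return trials

print(len(build_block()))                 # 84
```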
During a trial, participants were first shown the central fixation for 200 ms, after which the certain and uncertain alternatives appeared and remained on screen until response. Participants chose the preferred option by pressing one of two keys on a keyboard. Once a response was made, the fixation dot was replaced with feedback. Feedback in the FGT was the value of a random draw from the chosen alternative. Feedback in the perceptual task was whether the participant's decision was correct.
2. Results
2.1. Behavioral Results
We analyzed data from twenty FGT and twenty-four perceptual participants. Three FGT participants were excluded for having over 10% (84 trials) of their response times less than 100 ms, which is faster than the typical time it takes to simply press a button. One perceptual participant was excluded for performing the task incorrectly. In all behavioral analyses, we report main effects and interactions between all the manipulated variables. Unless specified otherwise, we used one-sided t-tests and adjusted the degrees of freedom to maintain a family-wise error rate of α = 0.05 for all post-hoc contrasts we investigated for a given analysis. These contrasts were performed using the multcomp package in R (Hothorn et al., 2008).
As previously mentioned, the number of dots in the certain alternative was varied in order to encourage participants to treat the certain alternative as a separate gamble. We did not, however, scale the magnitude of the uncertain offset or standard deviation. As a result, consistent with Weber’s law, participants in both tasks were not able to discriminate differences between the two alternatives in the 250 dot condition. Consequently, though we report main effects and interactions involving the number of certain dots, in figures, post-hoc analyses, and subsequent computational model-based analyses, we will only use trials where the certain alternative contained 130 dots.
2.1.1. Choice
Panels A and B of Figure 2 show the proportion of trials participants chose the uncertain alternative as a function of uncertain offset, standard deviation, and task. Because our choice data were binary, we employed a logistic mixed model to determine the effects of the between-subjects factor of task (gambling or perceptual) and the within-subjects factors of uncertain offset, standard deviation, and the number of certain dots.
Figure 2.
(A–B). Choice proportion for the uncertain alternative on the 130 certain dot trials. Panel A: standard deviation of 15 dots in the uncertain alternative, Panel B: standard deviation of 50 dots. Better alternative is defined by the difference in mean number of dots between certain and uncertain alternative (+30: uncertain>certain, 0: uncertain=certain, −30: uncertain<certain). (C–D). Mean response time on 130 certain dot trials. Panel C: standard deviation of 15 dots in the uncertain alternative, Panel D: standard deviation of 50 dots. Gray bars: FGT data, white bars: perceptual data. Error bars correspond to one standard error of the mean (SEM).
The logistic model revealed a main effect of uncertain offset (χ2(2) = 64.5, p < 0.001). Specifically, the proportion of trials participants chose the uncertain alternative increased as the mean difference went from favoring the certain to the uncertain alternative, suggesting participants were able to discriminate which alternative had more dots. We confirmed this by testing whether the difference in the log-odds of choosing the uncertain alternative between 0 and −30 uncertain offset trials was positive (z = 14.8, padj < 0.001) and whether the corresponding difference between +30 and 0 uncertain offset trials was positive (z = 12.8, padj < 0.001).
The logistic model also revealed a main effect of task, with participants choosing the uncertain alternative more often in the FGT than in the perceptual analogue (χ2(1) = 13.4, p < 0.001), as well as an interaction between task and uncertain offset (χ2(1) = 16.4, p < 0.001). Panels A and B of Figure 2 show that this interaction arose because the uncertain alternative choice proportions changed more in the perceptual task than in the FGT, suggesting differences in the decision process between the two tasks. We confirmed that the log-odds difference between 0 and −30 uncertain offset trials was larger in the perceptual task than in the FGT (z = 4.08, padj < 0.001), but that the analogous difference between +30 and 0 trials was not (z = 1.69, padj = 0.395). The former difference appears to arise from the uncertain alternative being chosen more often in the FGT than the perceptual task for −30 uncertain offset trials, since this log-odds difference was positive (z = 3.6, padj = 0.001), but the corresponding difference in the +30 offset condition was not (z = 1.5, padj = 0.475). In other words, this interaction was due in part to a disproportionate preference for the uncertain alternative in the −30 offset condition in the gambling task. Our computational modeling will examine this disproportionate preference in more depth.
We also found a two-way interaction between the task and the number of certain dots (χ2(1) = 26.5, p < 0.001) and a three-way interaction between task, uncertain offset and the number of dots (χ2(2) = 12.8, p = 0.001), though these are not shown in Figure 2. As discussed earlier, these interactions were the result of participants not being able to discriminate between the uncertain and certain alternative in the 250 dots condition.
2.1.2. Response Time
Panels C and D of Figure 2 show the response time data. We examined differences in RTs using a repeated-measures ANOVA of ln(RT) on the factors task, uncertain offset, standard deviation and the number of certain dots. We used ln(RT) rather than RT to compensate for the skewed distribution of RTs. Due to the unbalanced design, we used a multi-level model and report likelihood ratio tests.
The ANOVA revealed a main effect of uncertain offset (χ2(2) = 21.8, p < 0.001). There was also a main effect of task (χ2(1) = 10.3, p = 0.001), with faster RTs in the FGT than in the perceptual analogue. Finally, we found a main effect of the number of certain dots (χ2(1) = 22.4, p < 0.001) as well as an interaction between task and the number of certain dots (χ2(2) = 11.1, p < 0.001).
2.2. Computational Modeling Results
To better compare people’s decision processes in the FGT and perceptual task, we fit a drift diffusion model to the data from both tasks, which models the decision process as a sequential sampling process (e.g., Stone, 1960; Link & Heath, 1975; Ratcliff, 1978; Laming, 1968). The drift diffusion model is a mathematical formulation of a sequential sampling process where participants are assumed to sequentially sample noisy information and accumulate it as evidence until a threshold is reached initiating a response. As a mathematical model we can fit it to the observed choices and response time distributions to decompose observed behavior into four psychologically meaningful parameters: (a) drift rate indexing the average amount and direction of evidence accumulated; (b) decision threshold indexing the amount of evidence required to make a decision; (c) bias indexing the prior bias in favor of choosing the uncertain or certain alternative; and (d) non-decision time indexing the amount of time spent on decision-irrelevant processing.
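As a rough illustration of how these four parameters generate choices and response times, here is a minimal random-walk (Euler) simulation of a single diffusion trial. The parameter values, the symmetric-bound parameterization, and the bound labels are illustrative assumptions, not the fitted values reported below.

```python
import numpy as np

def simulate_ddm(drift, threshold, bias, ndt, dt=0.001, noise_sd=1.0, rng=None):
    """One drift-diffusion trial with symmetric bounds at +/- threshold.

    drift     : mean evidence accumulated per second (sign favors one response)
    threshold : evidence magnitude required to respond
    bias      : starting point as a proportion of the way to the upper bound (0.5 = unbiased)
    ndt       : non-decision time in seconds
    Returns (choice, rt); the upper bound is labeled 'uncertain', the lower 'certain'.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = threshold * (2.0 * bias - 1.0)     # map bias in [0, 1] onto [-threshold, threshold]
    t = 0.0
    while abs(x) < threshold:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("uncertain" if x >= threshold else "certain"), t + ndt

rng = np.random.default_rng(0)
trials = [simulate_ddm(drift=0.8, threshold=1.2, bias=0.5, ndt=0.3, rng=rng) for _ in range(500)]
print(np.mean([c == "uncertain" for c, _ in trials]),      # choice proportion
      np.mean([rt for _, rt in trials]))                   # mean RT in seconds
```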
The model does not use the observed stimulus information to drive the evidence accumulation process; rather, it treats the evidence accumulation process as latent. The drift rate, in turn, estimates the rate and direction at which evidence would be accumulated in order to account for the observed choice data. Thus, we were particularly interested in whether the difference in choice proportions and RTs between the FGT and perceptual analogue would also result in differences in drift rate. Such a drift difference would indicate that the gamble frame changes the evidence that participants accumulate. Differences between the two tasks could also arise in the other parameters. For instance, one might speculate that the difference in RTs between the two tasks could be due to a greater level of response caution during the perceptual task. If this were true, then greater response caution should manifest itself via higher decision thresholds, allowing participants to collect more evidence for each decision.
We fit the drift diffusion model at the individual level using the quantile maximum probability method (Heathcote et al., 2002). This method fits the model to the quantiles of the observed response time distributions for uncertain and certain alternative choices (see also Busemeyer & Diederich, 2010). For each individual, regardless of whether they were in the FGT or perceptual condition, we fit a drift rate for each level of uncertain offset (collapsing across the different levels of standard deviation). Threshold, bias and non-decision time were held constant across offset and standard deviation conditions. As shown in Figure 3, the model does a good job of recreating the choice proportion and RT data. The average mean-squared prediction error (MSE) over participants for the predicted choice proportions and expected RTs supports the conclusion of a good fit (average choice MSE = 0.028, average RT MSE = 0.112 s).
Figure 3.
Drift diffusion model fit and data. (A). Observed choice proportion for the uncertain alternative (bars), averaged across the 15 and 50 dot standard deviation conditions; model fits are shown as dots. (B). Observed mean response time (bars), averaged across standard deviation conditions, and predicted mean response time (dots). Gray bars: FGT data, white bars: perceptual data. Error bars correspond to one standard error of the mean (SEM).
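The quantile-based fitting procedure works by binning each response time distribution at a fixed set of quantiles and scoring the model with the multinomial likelihood of the observed bin counts. A sketch of that objective is below; the quantile set and the `predicted_cdf` helper (which would have to come from a diffusion-model solver) are hypothetical, not part of the original analysis code.

```python
import numpy as np

QUANTILES = (0.1, 0.3, 0.5, 0.7, 0.9)     # a commonly used quantile set; an assumption here

def qmp_log_likelihood(rts_by_choice, predicted_cdf, params):
    """Sketch of a quantile-based multinomial log-likelihood.

    rts_by_choice : dict mapping a choice label ('uncertain'/'certain') to observed RTs
    predicted_cdf : hypothetical model function (t, choice, params) -> P(choice made and RT <= t)
    """
    loglik = 0.0
    for choice, rts in rts_by_choice.items():
        rts = np.asarray(rts, dtype=float)
        if rts.size == 0:
            continue
        edges = np.quantile(rts, QUANTILES)                       # observed quantile cut points
        counts, _ = np.histogram(rts, bins=np.concatenate(([0.0], edges, [np.inf])))
        cdf = np.array([predicted_cdf(e, choice, params) for e in edges])
        p_choice = predicted_cdf(np.inf, choice, params)          # total predicted mass for this choice
        probs = np.diff(np.concatenate(([0.0], cdf, [p_choice]))) # predicted mass per inter-quantile bin
        loglik += np.sum(counts * np.log(np.maximum(probs, 1e-12)))
    return loglik
```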
2.2.1. Drift Rate
Figure 4A plots average drift rates as a function of task and uncertain offset. We fit a linear mixed model with one between-subjects factor for task and one within-subjects factor for uncertain offset. We found a significant main effect of uncertain offset (χ2(2) = 44.9, p < 0.001), indicating that changes in the uncertain offset changed the rate of evidence accumulation. This difference in drift rates accounts for the effect of uncertain offset on choice proportions and RTs.
Figure 4.
Average parameter values of the diffusion model fits. (A). Mean drift rate. (B). Mean bias. (C). Mean threshold. (D). Mean non-decision time. Error bars correspond to one SEM.
More important for this study is the difference in drift rate between the FGT and the perceptual analogue, manifested as a main effect of task (χ2(1) = 8.89, p = 0.002), with participants accumulating evidence favoring the uncertain alternative faster in the FGT than in the perceptual task (z = 2.77, padj = 0.007). Moreover, we found that drift rates were positive on −30 offset trials in the FGT (z = 2.13, padj = 0.041), but were negative on the same trials in the perceptual task (z = 2.09, padj = 0.045). That is, when participants viewed equivalent statistical information framed as payoffs, information processing changed in a way that led them to prefer the uncertain alternative even when the objective evidence supported the certain alternative as offering the higher expected reward. Finally, we also found a significant interaction between task and uncertain offset in the drift rates (χ2(2) = 6.9, p = 0.031). Since the interaction is scale-dependent, we refrain from further interpretation (Loftus, 1978; Wagenmakers et al., 2012).
2.2.2. Bias, Threshold & Non-Decision Time
We also examined whether participants in the FGT and perceptual task varied in the amount of evidence they required to make a decision, their response bias toward either alternative and the amount of time they spent on non-decisional processing (Figure 4 B–D). We tested each hypothesis using a two-sided t-test. In all cases, the results were consistent with chance variation (for threshold, t(23.4) = −0.529; for bias, t(20.9) = 1.26; for non-decision time, t(34.7) = 0.852, all ps > 0.2).
These results indicate that the observed differences in choice and RT across the FGT and perceptual task derive primarily from differences in the evidence being accumulated, and not from, for example, an overall bias for participants to be risk-seeking or for participants to set a larger threshold in the perceptual task. The differences in drift rates suggest that on average participants in the FGT collected evidence favoring the uncertain alternative faster than participants in the perceptual task, which in turn led to FGT participants responding faster and preferring the uncertain alternative more often. Moreover, the interaction between task and uncertain offset in the drift rate implies the critical difference between the FGT and the perceptual task is a difference in the evidence that is accumulated. In the discussion, we argue that this difference in drift rates between the FGT and its perceptual analogue is due to participants’ valuation process during the FGT. In particular, participants appear to weigh experienced events in an optimistic manner.
3. Discussion
In this paper, we investigated how people make rapid decisions from experience when they are required to quickly integrate a stream of information to make a choice. We showed that as rewards from an uncertain alternative became, on average, more attractive, preference shifted towards the uncertain alternative. Consistent with sequential sampling accounts of preferential choice (Busemeyer & Townsend, 1993; Usher & McClelland, 2004), choice and response times were well described by a drift diffusion process. This was also true in the perceptual analogue of the task, suggesting these two different decisions use the same or similar systems (Busemeyer et al., 2006; Shadlen et al., 2008; Summerfield & Tsetsos, 2012; Symmonds & Dolan, 2012).
However, the study also revealed a dissociation between the two types of decisions. In particular, preference for the uncertain alternative was stronger and response times were faster during the FGT than in the perceptual analogue. Our drift diffusion analyses suggest these results are not due to a response bias or differences in the location of the choice threshold, but the result of a difference in the evidence that is accumulated during deliberation. In other words, preferential choice under uncertainty appears to lead to a different representation of statistical information as compared to when participants made perceptual judgments under uncertainty. Given that participants in both conditions saw the exact same statistical information, the next question is what explains this difference in the accumulated evidence.
Behavioral decision theory offers several possible explanations for how the subjective representation of value can depart from the objective value. In the remainder of this section, we determine whether three of the most common of these explanations - a utility function (Bernoulli, 1954), serial order effects (Hogarth & Einhorn, 1992) and rank dependent weights from rank-dependent utility theory (Quiggin, 1982; Tversky & Kahneman, 1992; Luce, 2000) - could account for our data. To do so, we show how each concept from behavioral decision theory morphs the evidence that is accumulated in a sequential sampling process and thus how it affects the drift rate.
3.1. The Sequential Sampling Process Model
Before developing the possible explanations, it is necessary to formally define the sequential sampling framework, which our analyses suggest provides a good description of the basic decision process. In both conditions, if decision makers were accumulating evidence objectively comparing the value of the certain alternative c to each of the k values sampled from the uncertain alternative, labeled (y1, …, yk), then the accumulated evidence would be
(1)  E(k) = \sum_{j=1}^{k} \left( y_j - c \right)
Equation 1 formally links the information presented to the participant to the evidence that is accumulated during the sequential sampling process, making the latent process assumed in our earlier drift-diffusion model analyses explicit. Previous work (Ratcliff & McKoon, 2008; Palmer et al., 2005), as well as data from our perceptual task, suggest this is a reasonable approximation of the accumulated evidence during our perceptual condition. Unfortunately, the sequential sampling process in Equation 1 cannot describe the decision process in the FGT because it cannot predict a positive drift rate when the uncertain offset is negative (see Figure 4 Panel A). However, given its ability to account for perceptual data, we will build on Equation 1 to incorporate effects from behavioral decision theory.
3.2. Sequential Sampling of Subjective Expected Utility
The first possible explanation we investigate is a utility function. Utility functions are a standard way of modeling the representational change between objective monetary amounts and subjective values. This function formalizes the idea that decision makers evaluate monetary outcomes based on their subjective value or utility rather than their objective value (Bernoulli, 1954; Kahneman & Tversky, 1979; von Neumann & Morgenstern, 1947; Savage, 1954). In sequential sampling models, utility impacts what evidence is accumulated and thus affects the measured drift rate (Busemeyer & Townsend, 1993). Decision makers typically exhibit decreasing sensitivity to payoffs as the magnitude of the payoffs increases, which is modeled with a power function u(x) = x^θ with 0 < θ < 1 (Kahneman & Tversky, 1979; Luce, 2000). Substituting such a function into the evidence accumulation process (Equation 1) results in
(2)  E(k) = \sum_{j=1}^{k} \left( y_j^{\theta} - c^{\theta} \right)
For utility to account for preferences, there must be a θ > 0 that causes yj − c and yj^θ − c^θ to have different signs. When this is true, the evidence accumulated in Equation 1 and the evidence accumulated in Equation 2 will also have different signs, resulting in the preference reversal observed in the −30 offset condition. There can be no such θ, however, because power functions increase monotonically for θ > 0, meaning that utility cannot account for our data.
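Spelled out, the monotonicity step is:

```latex
% For \theta > 0 the power function is strictly increasing on positive payoffs, so
y_j > c \;\Longleftrightarrow\; y_j^{\theta} > c^{\theta}
\quad\Longrightarrow\quad
\operatorname{sign}\!\left(y_j - c\right) \;=\; \operatorname{sign}\!\left(y_j^{\theta} - c^{\theta}\right).
```

Hence the increments entering Equations 1 and 2 always share a sign, so no admissible θ can flip the average direction of the accumulated evidence.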
3.3. Serial Order Sequential Sampling
The previous section illustrates that utility on its own cannot reverse the sign of the average difference between the uncertain and certain alternatives on −30 offset trials. Another way of accounting for different representations in the perceptual task and FGT is through serial-order effects, which discount a particular sample’s contribution to the accumulated evidence based on when that sample was observed (Hogarth & Einhorn, 1992). These effects have been observed in experience-based preference tasks similar to the FGT (e.g., Hertwig et al., 2004; Tsetsos et al., 2012).
Serial order effects can be formally incorporated into Equation 2 by weighting the contribution of each sample j by an amount aj(k) between 0 and 1 whose value depends on when j was observed. This results in the sequential sampling process
(3)  E(k) = \sum_{j=1}^{k} a_j(k) \left( y_j^{\theta} - c^{\theta} \right)
Clearly, when aj(k) = 1, sample j, yj^θ − c^θ, contributes its full amount to the accumulated evidence, and when aj(k) = 0, sample j does not contribute at all. For intermediate values of aj(k), sample j contributes nearly its full amount when aj(k) is near one and almost nothing when aj(k) is near zero. For example, a primacy effect can be accounted for by setting a1(k) = 1 and aj(k) = 1/j, and a recency effect by setting ak(k) = 1 and aj(k) = 1/(k − j + 1) for j < k.
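A toy implementation of these two example weighting schemes (Python, purely illustrative):

```python
def serial_order_weights(k, kind="recency"):
    """Example serial-order weights a_j(k) from the text: primacy weights decay
    with sample position (a_j = 1/j); recency weights grow toward the most
    recent sample (a_j = 1/(k - j + 1), so a_k = 1). Both lie in (0, 1]."""
    if kind == "primacy":
        return [1.0 / j for j in range(1, k + 1)]
    return [1.0 / (k - j + 1) for j in range(1, k + 1)]

print(serial_order_weights(5, "primacy"))   # [1.0, 0.5, 0.33..., 0.25, 0.2]
print(serial_order_weights(5, "recency"))   # [0.2, 0.25, 0.33..., 0.5, 1.0]
```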
Again, the only way to account for choices in the −30 uncertain offset condition is for yj − c and aj(k)(yj^θ − c^θ) to have different signs. The argument in the previous section rules out the utility exponent θ as the source of such a sign reversal, so the only remaining possibility is for the serial order weight aj(k) to be negative, aj(k) < 0. However, if this were the case then the serial order updating model would allow objective evidence in support of one alternative (e.g., the uncertain alternative) to be represented as evidence for the other alternative (e.g., the certain alternative). As this is not consonant with the usual definition of a serial order effect (Hogarth & Einhorn, 1992), we require aj(k) > 0. By an argument analogous to the one presented for subjective expected utility, however, this means that incorporating serial order effects cannot account for choice data in the −30 uncertain offset condition.
3.4. Rank-Dependent Sequential Sampling
3.4.1. Decision Weights
A remaining hypothesis from behavioral decision theory is that people’s attention to observed outcomes is not uniform, but varies with an outcome’s likelihood of occurring and overall favorability (Tversky & Kahneman, 1992). This is typically captured using rank dependent utility theory and its decision weight construct (Quiggin, 1982; Tversky & Kahneman, 1992; Luce, 2000). According to rank dependent utility theory, the subjective value of the uncertain alternative is
(4)  v = \sum_{i=1}^{n} \pi_i\, u(x_i)
In this expression, x1 < … < xn are the possible payoffs of the uncertain alternative ordered by their desirability and π1, …, πn are decision weights determining each payoff's contribution to the uncertain alternative's value. The decision weights πi are a function of what Wakker (2010) calls "good-news probabilities" - the probabilities qi of observing a payoff at least as large as xi.
These probabilities are transformed into decision weights by a probability weighting function W. Following Prelec (1998), we use the probability weighting function with parameters γ and δ,
(5)  W(q) = \exp\left[ -\delta \left( -\ln q \right)^{\gamma} \right]
As illustrated in Figure 5, γ controls the curvature of the weighting function, and δ controls the elevation of the function, producing optimistic or pessimistic weights. The decision weights πi are determined by taking successive differences between the transformed good-news probabilities, so that πn = W(qn) and πi = W(qi+1) − W(qi) for i < n. In the end, the decision weights reflect the marginal contribution of each possible outcome to the value of the alternative.
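A small sketch of this weighting scheme for a discrete set of outcomes. The code follows the decumulative ("good-news") convention in Wakker (2010), taking the differences of W in the direction that yields nonnegative weights, which may differ cosmetically from the indexing used in the text; the example probabilities and parameter values are our own.

```python
import numpy as np

def prelec(q, gamma, delta):
    """Prelec weighting function W(q) = exp(-delta * (-ln q)^gamma), with W(0) = 0."""
    q = np.asarray(q, dtype=float)
    return np.where(q > 0, np.exp(-delta * (-np.log(np.maximum(q, 1e-300))) ** gamma), 0.0)

def rank_dependent_weights(probs, gamma, delta):
    """Decision weights pi_i for outcomes x_1 < ... < x_n with probabilities probs[i],
    computed as successive differences of W applied to the good-news probabilities
    q_i = P(outcome >= x_i); the difference is taken in the direction that keeps
    the weights nonnegative."""
    p = np.asarray(probs, dtype=float)
    q = np.concatenate((np.cumsum(p[::-1])[::-1], [0.0]))   # q_1 = 1, ..., q_n = p_n, then 0
    W = prelec(q, gamma, delta)
    return W[:-1] - W[1:]

# Five equally likely, increasingly desirable outcomes with illustrative parameter values.
pi = rank_dependent_weights([0.2] * 5, gamma=0.7, delta=0.3)
print(np.round(pi, 3), round(pi.sum(), 3))   # weights sum to W(1) = 1
```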
Figure 5.
Example probability weighting functions using the Prelec equation. (A) Illustration of how changes in the curvature parameter affect the shape of the function. (B) Illustration of how changes in the elevation parameter affect the shape of the function. Note the unity line (dotted line) denotes objective probability (i.e., no weighting).
3.4.2. Sequential Sampling Weights
A different formulation of the decision weights is needed in a sequential sampling process because this process is a running sum of sampled outcomes, not a weighted average as in Equation 4. As a result, we need to model people’s sensitivity to each sample as it is experienced. This sensitivity to each sample is approximately equal to the derivative of the probability weighting function,
(6)  \omega_i = \frac{W(q_{i+1}) - W(q_i)}{q_{i+1} - q_i} \approx \frac{dW}{dq}\bigg|_{q = q_i}
(for a derivation of the sensitivity parameters see the Appendix). We show several probability weighting functions and their corresponding derivatives in Figure 6. They illustrate how probability weighting functions color observed evidence. When ωi > 1, the decision maker shows greater sensitivity to the corresponding sample, giving it more weight than it objectively should be given. When ωi < 1, the decision maker is less sensitive to the sample. Finally, when ωi = 1, the decision maker treats the sample equivalently to its objective weight in the accumulated sum. Using these sensitivity weights, the evidence accumulation process is
(7)  E(k) = \sum_{j=1}^{k} \omega_j \left[ \left( y_j + \varepsilon_j \right)^{\theta} - c^{\theta} \right]
where ωj is the sensitivity weight (Equation 6) associated with sample yj and εj is normally distributed perceptual measurement error, whose standard deviation τ is estimated in Section 3.4.3. One interesting aspect of this rank-dependent sequential sampling model is that, due to the weak law of large numbers, as sample size grows (e.g., due to larger choice thresholds), predicted choices from the model will converge with predicted choices from rank-dependent utility theory (see the Appendix for a formal proof). Thus, the rank-dependent sequential sampling model we developed here represents a dynamic and stochastic generalization of rank-dependent utility theory.
Figure 6.
Probability weighting functions (A) and their corresponding derivatives (B). The derivatives of the functions roughly approximate the outcome weights used in the rank-dependent sequential sampling process.
3.4.3. Optimistic Sample Weights
We assessed the ability of rank-dependent sequential sampling to account for differences between the FGT and perceptual tasks via simulation. We used simulation, rather than fitting the model, because the likelihood has not yet been derived for the rank-dependent sequential sampling model. Our simulation consisted of two stages. In the first stage, we determined the decision threshold A and perceptual measurement error τ, to be defined shortly, that provided the best fit to participants' behavior for each uncertain offset condition (−30, 0, +30) when the uncertain standard deviation was 50 dots. A detailed description of the simulation-based fitting procedure is contained in the Appendix.
In the second stage, we simulated the sequential sampling rank dependent model using the values of A and τ from the perceptual task as estimates for the same parameters in the gambling task. Respectively, these estimates were 755 dots and 80 dots. This assumption seems reasonable given the fact that the analyses with the drift diffusion model did not yield significantly different thresholds for participants in the perceptual and gambling tasks. Moreover, linking the tasks in this way reduced the number of free parameters in our simulation while still allowing us to compare the FGT and perceptual analogue.
Each decision was simulated by sequentially sampling outcomes from the uncertain alternative, perturbing these outcomes with normally-distributed noise, inserting the noisy values into Equation 7, and making a decision when the accumulated evidence E(k) reached the corresponding threshold. The noise simulated perceptual measurement error, and its standard deviation is the free parameter τ estimated in the perceptual task. For simplicity, we set the utility parameter θ in Equation 7 to one. To investigate the sequential sampling weights that best recreate the gambling data, we simulated 1000 decisions for each γ and δ in a grid of values extending from 0 to 2. We ran this simulation for the experimental conditions in the FGT that corresponded to the conditions that informed the perceptual model (i.e., the offset conditions of −30, 0 and +30 dots, when the standard deviation was 50 dots).
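A condensed sketch of this simulation, under our reading of the procedure: the discretization of the payoff distribution onto a ±2 SD grid, the helper names, and the single illustrative (γ, δ) pair are assumptions, while the threshold (755 dots), the noise SD (80 dots), and the flooring of noisy values follow the values and rules reported in the text and footnotes.

```python
import numpy as np

def prelec(q, gamma, delta):
    # Prelec weighting function W(q) = exp(-delta * (-ln q)^gamma), with W(0) = 0.
    q = np.asarray(q, dtype=float)
    return np.where(q > 0, np.exp(-delta * (-np.log(np.maximum(q, 1e-300))) ** gamma), 0.0)

def sensitivity_weights(probs, gamma, delta):
    # omega_i: average slope of W over each outcome's good-news interval (Eq. 6 / A.7);
    # differences taken in the direction that keeps the weights nonnegative.
    p = np.asarray(probs, dtype=float)
    q = np.concatenate((np.cumsum(p[::-1])[::-1], [0.0]))
    W = prelec(q, gamma, delta)
    return (W[:-1] - W[1:]) / p

def simulate_condition(mean, sd, certain, gamma, delta,
                       threshold=755.0, tau=80.0, n_trials=1000, seed=0):
    """Simulate FGT decisions for one offset condition with the rank-dependent
    accumulator (Eq. 7, theta = 1). The payoff distribution is discretized onto
    a +/- 2 SD grid -- an assumption made here for computing the weights."""
    rng = np.random.default_rng(seed)
    grid = np.arange(mean - 2 * sd, mean + 2 * sd + 1)          # possible dot counts
    dens = np.exp(-0.5 * ((grid - mean) / sd) ** 2)
    probs = dens / dens.sum()
    omega = sensitivity_weights(probs, gamma, delta)
    choices, rts = [], []
    for _ in range(n_trials):
        evidence, k = 0.0, 0
        while abs(evidence) < threshold:
            i = rng.choice(len(grid), p=probs)
            y = np.floor(grid[i] + rng.normal(0.0, tau))        # noisy, floored sample
            evidence += omega[i] * (y - certain)
            k += 1
        choices.append(evidence > 0)                            # True = chose uncertain
        rts.append(0.05 * k)                                    # one sample every 50 ms
    return np.mean(choices), np.mean(rts)

# Example: the -30 offset condition (certain = 130 dots, uncertain mean = 100, SD = 50).
print(simulate_condition(mean=100, sd=50, certain=130, gamma=0.7, delta=0.3))
```

Looping this call over a grid of γ and δ values would reproduce the kind of search summarized in Figure 7.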
Figure 7 summarizes the results of this simulation. Panel A shows which values of γ and δ resulted in choice proportions and RTs satisfying the following three conditions. First, choice proportions in the −30 offset condition must be above 0.5. Second, choice proportions must monotonically increase with increasing uncertain offset. Finally, we calculated the mean-squared deviation between the observed and predicted RTs over all decisions and show only those values whose deviation was less than 0.5. The shading in Figure 7 shows the mean-squared deviation for the points satisfying these conditions.
Figure 7.
Simulation results from the rank-dependent sequential sampling process. (A). Values of δ and γ that satisfied two conditions, colored by the variability of their RTs across the three offset levels. The first condition was that the uncertain alternative was chosen on at least half of trials for all offset levels. The second was that the number of times the uncertain alternative was chosen must increase with increasing offset. (B). Prelec functions corresponding to the δ and γ values in A. (C). The derivatives of the Prelec functions in B. The lines in B and C were colored by the variability of their RTs across the three offset levels.
Inspection of the plot reveals that γ values near 0.7 and δ values near 0.3 do a good job of recreating our data. Panel B plots the probability weights W(qi) for the values of γ and δ shown in Panel A, and Panel C plots the sequential sampling weights ωi. These results show that probability weighting can indeed qualitatively account for our data. Moreover, compared with γ = δ = 1 (i.e., objective weighting) in the perceptual task, the best-fitting values for the FGT suggest that participants weighted observed outcomes more optimistically than participants in the perceptual task. In particular, participants in the FGT tended to emphasize larger gains and deemphasize smaller gains in determining the value of the uncertain alternative.
Apparently, simply framing the task as a gamble is sufficient to change how decision makers allocate attentional weight to sampled outcomes. Non-linear decision weights have sometimes been explained in terms of a perceptual-like distortion (Kahneman & Tversky, 1984). Indeed, choices in the perceptual condition - specifically the bias to choose the uncertain alternative when the two alternatives had equal means - are consistent with some overweighting of large magnitudes. However, the comparison between the FGT and its perceptual analogue suggests the rank-dependent weights go beyond any simple constant level of perceptual distortion. Similar findings of decision makers overweighting extreme payoffs have been reported in other studies of decisions made from experience (Tsetsos et al., 2012; Ludvig et al., 2013), and we discuss this finding in the context of those studies below. Before doing so, we discuss the implications of our results for computational models of preferential choice.
3.5. Implications for Computational Models of Preferential Choice
Besides revealing some of the basic properties of rapid decisions from experience, our results are also informative for computational models of decision making (Busemeyer & Townsend, 1993; Usher & McClelland, 2004). As discussed before, these models often assume a sequential sampling process underlies preferential choice. This sequential sampling assumption implies that to make a choice, decision makers mentally sample or simulate possible outcomes from the available alternatives. These outcomes could be memory traces of past outcomes or, in the case of gambles, they could be newly sampled outcomes as one’s attention is drawn to consider different possible outcomes (Busemeyer, 1982).
One assumption in these models is that the contribution of a sampled outcome to the accumulated evidence does not depend on the magnitude of that outcome. Our results suggest otherwise. The rank-dependent sample weights imply that the contribution of each outcome to the accumulated evidence at each time point depends partly on the relative standing of the sampled outcome in the larger set of possible outcomes. Having said that, rank-dependent sample weighting does not discount the role of other factors, such as serial dependency (e.g., Tsetsos et al., 2012) and competing interactions with other alternatives (Roe et al., 2001), in shaping preferential choice. In fact, the FGT and similar tasks (Tsetsos et al., 2012) make the systematic investigation of these and other factors possible, through their ability to explicitly control the flow of samples of reward information.
3.6. Decisions from Experience
Our study falls into the general category of experience-based decisions where people must gather information from noisy samples of outcomes (e.g., Busemeyer, 1985; Hertwig et al., 2004; Ungemach et al., 2009; Glaser et al., 2012). The FGT and our results contribute to our general understanding of experience-based decisions in several different ways. Methodologically, the FGT expands the decisions-from-experience paradigm in at least three respects.
First, it expands the paradigm to rapid presentations of payoff information. Typically the paradigm has focused on slow trial-by-trial presentation of payoff information (e.g., Hertwig et al., 2004; Barron & Erev, 2003). In a notable extension of the paradigm, Tsetsos et al. (2012) increased the presentation rate to 2–4 Hz; here we have pushed it even further to 20 Hz (i.e., a sample every 50 ms). This means that for the average decision, which participants took about 1.37 s to make, they were shown about 27 samples from the uncertain alternative. Compare that with the slower experience-based decision paradigms, where participants often base their decisions on 7 to 10 observations from a given alternative, with each observation displayed for a second or two (Hertwig et al., 2004). Investigating how people make these rapid decisions from experience is important. One reason is that there is a vast array of preferential decisions made on the basis of rapidly communicated statistical information, of which our day-trader example in the introduction is just one. Our work shows that these decisions are well described by a sequential sampling process, a process well known in statistics (Wald & Wolfowitz, 1949) and used to make other more perceptual (Link & Heath, 1975; Gold & Shadlen, 2007) and memory-based decisions (Ratcliff, 1978).
A second methodological contribution is the extension of experience-based decisions to continuous distributions of payoffs. Most current studies of experience-based decisions focus on binary gambles whose payoffs follow a discrete probability distribution (Ludvig et al., 2013; Fox & Hadar, 2006; Gonzalez & Dutt, 2011; Hau et al., 2010, 2008; Hertwig et al., 2004; Hills & Hertwig, 2010; Rakow et al., 2008; Rakow & Newell, 2010; Ungemach et al., 2009; for studies that have used multiple outcome gambles see Busemeyer, 1985; Ert & Erev, 2007; Thaler et al., 1997; Barron & Erev, 2003). Discrete binary gambles are a useful simplification, but they have constrained the questions being asked. For example, to date most of the work on experience-based choice has centered around the height of the probability weighting function at specific probability values (Hertwig & Erev, 2009). Moving to a continuous distribution forced us to consider the implications of experiencing a range of payoffs and the contextual effects of those payoffs on each sample of experience. Here we found these contextual effects were well captured with a weighting function adapted from rank-dependent utility theories (Quiggin, 1982; Tversky & Kahneman, 1992; Luce, 2000; Wakker, 2010) within a sequential sampling framework.
A third methodological contribution is that the perceptual analogue to the FGT provides a different perspective for understanding experience-based preferential choice. Recently, many studies have sought to understand decisions from experience by comparing them with so-called decisions from description, in which respondents learn about the outcomes and probabilities of gambles through convenient descriptions. The main finding that comes from this comparison is that in experience-based choices people choose as if the impact of objectively rare outcomes has been attenuated in comparison to the weight given to the same outcomes in decisions from description, producing a so-called description-experience gap (Hau et al., 2008). This description-based control, however, has proven troublesome for understanding the description-experience gap and, more generally, how people make decisions from experience. One reason is that the comparison between decisions from description and decisions from experience is confounded by sampling error, which alone can produce underweighting of rare events (Fox & Hadar, 2006; Rakow et al., 2008; Ungemach et al., 2009). That is, during decisions from experience one factor that impacts the internal representation of the gambles is the random variability in the sampled payoffs, which is not present when choices are made from descriptions of the gambles. Our comparison between preferential and perceptual choice using the FGT and its perceptual analogue controls for sampling error, since both tasks are experience-based and use the same information, but framed differently. Note that sampling error should also be reduced by the larger sample sizes that are taken to make each decision in the FGT. The perceptual task also controls for other confounds that the comparison to decisions from description does not. One of these is the perceptual error that naturally arises when outcomes/payoffs are conveyed via experience as compared to description (Shafir et al., 2008). Other confounds include how respondents search or sample the alternatives for information, the memory requirements, and even possibly the decision process itself (Hertwig & Erev, 2009).
Together these methodological extensions helped reveal that while people use similar cognitive machinery to make rapid experience-based perceptual and preferential choices, the underlying processes also systematically differ. Specifically, the valuation process used in these preferential decisions appears to change the subjective representation of the statistical information, with more attentional weight being placed on larger and more extreme potential payoffs. This can lead to risk-seeking preferences in the gain domain when the risky option has extreme potential payoffs. Our result echoes similar results found in other studies of experience-based decision making (Tsetsos et al., 2012; Ludvig et al., 2013). The next question is why people overweight these extreme events. Ludvig et al. (2013) suggest that this effect is similar to perceptual context effects where, for instance, colors appear different depending on the other colors around them (Lotto & Purves, 2000) or lines look longer or shorter depending on their arrowheads (Müller-Lyer, 1889). In the case of gambles, payoffs can seem more or less extreme relative to other possible payoffs. In our case, the rank-dependent nature of the weights suggests this arises because of the range of payoffs people experience during a particular trial, with the more extreme payoffs grabbing more attention. Our perceptual control, however, helps rule out a purely perceptual explanation of this effect. Instead, as mentioned earlier, it appears that framing the task as a gamble changes the attentional weight allocated to possible outcomes, and the fact that this salience arises at rapid presentation rates would seem to speak against more deliberative processes like learning (Niv et al., 2012). Perhaps the change in attention is the result of a motivational (Weber, 1994; Lopes, 1987) or arousal (Pham, 2007) component that arises in making preferential decisions. Either way, our results support a growing appreciation of the role that attention plays in forming a preference during economic decisions (Krajbich et al., 2010; Krajbich & Rangel, 2011).
4. Conclusion
In conclusion, in this paper we investigated rapid experience-based decision making using a novel gambling task, the FGT. This task and its perceptual analogue showed that people use a similar sequential sampling process to make these two decisions. However, there are critical differences that lead to a dissociation between preferential and perceptual choice. We show that the valuation process used during preferential choices changes the representation of the statistical information that is accumulated. In particular, the results are consistent with a sequential sampling process using rank dependent weights where decision makers optimistically weighted higher payoffs during deliberation.
Supplementary Material
- We investigate preferential choice from rapidly-presented perceptual evidence.
- We show that the decision process is consistent with a sequential sampling process.
- We compare preferential with perceptual decisions based on the same stimuli.
- We find greater attention to larger potential payoffs in preferential decisions.
Acknowledgments
We would like to thank Mitchell Uitvlugt for his assistance in preparing the manuscript. We would also like to thank the members of the Laboratory for Cognitive and Decision Science at Michigan State University for their insightful comments on earlier drafts of this paper.
This work was supported in part by NIH grant R03DA033455 awarded to TJP and TL.
Appendix
Appendix A. Derivation of Sensitivity Parameters in Rank Dependent Sequential Sampling Process
Appendix A.1. Definition of the Sensitivity Parameter
The sensitivity weights in the rank-dependent sequential sampling process were formed in a manner consistent with rank dependent utility theory (Wakker, 2010, Appendix 6.8). This supplement describes how they were derived and how the model was estimated. Recall that the value a decision maker assigns to an uncertain alternative under rank-dependent utility theory is
(A.1)  v = \sum_{i=1}^{n} \pi_i\, x_i^{\theta}
where θ is the exponential parameter in the utility function (see Discussion), x1 < … < xn are the possible outcomes of the uncertain alternative ordered by desirability and each πi is the decision weight assigned to xi. The πi are determined by the good-news probabilities qi and the Prelec (1998) function
(A.2)  W(q) = \exp\left[ -\delta \left( -\ln q \right)^{\gamma} \right]
through the equation
(A.3)  \pi_n = W(q_n), \qquad \pi_i = W(q_{i+1}) - W(q_i) \quad \text{for } i < n
Note that pi = qi+1 − qi. Since the Prelec function is differentiable,
(A.5)  \pi_i = W(q_{i+1}) - W(q_i) = \int_{q_i}^{q_{i+1}} \frac{dW}{dq}\, dq
by the Fundamental Theorem of Calculus (see Wakker, 2010, p. 199, for an in-depth discussion). Multiplying by pi/(qi+1 − qi) = 1, gives
(A.6)  \pi_i = p_i \cdot \frac{1}{q_{i+1} - q_i} \int_{q_i}^{q_{i+1}} \frac{dW}{dq}\, dq
The ratio on the right-hand side of the product in Equation A.6 is the average slope of W over the interval [qi, qi+1] and is approximately equal to the derivative of the weighting function when pi = qi+1 − qi is small. This factor measures the sensitivity of the overall rank-dependent value to each outcome. We set the outcome weights in the sequential sampling process to this sensitivity term,
(A.7)  \omega_i = \frac{1}{q_{i+1} - q_i} \int_{q_i}^{q_{i+1}} \frac{dW}{dq}\, dq = \frac{\pi_i}{p_i}
Appendix A.2. Agreement with Rank-Dependent Utility
Describing πi in this way allows the sensitivity weights ωi to be incorporated into a sequential sampling process as
(A.8)  E(k) = \sum_{j=1}^{k} \omega_j \left( y_j^{\theta} - c^{\theta} \right)
where ωj is the sensitivity weight associated with outcome yj. Note that Equation A.8 is the same as Equation 7 except that Equation 7 contains a perceptual error term and Equation A.8 does not. In this section, we show that when the decision criterion is sufficiently large, decisions made by the sequential sampling process defined in Equation A.8 agree with the decisions made by rank-dependent utility theory. This is sufficient to show that the sequential sampling process defined in Equation 7 is a process-model generalization of rank-dependent utility theory.
As noted earlier, this is a consequence of the Weak Law of Large Numbers (WLLN; Casella & Berger, 2002). The WLLN formally states that, in a large sample, the proportion of times an outcome xi occurring with probability pi = qi+1 − qi will be observed is approximately pi. For any sample of size k,
(A.9)  E(k) = \sum_{j=1}^{k} \omega_j \left( y_j^{\theta} - c^{\theta} \right) = \sum_{i=1}^{n} m_{i,k}\, \omega_i \left( x_i^{\theta} - c^{\theta} \right)
where mi,k is the number of times outcome i is observed in the sample. When k is sufficiently large, mi,k ≈ k(qi+1 − qi) by the WLLN. Substituting k(qi+1 − qi) for mi,k in Equation A.9 yields
(A.10)  E(k) \approx \sum_{i=1}^{n} k \left( q_{i+1} - q_i \right) \omega_i \left( x_i^{\theta} - c^{\theta} \right) = k \sum_{i=1}^{n} \pi_i \left( x_i^{\theta} - c^{\theta} \right)
When the decision threshold is large, a large number of samples will be observed before a decision is made. Thus, for large thresholds, the average evidence accumulated per sample from the uncertain alternative, (1/k) Σj ωj yj^θ ≈ Σi πi xi^θ, is approximately the value v assigned to the uncertain alternative by rank-dependent utility theory, and the decisions made by the sequential sampling model and rank-dependent utility theory will agree.
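A quick Monte Carlo check of this convergence claim for an arbitrary discrete example (all outcome values, probabilities, and parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([10.0, 20.0, 30.0, 40.0])     # outcomes ordered by desirability
p = np.array([0.4, 0.3, 0.2, 0.1])         # their probabilities
gamma, delta, theta = 0.7, 0.3, 1.0

q = np.concatenate((np.cumsum(p[::-1])[::-1], [0.0]))          # good-news probabilities
W = np.where(q > 0, np.exp(-delta * (-np.log(np.maximum(q, 1e-300))) ** gamma), 0.0)
pi = W[:-1] - W[1:]                        # decision weights
omega = pi / p                             # sensitivity weights omega_i = pi_i / p_i

v = np.sum(pi * x ** theta)                # rank-dependent value of the uncertain alternative

k = 200_000                                # large sample, so m_{i,k} / k is close to p_i
samples = rng.choice(len(x), size=k, p=p)
avg_evidence = np.mean(omega[samples] * x[samples] ** theta)

print(round(v, 3), round(avg_evidence, 3))  # the two numbers should nearly agree
```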
Appendix B. Fitting the Simulation Free Parameters
To fit the decision threshold A and perceptual measurement error τ, we estimated a decision threshold and perceptual measurement error for each of the three uncertain offset conditions (−30, 0,+30) when the uncertain standard deviation was 50 dots using participants’ behavior in the perceptual task. We then averaged these estimates across all uncertain offset conditions to obtain the estimates used to simulate decisions using rank-dependent weights.
Because the best-fitting decision threshold depends on the amount of perceptual measurement error, we fit each condition's decision threshold and perceptual measurement error in two steps. First, we fit a decision threshold for each measurement error in the set {1, 10, …, 100}. Then, we visually compared the choice and RT predictions for each decision threshold/perceptual measurement error pair and selected the pair that best captured participants' behavior in the perceptual task. The resulting error estimates were identical across the three offset conditions (see Table B.1).
Table B.1.
Best-fitting τ and A values as a function of the uncertain offset μ.
Uncertain Offset (μ) | Measurement Error (τ̂c, dots) | Threshold (Âc, dots)
---|---|---
−30 | 80 | 1030
0 | 80 | 227
+30 | 80 | 1012
Mean | 80 | 755
To fit the decision threshold given a level of perceptual measurement error, we first computed the average number of samples viewed on a trial, M, which is equal to the mean RT divided by the 50 ms interval between samples. We then simulated N = 1000 sequences of M independent dot differences yi1, …, yiM and measurement errors εi1, …, εiM for each condition, and estimated the condition's threshold as the average magnitude of the total evidence accumulated over the M samples in a trial,
(B.1)  \hat{A}_c = \frac{1}{N} \sum_{i=1}^{N} \left| \sum_{j=1}^{M} \left( y_{ij} + \varepsilon_{ij} \right) \right|
We then estimated the threshold parameter as the average of the Âc. The values of Âc fit for each condition are shown in Table B.1.
Examination of Table B.1 shows that the thresholds fit for the ±30 offset conditions are approximately equal to one another, but not to the threshold fit for the 0 offset condition. This is a consequence of all three conditions having the same mean RT despite their difference in mean perceptual evidence. Because they have the same mean RT the mean number of observed samples M will also be the same. As a result, the threshold estimated in the 0 offset condition must be smaller than the thresholds in the ±30 condition since the absolute value of the evidence collected from each sample is smaller in the 0 condition (i.e., on average the value of the additional evidence will be 0). To model the data in the gambling trials, we used a single threshold, by taking the mean of threshold estimates for the three offset conditions for the perceptual trials (i.e., 755 dots). In addition, given the large difference between the 0 and ±30 conditions, we also ran a simulation using different thresholds for each condition to make sure that our results were not simply an artifact of averaging the thresholds. Though not presented here, we obtained qualitatively similar results from both simulations.
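A sketch of this threshold-estimation step under our reading of Equation B.1 (the average magnitude of the summed noisy evidence over simulated sequences); the Gaussian form for the dot differences and the example mean RT are assumptions, while τ = 80 dots comes from Table B.1.

```python
import numpy as np

def estimate_threshold(offset, uncertain_sd, tau, mean_rt, dt=0.05,
                       n_sequences=1000, seed=0):
    """Estimate a condition's decision threshold as in our reading of Eq. B.1:
    the average magnitude of the evidence accumulated over M = mean_rt / dt
    samples, where each sample is a dot difference plus perceptual noise."""
    rng = np.random.default_rng(seed)
    M = int(round(mean_rt / dt))                                  # samples per trial at 20 Hz
    y = rng.normal(offset, uncertain_sd, size=(n_sequences, M))   # dot differences
    eps = rng.normal(0.0, tau, size=(n_sequences, M))             # measurement error
    return np.abs((y + eps).sum(axis=1)).mean()

# Illustrative call for the -30 offset, SD = 50 condition; the mean RT value is hypothetical.
print(round(estimate_threshold(offset=-30, uncertain_sd=50, tau=80, mean_rt=1.7), 1))
```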
Footnotes
The curvature parameter is often described as controlling the overweighting and underweighting of rare events. However, in a rank dependent model the distortion is applied to the probability ranks so that large and small ranks are overweighted where the function is steepest (Wakker, 2010).
The move to a rank-dependent calculation of decision weights over a direct transformation of probabilities (e.g., Kahneman & Tversky, 1979) addresses what appear to be implausible predictions of violations of stochastic dominance (Diecidue & Wakker, 2001).
We also took the floor of the noisy value, so the noisy value still corresponded to a number of dots.
Specifically, we simulated data for every (γ, δ) such that γ = k1/20 and δ = k2/20, where 1 ≤ k1, k2 ≤ 40. This means that 1/20 ≤ γ, δ ≤ 2.
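As a quick sketch (variable names are ours, not the paper’s), the parameter grid described in this footnote can be enumerated as follows:

```python
import numpy as np

# gamma and delta each take the values k/20 for k = 1, ..., 40,
# i.e., 0.05 to 2.00 in steps of 0.05, giving 40 x 40 = 1600 pairs.
values = np.arange(1, 41) / 20.0
grid = [(gamma, delta) for gamma in values for delta in values]
assert len(grid) == 1600
```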
In some cases, the weight given to rare outcomes is attenuated relative to the weight they deserve according to their objective probabilities (see for example problems 5 and 6 in Hertwig et al., 2004).
This result of greater weight to more extreme payoffs does seem to contrast with the typical finding of underweighting of rare events often reported in decisions from experience (Hertwig & Erev, 2009). However, as described earlier, the underweighting of rare events is often defined relative to choices in decisions from description, whereas our study, as we have discussed, uses a perceptual control. Even so, it may be the case that underweighting of rare events is confined to experience-based paradigms that use slow trial-by-trial presentation of payoff information. As we have described in the discussion, there are a number of important differences, such as the rapid presentation of larger samples on each trial, that could lead to this difference. This is certainly an avenue for future research.
Formally, in the limit as the threshold increases, the probability of disagreement converges to zero (Eq. A.4).
Contributor Information
Matthew D. Zeigenfuse, Universität Zürich.
Timothy J. Pleskac, Michigan State University.
Taosheng Liu, Michigan State University.
References
- Barron G, Erev I. Small feedback-based decisions and their limited correspondence to description-based decisions. Journal of Behavioral Decision Making. 2003;16:215–233.
- Bernoulli D. Exposition of a new theory on the measurement of risk. Econometrica. 1954;22:23–36.
- Busemeyer JR. Choice behavior in a sequential decision-making task. Organizational Behavior & Human Performance. 1982;29:175–207.
- Busemeyer JR. Decision making under uncertainty: A comparison of simple scalability, fixed-sample, and sequential-sampling models. Journal of Experimental Psychology: Learning, Memory, & Cognition. 1985;11:538–564. doi: 10.1037//0278-7393.11.3.538.
- Busemeyer JR, Diederich A. Cognitive Modeling. Thousand Oaks, CA: SAGE Publications; 2010.
- Busemeyer JR, Jessup RK, Johnson JG, Townsend JT. Building bridges between neural models and complex decision making behaviour. Neural Networks. 2006;19:1047–1058. doi: 10.1016/j.neunet.2006.05.043.
- Busemeyer JR, Townsend JT. Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review. 1993;100:432–459. doi: 10.1037/0033-295x.100.3.432.
- Casella G, Berger RL. Statistical Inference. 2nd ed. Pacific Grove, CA: Duxbury; 2002.
- Diecidue E, Wakker PP. On the intuition of rank-dependent utility. Journal of Risk and Uncertainty. 2001;23:281–298.
- Diederich A, Busemeyer JR. Conflict and the stochastic-dominance principle of decision making. Psychological Science. 1999;10:353–359.
- Ert E, Erev I. Replicated alternatives and the role of confusion, chasing, and regret in decisions from experience. Journal of Behavioral Decision Making. 2007;20:305–322.
- Fox CR, Hadar L. "Decisions from experience" = Sampling error plus prospect theory: Reconsidering Hertwig, Barron, Weber & Erev (2004). Judgment and Decision Making. 2006;1:159–161.
- Glaser C, Trommershäuser J, Mamassian P, Maloney LT. Comparison of the distortion of probability information in decision under risk and in an equivalent visual task. Psychological Science. 2012;23:419–426. doi: 10.1177/0956797611429798.
- Gold JI, Shadlen MN. The neural basis of decision making. Annual Review of Neuroscience. 2007;30:535–574. doi: 10.1146/annurev.neuro.29.051605.113038.
- Gonzalez C, Dutt V. Instance-based learning: Integrating sampling and repeated decisions from experience. Psychological Review. 2011;118:523. doi: 10.1037/a0024558.
- Hau R, Pleskac TJ, Hertwig R. Decisions from experience and statistical probabilities: Why they trigger different choices than a priori probabilities. Journal of Behavioral Decision Making. 2010;23:48–68.
- Hau R, Pleskac TJ, Kiefer J, Hertwig R. The description-experience gap in risky choice: The role of sample size and experienced probabilities. Journal of Behavioral Decision Making. 2008;21:493–518.
- Heathcote A, Brown S, Mewhort DJK. Quantile maximum likelihood estimation of response time distributions. Psychonomic Bulletin & Review. 2002;9:394–401. doi: 10.3758/bf03196299.
- Hertwig R, Barron G, Weber EU, Erev I. Decisions from experience and the effect of rare events in risky choice. Psychological Science. 2004;15:534–539. doi: 10.1111/j.0956-7976.2004.00715.x.
- Hertwig R, Erev I. The description-experience gap in risky choice. Trends in Cognitive Sciences. 2009;13:517–523. doi: 10.1016/j.tics.2009.09.004.
- Hills TT, Hertwig R. Information search in decisions from experience: Do our patterns of sampling foreshadow our decisions? Psychological Science. 2010;21:1787–1792. doi: 10.1177/0956797610387443.
- Hogarth RM, Einhorn HJ. Order effects in belief updating: The belief-adjustment model. Cognitive Psychology. 1992;24:1–55.
- Hothorn T, Bretz F, Westfall P. Simultaneous inference in general parametric models. Biometrical Journal. 2008;50:346–363. doi: 10.1002/bimj.200810425.
- Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. Econometrica. 1979;47:263–292.
- Kahneman D, Tversky A. Choices, values, and frames. American Psychologist. 1984;39:341–350.
- Krajbich I, Armel C, Rangel A. Visual fixations and the computation and comparison of value in simple choice. Nature Neuroscience. 2010;13:1292–1298. doi: 10.1038/nn.2635.
- Krajbich I, Lu D, Camerer C, Rangel A. The attentional drift-diffusion model extends to simple purchasing decisions. Frontiers in Psychology. 2012;3. doi: 10.3389/fpsyg.2012.00193.
- Krajbich I, Rangel A. Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proceedings of the National Academy of Sciences. 2011:13852–13857. doi: 10.1073/pnas.1101328108.
- Laming DRJ. Information theory of choice-reaction times. New York, NY: Academic Press; 1968.
- Link SW, Heath RA. A sequential theory of psychological discrimination. Psychometrika. 1975;40:77–105.
- Loftus GR. On interpretation of interactions. Memory & Cognition. 1978;6:312–319.
- Lopes LL. Between hope and fear: The psychology of risk. Advances in Experimental Social Psychology. 1987;20:255–295.
- Lotto RB, Purves D. An empirical explanation of color contrast. Proceedings of the National Academy of Sciences. 2000;97:12834–12839. doi: 10.1073/pnas.210369597.
- Luce RD. Utility of Gains and Losses: Measurement-theoretical and experimental approaches. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
- Ludvig EA, Madan CR, Spetch ML. Extreme outcomes sway risky decisions from experience. Journal of Behavioral Decision Making. 2013.
- Milosavljevic M, Malmaud J, Huth A, Koch C, Rangel A. The drift diffusion model can account for the accuracy and reaction time of value-based choices under high and low time pressure. Judgment and Decision Making. 2010;5:437–449.
- Müller-Lyer FC. Optische Urteilstäuschungen. Archiv für Anatomie und Physiologie, Physiologische Abteilung. 1889;2:263–270.
- Niv Y, Edlund JA, Dayan P, O’Doherty JP. Neural prediction errors reveal a risk-sensitive reinforcement-learning process in the human brain. The Journal of Neuroscience. 2012;32:551–562. doi: 10.1523/JNEUROSCI.5498-10.2012.
- Palmer J, Huk A, Shadlen MN. The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision. 2005;5:376–404. doi: 10.1167/5.5.1.
- Pham MT. Emotion and rationality: A critical review and interpretation of empirical evidence. Review of General Psychology. 2007;11:155–178.
- Prelec D. The probability weighting function. Econometrica. 1998;66:497–527.
- Quiggin J. A theory of anticipated utility. Journal of Economic Behavior & Organization. 1982;3:323–343.
- Rakow T, Demes KA, Newell BR. Biased samples not mode of presentation: Re-examining the apparent underweighting of rare events in experience-based choice. Organizational Behavior & Human Decision Processes. 2008;106:168–179.
- Rakow T, Newell BR. Degrees of uncertainty: An overview and framework for future research on experience-based choice. Journal of Behavioral Decision Making. 2010;23:1–14.
- Ratcliff R. A theory of memory retrieval. Psychological Review. 1978;85:59–108.
- Ratcliff R, McKoon G. The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation. 2008;20:873–922. doi: 10.1162/neco.2008.12-06-420.
- Roe RM, Busemeyer JR, Townsend JT. Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review. 2001;108:370–392. doi: 10.1037/0033-295x.108.2.370.
- Savage LJ. The Foundations of Statistics. New York, NY: John Wiley & Sons; 1954.
- Shadlen MN, Kiani R, Hanks T, Churchland AK. Neurobiology of decision making: An intentional framework. In: Engel C, Singer W, editors. Better Than Conscious? Decision Making, the Human Mind, and Implications For Institutions. Cambridge, MA: MIT Press; 2008.
- Shafir S, Reich T, Tsur E, Erev I, Lotem A. Perceptual accuracy and conflicting effects of certainty on risk-taking behaviour. Nature. 2008;453:917–920. doi: 10.1038/nature06841.
- Stone M. Models for choice-reaction time. Psychometrika. 1960;25:251–260.
- Summerfield C, Tsetsos K. Building bridges between perceptual and economic decision-making: Neural and computational mechanisms. Frontiers in Neuroscience. 2012;6. doi: 10.3389/fnins.2012.00070.
- Symmonds M, Dolan RJ. The neurobiology of preference. In: Dolan RJ, Sharot T, editors. Neuroscience of Preference and Choice: Cognitive and neural mechanisms. Waltham, MA: Elsevier; 2012. pp. 4–23.
- Thaler RH, Tversky A, Kahneman D, Schwartz A. The effect of myopia and loss aversion on risk taking: An experimental test. Quarterly Journal of Economics. 1997;112:647–661.
- Tsetsos K, Chater N, Usher M. Salience driven value integration explains decision biases and preference reversal. Proceedings of the National Academy of Sciences. 2012;109:9659–9664. doi: 10.1073/pnas.1119569109.
- Tversky A, Kahneman D. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty. 1992;5:297–323.
- Ungemach C, Chater N, Stewart N. Are probabilities overweighted or underweighted when rare outcomes are experienced (rarely)? Psychological Science. 2009;20:473–479. doi: 10.1111/j.1467-9280.2009.02319.x.
- Usher M, McClelland JL. Loss aversion and inhibition in dynamical models of multialternative choice. Psychological Review. 2004;111:757–769. doi: 10.1037/0033-295X.111.3.757.
- von Neumann J, Morgenstern O. Theory of games and economic behavior. 2nd ed. Princeton, NJ: Princeton University Press; 1947.
- Wagenmakers E-J, Krypotos AM, Criss AH, Iverson G. On the interpretation of removable interactions: A survey of the field 33 years after Loftus. Memory & Cognition. 2012;40:145–160. doi: 10.3758/s13421-011-0158-0.
- Wakker PP. Prospect Theory for Risk and Ambiguity. Cambridge, UK: Cambridge University Press; 2010.
- Wald A, Wolfowitz J. Bayes solutions of sequential decision problems. Proceedings of the National Academy of Sciences of the United States of America. 1949;35:99–102. doi: 10.1073/pnas.35.2.99.
- Weber EU. From subjective probabilities to decision weights: The effect of asymmetric loss functions on the evaluation of uncertain outcomes and events. Psychological Bulletin. 1994;115:228–242.
- Weber EU, Shafir S, Blais AR. Predicting risk sensitivity in humans and lower animals: Risk as variance or coefficient of variation. Psychological Review. 2004;111:430–445. doi: 10.1037/0033-295X.111.2.430.