Abstract
Recent advances in neuroscience suggest that a utility-like calculation is involved in how the brain makes choices, and that this calculation may use a computation known as divisive normalization. While this tells us how the brain makes choices, it is not immediately evident why the brain uses this computation or exactly what behavior is consistent with it. In this paper, we address both of these questions by proving a three-way equivalence theorem between the normalization model, an information-processing model, and an axiomatic characterization. The information-processing model views behavior as optimally balancing the expected value of the chosen object against the entropic cost of reducing stochasticity in choice. This provides an optimality rationale for why the brain may have evolved to use normalization-type models. The axiomatic characterization gives a set of testable behavioral statements equivalent to the normalization model. This answers what behavior arises from normalization. Our equivalence result unifies these three models into a single theory that answers the “how”, “why”, and “what” of choice behavior.
1. Introduction
Choice is often modeled as behavior that seeks to maximize a utility function. Advances in neuroscience over the past few decades have pointed to a discrete set of brain areas apparently dedicated to representing a quantity that functions much like a utility representation (Fehr and Rangel, 2011; Glimcher, 2011; Knutson et al., 2001; Platt and Glimcher, 1999). These brain areas produce different levels of neural activity for different choice alternatives, where higher associated activity indicates a higher probability that the alternative in question is chosen. This “utility-like” process of the brain is referred to by neuroscientists as the subjective value function. Its stochastic relationship with choice can be modeled by a utility function with an additive noise term (Webb et al., 2013), which adapts some of the building blocks of classic stochastic choice theory (Luce, 1959; McFadden, 1973) to a modern neuroscientifically-motivated setting.
Another important finding from neuroscience about the subjective value function is that it appears to employ a computation known as divisive normalization (Louie et al., 2011). Originally identified in the visual domain, divisive normalization has been argued to be a canonical neural computation (Carandini and Heeger, 2012) in which the neural activity level generated to represent a particular “stimulus” (whether visual or other) is re-scaled in a way that depends on neighboring stimuli, that is, on the context. In choice behavior, divisive normalization works by re-scaling the value of each option by a function that depends on the values of all available alternatives (Louie et al., 2015). While this re-scaling function can take different forms, it is typically assumed to be proportional to the sum of the values of the items currently in the choice set plus a constant, which is the form we adopt throughout this paper.
More recently, divisive normalization has been successfully used to explain human choice behavior. Louie et al. (2013), Itthipuripat et al. (2015) and Khaw et al. (2017) conduct choice experiments that confirm different predictions of the divisive normalization model regarding context effects in choice. Webb et al. (2014) show that divisive normalization explains choice involving departures from the classic Luce rule — which is equivalent to the Independence of Irrelevant Alternatives — better than other proposals such as Multinomial Probit. Glimcher and Tymula (2018) document how divisive normalization can reproduce many of the behaviors typically associated with prospect theory. Landry and Webb (2017) show how a variant of the divisive normalization model can accommodate a range of context effects. For example, divisive normalization can also accommodate the violation of the Regularity Property often seen in the well-known Attraction Effect (Huber et al., 1982; Simonson, 1989).1 Its ability to violate Regularity (which has been previously discussed in Louie et al. (2013) and Webb et al. (2014)) demonstrates that divisive normalization is not a member of the “random utility” class of models (Block and Marschak, 1960; Thurstone, 1927), since all models in that class obey Regularity.
While empirically observed stochastic choice modeled with divisive normalization tells us how the brain makes choices, it is not immediately evident why the brain would use this particular computation or exactly what behavior is consistent with it. In this paper, we address both of these questions by proving a three-way equivalence theorem between the divisive normalization model, an information-processing model, and an axiomatic characterization. The information-processing model views behavior as optimally balancing the expected value of the chosen object against the entropic cost of reducing stochasticity in choice. This provides an optimality rationale for why the brain may have evolved to use divisive normalization. The axiomatic characterization gives a set of testable behavioral statements equivalent to the divisive normalization model. This gives a precise answer to the question of what behavior arises from divisive normalization. Our equivalence result unifies the three models into a single theory that can simultaneously address the how, why, and what of choice behavior.
Divisive normalization, as a functional form, has been used by neuroscientists for over two decades to model how neural data, and, in particular, the neurally measurable subjective value function, determines choices. However, whether neural data can be precisely predicted (identified cardinally) from choices has not yet been addressed. For neuroscientists, this has proven to be a stark limitation, since it has meant neural observables have only been inferred from choice via the fitting of arbitrary functional forms to choice data. In this paper, we prove a result showing that choice data alone can be used to uniquely specify neural observables within the framework of the normalization model. In other words, we show there is a one-to-one relationship between neural and choice observables.
This result is important to empirical neuroscientists, since it allows for novel and precise predictions which link neural and choice data. We believe that this result is also important for economists working only with choice data, since it allows the application of insights from neural analysis without neural data. For example, it has been suggested that the subjective value function can be used as a direct measure of welfare and happiness (Glimcher, 2011; Loewenstein et al., 2008). Without taking any stance on this controversial question, we merely point out that the one-to-one relationship we establish means that this measurement can be made from choice data alone – and it suggests that inter-individual comparisons may also be possible. We prove the one-to-one relationship as a corollary to our more general uniqueness result that shows that the parameters of the divisive normalization model are behaviorally identified up to a multiplicative scaling. This level of identification is similar to other theories of stochastic choice, such as the Luce rule, and is enough to rank the alternatives by their values and to rank the choice sets by the expected value the agent receives when facing that set.
We next use our equivalence result to study how the divisive normalization model handles context effects. We use a standard notion of context effects as departures from the Independence of Irrelevant Alternatives (IIA) property developed by Luce (1959). Violations of IIA have been widely documented empirically, including in the well-known Compromise and Attraction Effects (Huber et al., 1982; Simonson, 1989) as well as in a wide variety of other settings. (For a survey, see Rieskamp et al., 2006.) We apply our equivalence result to find analogs to IIA violations in our divisive normalization and information-processing models. Specifically, we show that IIA violations are equivalent to differences in the divisive factor and the marginal cost of reducing stochasticity, in the divisive normalization and information-processing models, respectively. However, divisive normalization also places limits on context effects. In the forms studied here, it does not allow (stochastic) preference reversals: If one alternative is chosen more often than a second in one choice set, then it is chosen more often than the second in every choice set. Also, we provide a novel testable restriction by showing that any two choice sets that share at least two items can be unambiguously ranked in terms of the stochasticity of the choices made on those sets.
We now offer more detail on each of the three models in our equivalence result. The divisive normalization model works by first assigning a set-independent value to each possible choice alternative. Within a particular choice set, these values are then re-scaled by a factor that depends on the available alternatives. Specifically, this set-dependent factor is equal to the sum of the values of the items in the choice set plus a constant. These re-scaled values are then multiplied by a constant that serves to set upper and lower bounds on the set of possible choice probabilities. Both constants are assumed to be set-independent. A random error term is added to each re-scaled value and the alternative with the highest sum of re-scaled value plus error is then chosen. The error term accounts for observed neurobiological stochasticity in the representation of value.
Our information-processing model views choices as optimally balancing the cost and benefit of decreasing stochasticity in choice. Decreasing stochasticity in choices decreases information entropy, which, on physical grounds, must be costly (Landauer, 1961). We model this cost as an increasing function of the associated decrease in Shannon entropy (Shannon, 1948), where the specific functional form can depend on the choice set faced. Decreasing stochasticity in choices benefits the decision maker by increasing the chance of selecting the highest valued alternative. Therefore, a decision maker faces a trade-off between the cost and benefit of reducing stochasticity, which our information-processing model optimally balances. (Other work in decision theory models this feature as a preference for hedging or stochasticity; see, for example, Machina, 1989, or Agranov and Ortoleva, 2017.) Our general information-processing analysis identifies a broad family of divisive normalization models. We also identify the specific functional form for the cost of reducing entropy that corresponds to the additive normalization factor which is most often used in the empirical neuroscience literature, thereby providing an information-processing foundation for the latter.
Our axiomatic characterization consists of a nested set of six behaviorally testable axioms that together are equivalent to the divisive normalization model. The axioms are layered in a way that allows for different versions of the model to be independently tested. The first two axioms by themselves characterize a generalized version of the divisive normalization model where the divisive term can be any strictly positive choice set-dependent factor. Adding in the next two axioms restricts the divisive term to being equal to the sum of the values of the items in the choice set plus a constant. The final two axioms restrict the values of the two constants in the model to being strictly positive.
Returning to our equivalence result, we think of its three components as answering, respectively, the “how,” the “why,” and the “what” of choice behavior. The divisive normalization model explains how the brain makes choices, namely through the normalization computation. The information-processing formulation provides some insight into why that computation is used. The axiomatic characterization outlines exactly what behavior arises. In this way, our theory provides a unified answer to the “how,” the “why,” and the “what” of choice behavior.
2. Literature review
In addition to the divisive normalization literature reviewed in the introduction, our paper relates to the literature on random choice following the Luce rule (Luce, 1959) as well as the literature that employs Shannon entropy in models of decision making.
Our use of the Gumbel error term connects the divisive normalization model to the Luce rule — without the divisive re-scaling step, our model would be equivalent to the Luce rule. The connection between the Gumbel error and the Luce rule is well known (see Luce and Suppes, 1965, who attribute the result to Holman and Marley). However, our model does not contain the Luce rule as a special case since our divisive re-scaling factor cannot be constant across sets. The divisive normalization model with Gumbel error is an instance of the more general Set-Dependent-Luce model (Marley et al., 2008), which is equivalent to a version of our model where the re-scaling factor is allowed to be any strictly positive set-dependent term. We also relate to the larger literature generalizing the Luce rule to accommodate a wider range of empirical phenomena (e.g., Echenique and Saito, 2015; Echenique et al., 2014; Gul et al., 2014; Ravid, 2015; Tserenjigmid, 2016). Our paper is similar to these in that we can also accommodate a wider range of behavior, but we differ in that we use a neurobiologically motivated functional form.
Our paper connects to previous work that employs Shannon entropy in models of decision making. Some of these previous papers also use entropy to model the cost of reducing stochasticity (Fudenberg et al., 2015; Mattsson and Weibull, 2002), while others have used entropy to model a taste for variety (Anderson et al., 1992; Swait and Marley, 2013). Several of these papers trace out a mathematical connection between entropy and choice-probability formulas similar to the one we develop. In fact, this mathematical connection goes back much further, to the physics connecting Helmholtz free energy to the Boltzmann distribution (see Mandl, 1988). Of particular note is Swait and Marley (2013), who study a model equivalent to the Set-Dependent Luce model discussed above. Thus, the model of Swait and Marley (2013) is equivalent to a divisive normalization model with an arbitrary strictly positive divisive factor.
The use of Shannon entropy in our information-processing formulation is also a point of connection with the rational-inattention literature initiated by Sims (1998, 2003). Recently, the rational-inattention notion has been applied to stochastic choice settings with uncertain values for the alternatives (Caplin and Dean, 2015; Matějka and McKay, 2014). This leads to an information-processing task: Determine the optimal cost to incur in learning about these uncertain values. By contrast, the information-processing task we consider is an efficient reduction in the intrinsic stochasticity of choosing amongst alternatives.
Finally, our paper relates to the efficient coding literature from neuroscience, which argues that neural processes should be efficiency-promoting (Attneave, 1954; Barlow, 1961). Within this literature, a number of papers advance efficiency arguments for the divisive normalization computation (see Carandini and Heeger, 2012 for a review). However, this literature differs from our paper by being concerned with applying divisive normalization to sensory processing and how the brain can efficiently store and represent sensory information. For example, one thread of this literature shows how divisive normalization can be used to de-correlate the activity of different neurons to reduce redundancy in how sensory input is represented (see for example, Schwartz and Simoncelli, 2001). We see our information-processing model as building a parallel efficiency argument at a more foundational level, with a focus on the choice domain.
3. Foundations of the normalization model
We now provide two foundations for the divisive normalization model: an information-processing model and an axiomatic characterization. The information-processing model provides insight into why the normalization computation is used — namely, because it optimally balances the costs and benefits of reducing stochasticity. The axiomatic characterization pins down what behavior the normalization model allows by providing a set of testable restrictions. The main result of this section is an equivalence theorem uniting all three models in terms of the behavior they imply. For example, any behavior that arises from the normalization model optimally solves the information-processing model and obeys our axioms. The equivalence works in all directions.
We begin with the formal framework, which we maintain throughout. Let X be a finite set consisting of all the alternatives from which the decision maker may be able to choose, which we assume contains at least four items. Let 𝒜 be the collection of all non-empty subsets of X, to be thought of as the possible choice sets the decision maker may face. As a convenient shorthand, for any function f : X → ℝ and A ∈ 𝒜, we set f(A) = Σ_{x∈A} f(x).
The choice behavior of the decision maker is described by a random choice rule ρ that assigns a full-support probability measure to every choice set A ∈ 𝒜. We limit attention to full-support measures because the divisive normalization model uses a full-support error term that implies all available options have a non-zero chance of being chosen. Formally, for any choice set A, define

Δ°(A) = {p : A → (0, 1] : Σ_{x∈A} p(x) = 1},

which is the set of all full-support probability measures on A. A random choice rule is then any function ρ that assigns to each A ∈ 𝒜 a measure ρ(·, A) ∈ Δ°(A).2 The interpretation is that ρ(x, A) is the probability that the decision maker chooses alternative x when faced with choice set A. To avoid degenerate cases, we assume ρ assigns at least three distinct probabilities in choice set X. In other words, there exist x, y, z ∈ X such that ρ(x, X), ρ(y, X), and ρ(z, X) are all distinct.
We now define the divisive normalization model using a standard functional form widely employed in the neuroscience literature (Carandini and Heeger, 2012). The normalization model generates a stochastic and set-dependent utility for each alternative, and the highest utility alternative is then chosen. The stochastic utility of alternative x in set A is

u(x, A) = γ · v(x)/(σ + v(A)) + ε_x.

The function v provides a set-independent value of each alternative. These set-independent values are then divisively re-scaled by the factor σ + v(A), which is a constant plus the sum of the values of the items in the choice set. The term γ is a strictly positive constant that sets an upper and lower bound on achievable choice probabilities, with a larger γ corresponding to more relaxed bounds. We will make these claims about γ more precise at the end of this section. Lastly, ε_x is a random noise term that is i.i.d. across the alternatives and that we assume follows a Gumbel distribution with location 0 and scale 1.
We define the normalization model as the set of choice probabilities that can be generated using this utility form. We state this more formally as follows.
Definition 1.
A random choice rule ρ has a divisive normalization representation if there exist v : X → ℝ with v > 0, σ > 0, and γ > 0, such that for any A ∈ 𝒜 and x ∈ A,

ρ(x, A) = Pr[γ · v(x)/(σ + v(A)) + ε_x ≥ γ · v(y)/(σ + v(A)) + ε_y for all y ∈ A],

where (ε_x)_{x∈X} is distributed i.i.d. Gumbel (0,1).
The divisive normalization model is distinct from random utility. On the one hand, the presence of the divisive factor allows a set dependence absent in a standard random utility model. On the other hand, random utility allows for more general assumptions on the error term. The Gumbel distribution we assume does arise in a number of settings — in particular, as the asymptotic distribution of the maximum of a sequence of i.i.d. normal random variables. See, e.g., David and Nagaraja (2003). Also note that the continuous nature of the Gumbel distribution ensures there is always a strict utility-maximizing element, so we do not have to worry about ties.
As we discussed in the literature review, the use of the Gumbel error in the divisive normalization model creates a strong connection to the standard Luce rule. If we removed the divisive step, the stochastic utility would simply be γ · v(x) + ε_x, which (given that the error is Gumbel) is one way to formulate the Luce model. Despite this, the divisive normalization model does not contain the Luce model as a special case. In other words, there is no combination of parameters that makes the divisive step irrelevant. However, we can approximate any Luce model as a limiting case.3 Specifically, if we take σ, γ to infinity at the same rate, then the ratio γ/(σ + v(A)) approaches 1, which leaves us with the standard Luce model ρ(x, A) = exp(v(x)) / Σ_{y∈A} exp(v(y)).
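Since the error terms are i.i.d. Gumbel(0,1), the resulting choice probabilities take a logit (softmax) closed form over the re-scaled values (this is Eq. (3) below). The following Python sketch, with our own function names and illustrative numbers, computes these probabilities and illustrates the Luce limit just described:

```python
import math

def dn_probs(v, A, sigma, gamma):
    # Softmax over the re-scaled values gamma * v(x) / (sigma + v(A)),
    # the closed form implied by i.i.d. Gumbel(0,1) errors.
    denom = sigma + sum(v[x] for x in A)
    z = {x: gamma * v[x] / denom for x in A}
    m = max(z.values())  # subtract the max for numerical stability
    e = {x: math.exp(z[x] - m) for x in A}
    t = sum(e.values())
    return {x: e[x] / t for x in A}

def luce_probs(v, A):
    # Standard Luce rule with weights exp(v(x)).
    e = {x: math.exp(v[x]) for x in A}
    t = sum(e.values())
    return {x: e[x] / t for x in A}

v = {"a": 1.0, "b": 2.0, "c": 3.0}
A = ["a", "b", "c"]

p = dn_probs(v, A, sigma=1.0, gamma=5.0)

# Taking sigma and gamma to infinity at the same rate recovers the Luce rule.
p_limit = dn_probs(v, A, sigma=1e9, gamma=1e9)
p_luce = luce_probs(v, A)
```

Note that the divisive step compresses the spread of the re-scaled values, so the normalization probabilities are flatter than the corresponding Luce probabilities for small σ, γ.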
3.1. Information-processing model
In our information-processing model, the decision maker balances the expected utility of a given choice rule against the cost involved in reducing stochasticity in choices. The value of alternative x is given by v(x) and the expected utility of choice rule ρ on choice set A is

Σ_{x∈A} ρ(x, A) v(x).
The information-processing costs of a particular rule ρ come from the reduction in Shannon entropy (Shannon, 1948) relative to the fully stochastic case. Shannon entropy measures the degree of stochasticity in behavior, where a higher degree of stochasticity implies higher entropy. We will find it useful to define entropy generally for any function p : A → ℝ₊, as follows:

H_A(p) = −Σ_{x∈A} p(x) ln p(x),

where 0 ln 0 is understood to equal zero. For a probability measure p on A, H_A(p) equals the associated Shannon entropy of p on A. The maximum entropy of any probability measure defined on set A is ln |A|, which is achieved by the uniform measure that assigns the same probability to each alternative in A. Therefore, the entropy reduction achieved by any such p is

ln |A| − H_A(p).
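The entropy-reduction bookkeeping can be sketched in a few lines of Python (function names and example measures are ours):

```python
import math

def entropy(p):
    # Shannon entropy -sum p(x) ln p(x), with the convention 0 ln 0 = 0.
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def entropy_reduction(p, A):
    # Reduction relative to the maximum-entropy (uniform) measure on A.
    return math.log(len(A)) - entropy(p)

A = ["a", "b", "c", "d"]
uniform = {x: 0.25 for x in A}
peaked = {"a": 0.85, "b": 0.05, "c": 0.05, "d": 0.05}
# The uniform measure achieves zero reduction; more deterministic
# (lower-entropy) choice rules achieve a strictly positive reduction.
```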
The total cost of random choice rule ρ on choice set A will be a strictly increasing function of the entropy reduction achieved by ρ on A, where the shape of the function can depend on the choice set faced. We impose standard regularity conditions that the function is continuously differentiable and convex. Combining this with the expected value of a choice rule yields our definition of optimal behavior with costly information processing.
Definition 2.
A random choice rule ρ has an information-processing representation if there exists a function v : X → ℝ and, for each A ∈ 𝒜, a strictly increasing, convex, and continuously differentiable function C_A such that for any A ∈ 𝒜,

ρ(·, A) ∈ argmax_p { Σ_{x∈A} p(x) v(x) − C_A(ln |A| + Σ_{x∈A} p(x) ln p(x)) },   (1)

where the maximization is over full-support probability measures p on A, and the argument of C_A is the entropy reduction achieved by p on A.
A choice rule ρ having an information-processing representation is actually equivalent to a more general normalization computation where the divisive factor can be any strictly positive set-dependent function.4 Hence, the information-processing framework addresses why the brain might use the general form of divisive normalization, but not the specific additive functional form for the divisive factor that is observed empirically. However, since the additive form is consistent with both behavioral and neurobiological evidence, we think it is interesting to ask what specific restrictions on the information-processing framework lead to it. We will develop what we call the Marginal Cost Condition (MCC), which restricts the marginal cost of the family of cost functions. We prove that the information-processing model together with the MCC is equivalent to the divisive normalization model with the additive functional form. Hence, the MCC can be seen as an empirically motivated functional-form restriction on the information-processing framework. Our analysis does not provide an independent motivation for the form of the MCC, but since it is equivalent to a neurobiologically motivated functional form in the normalization model, we posit that such a rationale may exist, and we leave this point as an interesting challenge for future empirical and theoretical work.
To define the MCC, first, for any v : X → ℝ with v > 0, σ > 0, γ > 0, and A ∈ 𝒜, set

h_A(v, σ, γ) = ln |A| + Σ_{x∈A} p(x) ln p(x),  where p(x) = exp(γv(x)/(σ + v(A))) / Σ_{y∈A} exp(γv(y)/(σ + v(A))).

In words, h_A(v, σ, γ) equals the entropy reduction achieved by the function that maps each x ∈ A to p(x). This is exactly the entropy reduction achieved on A by the normalization computation using (v, σ, γ).

We say an information-processing representation obeys the Marginal Cost Condition (MCC) if there exist σ > 0 and γ > 0 such that, for each A ∈ 𝒜,

C′_A(h_A(v, σ, γ)) = (σ + v(A))/γ,

where C_A denotes the cost function of the representation on choice set A. The MCC places a restriction on the marginal cost of entropy reduction at the choice probabilities generated by the divisive normalization model. Specifically, the marginal cost has to vary linearly with the total value of the items in the set. This places a neurobiologically motivated restriction on the functional forms in the information-processing model.
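To see the trade-off concretely, suppose the cost of entropy reduction is linear with slope (σ + v(A))/γ, the marginal cost that the MCC pins down at the normalization probabilities (reading the slope this way is our reconstruction of the condition). Then the divisive normalization probabilities should maximize expected value net of cost. A Python sketch, with our own names, checks this against randomly drawn alternatives:

```python
import math
import random

def dn_probs(v, A, sigma, gamma):
    # Closed-form choice probabilities of the divisive normalization model
    # with Gumbel(0,1) errors: a softmax of the re-scaled values.
    denom = sigma + sum(v[x] for x in A)
    e = {x: math.exp(gamma * v[x] / denom) for x in A}
    t = sum(e.values())
    return {x: e[x] / t for x in A}

def objective(p, v, A, kappa):
    # Expected value minus a linear cost, kappa per unit of entropy reduction.
    ev = sum(p[x] * v[x] for x in A)
    ent = -sum(p[x] * math.log(p[x]) for x in A)
    return ev - kappa * (math.log(len(A)) - ent)

v = {"a": 1.0, "b": 2.0, "c": 4.0}
A = ["a", "b", "c"]
sigma, gamma = 2.0, 3.0
kappa = (sigma + sum(v[x] for x in A)) / gamma  # assumed MCC slope

p_star = dn_probs(v, A, sigma, gamma)
best = objective(p_star, v, A, kappa)

# No randomly drawn full-support distribution should do better.
random.seed(0)
for _ in range(1000):
    w = {x: random.random() + 1e-9 for x in A}
    t = sum(w.values())
    q = {x: w[x] / t for x in A}
    assert objective(q, v, A, kappa) <= best + 1e-9
```

With a linear cost of slope κ, the optimum is the softmax of v/κ, which coincides with the normalization probabilities exactly when κ = (σ + v(A))/γ; this is the sense in which the MCC singles out the additive divisive factor.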
3.2. Axiomatic characterization
Our axiomatic characterization gives six testable behavioral restrictions, arranged into three nested groups, which are jointly equivalent to the full divisive normalization model. The axioms are layered in a way that allows for different versions and aspects of the normalization model to be independently tested.
To state the axioms compactly, it will be useful to define a few terms. We say the pair (x, y) is distinguishable in A if x, y ∈ A and ρ(x, A) ≠ ρ(y, A). For any (x, y) distinguishable in A, we define

Rxy(A) = ln(ρ(x, X)/ρ(y, X)) / ln(ρ(x, A)/ρ(y, A)).

The number Rxy(A) measures how the choice probability ratio between x and y differs across choice sets A and X. Larger Rxy(A) means that this ratio is closer to 1 in A than in X. This suggests Rxy(A) is related to the divisive factor, since a larger divisive factor pushes the re-scaled values, and hence the choice probabilities, closer together. In fact, if ρ has a divisive normalization representation (v, σ, γ), and (x, y) is distinguishable in A, then

Rxy(A) = (σ + v(A))/(σ + v(X)).   (2)
We will discuss the proof of this fact in our discussion around Eq. (3) below.
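Taking Rxy(A) to be the ratio of log choice-probability ratios, ln(ρ(x, X)/ρ(y, X)) / ln(ρ(x, A)/ρ(y, A)), and using the model's logit closed form, both Eq. (2) and the pair-independence behind Axiom 2 below can be checked numerically. A sketch with our own names:

```python
import math

def dn_probs(v, A, sigma, gamma):
    # Closed-form divisive normalization probabilities (Gumbel errors).
    denom = sigma + sum(v[x] for x in A)
    e = {x: math.exp(gamma * v[x] / denom) for x in A}
    t = sum(e.values())
    return {x: e[x] / t for x in A}

v = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}
X = list(v)
A = ["a", "b", "c"]
sigma, gamma = 1.5, 2.0

pX = dn_probs(v, X, sigma, gamma)
pA = dn_probs(v, A, sigma, gamma)

def R(x, y):
    # Log-odds in the grand set X relative to log-odds in A.
    return math.log(pX[x] / pX[y]) / math.log(pA[x] / pA[y])

# Eq. (2): R should equal the ratio of the two divisive factors.
predicted = (sigma + sum(v[x] for x in A)) / (sigma + sum(v.values()))
```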
We are now ready to state our axioms.
Axiom 1
(Order). Let A, B ∈ 𝒜 and x, y ∈ A ∩ B. Then ρ(x, A) ≥ ρ(y, A) if and only if ρ(x, B) ≥ ρ(y, B).
Our first axiom requires a set-independent ordinal ranking of the alternatives, in the sense that whether x is chosen more often than y is consistent across all choice sets. In the divisive normalization model, this ordinal ranking follows the ranking given by v, a fact we explore further in the next section.
Axiom 2
(Divisive Factoring). If (x, y) and (x′, y′) are distinguishable pairs in A, then Rxy(A) = Rx′y′(A).
Our second axiom states that the value of Rxy(A) does not depend on the specific x, y pair used. This is an immediate implication of Eq. (2) and captures the fact that the divisive factor in the normalization model depends only on the choice set and not on the particular item being re-scaled.
Together, our first two axioms characterize a basic version of the normalization model where the divisive factor is allowed to be any strictly positive set-dependent function.5
Axiom 3
(Additive Separability). For any A, B ∈ 𝒜 and any z ∈ (A ∩ B) ∖ {x, y},

Rxy(A) − Rxy(A ∖ {z}) = Rxy(B) − Rxy(B ∖ {z}),

where (x, y) is distinguishable in all four sets used in the above equation.
Our third axiom says that the effect of removing an item on the divisive factor does not depend on the other alternatives in the choice set. This captures the additive separability of the divisive factor across the items.
Axiom 4
(Separability by Values). Suppose the pairs , , , and are each distinguishable in the set that contains only that pair and that is distinguishable in X. Then
Implicit in Axiom 4 is that both sides of the equation are well defined (do not involve division by zero) when the distinguishability conditions stated are met. We can interpret the ratio as providing a measure of x’s value relative to y’s, since we expect more valuable items to be chosen more often. Under this interpretation, the fourth axiom relates the relative values of x, y and to the divisive factor involving those alternatives. This ensures the divisive factor is additively separable using the values of the alternatives.
Our first four axioms together characterize a version of normalization where v, σ, and γ are not necessarily positive.6 To guarantee strict positivity, we require two additional axioms.
Axiom 5
(Strictly Positive v and γ). Suppose A ∈ 𝒜 contains the distinguishable pair (x, y), and let z, z′ ∉ A be such that ρ(z, X) > ρ(z′, X). Then

Rxy(A ∪ {z}) > Rxy(A ∪ {z′}) > Rxy(A).
Our fifth axiom implies two facts: (1) the divisive factor strictly increases when adding alternatives to the choice set, and (2) the divisive factor increases by more when adding alternatives with a higher choice probability. From Eq. (2), we can easily see that the first fact corresponds to v > 0. The second follows from Eq. (2) under the assumption that alternatives with a higher choice probability have a higher value for v. This assumption requires γ > 0, since γ < 0 would allow the re-scaled value to decrease in v(x). Therefore, Axiom 5 corresponds to v and γ being strictly positive.
Axiom 6
(Strictly Positive σ). If (x, y) is distinguishable in A, in B, and in A ∪ B, then

Rxy(A) + Rxy(B) > Rxy(A ∪ B).

Our final axiom imposes strict subadditivity on the Rxy function, which captures the fact that σ > 0. To see why, note that, applying Eq. (2), the difference between the left and right-hand sides of the inequality equals

(σ + v(A ∩ B))/(σ + v(X)),

which must be positive because v and σ are strictly positive.
Using the axioms, we can see exactly which behavioral restrictions prevent the Luce rule from being a special case of the divisive normalization model. Under the Luce rule, Rxy(A) always equals 1, from which we can see that Axioms 1–3 and 6 are consistent with the Luce rule while Axioms 4–5 are not. (Axiom 4 is inconsistent with the Luce rule because of the implicit requirement that whenever all pairs from are distinguishable in X.) Axioms 4–5 prevent the Luce rule from being a special case because they link the value of an alternative with its impact on the divisive factor. Through this mechanism, alternatives with strictly positive value create a context effect inconsistent with the Luce rule. Without these axioms, we could construct a Luce rule by setting each alternative’s contribution to the divisive factor to be 0 while setting its value independently.
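A small simulation makes both points concrete: the choice-probability ratio between two alternatives shifts when a third is added (an IIA violation, so the Luce rule fails), yet the ordering of the two alternatives is preserved (no stochastic preference reversal, consistent with Axiom 1). The sketch assumes the model's logit closed form; names and values are ours:

```python
import math

def dn_probs(v, A, sigma, gamma):
    # Closed-form divisive normalization probabilities (Gumbel errors).
    denom = sigma + sum(v[x] for x in A)
    e = {x: math.exp(gamma * v[x] / denom) for x in A}
    t = sum(e.values())
    return {x: e[x] / t for x in A}

v = {"x": 3.0, "y": 1.0, "z": 2.0}
sigma, gamma = 1.0, 4.0

pair = dn_probs(v, ["x", "y"], sigma, gamma)
triple = dn_probs(v, ["x", "y", "z"], sigma, gamma)

# Adding z enlarges the divisive factor, pulling the ratio toward 1,
# but x remains more likely than y in both sets.
ratio_pair = pair["x"] / pair["y"]
ratio_triple = triple["x"] / triple["y"]
```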
3.3. Equivalence result
The main result of this paper establishes a three-way equivalence uniting the divisive normalization model, our information-processing model, and our axiomatic characterization. The unification of the three models works on the level of behavior. Any choice probabilities that fit into one of the three models necessarily must fit into all three.
Theorem 1.
For any random choice rule ρ, the following are equivalent:
(i) ρ has a divisive normalization representation;
(ii) ρ has an information-processing representation that obeys the Marginal Cost Condition;
(iii) ρ satisfies Axioms 1–6.
Proof.
See the Appendix. □
We think of the three models in our equivalence result as answering, respectively, the “how,” the “why,” and the “what” of choice behavior. According to existing work in neuroscience, the divisive normalization model explains how the brain makes choices, namely through the normalization computation. The information-processing formulation provides some insight into why that computation is used. The axiomatic characterization outlines exactly what behavior arises.
The proof of Theorem 1 proceeds by establishing that all three parts are equivalent to the statement that there exists a function v : X → ℝ with v > 0 and constants σ > 0 and γ > 0 such that

ρ(x, A) = exp(γv(x)/(σ + v(A))) / Σ_{y∈A} exp(γv(y)/(σ + v(A)))   (3)

for all A ∈ 𝒜 and x ∈ A.
We can use Eq. (3) to prove the claims we made about the role of γ. First, rewrite Eq. (3) as

ρ(x, A) = 1 / Σ_{y∈A} exp(γ(v(y) − v(x))/(σ + v(A))).

Since all the parameters are strictly positive, whenever x, y ∈ A,

|v(y) − v(x)| < v(A) < σ + v(A), so that −γ < γ(v(y) − v(x))/(σ + v(A)) < γ.

Combining these inequalities with our rewritten version of Eq. (3) yields

1/(1 + (|A| − 1)e^γ) < ρ(x, A) < 1/(1 + (|A| − 1)e^(−γ)).

These inequalities confirm that γ determines an upper and lower bound on the possible choice probabilities. As γ → ∞ these inequalities give only the trivial statement 0 < ρ(x, A) < 1, and as γ → 0 they force ρ(x, A) → 1/|A| for all x, A.
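Taking the bounds above at face value (they follow from our reading of Eq. (3)), a quick numerical check with our own names:

```python
import math

def dn_probs(v, A, sigma, gamma):
    # Closed-form divisive normalization probabilities (Gumbel errors).
    denom = sigma + sum(v[x] for x in A)
    e = {x: math.exp(gamma * v[x] / denom) for x in A}
    t = sum(e.values())
    return {x: e[x] / t for x in A}

v = {"a": 0.5, "b": 1.0, "c": 2.5, "d": 4.0}
A = list(v)
gamma = 2.0
p = dn_probs(v, A, sigma=1.0, gamma=gamma)

# Every choice probability must lie strictly inside the gamma-bounds.
n = len(A)
lower = 1.0 / (1.0 + (n - 1) * math.exp(gamma))
upper = 1.0 / (1.0 + (n - 1) * math.exp(-gamma))
```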
We can also use Eq. (3) to establish our claim regarding Rxy(A) in Eq. (2). Eq. (3) implies that, for all B ∈ 𝒜 and x, y ∈ B,

ln(ρ(x, B)/ρ(y, B)) = γ(v(x) − v(y))/(σ + v(B)),

from which, for any (x, y) distinguishable in A,

Rxy(A) = ln(ρ(x, X)/ρ(y, X)) / ln(ρ(x, A)/ρ(y, A)) = [γ(v(x) − v(y))/(σ + v(X))] / [γ(v(x) − v(y))/(σ + v(A))],

which simplifies to Eq. (2).
4. Identifying neural and behavioral parameters
Divisive normalization has been used by neuroscientists to model how neural data determines choices. Whether neural data can be identified from choices has not yet been addressed. In this section we prove a result showing that choice alone can be used to uniquely specify neural observables within the framework of the normalization model. In more detail, the re-scaled values in the divisive normalization model are used to match the neurally observable subjective value function, experimentally measured as the number of action potentials per second (or “firing rate”) of individual neurons. We prove that the re-scaled values are fully identified from choice behavior alone, and conversely, that the re-scaled values fully determine the (stochastic) choice behavior. In other words, there exists a one-to-one relationship between the neurally measurable subjective value function and the behavior it generates. This result is important to empirical neuroscientists. It allows novel and precise predictions linking neural and choice data. This result is also important for economists, since it allows the application of insights from neural analysis without neural data. We also note that this identity may have welfare implications.
We prove the one-to-one relationship as a corollary to our more general uniqueness result on the parameters of the divisive normalization model. We start this section by presenting this more general identification result. We then discuss its implications for neural and choice parameters in the form of two corollaries. We end by providing the proof of the identification result, which builds on the notation and logic (notably Eq. (2)) from the axiomatic characterization in the previous section.
Proposition 1.
Suppose ρ has divisive normalization representation (v, σ, γ). Then (ṽ, σ̃, γ̃) is also a divisive normalization representation of ρ if and only if γ̃ = γ and there exists α > 0 such that ṽ = αv and σ̃ = ασ.
Proposition 1 establishes that the v and σ parameters in the divisive normalization model are jointly unique up to a strictly positive multiplicative constant, while γ is fully unique. The parameter γ can be identified because of the divisive step in the normalization procedure. Without this step the model reduces to a fixed re-scaling of v (which is equivalent to the Luce rule as discussed earlier), where γ is superfluous since it can be freely absorbed into v. In the normalization model, absorbing γ into v also impacts the divisive step, and so cannot be done freely. Interestingly, this constraint not only necessitates the γ parameter, but also allows for γ to be fully identified.
A transformation of the parameters of particular interest is the re-scaled value of each alternative x in choice set A, that is,
v(x) / (σ + γ∑_{y∈A} v(y)).
As discussed above, these re-scaled values model the neurally measurable subjective value function in the divisive normalization framework. An immediate corollary of Proposition 1 is that these re-scaled values are fully identified from choice behavior. Conversely, the definition of the divisive normalization model immediately implies that these re-scaled values fully determine the choice probabilities.
Corollary 1.
Suppose ρ has divisive normalization representation (v, σ, γ). Then (ṽ, σ̃, γ̃) is also a divisive normalization representation of ρ if and only if for every (x, A) in X × 𝒜:
ṽ(x)/(σ̃ + γ̃∑_{y∈A} ṽ(y)) = v(x)/(σ + γ∑_{y∈A} v(y)).
Corollary 1 establishes two facts. First, if (v, σ, γ) and (ṽ, σ̃, γ̃) have the same re-scaled values, then they represent the same behavior. Second, if (v, σ, γ) and (ṽ, σ̃, γ̃) represent the same behavior, then they must have the same re-scaled values. This creates an exact one-to-one relationship between choice behavior and the neurally observable subjective value function, within the divisive normalization model. This allows for precise predictions on neural data from choice data alone, enabling new types of experimental hypotheses for empirical neuroscientists. This result is also relevant for data sets containing choices alone, since it justifies the application of insights based on neural analysis without neural data. For example, some researchers have suggested that the neurally measured subjective value function is the correct value level for welfare analysis (Loewenstein et al., 2008), and our result suggests this analysis can be performed with choice data alone.
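Proposition 1 and Corollary 1 can be checked numerically. The sketch below again assumes the logit form of Eq. (3) with illustrative numbers; it verifies that jointly scaling (v, σ) by α > 0 preserves both the choice probabilities and the re-scaled values, while changing γ does not.

```python
import math

def choice_probs(v, A, sigma, gamma):
    # Divisive normalization choice rule (logit form of Eq. (3)).
    D = sigma + gamma * sum(v[y] for y in A)
    Z = sum(math.exp(v[z] / D) for z in A)
    return {x: math.exp(v[x] / D) / Z for x in A}

def rescaled(v, A, sigma, gamma):
    # Re-scaled value of each alternative in A.
    D = sigma + gamma * sum(v[y] for y in A)
    return {x: v[x] / D for x in A}

v = {"x": 3.0, "y": 2.0, "z": 1.0}
A = ["x", "y", "z"]
sigma, gamma, alpha = 1.0, 0.5, 2.5
v_scaled = {k: alpha * val for k, val in v.items()}

# (alpha*v, alpha*sigma, gamma) is an equivalent representation ...
p1 = choice_probs(v, A, sigma, gamma)
p2 = choice_probs(v_scaled, A, alpha * sigma, gamma)
# ... with identical re-scaled values (Corollary 1).
r1 = rescaled(v, A, sigma, gamma)
r2 = rescaled(v_scaled, A, alpha * sigma, gamma)

# Changing gamma, by contrast, changes behavior: gamma is fully identified.
p3 = choice_probs(v, A, sigma, 2.0 * gamma)
```

The scaling cancels inside the re-scaled values, which is exactly why v and σ are only jointly identified up to α while γ is pinned down.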
Corollary 1 works because the re-scaling step uses the values of the alternatives. We can measure v(x) by how much the choice probabilities of other alternatives change when x is added to the set. A larger v(x) causes more re-scaling, which pushes the values and choice probabilities closer together. If, instead, the re-scaling were done via a set-dependent factor that did not depend on the values, then the re-scaled values would not be unique.
For example, suppose we assigned to alternative x in set A the stochastic utility
v(x)/F(A) + ε_x,
where F(A) is any strictly positive set-dependent function and ε_x is an i.i.d. random variable. Now define ṽ(x) = v(x) + β for some constant β > 0. Then for any choice set A and x, y ∈ A,
(ṽ(x) − ṽ(y))/F(A) = (v(x) − v(y))/F(A).
This is enough to ensure that (v, F) and (ṽ, F) deliver the same choice probabilities, while having different re-scaled values. By contrast, in the divisive normalization model, changing from v to ṽ also changes the re-scaling factor, which impacts the choice probabilities.
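This non-uniqueness is easy to verify numerically. In the sketch below, F(A) is a hypothetical value-independent factor (set size, chosen purely for illustration), and shifting every value by a constant leaves the choice probabilities unchanged while altering the re-scaled values; under divisive normalization such a shift would also move the divisive factor and hence the probabilities.

```python
import math

def probs_setscaled(v, A, F):
    """Logit choice with stochastic utility v(x)/F(A) + noise, where the
    scaling factor F depends on the set but not on the values."""
    u = {x: v[x] / F(A) for x in A}                 # re-scaled values
    Z = sum(math.exp(u[x]) for x in A)
    return {x: math.exp(u[x]) / Z for x in A}, u

v = {"x": 3.0, "y": 2.0, "z": 1.0}
v_shift = {k: val + 5.0 for k, val in v.items()}   # shift all values up
A = ["x", "y", "z"]
F = lambda S: float(len(S))                        # hypothetical positive factor

p1, u1 = probs_setscaled(v, A, F)
p2, u2 = probs_setscaled(v_shift, A, F)
# Same choice probabilities (utility differences are unchanged) ...
same_probs = all(abs(p1[x] - p2[x]) < 1e-12 for x in A)
# ... but different re-scaled values.
diff_rescaled = all(abs(u1[x] - u2[x]) > 1e-6 for x in A)
```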
The second set of parameters we are interested in identifying consists of the untransformed values without re-scaling. Proposition 1 shows that these are unique only up to a multiplicative constant, as is the case in other stochastic choice models, such as the Luce rule. This degree of uniqueness is enough to determine a unique ordinal ranking over the choice alternatives and choice sets. Define
∑_{x∈A} ρ(x, A)v(x),
which is the expected value from choice rule ρ on set A using values v.
Corollary 2.
Suppose ρ has two divisive normalization representations (v, σ, γ) and (ṽ, σ̃, γ̃). Then:
v(x) ≥ v(y) if and only if ṽ(x) ≥ ṽ(y), for all x, y ∈ X.
∑_{x∈A} ρ(x, A)v(x) ≥ ∑_{x∈B} ρ(x, B)v(x) if and only if ∑_{x∈A} ρ(x, A)ṽ(x) ≥ ∑_{x∈B} ρ(x, B)ṽ(x), for all A, B ∈ 𝒜.
Corollary 2 says that the divisive normalization model uniquely ranks the alternatives by their values and choice sets by their expected values. In this sense, the divisive normalization model provides a well-defined ordinal preference over alternatives and choice sets.
Proof of Proposition 1.
The “if” direction is obvious. For the other direction, suppose that (v, σ, γ) and (ṽ, σ̃, γ̃) are both divisive normalization representations of ρ. If choice set A contains a distinguishable pair (x, y), then we know
(4) (v(x) − v(y))/(σ + γ∑_{z∈A} v(z)) = (ṽ(x) − ṽ(y))/(σ̃ + γ̃∑_{z∈A} ṽ(z)),
since, using Eq. (2), both sides of the equation equal Rxy(A).
Now define
It is clear that α > 0 since all the terms are strictly positive. By our assumptions on ρ, we can find such that , , and , are all distinct. By Theorem 1, we know ρ obeys Axiom 1, which implies all pairs from {x, y, w} are distinguishable in every set that contains them. Hence, whenever , we can rearrange Eq. (4) to get
Therefore, whenever , we get
Hence, for any , we have that
We can apply the same logic with {x, w} taking the role of {x, y} to prove . We can also let {w, y} take the role of {x, y} to prove . Therefore, we have shown for all . Combining with the definition of α, it follows that .
To prove , we can use Eq. (3) to get
since both sides equal . Since (x, y) is distinguishable, we know that neither side of the equation is equal to 1, so that . Using and taking natural logs of both sides gives
And the desired result follows using . □
5. Context effects
In this section, we use our equivalence result to study how the divisive normalization model handles context effects.7 We use a standard notion of a context effect as a departure from the Independence of Irrelevant Alternatives (IIA) property developed by Luce (1959). Following the logic of our equivalence result, we find the precise analogs of IIA violations in our information-processing and divisive normalization models. In the normalization model, IIA violations are equivalent to differences across choice sets in the divisive factor. In the information-processing model, IIA violations are equivalent to changes in the marginal cost of reducing stochasticity across different sets. We also show how, and to what extent, the divisive normalization model can accommodate well-known choice phenomena such as choice overload, the attraction effect, and the compromise effect. Last, we discuss the limitations on context effects implied by the divisive normalization model, which suggest a new testable prediction.
The IIA property requires that
ρ(x, A)/ρ(y, A) = ρ(x, B)/ρ(y, B)
whenever x, y ∈ A ∩ B. In words, this says the relative choice probability between two alternatives is independent of the other alternatives in the set. For expositional purposes, we only consider probability ratios ρ(x, A)/ρ(y, A) where ρ(x, A) ≥ ρ(y, A). This allows us to interpret larger ratios as being further from the equal-probability case and will simplify the statements of results. This simplification is without loss of generality since we can just invert any ratio that is smaller than 1.
Proposition 2.
Suppose ρ has normalization representation (v, σ, γ). Consider A, B ∈ 𝒜 and x, y ∈ A ∩ B such that ρ(x, A) ≥ ρ(y, A) and ρ(x, B) ≥ ρ(y, B). Then the following are equivalent:
1. ρ(x, A)/ρ(y, A) ≤ ρ(x, B)/ρ(y, B);
2. σ + γ∑_{z∈A} v(z) ≥ σ + γ∑_{z∈B} v(z);
3. ρ has an information-processing representation obeying the MCC with parameters (v, σ, γ), where δ(A) ≥ δ(B).
Proof.
The proof of Theorem 1 establishes that we can use the same parameters (v, σ, γ) in the normalization representation as in the information-processing representation and associated MCC. This, along with Theorem 1, immediately establishes the equivalence of Parts 2 and 3.
For the equivalence of Parts 1 and 2, let such that . From Eq. (3), this implies whenever . By the definition of Rxy, we then know that
By Eq. (2), this can be written as,
which gives us the desired result. □
The equivalence between Parts 1 and 2 in Proposition 2 establishes that a larger divisive factor is equivalent to IIA violations that move the probability ratios closer to equal probability. This demonstrates that the normalization model captures context effects through the divisive factor. Previous papers have noted this relationship between the divisive factor and IIA violations in more limited contexts. For example, Louie et al. (2013) studied this feature of the normalization model in three-item choice sets, while our result works for a choice set of any size. The equivalence between Parts 1 and 3 in Proposition 2 also shows that IIA violations correspond to changes in δ(A) in the information-processing model. Recall that δ(A) equals the marginal cost of reducing stochasticity on set A.
We can use Proposition 2 to better understand how and why the divisive normalization model accounts for previously studied context effects. For example, the well-known Compromise and Attraction Effects create IIA violations by adding a third alternative to a two-alternative choice set (Huber et al., 1982; Simonson, 1989). Louie et al. (2013) found IIA violations when they replaced the worst alternative in a three-alternative choice set with a slightly improved option. Specifically, they found this increased choice stochasticity between the two unchanged alternatives, in the sense of pushing the probability ratio closer to 1. Proposition 2 shows exactly how divisive normalization can accommodate these IIA violations. It does so because adding an alternative or raising the value of an alternative both change the divisive factor that drives context effects. Proposition 2 also suggests why these context effects occur, namely, because of changes in the marginal cost of reducing stochasticity across choice sets. This interpretation lines up particularly nicely with the result in Louie et al. (2013), since, under Proposition 2, raising the value of the worst alternative increases the marginal cost of reducing stochasticity, which naturally leads to more stochastic choices.
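The Louie et al. (2013) pattern can be reproduced in a small sketch (again assuming the logit form of Eq. (3), with illustrative numbers): raising the value of the worst alternative z inflates the divisive factor, which pushes the probability ratio between the unchanged alternatives x and y toward 1.

```python
import math

def choice_probs(v, A, sigma, gamma):
    # Divisive normalization choice rule (logit form of Eq. (3)).
    D = sigma + gamma * sum(v[y] for y in A)
    Z = sum(math.exp(v[z] / D) for z in A)
    return {x: math.exp(v[x] / D) / Z for x in A}

sigma, gamma = 1.0, 0.5
A = ["x", "y", "z"]
v_before = {"x": 5.0, "y": 3.0, "z": 0.5}   # z is the worst alternative
v_after  = {"x": 5.0, "y": 3.0, "z": 2.0}   # ... slightly improved

p_before = choice_probs(v_before, A, sigma, gamma)
p_after  = choice_probs(v_after,  A, sigma, gamma)

ratio_before = p_before["x"] / p_before["y"]
ratio_after  = p_after["x"]  / p_after["y"]
# Improving z increases choice stochasticity between x and y:
# the x:y ratio moves closer to the equal-probability case of 1.
```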
Choice overload is another context effect naturally captured by divisive normalization. This effect was found in a series of studies challenging the idea that adding options must weakly improve choice outcomes (Chernev, 2004; Iyengar et al., 2003; Iyengar and Lepper, 2000; Shah and Wolford, 2007). Instead, these papers argue that having more alternatives can lead to “worse” choices or even declining to choose at all. Our framework does not allow the possibility of choosing nothing, so we cannot capture that aspect of choice overload. However, the divisive normalization model does, very naturally, capture the idea that more options can lead to worse choices.8 More precisely, suppose we have a choice rule ρ with a divisive normalization representation (v, σ, γ). Let q be the choice distribution on a set A calculated from ρ(·, A ∪ {x}), conditional on not choosing x. We show that q is first-order stochastically dominated by ρ(·, A), when using v to rank the alternatives. In other words, adding x to the set A unambiguously worsens the choices made among the items in A. We formally state and prove this in the following proposition.
Proposition 3.
Suppose ρ has divisive normalization representation (v, σ, γ). Choose A ∈ 𝒜 and x ∉ A such that v is not constant over A. Let B = A ∪ {x}. Define q ∈ ΔA by
q(y) = ρ(y, B)/(1 − ρ(x, B)) for each y ∈ A.
Then ρ(·, A) first-order stochastically dominates q when ordering the alternatives according to v.
Proof.
Order the elements of A as x₁, …, x_n, where v(x_i) ≥ v(x_{i+1}) for each i < n. We want to show that, for each m ∈ {1, …, n},
∑_{i=1}^{m} ρ(x_i, A) ≥ ∑_{i=1}^{m} q(x_i),
with the inequality strict for at least one m. First note that whenever v(x_i) ≥ v(x_j), we have
ρ(x_i, A)/ρ(x_j, A) ≥ ρ(x_i, B)/ρ(x_j, B).
We will use this inequality repeatedly in what follows. Moreover, the inequality must hold strictly whenever v(x_i) > v(x_j).
Next, note that
We get the strict inequality because v is not constant over A, so that v(x₁) > v(x_i) for at least one i.
We have now shown the desired equation holds strictly for m = 1. Now suppose the equation holds at m and we will show it holds at m + 1. If m + 1 = n the equation holds with equality because probabilities sum to 1. Suppose m + 1 < n. If , then the desired result is obtained by simply adding to the left-hand side of the equation stated at m. Otherwise, if , then
as desired. □
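Proposition 3 can be illustrated numerically. The sketch below (logit form of Eq. (3), illustrative values) adds an alternative x to a set A and checks that the conditional distribution q over the original items is first-order stochastically dominated by ρ(·, A) when the items are ranked by v.

```python
import math

def choice_probs(v, A, sigma, gamma):
    # Divisive normalization choice rule (logit form of Eq. (3)).
    D = sigma + gamma * sum(v[y] for y in A)
    Z = sum(math.exp(v[z] / D) for z in A)
    return {x: math.exp(v[x] / D) / Z for x in A}

sigma, gamma = 1.0, 0.5
A = ["a1", "a2", "a3"]                       # listed in decreasing order of v
v = {"a1": 4.0, "a2": 2.0, "a3": 1.0, "x": 3.0}

p_A = choice_probs(v, A, sigma, gamma)
p_B = choice_probs(v, A + ["x"], sigma, gamma)

# q: choice distribution over A, conditional on not choosing x.
q = {y: p_B[y] / (1.0 - p_B["x"]) for y in A}

# FOSD check: cumulative probability on the best m items is weakly
# higher under p_A than under q, for every m.
dominates = True
cum_pA = cum_q = 0.0
for y in A:
    cum_pA += p_A[y]
    cum_q += q[y]
    if cum_pA < cum_q - 1e-12:
        dominates = False
```

Intuitively, adding x inflates the divisive factor, compressing the re-scaled values and shifting conditional probability mass toward the lower-valued items of A.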
It is also important to note limitations on the types of context effects which divisive normalization can accommodate. For example, the Attraction Effect is often associated with (stochastic) preference reversals, where an alternative x is chosen more often than y in one choice set but not in another. The divisive normalization model can never achieve these reversals, which is an immediate implication of Axiom 1.9 Instead, the values of the normalization model create an ordinal ranking of the alternatives, where higher-value alternatives are always chosen more often. This also implies that the divisive normalization model obeys Weak Stochastic Transitivity (Block and Marschak, 1960), which requires that if ρ(x, {x, y}) ≥ 1/2 and ρ(y, {y, z}) ≥ 1/2, then ρ(x, {x, z}) ≥ 1/2.
The divisive normalization model obeys an even stronger notion of ordering known as strong stochastic transitivity (SST). This is the requirement that if ρ(x, {x, y}) ≥ 1/2 and ρ(y, {y, z}) ≥ 1/2, then ρ(x, {x, z}) ≥ max{ρ(x, {x, y}), ρ(y, {y, z})}. In words, if x beats y and y beats z, then not only must x beat z but the gap between x and z must be at least as large as either of the previous two comparisons. To see that SST holds, note that given a divisive normalization model, ρ(x, {x, y}) ≥ 1/2 and ρ(y, {y, z}) ≥ 1/2 imply that v(x) ≥ v(y) ≥ v(z). Simple algebra then yields
(v(x) − v(z))/(σ + γ(v(x) + v(z))) ≥ max{ (v(x) − v(y))/(σ + γ(v(x) + v(y))), (v(y) − v(z))/(σ + γ(v(y) + v(z))) }.
From Eq. (3), this implies that ρ(x, {x, z}) ≥ max{ρ(x, {x, y}), ρ(y, {y, z})}.
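A quick randomized check of SST under the logit form of Eq. (3), with illustrative parameters (binary choice probabilities reduce to a two-alternative logit):

```python
import math
import random

def binary_prob(vx, vy, sigma, gamma):
    # Probability of choosing x over y under divisive normalization.
    D = sigma + gamma * (vx + vy)
    return 1.0 / (1.0 + math.exp(-(vx - vy) / D))

random.seed(0)
sigma, gamma = 1.0, 0.5
violations = 0
for _ in range(1000):
    vz, vy, vx = sorted(random.uniform(0.1, 5.0) for _ in range(3))
    p_xy = binary_prob(vx, vy, sigma, gamma)   # x beats y (premise holds)
    p_yz = binary_prob(vy, vz, sigma, gamma)   # y beats z (premise holds)
    p_xz = binary_prob(vx, vz, sigma, gamma)
    # SST: the x-vs-z probability weakly exceeds both earlier probabilities.
    if p_xz < max(p_xy, p_yz) - 1e-12:
        violations += 1
```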
Another restriction on context effects is that the direction of the IIA violation must be consistent across all pairs when moving across choice sets. In other words, if one pair of items violates IIA by being further from the equal-probability case in set A versus set B, then the same must be true of all pairs that appear in both sets. We state this formally as:
Corollary 3.
Suppose ρ has divisive normalization representation (v, σ, γ). Let x, y, w, z ∈ A ∩ B be such that ρ(x, A) ≥ ρ(y, A) and ρ(w, A) ≥ ρ(z, A), with ρ(x, B) ≥ ρ(y, B) and ρ(w, B) ≥ ρ(z, B). Then
ρ(x, A)/ρ(y, A) ≥ ρ(x, B)/ρ(y, B) if and only if ρ(w, A)/ρ(z, A) ≥ ρ(w, B)/ρ(z, B).
Corollary 3 follows immediately from Proposition 2, because the equivalence between Parts 1 and 2 implies that whether
ρ(x, A)/ρ(y, A) ≥ ρ(x, B)/ρ(y, B)
holds for any particular pair can be determined by an inequality that depends only on the sets as a whole.
Another way to interpret Corollary 3 is in terms of the relative stochasticity of the choice sets. The choice between x and y is more stochastic when the probability ratio between x and y is closer to 1. Therefore, Corollary 3 says any two sets (that share at least two items) can be unambiguously ranked by how stochastic the choices are on them. This provides a novel testable restriction on the divisive normalization model.
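This restriction can also be checked by simulation: under the logit form of Eq. (3), the distance of each pair's log probability ratio from zero scales with the inverse of the set's divisive factor, so the direction of the comparison between two sets is the same for every shared pair. A sketch with random illustrative values:

```python
import math
import random

def choice_probs(v, A, sigma, gamma):
    # Divisive normalization choice rule (logit form of Eq. (3)).
    D = sigma + gamma * sum(v[y] for y in A)
    Z = sum(math.exp(v[z] / D) for z in A)
    return {x: math.exp(v[x] / D) / Z for x in A}

random.seed(1)
sigma, gamma = 1.0, 0.5
consistent = True
for _ in range(200):
    v = {k: random.uniform(0.1, 5.0) for k in "abcdef"}
    A, B = ["a", "b", "c", "d", "e"], ["a", "b", "c", "d", "f"]
    pA = choice_probs(v, A, sigma, gamma)
    pB = choice_probs(v, B, sigma, gamma)
    shared = ["a", "b", "c", "d"]
    directions = set()
    for i, x in enumerate(shared):
        for y in shared[i + 1:]:
            dA = abs(math.log(pA[x] / pA[y]))   # distance from equal probability
            dB = abs(math.log(pB[x] / pB[y]))
            directions.add(dA > dB)
    if len(directions) > 1:                     # mixed directions would violate
        consistent = False
```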
6. Concluding remarks
In this paper, we studied three different models that each presented a different perspective on choice behavior. The divisive normalization model says how the brain makes choices, namely, via the neurobiologically-motivated normalization computation. The information-processing formulation provides some insight into why that computation is used, namely, because it optimally balances the benefits and costs of reducing stochasticity. The axiomatic characterization pinpoints exactly what behavior arises by providing a set of testable behavioral predictions. Our main result proves an equivalence between these three models, uniting them into a single theory that can simultaneously address the “how,” the “why,” and the “what” of choice behavior.
We also explore how the parameters of the divisive normalization model can be identified from behavior, and what that tells us about the link between observable choice and observable neural quantities. We prove that, in the divisive normalization model, there is a one-to-one relationship between the neurally measurable subjective value function and the behavior it generates. This creates a theoretical foundation for work that links neural and behavioral data, and indicates that inference about neural variables can be made from choice behavior alone.
Lastly, we use our equivalence result to study how the divisive normalization model handles context effects. The divisive normalization model allows for context effects through changes in the divisive factor. When the divisive factor is equal across two choice sets, the choices on those sets will be context independent in the sense of obeying the Independence of Irrelevant Alternatives (IIA). We use our equivalence result to provide behavioral and information-processing analogs to the changes in divisive factor that drive context effects. We then apply these analogs to shed new light on existing empirical work, and to provide a novel testable prediction on the type of context effects allowed in the divisive normalization model.
We conclude by commenting on one of the more unusual aspects of our paper, relative to the economics literature — namely our inclusion of a neurobiologically-motivated functional form in a choice model. The inclusion of this aspect is motivated, in part, by the argument due to Simon (1955, p. 99) that a theory of decision making should be consistent “with the access to information and the computational capacities that are actually possessed by the organism.” At the time of Simon’s writing, the development of such a theory was hindered by a lack of empirical knowledge about precisely such information and computational capacities — a fact which Simon himself noted (Simon 1955, p. 100).
In the decades since then, advances in neuroscience have taught us much about the actual decision processes of various organisms, humans included. By capitalizing on these advances, we have been able to build a theory of decision-making consistent with how the human brain actually makes choices, and, in this way, advance Simon’s argument. With this, we hope to have taken a step towards reconciling traditional approaches to decision-making with the fact that all behavior must, ultimately, have a physical implementation.
Acknowledgments
Financial support from NIH grant R01DA038063, NYU Stern School of Business, NYU Shanghai, and J.P. Valles is gratefully acknowledged. We have greatly benefited from discussions with Andrew Caplin, Faruk Gul, Kenway Louie, Anthony Marley, Paula Miret, Wolfgang Pesendorfer, Doron Ravid, Shellwyn Weston, Jan Zimmerman, members of the Glimcher Lab at NYU, and seminar audiences at Columbia, HKUST, NYU Shanghai, PHBS Shenzhen, Princeton, Yale, and Zurich University. We are also grateful to two anonymous referees for very helpful comments. A previous version of this paper was entitled “Rational Imprecision: Information-Processing, Neural, and Choice-Rule Perspectives”.
Appendix A. Proof of Theorem 1
We will show that all three parts of Theorem 1 are equivalent to Eq. (3). Additionally, while not strictly required for Theorem 1, our proof will show that the same set of parameters (v, σ, γ) are used in Eq. (3), the divisive normalization model, and the information-processing model. In other words, our proof will also show the following three statements are equivalent:
(v, σ, γ) satisfies Eq. (3),
(v, σ, γ) is a divisive normalization representation of ρ,
(v, σ, γ) is an information-processing representation of ρ that obeys the MCC.
A1. Equivalence with divisive normalization
Proving the equivalence of the divisive normalization model and Eq. (3) follows the lines of well-known arguments (Luce and Suppes, 1965; McFadden, 1978). To begin, suppose ρ has a divisive normalization representation (v, σ, γ), so that
ρ(x, A) = Pr[ v(x)/(σ + γ∑_{y∈A} v(y)) + ε_x ≥ v(z)/(σ + γ∑_{y∈A} v(y)) + ε_z for all z ∈ A ],
where the ε_x’s are i.i.d. and Gumbel with location 0 and scale 1. Let g and G be the pdf and cdf of a Gumbel(0, 1) random variable. We then have
which we can rearrange to give
which can be integrated to obtain
Evaluating at the limits yields
which can be rearranged to give
as desired. This argument can be run backwards to prove the reverse implication.
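The Gumbel-to-logit step can also be checked by simulation. The sketch below draws i.i.d. Gumbel(0, 1) noise via the inverse-CDF transform and confirms that the empirical choice frequencies match the closed-form probabilities of Eq. (3); the parameters are illustrative only.

```python
import math
import random

random.seed(42)
sigma, gamma = 1.0, 0.5
v = {"x": 3.0, "y": 2.0, "z": 1.0}
A = ["x", "y", "z"]
D = sigma + gamma * sum(v[a] for a in A)      # divisive factor

# Closed-form logit probabilities (Eq. (3)).
Z = sum(math.exp(v[a] / D) for a in A)
p_logit = {a: math.exp(v[a] / D) / Z for a in A}

def gumbel01():
    # Gumbel(0, 1) draw via inverse CDF: G^{-1}(u) = -ln(-ln(u)).
    return -math.log(-math.log(random.random()))

# Monte Carlo: choose the alternative with the highest noisy utility.
n = 200_000
counts = dict.fromkeys(A, 0)
for _ in range(n):
    winner = max(A, key=lambda a: v[a] / D + gumbel01())
    counts[winner] += 1
p_sim = {a: counts[a] / n for a in A}
```

With a couple of hundred thousand draws, the simulated frequencies agree with the closed form to well within one percentage point.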
A2. Equivalence with information-processing model
Fix v, γ > 0, and σ > 0. Let (K_A)_{A∈𝒜} be any family of strictly increasing, convex, continuously differentiable functions that obey the MCC using (v, σ, γ). Note that such a family always exists. For example,
Recall, we defined ρ to have an information-processing representation if, for each A ∈ 𝒜, ρ(·, A) is a solution to the following maximization problem:
(5) |
To prove the equivalence, it suffices to show that, for each A ∈ 𝒜, the unique solution to this maximization problem is given by the measure defined by Eq. (3) using (v, σ, γ). Fix A ∈ 𝒜, and define p* ∈ ΔA to be that measure. That is, for each x ∈ A,
Next, note that any function p : A → [0, 1] can be viewed as a point in ℝ^{|A|}. Under this interpretation, ΔA forms a compact subset of ℝ^{|A|} defined by affine constraints. Also note that, by standard properties of entropy, H is strictly concave, which means −H is strictly convex. Using that K_A is convex and strictly increasing, it follows that the cost term in (5) is strictly convex in p, and hence enters the objective strictly concavely. Therefore, the objective function of the maximization problem in (5) is strictly concave, since the only other term is linear. Affine constraints and a strictly concave objective function mean that the Karush-Kuhn-Tucker conditions are both necessary and sufficient for a feasible point to be a solution. Those conditions say
(6) |
for some λ and μx, with the complementary slackness condition that μx p(x) = 0. We now show that p* satisfies those conditions. Since p*(x) > 0 for all x, we set μx = 0. Define
We need to show that
By definition, . Hence we can apply the MCC to transform the above equation into
Using our definition of p* this becomes
which simplifies to
which holds by definition of λ.
We have now proved that p* is a solution to the maximization problem. Next suppose q* also solves the maximization problem. Since ΔA is closed under convex combinations, we can define a feasible p ∈ ΔA by
p(x) = (1/2)p*(x) + (1/2)q*(x)
for each x ∈ A. Since the objective function is strictly concave, if q* ≠ p*, then p would strictly improve on the optimal payoff, which is not possible. Hence p* must be the unique maximizer, as desired.
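A simplified numerical check of this variational logic (a sketch of a special case, not the general construction): when the cost of reducing stochasticity is linear with slope δ(A) = σ + γ∑_{y∈A} v(y), the objective becomes expected value plus δ(A) times the Shannon entropy, and its maximizer over ΔA is the softmax of v(·)/δ(A), i.e. the form of Eq. (3). The code compares this candidate maximizer against many random points of the simplex.

```python
import math
import random

random.seed(7)
v = {"x": 3.0, "y": 2.0, "z": 1.0}
A = ["x", "y", "z"]
sigma, gamma = 1.0, 0.5
delta = sigma + gamma * sum(v[a] for a in A)   # marginal cost of reducing stochasticity

def objective(p):
    # Expected value plus delta times the Shannon entropy of p.
    ev = sum(p[a] * v[a] for a in A)
    H = -sum(p[a] * math.log(p[a]) for a in A if p[a] > 0)
    return ev + delta * H

# Candidate maximizer: softmax of v(.)/delta.
Z = sum(math.exp(v[a] / delta) for a in A)
p_star = {a: math.exp(v[a] / delta) / Z for a in A}

# No random feasible point should attain a higher objective value.
best_other = -float("inf")
for _ in range(5000):
    w = [random.random() + 1e-12 for _ in A]
    total = sum(w)
    p = {a: wi / total for a, wi in zip(A, w)}
    best_other = max(best_other, objective(p))
```

This is the Gibbs variational principle: the entropy-regularized linear objective is uniquely maximized by the softmax distribution.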
A3. Equivalence with axioms
Eq. (3) implies the axioms
Suppose ρ obeys Eq. (3) using (v, σ, γ). We will show ρ obeys all six axioms. Axiom 1 follows immediately from the fact that, under Eq. (3), the ordering of choice probabilities within any set matches the ordering of v, since γ > 0 and σ + γ∑_{y∈A} v(y) > 0 for all A ∈ 𝒜. To show the necessity of the rest of the axioms, we use the following lemma.
Lemma 1.
If ρ obeys Eq. (3) using (v, σ, γ), then
whenever (x, y) is distinguishable in A.
Proof.
By definition,
Applying Eq. (3) to the right-hand side gives
and the desired conclusion follows. □
Axiom 2 follows immediately from Lemma 1. Also, Lemma 1 allows us to rewrite the equation in Axiom 3 as
which holds since both sides are equal to .
Using Lemma 1, the equation in Axiom 4 is equivalent to
which can be verified by applying Eq. (3) to the right-hand side.
Now suppose that contains a distinguishable pair (x, y) and such that . By Eq. (3), . Using this and it follows that
which, via Lemma 1, proves Axiom 5.
Finally, let the sets A, B contain a distinguishable pair. Then Axiom 6 is equivalent to
Since the denominators are all positive, this is equivalent to
which holds because .
A3.1. The axioms imply equation (3)
Now assume Axioms 1–6 hold. We will find (v, σ, γ) such that Eq. (3) holds. By Axiom 1, if (x, y) is distinguishable in A, then (x, y) is distinguishable in all sets that contain this pair. So, we will simply say (x, y) is distinguishable to indicate that ρ(x, A) ≠ ρ(y, A) whenever x, y ∈ A. By Axiom 2, for any , we can set R(A) = Rxy(A) for all distinguishable (x, y) in A. If A does not contain any distinguishable pairs, set R(A) = 1. Also, set .
Lemma 2.
There exists a distinguishable pair such that contains a distinguishable pair.
Proof.
By assumption, contains at least three distinct choice probabilities. Let denote the three items generating the distinct probabilities. Since , there exists . It must be that either or . In the first case, set , and, in the second case, set . Either way we get the desired result. □
From here on, let be a distinguishable pair such that contains a distinguishable pair. For any , note that either or . Hence, always contains a distinguishable pair.
Now, for any . define
Also define
Lemma 3.
If contains distinguishable pair (x, y), then
Proof.
Let z ∈ A\{x, y}. By Axiom 3,
Combining the above with the definition of v yields
For any , the same logic yields
We repeatedly apply these steps to get the desired result. □
Lemma 4.
For any that contains a distinguishable pair
Proof.
Using Lemma 3, it suffices to show that, for any distinguishable pair (x, y)
First, we can apply Lemma 3 and the definition of σ to get
Since , it must be that either is distinguishable or is distinguishable. We will treat the first case, since the proof for the other case is similar. By Lemma 3, we have
Combining the previous two equations gives
Since (x⋆, x) and (x, y) are distinguishable pairs, we can use Lemma 3 to get
Combining this with the previous display equation yields
as desired. □
Since contains at least three distinct choice probabilities, there exists such that all pairs in are distinguishable. Define
That γ is well-defined (i.e., that the denominator does not equal 0) is implicit in Axiom 4, where we apply the axiom to the case that . That is distinguishable ensures that .
Now fix any , and we claim that
(7) |
for all . Chose any pair . Since contains at least three distinct choice probabilities, there must exist z ∈ X such that (x, z) and (y, z) form distinguishable pairs. Applying Axiom 4, we then get
Applying the definition of γ and Lemma 4, we have
If (x, y) is not distinguishable, then the above equation implies . Therefore, Eq. (7) holds since both sides are equal to 1. If (x, y) is distinguishable, then we can use our definition of R(A) to get
Since we are assuming (x, y) is distinguishable, we can apply Lemma 4 and take the exponent of both sides to get
Hence we have proved Eq. (7). Using the fact that choice probabilities must sum to 1, we can get that for each and :
All that remains is to show that are strictly positive. Recall that contains a distinguishable pair for all x ∈ A. Hence, by Axiom 5,
for all x ∈ X. Next, note that, using Lemma 4, we get
Therefore, by Axiom 5, and have the same sign. Since γ is defined as the ratio of these two terms, and using the fact that we already showed γ is well-defined and non-zero, we get that γ > 0. Finally, recall that, by construction, and both contain a distinguishable pair. Hence we can apply Lemma 4 to get
By Axiom 6, the left-hand side of that equation is strictly positive, and it follows that σ > 0.
Footnotes
The Attraction Effect has drawn interest in part because it is a single empirical phenomenon that violates a number of well-known choice principles. Here we use it as an example of Regularity violations, but it also violates the Independence of Irrelevant Alternatives (IIA), and can create (stochastic) preference reversals. Below, we will discuss each of these choice principles in more detail and use the Attraction Effect as an example of a phenomenon where they fail.
More precisely, by we mean that the restriction of to A is in ΔA. Throughout, we will often find it convenient to treat as a function from A to [0, 1], since the values of outside of A are always zero.
We thank a referee for this observation.
For details, see the online appendix at http://www.adambrandenburger.com/articles/papers
See the online appendix for the proof.
The sum would have to have the same sign for all to avoid violating Axiom 1.
As we noted in the introduction, that the divisive normalization model can handle context effects has been previously shown (e.g., Louie et al., 2013 and Webb et al., 2014). What we add here is using our equivalence result to shed further light on how and why the normalization model accomplishes this.
We thank a referee for making this observation.
For an alternative formulation of the divisive normalization model that can achieve choice reversals, see Zimmermann et al. (2018).
URL: https://sites.google.com/site/ksteverson/ (K. Steverson), http://www.adambrandenburger.com (A. Brandenburger), http://www.neuroeconomics.nyu.edu/people/paul-glimcher/ (P. Glimcher)
References
- Agranov M, Ortoleva P, 2017. Stochastic choice and preferences for randomization. J. Polit. Econ. 125. [Google Scholar]
- Anderson S, de Palma A, Thisse J-F, 1992. Discrete Choice Theory of Product Differentiation. MIT Press, Cambridge. [Google Scholar]
- Attneave F, 1954. Some informational aspects of visual perception.. Psychol. Rev. 61 (3), 183–193. doi: 10.1037/h0054663. [DOI] [PubMed] [Google Scholar]
- Barlow H, 1961. Possible principles underlying the transformation of sensory messages In: Rosenblith WA (Ed.), Sensory Communication. M.I.T. Press, Cambridge, pp. 217–234. [Google Scholar]
- Block HD, Marschak J, 1960. Random orderings and stochastic theories of responses In: Olkin I (Ed.), Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University Press, Stanford, pp. 97–132. [Google Scholar]
- Caplin A Dean M 2015. Revealed preference, rational inattention, and costly information acquisition. Am. Econ. Rev. 105 (7). [Google Scholar]
- Carandini M, Heeger D, 2012. Normalization as a canonical neural computation.. Nature Rev. Neurosci. (November) 1–12. doi: 10.1038/nrn3136. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chernev A 2004. When more is less and less is more : the role of ideal point availability and assortment in consumer choice. J. Consumer Res. 30, 180–183. [Google Scholar]
- David H, Nagaraja H, 2003. Order Statistics, 3rd ed. Wiley, New York: doi: 10.1016/j.bpj.2010.07.012. [DOI] [Google Scholar]
- Echenique F, Saito K, 2019. General Luce Model Economic Theory Forthcoming. [Google Scholar]
- Echenique F, Saito K, Tserenjigmid G, 2018. The Perception-adjusted luce model. Math. Soc. Sci. 93(c), 67–76. [Google Scholar]
- Fehr E, Rangel A, 2011. Neuroeconomic foundations of economic choice-Recent advances. J. Econ. Perspect. 25 (4), 3–30. doi: 10.1257/jep.25.4.3.21595323 [DOI] [Google Scholar]
- Fudenberg D, Iijima R, Strzalecki T, 2015. Stochastic choice and revealed perturbed utility. Econometrica 83 (6), 2371–2409. doi: 10.3982/ECTA12660.
- Glimcher P, 2011. Foundations of Neuroeconomic Analysis. Oxford University Press.
- Glimcher PW, Tymula AA, 2018. Expected Subjective Value Theory (ESVT): A Representation of Decision under Risk and Certainty. Working Paper, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2783638.
- Gul F, Natenzon P, Pesendorfer W, 2014. Random choice as behavioral optimization. Econometrica 82 (5), 1873–1912. doi: 10.3982/ECTA10621.
- Huber J, Payne JW, Puto C, 1982. Adding asymmetrically dominated alternatives: violations of regularity and the similarity hypothesis. J. Consumer Res. 9 (1), 90–98. doi: 10.1086/208899.
- Itthipuripat S, Cha K, Rangsipat N, Serences JT, 2015. Value-based attentional capture influences context-dependent decision-making. J. Neurophysiol. 114 (1), 560–569. doi: 10.1152/jn.00343.2015.
- Iyengar SS, Jiang W, Huberman G, 2003. How Much Choice is Too Much? Contributions to 401(k) Retirement Plans. In: Mitchell O, Utkus S (Eds.), Pension Design and Structure: New Lessons from Behavioral Finance. Oxford University Press, Oxford, pp. 83–96.
- Iyengar SS, Lepper MR, 2000. When choice is demotivating: can one desire too much of a good thing? J. Personal. Soc. Psychol. 79 (6), 995–1006. doi: 10.1037/0022-3514.79.6.995.
- Khaw MW, Glimcher PW, Louie K, 2017. Normalized value coding explains dynamic adaptation in the human valuation process. Proc. Natl. Acad. Sci. 114 (48), 201715293. doi: 10.1073/pnas.1715293114.
- Knutson B, Adams CM, Fong GW, Hommer D, 2001. Anticipation of increasing monetary reward selectively recruits nucleus accumbens. J. Neurosci. 21 (16), RC159.
- Landauer R, 1961. Irreversibility and heat generation in the computing process. IBM J. Res. Devel. 5 (3), 183–191. doi: 10.1147/rd.53.0183.
- Landry P, Webb R, 2017. Pairwise normalization: a neuroeconomic theory of multi-attribute choice. SSRN. doi: 10.2139/ssrn.2963863.
- Loewenstein G, Rick S, Cohen J, 2008. Neuroeconomics. Ann. Rev. Psychol. 59, 647–672.
- Louie K, Glimcher PW, Webb R, 2015. Adaptive neural coding: from biological to behavioral decision-making. Curr. Opin. Behav. Sci. 5, 91–99. doi: 10.1016/j.cobeha.2015.08.008.
- Louie K, Grattan LE, Glimcher PW, 2011. Reward value-based gain control: divisive normalization in parietal cortex. J. Neurosci. 31 (29), 10627–10639. doi: 10.1523/JNEUROSCI.1237-11.2011.
- Louie K, Khaw MW, Glimcher PW, 2013. Normalization is a general neural mechanism for context-dependent decision making. Proc. Natl. Acad. Sci. USA 110 (15), 6139–6144. doi: 10.1073/pnas.1217854110.
- Luce RD, 1959. Individual Choice Behavior: A Theoretical Analysis. Wiley, New York.
- Luce RD, Suppes P, 1965. Preference, Utility, and Subjective Probability. In: Luce RD, Bush R, Galanter E (Eds.), Handbook of Mathematical Psychology. Wiley, New York, pp. 249–410.
- Machina MJ, 1989. Dynamic consistency and non-expected utility models of choice under uncertainty. J. Econ. Literat. 27 (4), 1622–1668.
- Mandl F, 1988. Statistical Physics, 2nd ed. John Wiley & Sons.
- Marley AAJ, Flynn TN, Louviere JJ, 2008. Probabilistic models of set-dependent and attribute-level best-worst choice. J. Math. Psychol. 52 (5), 281–296. doi: 10.1016/j.jmp.2008.02.002.
- Matějka F, McKay A, 2015. Rational inattention to discrete choices: a new foundation for the multinomial logit model. Am. Econ. Rev. 105 (1), 1–55. doi: 10.1257/aer.20130047.
- Mattsson L-G, Weibull JW, 2002. Probabilistic choice and procedurally bounded rationality. Games Econ. Behav. 41 (1), 61–78. doi: 10.1016/S0899-8256(02)00014-3.
- McFadden D, 1973. Conditional logit analysis of qualitative choice behavior. In: Zarembka P (Ed.), Frontiers in Econometrics. Wiley, New York, pp. 105–142.
- McFadden D, 1978. Modelling the choice of residential location. In: Karlqvist S, Lundqvist L, Snickars F, Weibull J (Eds.), Spatial Interaction Theory and Planning Models, 673. North-Holland, Amsterdam, pp. 75–96.
- Platt ML, Glimcher PW, 1999. Neural correlates of decision variables in parietal cortex. Nature 400 (6741), 233–238. doi: 10.1038/22268.
- Ravid D, 2015. Focus, then compare. Work. Pap. 1–40.
- Rieskamp J, Busemeyer JR, Mellers BA, 2006. Extending the bounds of rationality: evidence and theories of preferential choice. J. Econ. Literat. 44 (3), 631–661. doi: 10.1257/jel.44.3.631.
- Schwartz O, Simoncelli EP, 2001. Natural signal statistics and sensory gain control. Nature Neurosci. 4 (8), 819–825. doi: 10.1038/90526.
- Shah AM, Wolford G, 2007. Buying behavior as a function of parametric variation of number of choices. Psychol. Sci. 18 (5), 369–370.
- Shannon CE, 1948. A mathematical theory of communication. Bell Syst. Tech. J. 27 (3), 379–423.
- Simon HA, 1955. A behavioral model of rational choice. Quart. J. Econ. 69 (1), 99–118. doi: 10.2307/1884852.
- Simonson I, 1989. Choice based on reasons: the case of attraction and compromise effects. J. Consumer Res. 16 (2), 158–174. doi: 10.1086/209205.
- Sims C, 1998. Stickiness. Carnegie-Rochester Conference Series on Public Policy 49, 317–356. doi: 10.1016/S0167-2231(99)00013-5.
- Sims CA, 2003. Implications of rational inattention. J. Monet. Econ. 50 (3), 665–690. doi: 10.1016/S0304-3932(03)00029-1.
- Swait J, Marley AAJ, 2013. Probabilistic choice (models) as a result of balancing multiple goals. J. Math. Psychol. 57 (1–2), 1–14. doi: 10.1016/j.jmp.2013.03.003.
- Thurstone L, 1927. A law of comparative judgment. Psychol. Rev. 34 (4), 273–286.
- Tserenjigmid G, 2016. The Order-Dependent Luce Model. Work. Pap. 1–20.
- Webb R, Glimcher P, Levy I, 2013. Neural Random Utility and Measured Value. SSRN, pp. 1–36.
- Webb R, Glimcher PW, Louie K, 2014. Rationalizing context-dependent preferences: divisive normalization and neurobiological constraints on choice. Work. Pap. 1–56.
- Zimmermann J, Glimcher PW, Louie K, 2018. Multiple timescales of normalized value coding underlie adaptive choice behavior. Nature Commun. 9 (1), 3206. doi: 10.1038/s41467-018-05507-8.