Entropy. 2019 Apr 6;21(4):375. doi: 10.3390/e21040375

Bounded Rational Decision-Making from Elementary Computations That Reduce Uncertainty

Sebastian Gottwald 1,*, Daniel A Braun 1
PMCID: PMC7514859  PMID: 33267089

Abstract

In its most basic form, decision-making can be viewed as a computational process that progressively eliminates alternatives, thereby reducing uncertainty. Such processes are generally costly, meaning that the amount of uncertainty that can be reduced is limited by the amount of available computational resources. Here, we introduce the notion of elementary computation based on a fundamental principle for probability transfers that reduce uncertainty. Elementary computations can be considered as inverses of Pigou–Dalton transfers applied to probability distributions, closely related to the concepts of majorization, T-transforms, and generalized entropies that induce a preorder on the space of probability distributions. Consequently, we can define resource cost functions that are order-preserving and therefore monotonic with respect to the uncertainty reduction. This leads to a comprehensive notion of decision-making processes with limited resources. Along the way, we prove several new results on majorization theory, as well as on entropy and divergence measures.

Keywords: uncertainty, entropy, divergence, majorization, decision-making, bounded rationality, limited resources, Bayesian inference

1. Introduction

In rational decision theory, uncertainty may have multiple sources that ultimately share the commonality that they reflect a lack of knowledge on the part of the decision-maker about the environment. A paramount example is the perfectly rational decision-maker [1] that has a probabilistic model of the environment and chooses its actions to maximize the expected utility entailed by the different choices. When we consider bounded rational decision-makers [2], we may add another source of uncertainty arising from the decision-maker’s limited processing capabilities, since the decision-maker will not only accept a single best choice, but will accept any satisficing option. Today, bounded rationality is an active research topic that crosses multiple scientific fields such as economics, political science, decision theory, game theory, computer science, and neuroscience [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21], where uncertainty is one of the most important common denominators.

Uncertainty is often equated with Shannon entropy in information theory [22], measuring the average number of yes/no-questions that have to be answered to resolve the uncertainty. Even though Shannon entropy has many desirable properties, there are plenty of alternative suggestions for entropy measures in the literature, known as generalized entropies, such as Rényi entropy [23] or Tsallis entropy [24]. Closely related to entropies are divergence measures, which express how probability distributions differ from a given reference distribution. If the reference distribution is uniform then divergence measures can be expressed in terms of entropy measures, which is why divergences can be viewed as generalizations of entropy, for example the Kullback-Leibler divergence [25] generalizing Shannon entropy.

Here, we introduce the concept of elementary computation based on a slightly stronger notion of uncertainty than is expressed by Shannon entropy, or any other generalized entropy alone, but is equivalent to all of them combined. Equating decision-making with uncertainty reduction, this leads to a new comprehensive view of decision-making with limited resources. Our main contributions can be summarized as follows:

  • (i) 

    Based on a fundamental concept of probability transfers related to the Pigou–Dalton principle of welfare economics [26], we promote a generalized notion of uncertainty reduction of a probability distribution that we call elementary computation. This leads to a natural definition of cost functions that quantify the resource costs for uncertainty reduction necessary for decision-making. We generalize these concepts to arbitrary reference distributions. In particular, we define Pigou–Dalton-type transfers for probability distributions relative to a reference or prior distribution, which induce a preorder that is slightly stronger than Kullback-Leibler divergence, but is equivalent to the notion of divergence given by all f-divergences combined. We prove several new characterizations of the underlying concept, known as relative majorization.

  • (ii) 

    An interesting property of cost functions is their behavior under coarse-graining, which plays an important role in decision-making and formalizes the general notion of making abstractions. More precisely, if a decision in a set Ω is split up into two steps by partitioning Ω = ⋃_i A_i and first deciding in the set of (coarse-grained) partitions {A_i}_i and secondly choosing a fine-grained option inside the selected partition A_i, then it is an important question how the cost for the total decision-making process differs from the sum of the costs in each step. We show that f-divergences are superadditive with respect to coarse-graining, which means that decision-making costs can potentially be reduced by splitting up the decision into multiple steps. In this regard, we find evidence that the well-known property of the Kullback-Leibler divergence of being additive under coarse-graining might be viewed as describing the minimal amount of processing cost that cannot be reduced by a more intelligent decision-making strategy.

  • (iii) 

    We define bounded rational decision-makers as decision-making processes that are optimizing a given utility function under a constraint on the cost function, or minimizing the cost function under a minimal requirement on expected utility. As a special case for Shannon-type information costs, we arrive at information-theoretic bounded rationality, which may form a normative baseline for bounded-optimal decision-making in the absence of process-dependent constraints. We show that bounded-optimal posteriors with informational costs trace a path through probability space that can itself be seen as an anytime decision-making process, where each step optimally trades off utility and processing costs.

  • (iv) 

    We show that Bayesian inference can be seen as a decision-making process with limited resources given by the number of available datapoints.

Section 2 deals with Items (i) and (ii), aiming at a general characterization of decision-making in terms of uncertainty reduction. Item (iii) is covered in Section 3, deriving information-theoretic bounded rationality as a special case. Section 4 illustrates the concepts with an example including Item (iv). Section 5 and Section 6 contain a general discussion and concluding remarks, respectively.

Notation

Let ℝ denote the real numbers, ℝ₊ := [0, ∞) the set of non-negative real numbers, and ℚ the rational numbers. We write |A| for the number of elements contained in a countable set A, and B∖A for the set difference, that is, the set of elements in B that are not in A. P_Ω denotes the set of probability distributions on a set Ω; in particular, any p ∈ P_Ω is normalized so that p(Ω) = E_p[1] = 1. Random variables are denoted by capital letters X, Y, Z, while their explicit values are denoted by small letters x, y, z. For the probability distribution of a random variable X we write p(X), and p(x) for the values of p(X). Correspondingly, the expectation E[f(X)] is also written as E_{p(X)}[f(X)], E_{p(X)}[f], or E_p[f]. We also write ⟨f⟩_p := (1/n) Σ_{i=1}^n f(x_i) to denote the approximation of E_p[f] by an average over samples {x_1, …, x_n} from p ∈ P_Ω.

2. Decision-Making with Limited Resources

In this section, we develop the notion of a decision-making process with limited resources following the simple assumption that any decision-making process

  • (i) 

    reduces uncertainty

  • (ii) 

    by spending resources.

Starting from an intuitive interpretation of uncertainty and resource costs, these concepts are refined incrementally until a precise definition of a decision-making process is given at the end of this section (Definition 7) in terms of elementary computations. Here, a decision-making process is a comprehensive term that describes all kinds of biological as well as artificial systems that are searching for solutions to given problems, for example a human decision-maker that burns calories while thinking, or a computer that uses electric energy to run an algorithm. However, resources do not necessarily refer to a real consumable quantity but can also be measured by a more explicit proxy (e.g., time), for example the number of binary comparisons in a search algorithm, the number of forward simulations in a reinforcement learning algorithm, the number of samples in a Monte Carlo algorithm, or, even more abstractly, they can express the limited availability of some source of information, for example the amount of data available to an inference algorithm (see Section 4).

2.1. Uncertainty Reduction by Eliminating Options

In its most basic form, the concept of decision-making can be formalized as the process of looking for a decision x ∈ Ω in a discrete set of options Ω = {x_1, …, x_N}. We say that a decision x ∈ Ω is certain if repeated queries of the decision-maker will result in the same decision, and it is uncertain if repeated queries can result in different decisions. Uncertainty reduction then corresponds to reducing the amount of uncertain options. Hence, a decision-making process that transitions from a space Ω of options to a strictly smaller subset A ⊂ Ω reduces the amount of uncertain options from N = |Ω| to N_A := |A| < N, with the possible goal to eventually find a single certain decision x*. Such a process is generally costly: the more uncertainty is reduced, the more resources it costs (Figure 1). The explicit mapping between uncertainty reduction and resource cost depends on the details of the underlying process and on which explicit quantity is taken as the resource. For example, if the resource is given by time (or any monotone function of time), then a search algorithm that eliminates options sequentially until the target value is found (linear search) is less cost efficient than an algorithm that takes a sorted list and in each step removes half of the options by comparing the mid point to the target (logarithmic search). Abstractly, any real-valued function C on the power set of Ω that satisfies C(A) < C(A′) whenever A′ ⊊ A might be used as a cost function, in the sense that C(A) quantifies the expenses of reducing the uncertainty from Ω to A ⊆ Ω.

Figure 1. Decision-making as search in a set of options. At the expense of more and more resources, the number of uncertain options is progressively reduced until x* is the only remaining option.

In utility theory, decision-making is modeled as an optimization process that maximizes a so-called utility function U : Ω → ℝ (which can itself be an expected utility with respect to a probabilistic model of the environment, in the sense of von Neumann and Morgenstern [1]). A decision-maker that is optimizing a given utility function U obtains an average utility of (1/N_A) Σ_{x∈A} U(x) ≥ (1/N) Σ_{x∈Ω} U(x) after reducing the amount of uncertain options from N to N_A < N (see Figure 2). A decision-maker that completely reduces uncertainty by finding the optimum x* = argmax_{x∈Ω} U(x) is called rational (without loss of generality, we can assume that x* is unique, by redefining Ω in the case when it is not). Since uncertainty reduction generally comes with a cost, a utility-optimizing decision-maker with limited resources, correspondingly called bounded rational (see Section 3), in contrast will obtain only uncertain decisions from a subset A ⊂ Ω. Such decision-makers seek satisfactory rather than optimal solutions, for example by taking the first option that satisfies a minimal utility requirement, which Herbert A. Simon calls a satisficing solution [2].

Figure 2. Decision-making as utility optimization process.

Summarizing, we conclude that a decision-making process with decision space Ω that successively eliminates options can be represented by a mapping ϕ between subsets of Ω, together with a cost function C that quantifies the total expenses of arriving at a given subset,

Ω → A → ϕ(A) → ⋯ → A*    (1)

such that

Ω ⊋ A ⊋ ϕ(A) ⊋ ⋯ ⊋ A*,   0 = C(Ω) < C(A) < C(ϕ(A)) < ⋯ < C(A*).    (2)

For example, a rational decision-maker can afford C({x*}), whereas a decision-maker with limited resources can typically only afford uncertainty reduction with cost C(A) < C({x*}).

From a probabilistic perspective, a decision-making process as described above is a transition from a uniform probability distribution over N options to a uniform probability distribution over N′ < N options, which converges to the Dirac measure δ_{x*} centered at x* in the fully rational limit. From this point of view, the restriction to uniform distributions is artificial. A decision-maker that is uncertain about the optimal decision x* might indeed have a bias towards a subset A without completely excluding other options (the ones in A^c = Ω∖A), so that the behavior must be properly described by a probability distribution p ∈ P_Ω. Therefore, in the following section, we extend Equations (1) and (2) to transitions between probability distributions. In particular, we must replace the power set of Ω by the space of probability distributions on Ω, denoted by P_Ω.

2.2. Probabilistic Decision-Making

Let Ω be a discrete decision space of N = |Ω| < ∞ options, so that P_Ω consists of discrete distributions p, often represented by probability vectors p = (p_1, …, p_N). However, many of the concepts presented in this and the following section can be generalized to the continuous case [27,28].

Intuitively, the uncertainty contained in a distribution p ∈ P_Ω is related to the relative inequality of its entries: the more similar the entries are, the higher the uncertainty. This means that uncertainty is increased by moving some probability weight from a more likely option to a less likely option. It turns out that this simple idea leads to a concept widely known as majorization [27,29,30,31,32,33], which has roots in the economic literature of the early 20th century [26,34,35], where it was introduced to describe income inequality, later known as the Pigou–Dalton Principle of Transfers. Here, the operation of moving weight from a more likely to a less likely option corresponds to the transfer of income from one individual of a population to a relatively poorer individual (also known as a Robin Hood operation [30]). Since a decision-making process can be viewed as a sequence of uncertainty-reducing computations, we call the inverse of such a Pigou–Dalton transfer an elementary computation.

Definition 1

(Elementary computation). A transformation on P_Ω of the form

T_ε : p ↦ (p_1, …, p_m + ε, …, p_n − ε, …, p_N),    (3)

where m, n are such that p_m ≤ p_n, and 0 < ε ≤ (p_n − p_m)/2, is called a Pigou–Dalton transfer (see Figure 3). We call its inverse T_ε⁻¹ an elementary computation.

Figure 3. A Pigou–Dalton transfer as given by Equation (3). The transfer of probability from a more likely to a less likely option increases uncertainty.

Since making two probability values more similar or more dissimilar are the only two possibilities to minimally transform a probability distribution, elementary computations are the most basic principle of how uncertainty is reduced. Hence, we conclude that a distribution p has more uncertainty than a distribution p′ if and only if p′ can be obtained from p by finitely many elementary computations (and permutations, which are not considered elementary computations due to the choice of ε).

Definition 2

(Uncertainty). We say that p ∈ P_Ω contains more uncertainty than p′ ∈ P_Ω, denoted by

p ≺ p′,    (4)

if and only if p′ can be obtained from p by a finite number of elementary computations and permutations.

Note that, mathematically, this defines a preorder on P_Ω, i.e., a reflexive (p ≺ p for all p ∈ P_Ω) and transitive (if p ≺ p′ and p′ ≺ p″, then p ≺ p″ for all p, p′, p″ ∈ P_Ω) binary relation.

In the literature, there are different names for the relation between p and p′ expressed by Definition 2: for example, p is called more mixed than p′ [36], more disordered than p′ [37], more chaotic than p′ [32], or an average of p′ [29]. Most commonly, however, p′ is said to majorize p, which started with the early influences of Muirhead [38] and Hardy, Littlewood, and Pólya [29] and was developed by many authors into the field of majorization theory (a standard reference was published by Marshall, Olkin, and Arnold [27]), with far-reaching applications until today, especially in non-equilibrium thermodynamics and quantum information theory [39,40,41].

There are plenty of equivalent (arguably less intuitive) characterizations of p ≺ p′, some of which are summarized below. One characterization, however, makes use of a concept very closely related to Pigou–Dalton transfers, known as T-transforms [27,32], which expresses the fact that moving some weight from a more likely option to a less likely option is equivalent to taking (weighted) averages of the two probability values. More precisely, a T-transform is a linear operator on P_Ω with a matrix of the form T = (1−λ)I + λΠ, where I denotes the identity matrix on ℝ^N, Π denotes a permutation matrix of two elements, and 0 ≤ λ ≤ 1. If Π permutes p_m and p_n, then (Tp)_k = p_k for all k ∉ {m, n}, and

(Tp)_m = (1−λ) p_m + λ p_n,   (Tp)_n = λ p_m + (1−λ) p_n.    (5)

Hence, a T-transform takes any two probability values p_m and p_n of a given p ∈ P_Ω, calculates their weighted averages with weights (1−λ, λ) and (λ, 1−λ), and replaces the original values with these averages. From Equation (5), it follows immediately that a T-transform with parameter 0 < λ ≤ 1/2 and a permutation Π of p_m, p_n with p_m ≤ p_n is a Pigou–Dalton transfer with ε = (p_n − p_m)λ. In addition, allowing 1/2 ≤ λ ≤ 1 means that T-transforms include permutations; in particular, p ≺ p′ if and only if p can be derived from p′ by successive applications of finitely many T-transforms.

Due to a classic result by Hardy, Littlewood, and Pólya ([29] (p. 49)), this characterization can be stated in an even simpler form by using doubly stochastic matrices, i.e., matrices A = (A_ij)_{i,j} with A_ij ≥ 0 and Σ_i A_ij = 1 = Σ_j A_ij for all i, j. By writing xA := Aᵀx for all x ∈ ℝ^N, and e := (1, …, 1), these conditions are often stated as

A_ij ≥ 0,   Ae = e,   eA = e.    (6)

Note that doubly stochastic matrices can be viewed as generalizations of T-transforms in the sense that a T-transform takes an average of two entries, whereas if p = p′A with a doubly stochastic matrix A, then p_j = Σ_i A_ij p′_i is a convex combination, or a weighted average, of p′ with coefficients (A_ij)_i for each j. This is also why p is then called more mixed than p′ [36]. Therefore, similar to T-transforms, we might expect that, if p is the result of an application of a doubly stochastic matrix, p = p′A, then p is an average of p′ and therefore contains more uncertainty than p′. This is exactly what is expressed by Characterization (iii) in the following theorem. A similar characterization of p ≺ p′ is that p must be given by a convex combination of permutations of the entries of p′ (see Characterization (iv) below).

Without having the concept of majorization, Schur proved that functions of the form p ↦ Σ_i f(p_i) with a convex function f are monotone with respect to the application of a doubly stochastic matrix [42] (see Characterization (v) below). Functions of this form are an important class of cost functions for probabilistic decision-makers, as we discuss in Example 1.

Theorem 1

(Characterizations of p ≺ p′ [27]). For p, p′ ∈ P_Ω, the following are equivalent:

  • (i) 

    p ≺ p′, i.e., p contains more uncertainty than p′ (Definition 2)

  • (ii) 

    p is the result of finitely many T-transforms applied to p′

  • (iii) 

    p = p′A for a doubly stochastic matrix A

  • (iv) 

    p = Σ_{k=1}^K θ_k Π_k(p′), where K ∈ ℕ, Σ_{k=1}^K θ_k = 1, θ_k ≥ 0, and Π_k is a permutation for all k ∈ {1, …, K}

  • (v) 

    Σ_{i=1}^N f(p_i) ≤ Σ_{i=1}^N f(p′_i) for all continuous convex functions f

  • (vi) 

    Σ_{i=1}^k p↓_i ≤ Σ_{i=1}^k p′↓_i for all k ∈ {1, …, N−1}, where p↓ denotes the decreasing rearrangement of p

As argued above, the equivalence between (i) and (ii) is straightforward. The equivalences among (ii), (iii), and (vi) are due to Muirhead [38] and Hardy, Littlewood, and Pólya [29]. The implication (v) ⇒ (iii) is due to Karamata [43] and Hardy, Littlewood, and Pólya [44], whereas (iii) ⇒ (v) goes back to Schur [42]. Mathematically, (iv) means that p belongs to the convex hull of all permutations of the entries of p′, and the equivalence (iii) ⇔ (iv) is known as the Birkhoff–von Neumann theorem. Here, we state all relations for probability vectors p, p′ ∈ P_Ω, even though they are usually stated for all p, p′ ∈ ℝ^N with the additional requirement that Σ_{i=1}^N p_i = Σ_{i=1}^N p′_i.

Condition (vi) is the classical and most commonly used definition of majorization [27,29,34], since it is often the easiest to check in practical examples. For example, from (vi) it immediately follows that uniform distributions over N options contain more uncertainty than uniform distributions over N′ < N options, since Σ_{i=1}^k 1/N = k/N ≤ k/N′ = Σ_{i=1}^k 1/N′ for all k ≤ N′, i.e., for N ≥ 3 we have

(1/N, …, 1/N) ≺ (1/(N−1), …, 1/(N−1), 0) ≺ ⋯ ≺ (1/2, 1/2, 0, …, 0) ≺ (1, 0, …, 0).    (7)

In particular, if A′ ⊆ A ⊆ Ω, then the uniform distribution over A′ contains less uncertainty than the uniform distribution over A, which shows that the notion of uncertainty introduced in Definition 2 is indeed a generalization of the notion of uncertainty given by the number of uncertain options introduced in the previous section.

Note that, since ≺ is only a preorder on P_Ω, two distributions p, p′ ∈ P_Ω are in general not comparable, i.e., we can have both p ⊀ p′ and p′ ⊀ p. In Figure 4, we visualize the regions of all comparable distributions for two exemplary distributions on a three-dimensional decision space (N = 3), represented on the two-dimensional simplex of probability vectors p = (p_1, p_2, p_3). For example, p = (1/2, 1/4, 1/4) and p′ = (2/5, 2/5, 1/5) cannot be compared under ≺, since 1/2 > 2/5, but 3/4 < 4/5.
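Characterization (vi) of Theorem 1 translates directly into a small numerical test. The following sketch (Python; the helper name majorizes is our own choice, not from the paper) compares the partial sums of the decreasing rearrangements and reproduces the incomparable pair from above:

import numpy as np

def majorizes(p_prime, p, tol=1e-12):
    """Return True iff p ≺ p′, i.e., every partial sum of the decreasing
    rearrangement of p is bounded by the corresponding one of p′ (Theorem 1 (vi))."""
    a = np.cumsum(np.sort(p)[::-1])        # partial sums of p↓
    b = np.cumsum(np.sort(p_prime)[::-1])  # partial sums of p′↓
    return bool(np.all(a <= b + tol))

p       = np.array([1/2, 1/4, 1/4])
p_prime = np.array([2/5, 2/5, 1/5])
print(majorizes(p_prime, p), majorizes(p, p_prime))  # False False: neither p ≺ p′ nor p′ ≺ p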

Figure 4. Comparability of probability distributions for N = 3. The region in the center consists of all p′ that are majorized by p, i.e., p′ ≺ p, whereas the outer region consists of all p′ that majorize p, p ≺ p′. The bright regions are not comparable to p. (a) p = (1/3, 1/2, 1/6); (b) p = (1/2, 1/4, 1/4).

Cost functions can now be generalized to probabilistic decision-making by noting that the property C(A) < C(A′) whenever A′ ⊊ A in Equation (2) means that C is strictly monotonic with respect to the preorder given by set inclusion.

Definition 3

(Cost functions on P_Ω). We say that a function C : P_Ω → ℝ₊ is a cost function if it is strictly monotonically increasing with respect to the preorder ≺, i.e., if

p ≺ p′ ⟹ C(p) ≤ C(p′),    (8)

with equality only if p and p′ are equivalent, p ∼ p′, which is defined as p ≺ p′ and p′ ≺ p. Moreover, for a parameterized family of posteriors (p_r)_{r∈I}, we say that r is a resource parameter with respect to a cost function C if the mapping I → ℝ₊, r ↦ C(p_r), is strictly monotonically increasing.

Since monotonic functions with respect to majorization were first studied by Schur [42], functions with this property are usually called (strictly) Schur-convex ([27] (Ch. 3)).

Example 1

(Generalized entropies). From (v) in Theorem 1, it follows that functions of the form

C(p) = Σ_{i=1}^N f(p_i),    (9)

where f is strictly convex, are examples of cost functions. Since many entropy measures used in the literature can be seen to be special cases of Equation (9) (with a concave f), functions of this form are often called generalized entropies [45]. In particular, for the choice f(t) = t log t, we have C(p) = −H(p), where H(p) denotes the Shannon entropy of p. Thus, if p contains more uncertainty than p′ in the sense of Definition 2 (p ≺ p′), then the Shannon entropy of p is larger than the Shannon entropy of p′, and therefore p also contains more uncertainty than p′ in the sense of classical information theory. Similarly, for f(t) = −log(t) we obtain the (negative) Burg entropy, and for functions of the form f(t) = ±t^α with α ∈ ℝ∖{0,1} we get the (negative) Tsallis entropy, where the sign is chosen depending on α such that f is convex (see, e.g., [46] for more examples). Moreover, the composition of any (strictly) monotonically increasing function g with Equation (9) generates another class of cost functions, which contains, for example, the (negative) Rényi entropy [23]. Note also that entropies of the form of Equation (9) are special cases of Csiszár's f-divergences [47] for uniform reference distributions (see Example 3 below). In Figure 5, several examples of cost functions are shown for N = 3. In this case, the two-dimensional probability simplex P_Ω is given by the triangle in ℝ³ with vertices (1,0,0), (0,1,0), and (0,0,1). Cost functions are visualized in terms of their level sets.
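As an illustration of Example 1 (our own sketch, not code from the paper), the following Python snippet evaluates three cost functions of the form of Equation (9) and checks that applying a T-transform, which increases uncertainty, strictly decreases each cost:

import numpy as np

def cost(p, f):
    """Generalized-entropy cost C(p) = Σ_i f(p_i) of Equation (9)."""
    return float(np.sum(f(np.asarray(p, float))))

f_shannon = lambda t: t * np.log(t)   # C(p) = −H(p), negative Shannon entropy
f_burg    = lambda t: -np.log(t)      # negative Burg entropy
f_tsallis = lambda t: t**2            # negative Tsallis entropy of order α = 2 (up to an additive constant)

p = np.array([0.7, 0.2, 0.1])
lam = 0.3                             # T-transform mixing p_1 and p_2 (Equation (5))
Tp = np.array([(1-lam)*p[0] + lam*p[1], lam*p[0] + (1-lam)*p[1], p[2]])

for f in (f_shannon, f_burg, f_tsallis):
    print(cost(Tp, f) < cost(p, f))   # True: Tp ≺ p, so the cost strictly decreases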

We prove in Proposition A1 in Appendix A that all cost functions of the form of Equation (9) are superadditive with respect to coarse-graining. This seems to be a new result and an improvement upon the fact that generalized entropies (and f-divergences) satisfy information monotonicity [48]. More precisely, if a decision in Ω, represented by a random variable Z, is split up into two steps by partitioning Ω = ⋃_{i∈I} A_i and first deciding about the partition i ∈ I, correspondingly described by a random variable X with values in I, and then choosing an option inside of the selected partition A_i, represented by a random variable Y, i.e., Z = (X,Y), then

C(Z) ≥ C(X) + C(Y|X),    (10)

where C(X) := C(p(X)) and C(Y|X) := E_{p(X)}[C(p(Y|X))]. For symmetric cost functions (such as Equation (9)), this is equivalent to

C(p_1, …, p_N) ≥ C(p_1+p_2, p_3, …, p_N) + (p_1+p_2) C(p_1/(p_1+p_2), p_2/(p_1+p_2)).    (11)

The case of equality in Equations (10) and (11) (see Figure 6) is sometimes called separability [49], strong additivity [50], or recursivity [51], and it is often used to characterize Shannon entropy [23,52,53,54,55,56]. In fact, we also show in Appendix A (Proposition A2) that cost functions C that are additive under coarse-graining are proportional to the negative Shannon entropy −H. See also Example 3 in the next section, where we discuss the generalization to arbitrary reference distributions.
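The recursion in Equation (11) can also be checked numerically. The sketch below is our own illustration; it assumes the f-divergence normalization f(1) = 0 (under which Equation (9) agrees with a divergence from the uniform distribution up to scaling), and shows the equality case for the Shannon generator and a strictly positive gap for a Tsallis-type generator:

import numpy as np

def recursion_gap(p, f):
    """C(p) − [C(p_1+p_2, p_3, …) + (p_1+p_2)·C(p_1/s, p_2/s)] with s = p_1+p_2, cf. Equation (11)."""
    p = np.asarray(p, float)
    s = p[0] + p[1]
    C = lambda x: float(np.sum(f(np.asarray(x))))
    coarse = np.concatenate(([s], p[2:]))
    fine = np.array([p[0]/s, p[1]/s])
    return C(p) - C(coarse) - s*C(fine)

f_shannon = lambda t: t*np.log(t)   # f(1) = 0: gap vanishes (additivity, Proposition A2)
f_tsallis = lambda t: t**2 - t      # f(1) = 0: gap is positive (superadditivity, Proposition A1)

p = [0.5, 0.25, 0.25]
print(round(recursion_gap(p, f_shannon), 12))  # 0.0
print(recursion_gap(p, f_tsallis) > 0)         # True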

Figure 5. Examples of cost functions for decision spaces with three elements (N = 3): (a) Shannon entropy; (b) Tsallis entropy of order α = 4; and (c) Rényi entropy of order α = 3.5.

Figure 6. Additivity under coarse-graining. If the cost for Z = (X,Y) is the sum of the cost for X and the cost for Y given X, then the cost function is proportional to Shannon entropy.

We can now refine the notion of a decision-making process introduced in the previous section as a mapping ϕ together with a cost function C satisfying Equation (2). Instead of simply mapping from sets A to smaller subsets A′ ⊂ A by successively eliminating options, we now allow ϕ to be a mapping between probability distributions such that ϕ(p) can be obtained from p by a finite number of elementary computations (without permutations), and we require C to be a cost function on P_Ω, so that

p ⪇ ϕ(p),   C(p) < C(ϕ(p))   ∀ p ∈ P_Ω,    (12)

Here, C(p) quantifies the total costs of arriving at a distribution p, and p ⪇ p′ means that p ≺ p′ and p′ ⊀ p. In other words, a decision-making process can be viewed as traversing probability space by moving pieces of probability from one option to another option such that uncertainty is reduced.

Up to now, we have ignored one important property of a decision-making process: the distribution q with minimal cost, i.e., satisfying C(q) ≤ C(p) for all p, which must be identified with the initial distribution of a decision-making process with cost function C. As one might expect (see Figure 5), it turns out that all cost functions according to Definition 3 have the same minimal element.

Proposition 1

(Uniform distributions are minimal). The uniform distribution (1/N, …, 1/N) is the unique minimal element in P_Ω with respect to ≺, i.e.,

(1/N, …, 1/N) ≺ p   ∀ p ∈ P_Ω.    (13)

Once Equation (13) is established, it follows from Equation (8) that C((1/N, …, 1/N)) ≤ C(p) for all p; in particular, the uniform distribution corresponds to the initial state of all decision-making processes with cost function C satisfying Equation (12). Moreover, it contains the maximum amount of uncertainty with respect to any entropy measure of the form of Equation (9), which is known as the second Khinchin axiom [49]; e.g., for Shannon entropy, 0 ≤ H(p) ≤ log N. Proposition 1 follows from Characterization (iv) in Theorem 1 after noticing that every p ∈ P_Ω can be transformed to the uniform distribution by averaging over all of its cyclic permutations (see Proposition A3 in Appendix A for a detailed proof).

Regarding the possibility that a decision-maker may have prior information, for example originating from the experience of previous comparable decision-making tasks, the assumption of a uniform initial distribution seems to be artificial. Therefore, in the following section, we arrive at the final notion of a decision-making process by extending the results of this section to allow for arbitrary initial distributions.

2.3. Decision-Making with Prior Knowledge

From the discussion at the end of the previous section, we conclude that, in full generality, a decision-maker transitions from an initial probability distribution q ∈ P_Ω, called prior, to a terminal distribution p ∈ P_Ω, called posterior. Note that, since once-eliminated options are excluded from the rest of the decision-making process, a posterior p must be absolutely continuous with respect to the prior q, denoted by p ≪ q, i.e., p(x) can be non-zero for a given x ∈ Ω only if q(x) is non-zero.

The notion of uncertainty (Definition 2) can be generalized with respect to a non-uniform prior q ∈ P_Ω by viewing the probabilities q_i as the probabilities Q(A_i) of partitions A_i of an underlying elementary probability space Ω̃ = ⋃_i A_i of equally likely elements under Q; in particular, Q represents q as the uniform distribution on Ω̃ (see Figure 7). The similarity of the entries of the corresponding representation P ∈ P_Ω̃ of any p ∈ P_Ω (its uncertainty) then contains information about how close p is to q, which we call the relative uncertainty of p with respect to q (Definition 4 below).

Figure 7. Representation of q and p by Q and P on Ω̃ (Example 2), such that the probabilities q_i and p_i are given by the probabilities of the partitions A_i with respect to Q and P, respectively.

The formal construction is as follows: Let p, q ∈ P_Ω be such that p ≪ q and q_i ∈ ℚ for all i. The case when q_i ∈ ℝ then follows from a simple approximation of each entry by a rational number. Let α ∈ ℕ be such that αq_i ∈ ℕ for all i ∈ {1, …, N}; for example, α could be chosen as the least common multiple of the denominators of the q_i. The underlying elementary probability space Ω̃ then consists of α elements, and there exists a partitioning {A_i}_{i=1,…,N} of Ω̃ such that

|A_i| = α q_i   ∀ i ∈ {1, …, N},    (14)

where Q denotes the uniform distribution on Ω̃. In particular, it follows that

Q(A_i) = Σ_{j=1}^{|A_i|} 1/α = q_i   ∀ i ∈ {1, …, N},    (15)

i.e., Q represents q in Ω̃ with respect to the partitioning {A_i}_i. Similarly, any p ∈ P_Ω can be represented as a distribution on Ω̃ by requiring that P(A_i) = p_i for all i ∈ {1, …, N} and letting P be constant inside of each partition, i.e., similar to Equation (15), we have P(A_i) = |A_i| P(ω) = p_i for all ω ∈ A_i, and therefore, by Equation (14),

P(ω) = (1/α) (p_i/q_i)   ∀ ω ∈ A_i.    (16)

Note that, if q_i = 0, then p_i = 0 by absolute continuity (p ≪ q), in which case we can either exclude option i from Ω or set P(ω) = 0.

Example 2.

For a prior q = (1/6, 1/2, 1/3) we put α = 6, so that Ω̃ = {ω_1, …, ω_6} should be partitioned as Ω̃ = {ω_1} ∪ {ω_2, ω_3, ω_4} ∪ {ω_5, ω_6}. Then, q_i corresponds to the probability of the i-th partition under the uniform distribution Q = (1/6)(1, …, 1), while the distribution p = (1/6, 3/4, 1/12) is represented on Ω̃ by the distribution P = (1/6, 1/4, 1/4, 1/4, 1/24, 1/24) (see Figure 7).
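The construction of Equations (14)–(16) can be reproduced exactly with rational arithmetic. The following sketch (the helper name lift is our own choice) recovers the representation P of Example 2:

from fractions import Fraction as F

def lift(p, q, alpha):
    """Λ_q: represent p on the refined space Ω̃ of α equally likely elements (Equation (16))."""
    P = []
    for p_i, q_i in zip(p, q):
        size = int(alpha * q_i)            # block size |A_i| = α·q_i (Equation (14))
        P += [p_i / (alpha * q_i)] * size  # P(ω) = (1/α)·(p_i/q_i) for all ω ∈ A_i
    return P

q = [F(1, 6), F(1, 2), F(1, 3)]
p = [F(1, 6), F(3, 4), F(1, 12)]
print(lift(p, q, alpha=6))  # blocks with values 1/6, 1/4, 1/4, 1/4, 1/24, 1/24, as in Example 2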

Importantly, if the components of the representation Λ_q p := P in P_Ω̃ given by Equation (16) are similar to each other, i.e., if P is close to uniform, then the components of p must be very similar to the components of q, which we express by the concept of relative uncertainty.

Definition 4

(Uncertainty relative to q). We say that p ∈ P_Ω contains more uncertainty with respect to a prior q ∈ P_Ω than p′ ∈ P_Ω, denoted by p ≺_q p′, if and only if Λ_q p contains more uncertainty than Λ_q p′, i.e.,

p ≺_q p′  :⟺  Λ_q p ≺ Λ_q p′,    (17)

where Λ_q : P_Ω → P_Ω̃, p ↦ P, is given by Equation (16).

As we show in Theorem 2 below, it turns out that ≺_q coincides with a known concept called q-majorization [57], majorization relative to q [27,28], or mixing distance [58]. Due to the lack of a characterization by partial sums, it is usually defined as a generalization of Characterization (iii) in Theorem 1, that is, p is q-majorized by p′ iff p = p′A, where A is a so-called q-stochastic matrix, which means that it is a stochastic matrix (Ae = e) with qA = q. In particular, ≺_q does not depend on the choice of α in the definition of Λ_q. Here, we provide two new characterizations of q-majorization: the one given by Definition 4, and one using partial sums generalizing the original definition of majorization.

Theorem 2

(Characterizations of p ≺_q p′). The following are equivalent:

  • (i) 

    p ≺_q p′, i.e., p contains more uncertainty relative to q than p′ (Definition 4).

  • (ii) 

    Λ_q p can be obtained from Λ_q p′ by a finite number of elementary computations and permutations on P_Ω̃.

  • (iii) 

    p = p′A for a q-stochastic matrix A, i.e., Ae = e and qA = q.

  • (iv) 

    Σ_{i=1}^N q_i f(p_i/q_i) ≤ Σ_{i=1}^N q_i f(p′_i/q_i) for all continuous convex functions f.

  • (v) 

    Σ_{i=1}^{l−1} p↓_i + a_q(k,l) p↓_l ≤ Σ_{i=1}^{l−1} p′↓_i + a_q(k,l) p′↓_l for all k with α Σ_{i=1}^{l−1} q↓_i ≤ k ≤ α Σ_{i=1}^{l} q↓_i and 1 ≤ l ≤ N, where a_q(k,l) := (k/α − Σ_{i=1}^{l−1} q↓_i)/q↓_l, and the arrows indicate that the entries are rearranged such that (p_i/q_i)_i is ordered decreasingly.

To prove that (i), (iii), and (v) are equivalent (see Proposition A4 in Appendix A), we make use of the fact that Λ_q : P_Ω → P_Ω̃ has a left inverse Λ_q⁻¹ : Λ_q(P_Ω) → P_Ω. This can be verified by simply multiplying the corresponding matrices given in the proof of Proposition A4. The equivalence between (iii) and (iv) is shown in [28] (see also [27,58]). Characterization (ii) follows immediately from Definitions 2 and 4.
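Definition 4 reduces ≺_q to ordinary majorization on the refined space, so the two helpers sketched above (majorizes, after Theorem 1, and lift, after Example 2) combine into a concrete test for relative majorization; the first check below illustrates Proposition 2. The helper name rel_majorizes is, again, our own:

def rel_majorizes(p_prime, p, q, alpha):
    """Return True iff p ≺_q p′, i.e., Λ_q p ≺ Λ_q p′ (Definition 4)."""
    return majorizes([float(x) for x in lift(p_prime, q, alpha)],
                     [float(x) for x in lift(p, q, alpha)])

q = [F(1, 6), F(1, 2), F(1, 3)]
p_prime = [F(1, 6), F(3, 4), F(1, 12)]
print(rel_majorizes(p_prime, q, q, alpha=6))  # True: the prior q is ≺_q-minimal
print(rel_majorizes(q, p_prime, q, alpha=6))  # False: p′ does not contain more relative uncertainty than q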

As required by the discussion at the end of the previous section, q is indeed minimal with respect to ≺_q, which means that it contains the most uncertainty with respect to itself.

Proposition 2

(The prior is minimal). The prior q ∈ P_Ω is the unique minimal element in P_Ω with respect to ≺_q, that is,

q ≺_q p   ∀ p ∈ P_Ω.    (18)

This follows more or less directly from Proposition 1 and the equivalence of (i) and (iii) in Theorem 2 (see Proposition A5 in Appendix A for a detailed proof).

Order-preserving functions with respect to ≺_q generalize the cost functions introduced in the previous section (Definition 3). According to Proposition 2, such functions have a unique minimum given by the prior q. Since cost functions are used in Definition 7 below to quantify the expenses of a decision-making process, we require their minimum to be zero, which can always be achieved by redefining a given cost function by an additive constant.

Definition 5

(Cost functions relative to q). We say that a function C_q : P_Ω → ℝ₊ is a cost function relative to q if C_q(q) = 0, if it is invariant under relabeling of (q_i, p_i)_i, and if it is strictly monotonically increasing with respect to the preorder ≺_q, that is, if

p ≺_q p′ ⟹ C_q(p) ≤ C_q(p′),    (19)

with equality only if p ∼_q p′, i.e., if p ≺_q p′ and p′ ≺_q p. Moreover, for a parameterized family of posteriors (p_r)_{r∈I}, we say that r is a resource parameter with respect to a cost function C_q if the mapping I → ℝ₊, r ↦ C_q(p_r), is strictly monotonically increasing.

Similar to generalized entropy functions discussed in Example 1, in the literature, there are many examples of relative cost functions, usually called divergences or measures of divergence.

Example 3

(f-divergences). From (iv) in Theorem 2, it follows that functions of the form

C_q(p) := Σ_{i=1}^N q_i f(p_i/q_i),    (20)

where f is continuous and strictly convex with f(1) = 0, are examples of cost functions relative to q. Many well-known divergence measures can be seen to belong to this class of relative cost functions, also known as Csiszár's f-divergences [47]: the Kullback-Leibler divergence (or relative entropy), the squared ℓ²-distance, the Hartley entropy, the Burg entropy, the Tsallis entropy, and many more [46,50] (see Figure 8 for visualizations of some of them for N = 3 relative to a non-uniform prior).
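A sketch of Equation (20) follows (our own illustration; the three generators shown are standard choices and yield, respectively, the Kullback-Leibler divergence, a q-weighted squared ℓ²-type distance, and a Tsallis relative entropy of order α = 3, up to scaling):

import numpy as np

def f_divergence(p, q, f):
    """C_q(p) = Σ_i q_i·f(p_i/q_i) of Equation (20); assumes q_i > 0 and f(1) = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

f_kl      = lambda t: np.where(t > 0, t * np.log(t), 0.0)  # Kullback-Leibler divergence
f_sq      = lambda t: (t - 1.0)**2                         # Σ (p_i − q_i)²/q_i, a weighted squared ℓ²-distance
f_tsallis = lambda t: (t**3 - t) / 2                       # Tsallis relative entropy of order α = 3 (up to scaling)

q = np.array([1/3, 1/2, 1/6])
p = np.array([0.2, 0.7, 0.1])
for f in (f_kl, f_sq, f_tsallis):
    print(f_divergence(p, q, f), f_divergence(q, q, f))    # positive at p, exactly 0 at the prior q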

As a generalization of Proposition A1 (superadditivity of generalized entropies), we prove in Proposition A6 in Appendix A that f-divergences are superadditive under coarse-graining, that is,

C_q(Z) ≥ C_q(X) + C_q(Y|X),    (21)

whenever Z = (X,Y), where C_q(X) := C_{q(X)}(p(X)) and C_q(Y|X) := E_{p(X)}[C_{q(Y|X)}(p(Y|X))].

This generalizes Equation (10) to the case of a non-uniform prior. Similar to entropies, the case of equality in Equation (21) is sometimes called composition rule [59], chain rule [60], or recursivity [50], and is often used to characterize Kullback-Leibler divergence [8,50,59,60].

Indeed, we also show in Appendix A (Proposition A7) that all additive cost functions with respect to ≺_q are proportional to the Kullback-Leibler divergence (relative entropy). This goes back to Hobson's modification [59] of Shannon's original proof [22], after establishing the following monotonicity property for uniform distributions: If f(M,N) denotes the cost C_{u_N}(u_M) of a uniform distribution u_M over M elements relative to a uniform distribution u_N over N ≥ M elements, then (see Figure 9)

f(M,N) ≥ f(M′,N)   ∀ M ≤ M′ ≤ N,   f(M,N) ≤ f(M,N′)   ∀ M ≤ N ≤ N′.    (22)

Note that, even though our proof of Proposition A7 uses additivity under coarse-graining to show the monotonicity property in Equation (22), it is easy to see that any relative cost function of the form of Equation (20) also satisfies Equation (22), by using the convexity of f in the form f(t) ≤ (t/s) f(s) + (1 − t/s) f(0) with t = N/M < N′/M = s.

In terms of decision-making, superadditivity under coarse-graining means that decision-making costs can potentially be reduced by splitting up the decision into multiple steps, for example by a more intelligent search strategy. For example, if N = 2^k for some k ∈ ℕ and C_q is superadditive, then the cost for reducing uncertainty to a single option, i.e., p = (1, 0, …, 0), when starting from a uniform distribution q, satisfies

C_q(p) ≥ C_{q_2}((1,0)) + C_{q_{N/2}}((1, 0, …, 0)) ≥ ⋯ ≥ log N = D_KL(p‖q),

where q_n := (1/n, …, 1/n), and we have set C_{q_2}((1,0)) = 1 as unit cost (corresponding to 1 bit in the case of the Kullback-Leibler divergence). Thus, intuitively, the property of the Kullback-Leibler divergence of being additive under coarse-graining might be viewed as describing the minimal amount of processing costs that must be contained in any cost function, because it cannot be reduced by changing the decision-making process. Therefore, in the following, we call cost functions that are proportional to the Kullback-Leibler divergence simply informational costs.

Figure 8. Examples of cost functions for N = 3 relative to q = (1/3, 1/2, 1/6): (a) Kullback-Leibler divergence; (b) squared ℓ²-distance; and (c) Tsallis relative entropy of order α = 3.0.

Figure 9. Monotonicity property in Equation (22): (a) the cost is higher when more uncertainty has been reduced; and (b) if the posterior is the same, then it is cheaper to start from a prior with fewer options.

In contrast to the previous section, in the definition of ≺_q and its characterizations, we never use elementary computations on P_Ω directly. This is because permutations interfere with uncertainty relative to q, and therefore ≺_q cannot be characterized by a finite number of elementary computations and permutations on P_Ω. However, we can still define elementary computations relative to q as the inverses of Pigou–Dalton transfers T_ε of the form of Equation (3) such that T_ε p ≺_q p for ε > 0, which is arguably the most basic form of how to generate uncertainty with respect to q.

Even for small ε, a regular Pigou–Dalton transfer does not necessarily increase uncertainty relative to q, because the similarity of the components now needs to be considered with respect to q. Instead, we compare the components of the representation P = Λ_q p of p ∈ P_Ω, and move some probability weight ε ≥ 0 from P(A_n) to P(A_m) whenever P(ω) ≤ P(ω′) for ω ∈ A_m and ω′ ∈ A_n, by distributing ε evenly among the elements in A_m (see Figure 10); we denote this transformation by T̃_ε. Here, ε must be small enough such that the inequality (1/α)(p_m/q_m) = P(ω) ≤ P(ω′) = (1/α)(p_n/q_n) is invariant under T̃_ε, which means that

(T̃_ε P)(ω) ≤ (T̃_ε P)(ω′)  ⟺  (1/α)(p_m/q_m) + ε/|A_m| ≤ (1/α)(p_n/q_n) − ε/|A_n|  ⟺ (by Equation (14))  ε ≤ (p_n/q_n − p_m/q_m) / (1/q_m + 1/q_n).    (23)

Figure 10. Pigou–Dalton transfer relative to q. A distribution p ∈ P_Ω is transformed relative to q by first moving some amount of weight ε ≥ 0 from P(A_n) to P(A_m), where n, m are such that P(ω) ≤ P(ω′) whenever ω ∈ A_m and ω′ ∈ A_n, with ε small enough such that this relation remains true after the transformation, and then mapping the transformed distribution back to P_Ω by Λ_q⁻¹ (see Definition 6).

By construction, T̃_ε minimally increases uncertainty in P_Ω̃ while staying in the image of P_Ω under Λ_q, by keeping the values of P constant inside each partition, and therefore T_ε := Λ_q⁻¹ ∘ T̃_ε ∘ Λ_q can be considered the most basic way of increasing uncertainty relative to q.

Definition 6

(Elementary computation relative to q). We call a transformation on P_Ω of the form

T_ε : p ↦ (p_1, …, p_m + ε, …, p_n − ε, …, p_N),    (24)

with m, n such that p_m/q_m ≤ p_n/q_n, and ε satisfying Equation (23), a Pigou–Dalton transfer relative to q, and its inverse T_ε⁻¹ an elementary computation relative to q.

We are now in the position to state our final definition of a decision-making process.

Definition 7

(Decision-making process). A decision-making process is a gradual transformation

q → ⋯ → p → ϕ(p) → ⋯ → p*

of a prior q ∈ P_Ω to a posterior p* ∈ P_Ω, such that each step decreases uncertainty relative to q. This means that p* is obtained from q by successive application of a mapping ϕ between probability distributions on Ω, such that ϕ(p) can be obtained from p by finitely many elementary computations relative to q; in particular,

q ⪇_q p ⪇_q ϕ(p),   0 = C_q(q) < C_q(p) < C_q(ϕ(p)),    (25)

where C_q(p) quantifies the total costs of a distribution p, and p ⪇_q p′ means that p ≺_q p′ and p′ ⊀_q p.

In other words, a decision-making process can be viewed as traversing probability space from the prior q to the posterior p* by moving pieces of probability from one option to another option such that uncertainty is reduced relative to q, while expending a certain amount of resources determined by the cost function C_q.

3. Bounded Rationality

3.1. Bounded Rational Decision-Making

In this section, we consider decision-making processes that trade off utility against costs. Such decision-makers either maximize a utility function subject to a constraint on the cost function, for example an author of a scientific article who optimizes the article's quality until a deadline is reached, or minimize the cost function subject to a utility constraint, for example a high-school student who minimizes effort while still meeting the requirements to pass a certain class. In both cases, the decision-makers are called bounded rational, since in the limit of no resource constraints they coincide with rational decision-makers.

In general, depending on the underlying system, such an optimization process might have additional process-dependent constraints that are not directly given by resource limitations, for example in cases when the optimization takes place in a parameter space that has fewer degrees of freedom than the full probability space P_Ω. Abstractly, this is expressed by allowing the optimization process to search only in a subset Γ ⊆ P_Ω.

Definition 8

(Bounded rational decision-making process). Let U : Ω → ℝ be a given utility function, and Γ ⊆ P_Ω. A decision-making process with prior q, posterior p* ∈ Γ, and cost function C_q is called bounded rational if its posterior satisfies

p* = argmax_{p∈Γ} { E_p[U] | C_q(p) ≤ C_0 },    (26)

for a given upper bound C_0 ≥ 0, or equivalently,

p* = argmin_{p∈Γ} { C_q(p) | E_p[U] ≥ U_0 },    (27)

for a given lower bound U_0 ∈ ℝ. In the case when the process constraints disappear, i.e., if Γ = P_Ω, then a bounded rational decision-maker is called bounded-optimal.

The equivalence between Equations (26) and (27) is easily seen from the equivalent optimization problem given by the formalism of Lagrange multipliers [61],

p_β := argmin_{p∈Γ} ( C_q(p) − β E_p[U] ) = argmax_{p∈Γ} ( E_p[U] − (1/β) C_q(p) ),    (28)

where the cost or utility constraint is expressed by a trade-off between utility and cost, or cost and utility, with a trade-off parameter given by the Lagrange multiplier β, which is chosen such that the constraint given by C_0 or U_0 is satisfied. It is easily seen from the maximization problem on the right side of Equation (28) that a larger value of β decreases the weight of the cost term and thus allows for higher values of the cost function. Hence, β parameterizes the amount of resources the decision-maker can afford with respect to the cost function C_q, and, at least in non-trivial cases (non-constant utilities), it is therefore a resource parameter with respect to C_q in the sense of Definition 5. In particular, for β → 0, the decision-maker minimizes its cost function irrespective of the expected utility, and therefore stays at the prior, p_0 = q, whereas β → ∞ makes the cost term disappear, so that the decision-maker becomes purely rational with a Dirac posterior centered on the optima x* of the utility function U.

For example, in Figure 11, we can see how the posteriors (p_β)_{β≥0} of bounded-optimal decision-makers with different cost functions for N = 3 and with utility U = (0.8, 1.0, 0.4) leave a trace in probability space, by moving away from an exemplary prior q = (1/3, 1/2, 1/6) and eventually arriving at the rational solution (0, 1, 0).

Figure 11. Paths of bounded-optimal decision-makers in P_Ω for N = 3. The straight lines in the background denote level sets of expected utility, the solid lines are level sets of the cost functions, and the dashed curves represent the paths (p_β)_{β≥0} of a bounded-optimal decision-maker given by Equation (28) with utility U = (0.8, 1.0, 0.4), prior q = (1/3, 1/2, 1/6), and cost functions given by: (a) Kullback-Leibler divergence; (b) Tsallis relative entropy of order α = 3; and (c) Burg relative entropy.

For informational costs (i.e., proportional to Kullback-Leibler divergence), β is a resource parameter with respect to any cost function.

Proposition 3.

If (p_β)_{β≥0} is a family of bounded-optimal posteriors given by Equation (28) with C_q(p) = D_KL(p‖q), then β is a resource parameter with respect to any cost function; in particular,

q = p_0 ≺_q p_β ≺_q p_{β′}   ∀ β, β′ with β < β′.    (29)

This generalizes a result in [37] to the case of non-uniform priors, by making use of our new Characterization (v) of ≺_q, by which it suffices to show that β ↦ Σ_{i=1}^{l−1} p↓_{β,i} + a_q(k,l) p↓_{β,l} is an increasing function of β for all k, l specified in Theorem 2 (see Proposition A8 in Appendix A for details). For simplicity, we restrict ourselves to the case of the Kullback-Leibler divergence; however, the proof is analogous for cost functions of the form of Equation (20) with f being differentiable and strictly convex on [0,1] (so that f′ is strictly monotonically increasing and thus invertible on [0,1]; see [37] for the case of uniform priors).

Hence, for any β′ > 0, the posteriors (p_β)_{β<β′} of a bounded-optimal decision-making process with the Kullback-Leibler divergence as cost function can be regarded as the steps of a decision-making process (i.e., satisfying Equation (25)) with posterior p_{β′}, where each step optimally trades off utility against informational cost. This means that, with increasing β, the posteriors p_β do not only reduce uncertainty relative to q in the sense of the Kullback-Leibler divergence, but also in the sense of any other cost function.

The important case of bounded-optimal decision-makers with informational costs is termed information-theoretic bounded rationality [14,18,62] and is studied more closely in the following sections.

3.2. Information-Theoretic Bounded Rationality

For bounded-optimal decision-making processes with informational costs, the unconstrained optimization problem in Equation (28) takes the form max_{p∈P_Ω} F[p], where

F[p] := E_p[U] − (1/β) D_KL(p‖q),    (30)

which has a unique maximum p_β, the bounded-optimal posterior given by

p_β(x) = (1/Z_β) q(x) e^{βU(x)},    (31)

with normalization constant Z_β. This form can easily be derived by finding the zeros of the functional derivative of the objective functional in Equation (30) with respect to p (with an additional normalization constraint), whereas the uniqueness follows from the convexity of the mapping p ↦ D_KL(p‖q). For the actual maximum of F, we obtain

F_β := max_{p∈P_Ω} F[p] = F[p_β] = (1/β) log Z_β,

so that p_β(x) = q(x) e^{β(U(x) − F_β)}.
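Equation (31) is straightforward to evaluate numerically. The following sketch (using the utility and prior of Figure 11 for concreteness) sweeps the resource parameter β and prints the bounded-optimal posteriors p_β together with the Free Energy F_β = (1/β) log Z_β:

import numpy as np

def bounded_optimal(q, U, beta):
    """Bounded-optimal posterior p_β(x) = q(x)·e^{βU(x)}/Z_β and Free Energy F_β (Equations (30)–(31))."""
    w = q * np.exp(beta * U)
    Z = w.sum()
    return w / Z, np.log(Z) / beta

q = np.array([1/3, 1/2, 1/6])    # prior of Figure 11
U = np.array([0.8, 1.0, 0.4])    # utility of Figure 11
for beta in (0.01, 1.0, 10.0, 100.0):
    p_beta, F_beta = bounded_optimal(q, U, beta)
    print(beta, np.round(p_beta, 4), round(F_beta, 4))
# β → 0 stays at the prior q; β → ∞ concentrates on argmax U, the rational solution (0, 1, 0)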

Due to its analogy with physics, in particular thermodynamics (see, e.g., [18]), the maximization of Equation (30) is known as the Free Energy principle of information-theoretic bounded rationality, pioneered in [14,18,62], further developed in [63,64], and applied in recent studies of artificial systems, such as generative neural networks trained by Markov chain Monte Carlo methods [65], or in reinforcement learning as an adaptive regularization strategy [66,67], as well as in recent experimental studies on human behavior [68,69]. Note that there is a formal connection between Equation (30) and the Free Energy principle of active inference [70]; however, as discussed in Section 6.3 of [64], the two Free Energy principles have conceptually different interpretations.

Example 4

(Bayes rule as a bounded-optimal posterior). In Bayesian inference, the parameter θ of the distribution p_θ of a random variable Y is inferred from a given dataset d = {y_1, …, y_N} of observations of Y by treating the parameter itself as a random variable Θ with a prior distribution q(Θ). The parameterized distribution of Y evaluated at an observation y_i ∈ d given a certain value of Θ, i.e., p(y_i|Θ=θ), is then understood as a function of θ, known as the likelihood of the datapoint y_i under the assumption of Θ = θ. After seeing the dataset d, the belief about Θ is updated by using Bayes rule

p(θ) = q(θ) p(d|θ) / E_{q(Θ)}[p(d|Θ)].

This takes the form of a bounded-optimal posterior in Equation (31) with β = N and utility function given by the average log-likelihood per datapoint,

U(θ) := (1/N) log p(d|θ) = (1/N) Σ_{i=1}^N log p(y_i|θ),

since then Bayes rule reads

p(θ) = (1/Z) q(θ) e^{βU(θ)}.    (32)

The corresponding Free Energy in Equation (30), which is maximized by Equation (32),

F[p(Θ)] = E_{p(Θ)}[U(Θ)] − (1/β) D_KL(p(Θ)‖q(Θ)) = (1/N) E_{p(Θ)}[ log p(d|Θ) − log(p(Θ)/q(Θ)) ] = −(1/N) D_KL( p(Θ) ‖ q(Θ) p(d|Θ) )    (33)

coincides with the variational Free Energy F_var from Bayesian statistics. Indeed, from Equation (33), it is easy to see that F assumes its maximum when p(Θ) is proportional to q(Θ)p(d|Θ), that is, when p(Θ) is given by Equation (32). In the literature, F_var is used in the variational characterization of Bayes rule, in cases when the form of Equation (32) cannot be achieved exactly but instead is approximated by optimizing Equation (33) over the parameters ϑ of a parameterized distribution p_ϑ(Θ) [71,72].
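Example 4 can be verified numerically on a discretized parameter space. In the sketch below (a hypothetical Bernoulli model and dataset of our own choosing, not from the paper), the bounded-optimal posterior with β = N and U equal to the average log-likelihood coincides exactly with the Bayes posterior:

import numpy as np

theta = np.linspace(0.01, 0.99, 99)        # discretized values of Θ
q = np.full(theta.shape, 1 / theta.size)   # uniform prior belief q(Θ)
d = np.array([1, 0, 1, 1, 0, 1, 1, 1])     # hypothetical Bernoulli observations
N, k = d.size, d.sum()

U = (k * np.log(theta) + (N - k) * np.log(1 - theta)) / N  # U(θ) = (1/N)·log p(d|θ)

w = q * np.exp(N * U)                      # Equation (32) with β = N
p_bounded = w / w.sum()

lik = theta**k * (1 - theta)**(N - k)      # likelihood p(d|θ)
p_bayes = q * lik / np.sum(q * lik)        # Bayes rule
print(np.allclose(p_bounded, p_bayes))     # True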

In the following section, we show that the Free Energy F of a bounded rational decision-making process satisfies a recursivity property, which allows the interpretation of F as a certainty-equivalent.

3.3. The Recursivity of F and the Value of a Decision Problem

Consider a bounded-optimal decision-maker with an informational cost function deciding about a random variable Z with values in Ω that is decomposed into the random variables X and Y, i.e., Z = (X,Y). This decomposition can be understood as a two-step process, where first a decision about a partition A_i of the full search space Ω = ⋃_{i∈I} A_i is made, represented by a random variable X with values in I, followed by a decision about Y inside the partition selected by X (see Figure 6).

Since p(Z) = p(X) p(Y|X), by the additivity of the Kullback-Leibler divergence under coarse-graining (Proposition A7), we have

F[p(Z)] = E_{p(Z)}[U(Z)] − (1/β) D_KL(p(Z)‖q(Z)) = E_{p(X)}[ E_{p(Y|X)}[U(X,Y)] − (1/β) D_KL(p(Y|X)‖q(Y|X)) ] − (1/β) D_KL(p(X)‖q(X)),

and therefore, if F_β[p(Y|X)] := E_{p(Y|X)}[U(X,Y)] − (1/β) D_KL(p(Y|X)‖q(Y|X)) denotes the Free Energy of the second step,

F[p(X) p(Y|X)] = E_{p(X)}[ F_β[p(Y|X)] ] − (1/β) D_KL(p(X)‖q(X)).    (34)

In particular, the Free Energy F_β[p(Y|X)] of the second decision-step plays the role of the utility function of the first decision-step (see Figure 12). In Equation (34), the two decision-steps have the same resource parameter β, controlling the strength of the constraint on the total informational costs

D_KL(p(Z)‖q(Z)) = D_KL(p(X)‖q(X)) + E_{p(X)}[ D_KL(p(Y|X)‖q(Y|X)) ].

Figure 12. Recursivity of the Free Energy under coarse-graining. The decision about Z = (X,Y) is equivalent to a two-step process consisting of the decision about X and the decision about Y given X. The objective function for the first step is the Free Energy of the second step.

More generally, each step might have a separate information-processing constraint, which requires two resource parameters β_1 and β_2, and results in the total Free Energy

F[p(X), p(Y|X)] = E_{p(X)}[ F_{β_2}[p(Y|X)] ] − (1/β_1) D_KL(p(X)‖q(X)).
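For a single resource parameter β, the recursivity of Equation (34) can be checked numerically at the optimum: composing the bounded-optimal first-step posterior, whose utilities are the second-step Free Energies, with the bounded-optimal second-step posteriors reproduces the one-shot solution. A sketch with hypothetical utility values (the actual values of Example 5 below are only given in Figure 13):

import numpy as np

def gibbs(q, U, beta):
    """Bounded-optimal posterior and Free Energy of Equation (31)."""
    w = q * np.exp(beta * U)
    return w / w.sum(), np.log(w.sum()) / beta

beta = 0.9
U = np.array([1.0, 0.5, 0.8, 0.2])   # hypothetical utilities for z_1, …, z_4
qZ = np.full(4, 1/4)                 # uniform prior over Ω

pZ, F_oneshot = gibbs(qZ, U, beta)   # one-shot decision about Z

# two-step decision with partitions {z_1, z_2} and {z_3, z_4}
F_parts, p_cond = [], []
for block in ([0, 1], [2, 3]):
    qY = qZ[block] / qZ[block].sum()           # conditional prior q(Y|X) inside the partition
    pY, FY = gibbs(qY, U[block], beta)         # second-step posterior and Free Energy
    p_cond.append(pY); F_parts.append(FY)

qX = np.array([qZ[:2].sum(), qZ[2:].sum()])    # coarse-grained prior q(X)
pX, F_twostep = gibbs(qX, np.array(F_parts), beta)  # second-step Free Energies act as first-step utilities

print(np.isclose(F_oneshot, F_twostep))                                     # True: Equation (34) at the optimum
print(np.allclose(pZ, np.concatenate([pX[0]*p_cond[0], pX[1]*p_cond[1]])))  # True: composed = one-shot posterior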

Example 5.

Consider a bounded-rational decision-maker with informational cost function and a utility function U defined on a set Ω = {z_1, …, z_4} with values as given in Figure 13 and an information-processing bound of 0.2 bits (β ≈ 0.9). If we partition Ω into the disjoint subsets {z_1, z_2} and {z_3, z_4}, then the decision about Z can be decomposed into two steps, Z = (X,Y): the decision about X corresponding to the choice of the partition, and the decision about Y given X corresponding to the choice of z_i inside the given partition determined by X. According to Equation (34), the choice of the partition X = x_i is made not according to the achieved expected utility inside each partition, but according to the Free Energy (see Figure 13).

Figure 13. The Free Energy as certainty-equivalent (Example 5). (a) Utility function U as a function of z_i (top) and expected utilities for the coarse-grained partitions {z_1, z_2} and {z_3, z_4} corresponding to the choices x_1 and x_2, respectively, for a bounded-rational decision-maker with β = 0.9 (bottom). (b) The bounded-optimal probability distribution over x_i (top) does not correspond to the expected utilities in (a) but to the Free Energy of the second decision-step, i.e., the decision about Y given X (bottom).

Therefore, a bounded rational decision-maker that has the choice among decision-problems ideally should base its decision not on the expected utility that might be achieved but on the Free Energy of the subordinate problems. In other words, the Free Energy quantifies the value of a decision-problem that, besides the achieved average utility, also takes the information-processing costs into account.

3.4. Multi-Task Decision-Making and the Optimal Prior

Thus far, we have considered decision-making problems with utility functions defined on Ω only, modeling a single decision-making task. This is extended to multi-task decision-making problems by utility functions of the form U : W × Ω → ℝ, (w,x) ↦ U(w,x), where the additional variable w ∈ W represents the current state of the world. Different world states w in general lead to different optimal decisions x*(w) := argmax_{x∈Ω} U(w,x). For example, in a chess game, the optimal moves depend on the current board configuration the players are faced with.

The prior q for a bounded-rational multi-task decision-making problem may either depend or not depend on the world state w ∈ W. In the first case, the multi-task decision-making problem is just given by multiple single-task problems, i.e., for each w ∈ W, q(X|W=w) and p(X|W=w) are the prior and posterior of a bounded rational decision-making process with utility function x ↦ U(w,x), as described in the previous sections. In the case when there is a single prior for all world states w ∈ W, the Free Energy is

F[p(X|W)] = E_{p(W)}[ E_{p(X|W)}[U(W,X)] − (1/β) D_KL(p(X|W)‖q(X)) ],    (35)

where p(W) is a given world state distribution. Note that, for simplicity, we assume that β is independent of w ∈ W, which means that only the average information-processing is constrained, in contrast to the information-processing being constrained for each world state, which in general would result in β being a function of w. Similar to single-task decision-making (Equation (31)), the maximum of Equation (35) is achieved by

p_β(x|w) = (1/Z_β(w)) q(x) e^{βU(w,x)},    (36)

with normalization constant Z_β(w). Interestingly, the deliberation cost in Equation (35) depends on how well the prior was chosen to reach all posteriors without violating the processing constraint. In fact, viewing the Free Energy in Equation (35) as a function of both posterior and prior, F[p(X|W)] = F[p(X|W), q(X)], and optimizing for the prior yields the marginal of the joint distribution p(W,X) = p(W)p(X|W), i.e., the mean of the posteriors for the different world states,

q*(X) := argmax_{q(X)} F[p(X|W), q(X)] = E_{p(W)}[p(X|W)].    (37)

Similar to Equation (31), Equation (37) follows from finding the zeros of the functional derivative of the Free Energy with respect to q(X) (modified by an additional term for the normalization constraint).

Optimizing the Free Energy F[p(X|W),q(X)] for both prior and posterior can be achieved by iterating Equations (36) and (37). This results in an alternating optimization algorithm, originally developed independently by Blahut and Arimoto to calculate the capacity of a memoryless channel [73,74] (see [75] for a convergence proof by Csiszár and Tusnády). Note that

$$F[p(X|W), q^*(X)] = \mathbb{E}_{p(W)p(X|W)}[U(W,X)] - \tfrac{1}{\beta}\, I(W;X),$$

in particular that the information-processing cost is now given by the mutual information I(W;X) between the random variables W and X. In this form, we can see that the Free Energy optimization with respect to prior and posterior is equivalent to the optimization problem in classical rate-distortion theory [76], where U is given by the negative of the distortion measure.
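For concreteness, the following is a minimal numerical sketch of this alternating scheme (our own illustration based on the stated equations, not code from the paper), assuming finite world and choice spaces represented as index sets, with the utility stored as an array U[w, x]; the function name and array layout are assumptions of this sketch:

```python
import numpy as np

def blahut_arimoto(U, p_w, beta, n_iter=200):
    """Alternate Equation (36) (bounded-optimal posterior) and
    Equation (37) (marginal prior) for a utility array U[w, x]."""
    n_x = U.shape[1]
    q = np.full(n_x, 1.0 / n_x)          # initial prior (uniform)
    for _ in range(n_iter):
        # Equation (36): p(x|w) proportional to q(x) * exp(beta * U(w, x));
        # subtracting the row-wise maximum avoids overflow for large beta.
        p = q[None, :] * np.exp(beta * (U - U.max(axis=1, keepdims=True)))
        p /= p.sum(axis=1, keepdims=True)
        # Equation (37): the optimal prior is the p(W)-average of posteriors.
        q = p_w @ p
    return p, q
```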

As in rate-distortion theory, where compression algorithms are analyzed with respect to the rate-distortion function, any decision-making system can now be analyzed with respect to informational bounded-optimality. More precisely, when plotting the achieved expected utility against the information-processing resources of a bounded-rational decision-maker with optimal prior, we obtain a Pareto-optimality curve that forms an efficiency frontier that cannot be surpassed by any decision-making process (see Figure 14c).
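One way to trace such an efficiency frontier numerically (again our own sketch, reusing blahut_arimoto from above, with U and p_w assumed given) is to sweep the inverse temperature β and record the mutual information and expected utility of each bounded-optimal solution:

```python
# Sweep beta to trace a utility-information curve as in Figure 14c.
betas = np.logspace(-2, 2, 50)
frontier = []
for beta in betas:
    p, q = blahut_arimoto(U, p_w, beta)
    expected_utility = np.sum(p_w[:, None] * p * U)
    # I(W;X) = sum_w p(w) sum_x p(x|w) log(p(x|w)/q(x)), with 0 log 0 = 0.
    ratio = np.divide(p, q[None, :], out=np.ones_like(p), where=p > 0)
    mutual_info_bits = np.sum(p_w[:, None] * p * np.log(ratio)) / np.log(2)
    frontier.append((mutual_info_bits, expected_utility))
```

The resulting points trace the solid curve of Figure 14c; suboptimal decision-making schemes fall below it.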

Figure 14.

Absolute identification task with known world state distribution: (a) utility function; (b) world state distribution (a mixture of two Gaussians); (c) expected utility as a function of information-processing resources for a bounded-optimal decision-maker with a uniform and with an optimal prior (the shaded region cannot be reached by any decision-making process); and (d) exemplary optimal priors $q^*(X)$ for different information-processing bounds.

3.5. Multi-Task Decision-Making with Unknown World State Distribution

A bounded rational decision-making process with informational cost and utility $U: W\times\Omega\to\mathbb{R}$ whose optimal prior $q^*(X)$ is given by the marginal in Equation (37) must have perfect knowledge about the world state distribution p(W). In contrast, here we consider the case when the exact shape of the world state distribution is unknown to the decision-maker and therefore has to be inferred from the already seen world states. More precisely, we assume that the world state distribution is parameterized by a parameter θ, i.e., $p(W) = p_{\theta_{\mathrm{true}}}(W)$ for a given parameterized family $p_\theta(W)$. Since the true parameter $\theta_{\mathrm{true}}$ is unknown, θ is treated as a random variable itself, so that $p_\theta(W) = p(W|\Theta{=}\theta)$. After a dataset $d = \{w_1,\dots,w_N\} \in W^N$ of samples from $p(W|\Theta{=}\theta_{\mathrm{true}})$ has been observed, the joint distribution of all involved random variables can be written as

$$p(\Theta, D, W, X) = p(\Theta)\, p(D|\Theta)\, p(W|\Theta)\, p(X|D,W)$$

where p(Θ) denotes the decision-maker's prior belief about Θ, and $p(D{=}d|\Theta) = \prod_{i=1}^N p(w_i|\Theta)$ is the likelihood of the previously observed world states. Therefore, the resulting (multi-task) Free Energy (see Equation (35)) is given by

$$\mathbb{E}_{p(\Theta)p(D|\Theta)p(W|\Theta)}\Big[\mathbb{E}_{p(X|D,W)}[U(W,X)] - \tfrac{1}{\beta}\, D_{\mathrm{KL}}\big(p(X|D,W)\,\|\,q(X|D)\big)\Big]. \tag{38}$$

It turns out that we obtain Bayesian inference as a byproduct of optimizing Equation (38) with respect to the prior q(X|D). Indeed, by calculating the functional derivative with respect to q(X|D) of the Free Energy in Equation (38) plus an additional term for the normalization constraint of q(X|D) (with Lagrange multiplier λ), we can see that any distribution q(X|D) that optimizes Equation (38) must satisfy

$$\frac{1}{\beta}\, \mathbb{E}_{p(\Theta)p(W|\Theta)}\Big[p(D|\Theta)\, \frac{p(X|D,W)}{q(X|D)}\Big] + \lambda = 0,$$

where $\lambda\in\mathbb{R}$ is chosen such that $q(X|D{=}d) \in P_\Omega$ for any $d\in W^N$. This is equivalent to

$$q^*(X|D) = \frac{1}{Z_D}\, \mathbb{E}_{p(\Theta)}\Big[p(D|\Theta)\, \mathbb{E}_{p(W|\Theta)}\big[p(X|D,W)\big]\Big],$$

where $Z_D$ denotes the normalization constant of $q^*(X|D)$, given by $Z_D = \mathbb{E}_{p(\Theta)}[p(D|\Theta)]$, since $\mathbb{E}_{p(X|D,W)}[1] = 1$ as well as $\mathbb{E}_{p(W|\Theta)}[1] = 1$. Therefore, we obtain

$$q^*(X|D) = \mathbb{E}_{p(\Theta|D)}\Big[\mathbb{E}_{p(W|\Theta)}\big[p(X|D,W)\big]\Big]$$

with p(Θ|D) as defined in Equation (39). Hence, we have shown

Proposition 4

(Optimality of Bayesian inference). The optimal prior $q^*(X|D)$ that maximizes Equation (38) is given by $q^*(X|D) = \mathbb{E}_{p(\Theta|D)p(W|\Theta)}[p(X|D,W)]$, where $p(\Theta|D)$ is the Bayes posterior

$$p(\Theta|D) := \frac{p(\Theta)\, p(D|\Theta)}{\mathbb{E}_{p(\Theta)}[p(D|\Theta)]}. \tag{39}$$
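As an illustration of Equation (39) (our own sketch, not code from the paper), the Bayes posterior can be computed over a finite grid of candidate parameters in log-space for numerical stability; the names log_prior and log_lik are assumptions of this sketch:

```python
import numpy as np

def bayes_posterior(log_prior, log_lik, data):
    """Equation (39) on a parameter grid: log_prior[t] = log p(theta_t),
    log_lik[t, w] = log p(w|theta_t), data = list of observed world states."""
    log_post = log_prior + sum(log_lik[:, w] for w in data)
    log_post -= log_post.max()    # stabilize before exponentiating
    post = np.exp(log_post)
    # Dividing by the sum implements the normalization by E_p(Θ)[p(D|Θ)].
    return post / post.sum()
```

The optimal prior $q^*(X|D)$ of Proposition 4 is then obtained by averaging the single-task posteriors $p(X|D,w)$ over $p(W|\Theta)$ and the resulting $p(\Theta|D)$.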

4. Example: Absolute Identification Task with Known and Unknown Stimulus Distribution

Consider a bounded rational decision-maker with a multi-task utility function U such that, for each $w\in W$, $U(w,x)$ is non-zero for only one choice $x\in\Omega$, as shown in Figure 14. Here, the decision and world spaces are both finite sets of N = 20 elements. The world state distribution p(W) is given by a mixture of two Gaussian distributions, as shown in Figure 14b. Since some world states $w\in W$ are more likely than others, some choices $x\in\Omega$ are less likely to be optimal.
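To make this setup concrete, it can be sketched as follows (a hypothetical reconstruction: the exact mixture parameters behind Figure 14b are not stated in the text, so the numbers below are placeholders):

```python
import numpy as np

N = 20
U = np.eye(N)                 # U(w, x) is non-zero only for the choice x = w
grid = np.arange(N)
# Placeholder mixture of two Gaussian bumps over the N world states:
bumps = (np.exp(-0.5 * ((grid - 5.0) / 1.5) ** 2)
         + np.exp(-0.5 * ((grid - 14.0) / 2.5) ** 2))
p_w = bumps / bumps.sum()     # world state distribution p(W)
```

These arrays can be passed directly to the blahut_arimoto sketch from Section 3.4 to reproduce curves of the kind shown in Figure 14c,d.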

4.1. Known Stimulus Distribution

As can be seen in Figure 14c (dashed line), here it is not ideal to have a uniform prior distribution, $q(x) = \tfrac{1}{N}$ for all $x\in\Omega$. Instead, if the world state distribution is known perfectly and the prior has the form suggested by Equation (37), i.e., $q^*(x) = \sum_w p(w)\, p_\beta(x|w)$, then, as can be seen in Figure 14c (solid line), achieving the same expected utility as with a uniform prior requires fewer informational resources. In particular, the explicit form of $q^*$ depends on the resource parameter β (see Figure 14d). For low resource availability (small β), only the choices that correspond to the most probable world states are considered. However, for $\beta\to\infty$, we have

$$q^*(x) = \sum_w p(W{=}w)\, \delta_{w,x} = p(W{=}x),$$

because here $\lim_{\beta\to\infty} p_\beta(x|w) = \delta_{w,x}$ is the posterior of a rational decision-maker, where $\delta_{w,x}$ denotes the Kronecker delta (which is non-zero only if w = x). Hence, for decision-makers with abundant information-processing resources (large β), the optimal prior $q^*(X)$ approaches the form of the world state distribution p(W) (since here W = Ω).

4.2. Unknown Stimulus Distribution

In the case when the decision-maker has to infer its knowledge about p(W) from a set of samples $d = \{w_1,\dots,w_N\}$, we know from Section 3.5 that this is optimally done via Bayesian inference. Here, we assume a mixture of two Gaussians as a parameterization of p(W), so that $\theta = (\mu_1, \mu_2, \sigma_1, \sigma_2)$, where $\mu_i$ and $\sigma_i$ denote the mean and standard deviation of the i-th component, respectively (for simplicity, with fixed equal weights for the two mixture components).

In Figure 15a, we can see how different values of N affect the belief about the world state distribution, $p(W|D) = \mathbb{E}_{p(\Theta|D)}[p(W|\Theta)]$, when p(Θ|D) is given by the Bayes posterior (39) with a uniform prior belief p(Θ). The resulting expected utilities (averaged over samples from $p(D|\theta_{\mathrm{true}})$) as functions of the available information-processing resources are displayed in Figure 15b, which shows how the performance of a bounded-rational decision-maker with optimal prior and perfect knowledge about the true world state distribution is approached by bounded rational decision-makers with limited but increasing knowledge, given by the sample size N.

Figure 15.

Absolute identification task with unknown world state distribution: (a) average of inferred world state distributions for different dataset sizes N (standard deviations across datasets indicated by error bars); and (b) resulting utility-information curves of a bounded-rational decision-maker with optimal prior that has to infer the world state distribution from datasets of different sizes N.

Abstractly, we can view Equation (39) as the bounded-optimal solution to the decision-making problem that starts with a prior p(Θ) and arrives at a posterior $p(\Theta|D{=}d)$ after processing the samples in $d = \{w_1,\dots,w_N\}$ (see also Example 4). In fact, the posteriors shown in Figure 15a satisfy the requirements of a decision-making process with resource given by the number of data points N, when averaged over p(D). In particular, with increasing N the posteriors contain less and less uncertainty with respect to the preorder ≺ given by majorization. Accordingly, if we plot the achieved expected utility against the number of samples, we obtain an optimality curve similar to those in Figure 14c and Figure 15b. In Figure 16, we can see how Bayesian inference outperforms maximum likelihood estimation when evaluated with respect to the average expected utility of a bounded-rational decision-maker with 2 bits of information-processing resources.

Figure 16.

Optimality curve given by Bayesian inference. The average expected utility as a function of N achieved by a bounded-rational decision-maker that infers the world state distribution with the Bayes rule in Equation (39) forms an efficiency frontier that cannot be surpassed by any other inference scheme, such as maximum likelihood estimation, when starting from the same prior belief about the world.

5. Discussion

In this work, we have developed a generalized notion of decision-making in terms of uncertainty reduction. Based on the simple idea of transferring pieces of probability between the elements of a probability distribution, which we call elementary computations, we have promoted a notion of uncertainty which is known in the literature as majorization, a preorder ≺ on $P_\Omega$. Taking non-uniform initial distributions into account, we extended the concept to the notion of relative uncertainty, which corresponds to relative majorization $\prec_q$. Even though a large amount of research has been done on majorization theory, from the early works [29,34,38] through further developments [27,30,31,32,36,77,78] to modern applications [39,40,41], there is a lack of results on the more general concept of relative majorization. This does not seem to be due to a lack of interest, as can be seen from the results in [28,57,58,79], but mostly because relative majorization loses some of the appealing properties of majorization, which makes it harder to deal with; for example, permutations no longer leave the ordering $\prec_q$ invariant, in contrast to the case of a uniform prior. This restriction does, however, not affect our application of the concept to decision-making, as permutations are not considered elementary computations, since they do not diminish uncertainty. By reducing the non-uniform to the uniform case, we managed to prove new results on relative majorization (Theorem 2), which then enabled new results in other parts of the paper (Example 3 and Propositions A6 and A8), and allowed an intuitive interpretation of our final definition of a decision-making process (Definition 7) in terms of elementary computations with respect to non-uniform priors (Definition 6).

More precisely, starting from the stepwise elimination of uncertain options (Section 2.3), we have argued that decision-making can be formalized by transitions between probability distributions (Section 2.2), and arrived at the concept of decision-making processes traversing probability space from prior to posterior by successively moving pieces of probability between options such that uncertainty relative to the prior is reduced (Section 2.1). Such transformations can be quantified by cost functions, which we define as order-preserving functions with respect to $\prec_q$ and which capture the resource costs that must be expended by the process. We have shown (Propositions A1 and A6) that many known generalized entropies and divergences, which are examples of such cost functions (Examples 1 and 3), satisfy superadditivity with respect to coarse-graining. This means that under such cost functions, decision-making costs can potentially be reduced by a more intelligent search strategy, in contrast to Kullback-Leibler divergence, which was characterized as the only additive cost function (Proposition A7). There are plenty of open questions for further investigation in that regard. First, it is not clear under which assumptions on the cost functions $C_q$ superadditivity could be improved to $C_q(p) = \alpha\, D_{\mathrm{KL}}(p\,\|\,q) + r(p,q)$ with $\alpha > 0$ and $r(p,q) \geq 0$. Additionally, it would be an interesting challenge to find sufficient conditions implying superadditivity that include more cost functions than f-divergences. The field of information geometry might give further insights on the topic, since there are studies in similar directions, in particular characterizations of divergence measures in terms of information monotonicity and the data-processing inequality [48,80,81,82]. One interesting result is the characterization of Kullback-Leibler divergence as the single divergence measure that is both an f-divergence and a Bregman divergence.

In Section 3, bounded rational decision-makers were defined as decision-making processes that maximize utility under constraints on the cost function, or equivalently minimize resource costs under a minimal utility requirement. In the important case of additive cost functions (i.e., proportional to Kullback-Leibler divergence), this leads to information-theoretic bounded rationality [14,18,62,63,64], which has precursors in the economic and game-theoretic literature [4,8,11,12,13,14,15,16,19,83,84,85]. We have shown that the posteriors of a bounded rational decision-maker with increasing informational constraints leave a path in probability space that can itself be considered an anytime decision-making process, in each step perfectly trading off utility against processing costs (Proposition 3). In particular, this means that the path of a bounded rational decision-maker with informational cost decreases uncertainty with respect to all cost functions, not just Kullback-Leibler divergence. We have also studied the role of the prior in bounded rational multi-task decision-making, where we have seen that imperfect knowledge about the world state distribution leads to Bayesian inference as a byproduct, which is in line with the characterization of Bayesian inference as minimizing prediction surprise [86], but also demonstrates the wide applicability of the developed theory of decision-making with limited resources.

Finally, in Section 4, we have presented the results of a simulated bounded rational decision-maker solving an absolute identification task with and without knowledge about the world state distribution. Additionally, we have seen that Bayesian inference can be considered a decision-making process with limited resources by itself, where the resource is given by the number of available data points.

6. Conclusions

To our knowledge, this is the first principled approach to decision-making based on the intuitive idea of Pigou–Dalton-type probability transfers (elementary computations). Information-theoretic bounded rationality has been introduced by other axiomatic approaches before [8,62]. For example, in [62], a precise relation between rewards and information value is derived by postulating that systems will choose those states with high probability that are desirable for them. This leads to a direct coupling of probabilities and utility, where utility and information inherit the same structure, and only differ with respect to normalization (see [87] for similar ideas). In contrast, we assume utility and probability to be independent objects a priori that only have a strict relationship in the case of bounded-optimal posteriors. The approach in [8] introduces Kullback-Leibler divergence as disutility for decision control. Based on Hobson’s characterization [59], the authors argued that cost functions should be monotonic with respect to uniform distributions (the property in Equation (22)) and invariant under decomposition, which coincides with additivity under coarse-graining (see Examples 1 and 3). Both assumptions are special cases of our more general treatment, where cost functions must be monotonic with respect to elementary computations and are generally not restricted to being additive.

In the literature, there are many mechanistic models of decision-making that instantiate decision-making processes with limited resources. Examples include reinforcement learning algorithms with variable depth [88,89], Markov chain Monte Carlo (MCMC) models where only a certain number of samples can be evaluated [65,85,90], and evidence accumulation models that accumulate noisy evidence until either a fixed threshold is reached [91,92,93,94,95] or where thresholds are determined dynamically by explicit cost functions depending on the number of allowed evidence accumulation steps [96,97]. Many of these concrete models may be described abstractly by resource parameterizations (Definition 5). More precisely, in such cases, the posteriors $\{p_r\}_{r\in I} \subset \Gamma \subset P_\Omega$ are generated by an explicit process with process constraints Γ and resource parameter r. For example, in diffusion processes r may correspond to the amount of time allowed for evidence accumulation, in Monte Carlo algorithms r may reflect the number of MCMC steps, and in a reinforcement learning agent r may represent the number of forward-simulations. If the resource restriction is described by a monotonic cost function $r\mapsto c_r$ [96,97], then the process can be optimized by a maximization problem of the form

$$\max_{r\in I,\, p\in\Gamma_r} \big(\mathbb{E}_p[U] - c_r\big) \;=\; \max_{r\in I,\, p\in\Gamma_r} \big\{\mathbb{E}_p[U] \,\big|\, c_r \leq M\big\} \;=\; \max_{p\in\Gamma} \big\{\mathbb{E}_p[U] \,\big|\, C_q(p) \leq M'\big\},$$

where M and M′ are non-negative constants, $\Gamma_r \subset \Gamma$ denotes the subset of probability distributions with resource r, and $C_q$ denotes a cost function such that $r \mapsto C_q(p)$ for $p\in\Gamma_r$ is strictly monotonically increasing. In particular, such cases can also be regarded as bounded rational decision-making problems of the form of Equation (26).

Bounded rationality models in the literature come in a variety of flavors. In the heuristics and biases paradigm, the notion of optimization is often dismissed in its entirety [7], even though decision-makers still have to have a notion of options being better or worse, for example to adapt their aspiration levels in a satisficing scheme [98]. We have argued that, from an abstract normative perspective, we can formulate satisficing in probabilistic terms, such that one could investigate the efficiency of heuristics within this framework. Another prominent approach to bounded rationality is given by systems capable of decision-making about decision-making, i.e., meta-decision-making. Explicit decision-making processes composed of two decision steps have been studied, for example, in the reinforcement learning literature [88,89,99,100], where the first step is represented by a meta decision about whether a cheap model-free or a more expensive model-based learning algorithm is used in the second step. The meta step consists of trading off the estimated utility against the decision-making costs of the second decision step. In the information-theoretic framework of bounded rationality, this can be seen as a natural property of multi-step decision-making and the recursivity property in Equation (34), from which it follows that the value of a decision-making problem is given by its Free Energy, which, besides the achieved utility, also takes the corresponding processing costs into account. A further prominent approach to bounded rationality is computational rationality [19], where the focus lies on finding bounded-optimal programs that solve constrained optimization problems presented by the decision-maker's architecture and the task environment. As described above, such architectural constraints could be represented by a process-dependent subset $\Gamma \subset P_\Omega$, and in fact our resource costs could be included in such a subset $\Gamma_r$ as well. From this point of view, both frameworks look for bounded-optimal solutions in the sense that the search space is first restricted and then the best solution in the restricted search space is found. However, our search space consists of distributions describing probabilistic input-output maps, whereas the search space of programs would be far more detailed.

The notion of decision-making presented in this work was developed from the basic concept of uncertainty reduction by elementary computations, motivated by the simple idea of progressively eliminating options. On the one hand, it provides a promising theoretical playground that is open for further investigation (e.g., superadditivity of cost functions and minimality of relative entropy), potentially providing new insights into the connection between the fields of rationality theory and information theory; on the other hand, it serves as a general framework to describe and analyze all kinds of decision-making processes (e.g., in terms of bounded-optimality).

Appendix A. Proofs of Technical Results from Section 2 and Section 3

Proposition A1

(Superadditivity of generalized entropies under coarse-graining, Example 1). All cost functions of the form

$$C(p) = \sum_{i=1}^N f(p_i), \tag{A1}$$

with f (strictly) convex and differentiable on [0,1], and f(1) = 0, are superadditive with respect to coarse-graining, that is

$$C(Z) \geq C(X) + C(Y|X)$$

whenever Z = (X,Y), where $C(X) := C(p(X))$ and $C(Y|X) := \mathbb{E}_{p(X)}\big[C(p(Y|X))\big]$.

Proof. 

As shown in the following proof, strict convexity is not needed for superadditivity, but it is required for the definition of a cost function. First, since $\sum_i p_i = 1$, notice that we can always redefine the convex function f in Equation (A1) by $f_c(t) := f(t) - c\,(t-1)$ for an arbitrary constant $c\in\mathbb{R}$ without changing C(p) for all $p\in P_\Omega$. Hence, without loss of generality, we may assume $f'(1) = 0$, so that t = 1 is a global minimum of f (since $f(t) \geq f(1) + (t-1)f'(1) = f(1)$ due to convexity). Since C is symmetric, superadditivity under coarse-graining is equivalent to

$$C(p_1,\dots,p_N) \geq C(p_1{+}p_2,\, p_3,\dots,p_N) + (p_1{+}p_2)\, C\Big(\frac{p_1}{p_1+p_2},\, \frac{p_2}{p_1+p_2}\Big). \tag{A2}$$

This simply follows by induction, since Equation (A2) corresponds to the partitioning $\Omega = \bigcup_{j=1}^{N-1} A_j$ with $A_1 = \{x_1, x_2\}$ and $A_j = \{x_{j+1}\}$ for all $j = 2,\dots,N-1$ (see also [101], Proposition 2.3.5). In terms of f, Equation (A2) reads

$$f(p_1) + f(p_2) \geq f(p_1{+}p_2) + (p_1{+}p_2)\Big[f\Big(\frac{p_1}{p_1+p_2}\Big) + f\Big(\frac{p_2}{p_1+p_2}\Big)\Big].$$

By setting $u = p_1 + p_2$ and $v = \frac{p_1}{p_1+p_2}$, this is equivalent to

$$f(uv) + f(u(1{-}v)) \geq f(u) + u\big[f(v) + f(1{-}v)\big] \tag{A3}$$

for all $u,v\in[0,1]$. Writing $F_v(u) := f(uv) + f(u(1{-}v)) - f(u) - u\big(f(v) + f(1{-}v)\big)$ and noting that $F_v(0) \geq 0$ and $F_v(1) = 0$, it suffices to show that $F_v'(u) \leq 0$, which shows that $F_v$ is monotonically decreasing from $F_v(0)$ to $F_v(1) = 0$ and thus $F_v(u) \geq 0$ for all $u\in[0,1]$. We have for all $v,u\in[0,1]$

$$F_v'(u) = v f'(uv) + (1{-}v) f'(u(1{-}v)) - f'(u) - f(v) - f(1{-}v).$$

By the symmetry of $F_v$ under the replacement of v by 1 − v, without loss of generality, we may assume that $v \leq \tfrac{1}{2}$, so that $uv \leq u(1{-}v) \leq u$. Since f is convex, $f'$ is monotonically increasing on [0,1], and thus $f'(uv) \leq f'(u(1{-}v)) \leq f'(u)$. In particular,

$$f'(u) = v f'(u) + (1{-}v) f'(u) \geq v f'(uv) + (1{-}v) f'(u(1{-}v)),$$

and thus, since $f(v) + f(1{-}v) \geq 0$, it follows that $F_v'(u) \leq 0$, which completes the proof. □
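As a quick numerical sanity check of the proposition (our own illustration, not part of the original proof), the inequality can be verified for a particular convex f with f(1) = 0, e.g., $f(t) = t^2 - t$:

```python
import numpy as np

rng = np.random.default_rng(0)

def C(p):
    return np.sum(p**2 - p)              # C(p) = sum_i f(p_i), f(t) = t^2 - t

p_joint = rng.random((3, 4))
p_joint /= p_joint.sum()                 # random joint distribution p(x, y)
p_x = p_joint.sum(axis=1)                # marginal p(x)
p_y_given_x = p_joint / p_x[:, None]     # conditionals p(y|x)

C_Z = C(p_joint.ravel())                                       # C(Z), Z = (X, Y)
C_sum = C(p_x) + np.dot(p_x, [C(row) for row in p_y_given_x])  # C(X) + C(Y|X)
assert C_Z >= C_sum                      # superadditivity, Equation (A2)
```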

Proposition A2

(Characterization of Shannon entropy, Example 1). If a cost function C is additive under coarse-graining, that is if C(Z)=C(X)+C(Y|X) with the notation from Proposition A1, then

$$C = -\alpha H$$

for some $\alpha \geq 0$, i.e., C is proportional to the negative Shannon entropy H.

Proof. 

Since uniform distributions over N options are majorized by uniform distributions over N′ < N options (see Equation (7)), it follows for any cost function C that the function f defined by $f(N) := -C\big(\tfrac{1}{N},\dots,\tfrac{1}{N}\big)$ is monotonically increasing. Therefore, the claim follows directly from Shannon's proof [22], who showed that this monotonicity, additivity under coarse-graining, and continuity determine Shannon entropy up to a constant factor. □

Proposition A3

(Proposition 1). The uniform distribution $\big(\tfrac{1}{N},\dots,\tfrac{1}{N}\big)$ is the unique minimal element in $P_\Omega$ with respect to ≺, i.e.,

$$\Big(\tfrac{1}{N},\dots,\tfrac{1}{N}\Big) \prec p \quad \text{for all } p\in P_\Omega. \tag{A4}$$

Proof. 

For the proof of Equation (A4), let $(\Pi_i)_{i=1}^N$ be the family of all cyclic permutations of the N entries of a probability vector p, such that

$$\Pi_1(p) = p,\quad \Pi_2(p) = (p_N, p_1,\dots,p_{N-1}),\quad \dots,\quad \Pi_N(p) = (p_2,\dots,p_N,p_1).$$

It follows that $\sum_{i=1}^N \Pi_i(p) = e$ for all $p\in P_\Omega$, where $e = (1,\dots,1)$ as above, and therefore the uniform distribution $\tfrac{1}{N}e$ is given by a convex combination of permutations of p, so that (iv) in Theorem 1 implies $\tfrac{1}{N}e \prec p$. There are many different ways to prove uniqueness. A direct way is to assume that there exists $q\in P_\Omega$ with $q \prec p$ for all $p\in P_\Omega$, in particular $q \prec \tfrac{1}{N}e$, so that by (iii) in Theorem 1 there exists a doubly stochastic matrix A with $\tfrac{1}{N}eA = q$. However, since $eA = e$ (A being stochastic), it follows that $q = \tfrac{1}{N}e$. An indirect way would be to use that, if $q \prec p$ for all $p\in P_\Omega$, then from Example 1 we know that this implies $H(q) \geq H(p)$ for all $p\in P_\Omega$, where H denotes the Shannon entropy, simply because −H is a cost function. In particular, q maximizes H and therefore coincides with the uniform distribution $\tfrac{1}{N}e$. □

Proposition A4

(Equivalence of (i),(iii),(v) in Theorem 2). The following are equivalent

  • (i) 

$p' \prec_q p$, i.e., p′ contains more uncertainty relative to q than p (Definition 4).

  • (iii) 

$p' = pA$ for a q-stochastic matrix A, i.e., $Ae = e$ and $qA = q$.

  • (v) 

$\sum_{i=1}^{l-1} (p'_i)^\downarrow + a_q(k,l)\, (p'_l)^\downarrow \;\leq\; \sum_{i=1}^{l-1} p_i^\downarrow + a_q(k,l)\, p_l^\downarrow$ for all $\alpha\sum_{i=1}^{l-1} q_i \leq k \leq \alpha\sum_{i=1}^{l} q_i$ and $1 \leq l \leq N$, where $a_q(k,l) := \big(\tfrac{k}{\alpha} - \sum_{i=1}^{l-1} q_i\big)/q_l$, and the arrows indicate that $(p_i/q_i)_i$ is ordered decreasingly.

Proof. 

We use the fact that $\Lambda_q: P_\Omega \to P_{\tilde\Omega}$ has a left inverse $\Lambda_q^{-1}: \Lambda_q(P_\Omega) \to P_\Omega$ satisfying $\Lambda_q^{-1}\Lambda_q = I$, where I denotes the identity on $P_\Omega$. This can be verified by simply multiplying the corresponding matrices,

$$\Lambda_q^{-1} = \begin{pmatrix} 1 \cdots 1 & & & \\ & 1 \cdots 1 & & \\ & & \ddots & \\ & & & 1 \cdots 1 \end{pmatrix}, \qquad \Lambda_q = \frac{1}{\alpha}\begin{pmatrix} \tfrac{1}{q_1} & & \\ \vdots & & \\ \tfrac{1}{q_1} & & \\ & \ddots & \\ & & \tfrac{1}{q_N} \\ & & \vdots \\ & & \tfrac{1}{q_N} \end{pmatrix},$$

where the i-th row of the $N\times\alpha$ matrix $\Lambda_q^{-1}$ contains ones in the $|A_i|$ columns indexed by $A_i$ and zeros elsewhere (so that it has $|A_1| + \dots + |A_N| = \alpha$ columns in total), and the i-th column of the $\alpha\times N$ matrix $\Lambda_q$ contains the entry $\tfrac{1}{q_i}$ in the $\alpha q_i$ rows indexed by $A_i$ and zeros elsewhere, noting that, by definition, $\alpha q_i = |A_i|$.

Characterization (v) follows from (vi) of Theorem 1 and Definition 4, since $p' \prec_q p$ if and only if

$$\sum_{i=1}^k (\Lambda_q p')^\downarrow_i \;\leq\; \sum_{i=1}^k (\Lambda_q p)^\downarrow_i$$

for all $1 \leq k \leq \alpha - 1$, from which (v) is an immediate consequence.

(i) ⟹ (iii): Assuming that $p' \prec_q p$, we have $\Lambda_q p' \prec \Lambda_q p$ and, therefore, by (iii) in Theorem 1, there exists a doubly stochastic matrix B such that $\Lambda_q p' = B^T \Lambda_q p$. With $A^T := \Lambda_q^{-1} B^T \Lambda_q$, it follows that

$$Ae = \Lambda_q^T B (\Lambda_q^{-1})^T e = \Lambda_q^T B e = \Lambda_q^T e = e,$$

where we use that $(\Lambda_q^{-1})^T e = e$ and $\Lambda_q^T e = e$, which is easy to check, and $Be = e$ from B being a doubly stochastic matrix. Note that, by slightly abusing notation, here e is always the constant vector (1,…,1), irrespective of the number of its entries (N or α, depending on the space on which the operator acts). Moreover, we have

$$A^T q = \Lambda_q^{-1} B^T \Lambda_q q = \tfrac{1}{\alpha}\, \Lambda_q^{-1} B^T e = \tfrac{1}{\alpha}\, \Lambda_q^{-1} e = q,$$

where we have used that $\Lambda_q q$ by definition is the uniform distribution on $P_{\tilde\Omega}$, i.e., $\Lambda_q q = \tfrac{1}{\alpha} e$, and therefore also $\Lambda_q^{-1} e = \alpha q$. In particular, since also $A_{ij} \geq 0$ (B, $\Lambda_q$, $\Lambda_q^{-1}$ have non-negative entries), it follows that A is a q-stochastic matrix such that $p' = A^T p = pA$.

(iii) ⟹ (i): Similarly, if A is a q-stochastic matrix with $p' = pA$, then $\Lambda_q p' = B^T \Lambda_q p$, where $B := (\Lambda_q^{-1})^T A \Lambda_q^T$ satisfies $B^T e = \alpha \Lambda_q A^T q = \alpha \Lambda_q q = e$ and $Be = (\Lambda_q^{-1})^T A e = (\Lambda_q^{-1})^T e = e$, where we have used that $\Lambda_q^{-1} e = \alpha q$, $\Lambda_q q = \tfrac{1}{\alpha} e$, $\Lambda_q^T e = e$, and $(\Lambda_q^{-1})^T e = e$. In particular, since also $B_{ij} \geq 0$, B is doubly stochastic and therefore $\Lambda_q p' \prec \Lambda_q p$, i.e., $p' \prec_q p$. □

Proposition A5

(Proposition 2). The prior $q\in P_\Omega$ is the unique minimal element in $P_\Omega$ with respect to $\prec_q$, that is $q \prec_q p$ for all $p\in P_\Omega$.

Proof. 

Let $p\in P_\Omega$, and let $P := \Lambda_q p$ denote its representation in $P_{\tilde\Omega}$. Then, $Q := \Lambda_q q \prec P$ by Proposition A3 (uniform distributions are minimal) and therefore $q = \Lambda_q^{-1} Q \prec_q p$; in particular, q is minimal with respect to $\prec_q$. For uniqueness, let p be possibly another minimal element. Then, $p \prec_q q$ and therefore by (iii) in Theorem 2 there exists a q-stochastic matrix A with $p = qA$. However, since A is q-stochastic, $qA = q$, and thus $p = q$. □

Proposition A6

(Example 3: Superadditivity of f-divergences under coarse-graining). All relative cost functions of the form

$$C_q(p) = \sum_{i=1}^N q_i\, f\Big(\frac{p_i}{q_i}\Big), \tag{A5}$$

with f (strictly) convex and differentiable on $[0,\infty)$, and f(1) = 0, are superadditive with respect to coarse-graining, that is

$$C_q(Z) \geq C_q(X) + C_q(Y|X)$$

whenever Z = (X,Y), where $C_q(X) := C_{q(X)}(p(X))$ and $C_q(Y|X) := \mathbb{E}_{p(X)}\big[C_{q(Y|X)}(p(Y|X))\big]$, which for cost functions that are symmetric with respect to permutations of $\big((p_i,q_i)\big)_{i=1,\dots,N}$ (such as Equation (20)) is equivalent to

$$C_q(p) \geq C_{(q_1+q_2,\, q_3,\dots,q_N)}\big(p_1{+}p_2,\, p_3,\dots,p_N\big) + (p_1{+}p_2)\, C_{\big(\frac{q_1}{q_1+q_2},\, \frac{q_2}{q_1+q_2}\big)}\Big(\frac{p_1}{p_1+p_2},\, \frac{p_2}{p_1+p_2}\Big). \tag{A6}$$

Proof. 

This is a simple corollary to Proposition A1, after establishing the following interesting property of cost functions of the form of Equation (A5):

$$C_q(p) = C_{\Lambda_q q}(\Lambda_q p), \tag{A7}$$

where $\Lambda_q$ denotes the representation mapping defined in Section 2.3 that maps q to a uniform distribution on an elementary decision space $\tilde\Omega$ of $|\tilde\Omega| = \alpha$ elements, given by $(\Lambda_q p)(\omega) = \tfrac{1}{\alpha}\tfrac{p_i}{q_i}$ whenever $\omega\in A_i$, where $\{A_i\}_{i=1}^N$ is a disjoint partition of $\tilde\Omega$ such that $|A_i| = \alpha q_i$. Equation (A7) then follows from

$$C_{\Lambda_q q}(\Lambda_q p) = \sum_{\omega\in\tilde\Omega} (\Lambda_q q)(\omega)\, f\Big(\frac{(\Lambda_q p)(\omega)}{(\Lambda_q q)(\omega)}\Big) = \frac{1}{\alpha}\sum_{i=1}^N \sum_{\omega\in A_i} f\Big(\frac{p_i}{q_i}\Big) = \sum_{i=1}^N q_i\, f\Big(\frac{p_i}{q_i}\Big),$$

where we use that $\sum_{\omega\in A_i} 1 = |A_i| = \alpha q_i$. Hence, the case of a non-uniform prior reduces to the case of a uniform prior, which is shown in Proposition A1. □

Proposition A7

(Example 3: Characterization of Kullback-Leibler divergence). If $C_q$ is a continuous cost function relative to q that is additive under coarse-graining, that is $C_q(Z) = C_q(X) + C_q(Y|X)$ in the notation of Proposition A6, then

$$C_q(p) = \alpha\, D_{\mathrm{KL}}(p\,\|\,q) \tag{A8}$$

for some $\alpha \geq 0$, where $D_{\mathrm{KL}}(p\,\|\,q)$ denotes the Kullback-Leibler divergence $D_{\mathrm{KL}}(p\,\|\,q) = \sum_i p_i \log(p_i/q_i)$.

Proof. 

First, we show that any relative cost function that is additive under coarse-graining satisfies Equation (22), the monotonicity property for uniform distributions, where $f(M,N)$ denotes the cost $C_{u_N}(u_M)$ of a uniform distribution $u_M$ over M elements relative to a uniform distribution $u_N$ over $N \geq M$ elements. Once Equation (22) has been established, Equation (A8) goes back to a result by Hobson [59] (see also [8]), whose proof is a modification of Shannon's axiomatic characterization [22].

The first property in Equation (22) actually holds for all relative cost functions: For $q = u_N$ with $N = |\Omega|$, we have $p \prec_q p'$ iff $p \prec p'$, and thus the first property follows from Equation (7); the same is true in the case when $N < |\Omega|$, since we always assume that p, p′ are absolutely continuous with respect to q, which allows to redefine Ω to only contain the N options covered by q.

For the proof of the second property in Equation (22), let $N' \leq N$, let the random variable X index the partition $\{E_1, E_2\}$ of Ω, where $E_1$ denotes the support of $u_{N'}$ and $E_2 = \Omega\setminus E_1$ its complement, and let Y represent the choice inside the selected partition $E_i$ given X = i. Letting $q(Z) = u_N$ and $p(Z) = u_{N'}$, it follows from additivity under coarse-graining that $C_{q(X)}(p(X)) = C_{u_N}(u_{N'}) = f(N',N)$ (using that $C(Y|X{=}1) = C_{u_{N'}}(u_{N'}) = 0$), and letting $p(Z) = u_M$ with $M \leq N'$, we obtain

$$f(M,N) = C_{q(X)}(p(X)) + C(Y|X{=}1) = f(N',N) + f(M,N'),$$

since $p(X{=}1) = 1$, $p(X{=}2) = 0$, and $C(Y|X{=}1) = C_{u_{N'}}(u_M)$; thus $f(M,N) \geq f(M,N')$. □

Proposition A8

(Proposition 3). If $(p_\beta)_{\beta\geq 0}$ is a family of bounded-optimal posteriors given by Equation (28) with cost function $C_q(p) = D_{\mathrm{KL}}(p\,\|\,q)$, then β is a resource parameter with respect to any cost function; in particular,

$$q = p_0 \prec_q p_\beta \prec_q p_{\beta'} \quad \text{for all } \beta,\beta' \text{ with } \beta < \beta'. \tag{A9}$$

Proof. 

Part of the proof generalizes a result in [37] to the case of a non-uniform prior q, by making use of our new Characterization (v) of $\prec_q$, by which it suffices to show that $\beta \mapsto \sum_{i=1}^{l-1} p_{\beta,i}^\downarrow + a_q(k,l)\, p_{\beta,l}^\downarrow$ is an increasing function of β for all k, l specified in Theorem 2. By Equation (31), we have

$$\partial_\beta\, p_{\beta,i} = \partial_\beta\, \frac{1}{Z_\beta}\, q_i\, e^{\beta U_i} = p_{\beta,i}\big(U_i - \mathbb{E}_{p_\beta}[U]\big),$$

where the $U_i$ are the decreasingly arranged utility values U(x) for $x\in\Omega$, so that also $p_{\beta,i}/q_i$ is arranged decreasingly. From the ordering of the $U_i$, it is easy to see that

$$\Big(\sum_{i=1}^k p_i U_i + p_{k+1} U_{k+1}\Big) \sum_{j=1}^k p_j \;\leq\; \sum_{i=1}^k p_i U_i\, \Big(\sum_{j=1}^k p_j + p_{k+1}\Big)$$

with the notation $p_k := p_{\beta,k}$, from which it follows that $S_k := \sum_{i=1}^k \hat p_i U_i$ with $\hat p_i := p_i / \sum_{j=1}^k p_j$ is monotonically decreasing in k (with $S_N = \mathbb{E}_p[U]$), and therefore

$$\sum_{i=1}^k p_i\big(U_i - \mathbb{E}_p[U]\big) = \Big(\sum_{i=1}^k \hat p_i U_i - \mathbb{E}_p[U]\Big)\sum_{j=1}^k p_j = (S_k - S_N)\sum_{j=1}^k p_j \;\geq\; 0$$

for all $k \leq N$. Hence, it suffices to show that $\sum_{i=1}^{l-1} x_i + t\, x_l \geq 0$ whenever $\sum_{i=1}^k x_i \geq 0$ for all k and $t\in[0,1]$. If $x_l \geq 0$, there is nothing to show, and if $x_l < 0$, we have $\sum_{i=1}^{l-1} x_i + t\, x_l \geq \sum_{i=1}^l x_i \geq 0$, which completes the proof of $p_\beta \prec_q p_{\beta'}$.

It remains to be shown that $p_{\beta'} \not\prec_q p_\beta$. This follows again from (v) in Theorem 2, more precisely from the requirement that $p_{\beta',1} \leq p_{\beta,1}$ if $p_{\beta'} \prec_q p_\beta$, after establishing that $\beta \mapsto Z_\beta^{-1} e^{\beta U_1}$ is monotonically increasing. For the latter, note that, since the $U_i$ are ordered decreasingly, $\sum_{i=1}^N U_i\, q_i\, e^{\beta U_i} \leq U_1 \sum_{i=1}^N q_i\, e^{\beta U_i}$, from which it follows that $\partial_\beta\big(Z_\beta^{-1} e^{\beta U_1}\big) \geq 0$, which completes the proof. □

Author Contributions

Conceptualization, S.G. and D.A.B.; Formal analysis, S.G.; Funding acquisition, D.A.B.; Methodology, S.G. and D.A.B.; Software, S.G.; Supervision, D.A.B.; Visualization, S.G.; Writing—original draft, S.G.; and Writing—review and editing, S.G. and D.A.B.

Funding

This research was funded by the European Research Council, grant number ERC-StG-2015-ERC, Project ID: 678082, “BRISC: Bounded Rationality in Sensorimotor Coordination”.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  • 1.Von Neumann J., Morgenstern O. Theory of Games and Economic Behavior. Princeton University Press; Princeton, NJ, USA: 1944. [Google Scholar]
  • 2.Simon H.A. A Behavioral Model of Rational Choice. Q. J. Econ. 1955;69:99–118. doi: 10.2307/1884852. [DOI] [Google Scholar]
  • 3.Russell S.J., Subramanian D. Provably Bounded-optimal Agents. J. Artif. Intell. Res. 1995;2:575–609. doi: 10.1613/jair.133. [DOI] [Google Scholar]
  • 4.Ochs J. Games with Unique, Mixed Strategy Equilibria: An Experimental Study. Games Econ. Behav. 1995;10:202–217. doi: 10.1006/game.1995.1030. [DOI] [Google Scholar]
  • 5.Lipman B.L. Information Processing and Bounded Rationality: A Survey. Can. J. Econ. 1995;28:42–67. doi: 10.2307/136022. [DOI] [Google Scholar]
  • 6.Aumann R.J. Rationality and Bounded Rationality. Games Econ. Behav. 1997;21:2–14. doi: 10.1006/game.1997.0585. [DOI] [Google Scholar]
  • 7.Gigerenzer G., Selten R. Bounded Rationality: The Adaptive Toolbox. MIT Press; Cambridge, MA, USA: 2001. [Google Scholar]
  • 8.Mattsson L.G., Weibull J.W. Probabilistic choice and procedurally bounded rationality. Games Econ. Behav. 2002;41:61–78. doi: 10.1016/S0899-8256(02)00014-3. [DOI] [Google Scholar]
  • 9.Jones B.D. Bounded Rationality and Political Science: Lessons from Public Administration and Public Policy. J. Public Adm. Res. Theory. 2003;13:395–412. doi: 10.1093/jopart/mug028. [DOI] [Google Scholar]
  • 10.Sims C.A. Implications of rational inattention. J. Monet. Econ. 2003;50:665–690. doi: 10.1016/S0304-3932(03)00029-1. [DOI] [Google Scholar]
  • 11.Wolpert D.H. Information Theory—The Bridge Connecting Bounded Rational Game Theory and Statistical Physics. In: Braha D., Minai A.A., Bar-Yam Y., editors. Complex Engineered Systems: Science Meets Technology. Springer; Berlin/Heidelberg, Germany: 2006. pp. 262–290. [Google Scholar]
  • 12.Howes A., Lewis R.L., Vera A. Rational adaptation under task and processing constraints: Implications for testing theories of cognition and action. Psychol. Rev. 2009;116:717–751. doi: 10.1037/a0017187. [DOI] [PubMed] [Google Scholar]
  • 13.Still S. Information-theoretic approach to interactive learning. Europhys. Lett. 2009;85:28005. doi: 10.1209/0295-5075/85/28005. [DOI] [Google Scholar]
  • 14.Tishby N., Polani D. Information Theory of Decisions and Actions. In: Cutsuridis V., Hussain A., Taylor J.G., editors. Perception-Action Cycle: Models, Architectures, and Hardware. Springer; New York, NY, USA: 2011. pp. 601–636. [Google Scholar]
  • 15.Spiegler R. Bounded Rationality and Industrial Organization. Oxford University Press; Oxford, UK: 2011. [Google Scholar]
  • 16.Kappen H.J., Gómez V., Opper M. Optimal control as a graphical model inference problem. Mach. Learn. 2012;87:159–182. doi: 10.1007/s10994-012-5278-7. [DOI] [Google Scholar]
  • 17.Burns E., Ruml W., Do M.B. Heuristic Search when Time Matters. J. Artif. Intell. Res. 2013;47:697–740. doi: 10.1613/jair.4047. [DOI] [Google Scholar]
  • 18.Ortega P.A., Braun D.A. Thermodynamics as a theory of decision-making with information-processing costs. Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 2013;469:20120683. doi: 10.1098/rspa.2012.0683. [DOI] [Google Scholar]
  • 19.Lewis R.L., Howes A., Singh S. Computational Rationality: Linking Mechanism and Behavior through Bounded Utility Maximization. Top. Cogn. Sci. 2014;6:279–311. doi: 10.1111/tops.12086. [DOI] [PubMed] [Google Scholar]
  • 20.Acerbi L., Vijayakumar S., Wolpert D.M. On the Origins of Suboptimality in Human Probabilistic Inference. PLoS Comput. Biol. 2014;10:e1003661. doi: 10.1371/journal.pcbi.1003661. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Gershman S.J., Horvitz E.J., Tenenbaum J.B. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science. 2015;349:273–278. doi: 10.1126/science.aac6076. [DOI] [PubMed] [Google Scholar]
  • 22.Shannon C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948;27:379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x. [DOI] [Google Scholar]
  • 23.Renyi A. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. University of California Press; Berkeley, CA, USA: 1961. On Measures of Entropy and Information; pp. 547–561. [Google Scholar]
  • 24.Tsallis C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988;52:479–487. doi: 10.1007/BF01016429. [DOI] [Google Scholar]
  • 25.Kullback S., Leibler R.A. On Information and Sufficiency. Ann. Math. Stat. 1951;22:79–86. doi: 10.1214/aoms/1177729694. [DOI] [Google Scholar]
  • 26.Dalton H. The Measurement of the Inequality of Incomes. Econ. J. 1920;30:348–361. doi: 10.2307/2223525. [DOI] [Google Scholar]
  • 27.Marshall A.W., Olkin I., Arnold B.C. Inequalities: Theory of Majorization and Its Applications. 2nd ed. Springer; New York, NY, USA: 2011. [Google Scholar]
  • 28.Joe H. Majorization and divergence. J. Math. Anal. Appl. 1990;148:287–305. doi: 10.1016/0022-247X(90)90002-W. [DOI] [Google Scholar]
  • 29.Hardy G.H., Littlewood J., Pólya G. Inequalities. Cambridge University Press; Cambridge, UK: 1934. [Google Scholar]
  • 30.Arnold B.C. Majorization and the Lorenz Order: A Brief Introduction. Springer; New York, NY, USA: 1987. [Google Scholar]
  • 31.Pečarić J.E., Proschan F., Tong Y.L. Convex Functions, Partial Orderings, and Statistical Applications. Academic Press; Cambridge, MA, USA: 1992. [Google Scholar]
  • 32.Bhatia R. Matrix Analysis. Springer; New York, NY, USA: 1997. [Google Scholar]
  • 33.Arnold B.C., Sarabia J.M. Majorization and the Lorenz Order with Applications in Applied Mathematics and Economics. Springer International Publishing; Berlin, Germany: 2018. [Google Scholar]
  • 34.Lorenz M.O. Methods of Measuring the Concentration of Wealth. Publ. Am. Stat. Assoc. 1905;9:209–219. doi: 10.2307/2276207. [DOI] [Google Scholar]
  • 35.Pigou A.C. Wealth and Welfare. Macmillan; New York, NY, USA: 1912. [Google Scholar]
  • 36.Ruch E., Mead A. The principle of increasing mixing character and some of its consequences. Theor. Chim. Acta. 1976;41:95–117. doi: 10.1007/BF01178071. [DOI] [Google Scholar]
  • 37.Rossignoli R., Canosa N. Limit temperature for entanglement in generalized statistics. Phys. Lett. 2004;323:22–28. doi: 10.1016/j.physleta.2004.01.057. [DOI] [Google Scholar]
  • 38.Muirhead R.F. Some Methods applicable to Identities and Inequalities of Symmetric Algebraic Functions of n Letters. Proc. Edinb. Math. Soc. 1902;21:144–162. doi: 10.1017/S001309150003460X. [DOI] [Google Scholar]
  • 39.Brandão F.G.S.L., Horodecki M., Oppenheim J., Renes J.M., Spekkens R.W. Resource Theory of Quantum States Out of Thermal Equilibrium. Phys. Rev. Lett. 2013;111:250404. doi: 10.1103/PhysRevLett.111.250404. [DOI] [PubMed] [Google Scholar]
  • 40.Horodecki M., Oppenheim J. Fundamental limitations for quantum and nanoscale thermodynamics. Nat. Commun. 2013;4:2056. doi: 10.1038/ncomms3059. [DOI] [PubMed] [Google Scholar]
  • 41.Gour G., Müller M.P., Narasimhachar V., Spekkens R.W., Halpern N.Y. The resource theory of informational nonequilibrium in thermodynamics. Phys. Rep. 2015;583:1–58. doi: 10.1016/j.physrep.2015.04.003. [DOI] [Google Scholar]
  • 42.Schur I. Über eine Klasse von Mittelbildungen mit Anwendungen auf die Determinanten-Theorie. Sitz. Berl. Math. Ges. 1923;22:9–20. [Google Scholar]
  • 43.Karamata J. Sur une inégalité relative aux fonctions convexes. Publ. L’Institut Math. 1932;1:145–147. [Google Scholar]
  • 44.Hardy G., Littlewood J., Pólya G. Some Simple Inequalities Satisfied by Convex Functions. Messenger Math. 1929;58:145–152. [Google Scholar]
  • 45.Canosa N., Rossignoli R. Generalized Nonadditive Entropies and Quantum Entanglement. Phys. Rev. Lett. 2002;88:170401. doi: 10.1103/PhysRevLett.88.170401. [DOI] [PubMed] [Google Scholar]
  • 46.Gorban A.N., Gorban P.A., Judge G. Entropy: The Markov Ordering Approach. Entropy. 2010;12:1145–1193. doi: 10.3390/e12051145. [DOI] [Google Scholar]
  • 47.Csiszár I. A class of measures of informativity of observation channels. Period. Math. Hung. 1972;2:191–213. doi: 10.1007/BF02018661. [DOI] [Google Scholar]
  • 48.Amari S. α-Divergence Is Unique, Belonging to Both f-Divergence and Bregman Divergence Classes. IEEE Trans. Inf. Theory. 2009;55:4925–4931. doi: 10.1109/TIT.2009.2030485. [DOI] [Google Scholar]
  • 49.Khinchin A.Y. Mathematical Foundations of Information Theory. Dover Books on Advanced Mathematics, Dover; New York, NY, USA: 1957. [Google Scholar]
  • 50.Csiszár I. Axiomatic Characterizations of Information Measures. Entropy. 2008;10:261–273. doi: 10.3390/e10030261. [DOI] [Google Scholar]
  • 51.Aczél J., Forte B., Ng C.T. Why the Shannon and Hartley Entropies Are ‘Natural’. Adv. Appl. Probab. 1974;6:131–146. doi: 10.2307/1426210. [DOI] [Google Scholar]
  • 52.Faddeev D.K. On the concept of entropy of a finite probabilistic scheme. Usp. Mat. Nauk. 1956;11:227–231. [Google Scholar]
  • 53.Tverberg H. A New Derivation of the Information Function. Math. Scand. 1958;6:297–298. doi: 10.7146/math.scand.a-10555. [DOI] [Google Scholar]
  • 54.Kendall D.G. Functional equations in information theory. Probab. Theory Relat. Fields. 1964;2:225–229. doi: 10.1007/BF00533380. [DOI] [Google Scholar]
  • 55.Lee P.M. On the Axioms of Information Theory. Ann. Math. Stat. 1964;35:415–418. doi: 10.1214/aoms/1177703765. [DOI] [Google Scholar]
  • 56.Aczel J. On different characterizations of entropies. In: Behara M., Krickeberg K., Wolfowitz J., editors. Probability and Information Theory. Springer; Berlin/Heidelberg, Germany: 1969. pp. 1–11. [Google Scholar]
  • 57.Veinott A.F. Least d-Majorized Network Flows with Inventory and Statistical Applications. Manag. Sci. 1971;17:547–567. doi: 10.1287/mnsc.17.9.547. [DOI] [Google Scholar]
  • 58.Ruch E., Schranner R., Seligman T.H. The mixing distance. J. Chem. Phys. 1978;69:386–392. doi: 10.1063/1.436364. [DOI] [Google Scholar]
  • 59.Hobson A. A new theorem of information theory. J. Stat. Phys. 1969;1:383–391. doi: 10.1007/BF01106578. [DOI] [Google Scholar]
  • 60.Leinster T. A short characterization of relative entropy. arXiv. 20171712.04903 [Google Scholar]
  • 61.Everett H. Generalized Lagrange Multiplier Method for Solving Problems of Optimum Allocation of Resources. Oper. Res. 1963;11:399–417. doi: 10.1287/opre.11.3.399. [DOI] [Google Scholar]
  • 62.Ortega P.A., Braun D.A. A conversion between utility and information; Proceedings of the 3rd Conference on Artificial General Intelligence (AGI-2010); Washington, DC, USA. 5–8 March 2010; Cham, Switzerland: Atlantis Press, Springer International Publishing; 2010. [Google Scholar]
  • 63.Genewein T., Leibfried F., Grau-Moya J., Braun D.A. Bounded Rationality, Abstraction, and Hierarchical Decision-Making: An Information-Theoretic Optimality Principle. Front. Robot. AI. 2015;2:27. doi: 10.3389/frobt.2015.00027. [DOI] [Google Scholar]
  • 64.Gottwald S., Braun D.A. Systems of bounded rational agents with information-theoretic constraints. Neural Comput. 2019:1–37. doi: 10.1162/neco_a_01153. [DOI] [PubMed] [Google Scholar]
  • 65.Hihn H., Gottwald S., Braun D.A. Bounded Rational Decision-Making with Adaptive Neural Network Priors. In: Pancioni L., Schwenker F., Trentin E., editors. Artificial Neural Networks in Pattern Recognition. Springer International Publishing; Cham, Switzerland: 2018. pp. 213–225. [Google Scholar]
  • 66.Leibfried F., Grau-Moya J., Bou-Ammar H. An Information-Theoretic Optimality Principle for Deep Reinforcement Learning. arXiv. 20181708.01867v5 [Google Scholar]
  • 67.Grau-Moya J., Leibfried F., Vrancx P. Soft Q-Learning with Mutual-Information Regularization; Proceedings of the International Conference on Learning Representations; New Orleans, LA, USA. 6–9 March 2019. [Google Scholar]
  • 68.Ortega P.A., Stocker A. Human Decision-Making under Limited Time; Proceedings of the 30th Conference on Neural Information Processing Systems; Barcelona, Spain. 5–10 December 2016. [Google Scholar]
  • 69.Schach S., Gottwald S., Braun D.A. Quantifying Motor Task Performance by Bounded Rational Decision Theory. Front. Neurosci. 2018;12:932. doi: 10.3389/fnins.2018.00932. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Friston K.J. The free-energy principle: A rough guide to the brain? Trends Cogn. Sci. 2009;13:293–301. doi: 10.1016/j.tics.2009.04.005. [DOI] [PubMed] [Google Scholar]
  • 71.Hinton G.E., van Camp D. Keeping the Neural Networks Simple by Minimizing the Description Length of the Weights; Proceedings of the Sixth Annual Conference on Computational Learning Theory; Santa Cruz, CA, USA. 26–28 July 1993; New York, NY, USA: ACM; 1993. pp. 5–13. [Google Scholar]
  • 72.MacKay D.J.C. Developments in Probabilistic Modelling with Neural Network—Ensemble Learning. In: Kappen B., Gielen S., editors. Neural Networks: Artificial Intelligence and Industrial Applications. Springer; London, UK: 1995. pp. 191–198. [Google Scholar]
  • 73.Blahut R.E. Computation of channel capacity and rate-distortion functions. IEEE Trans. Inf. Theory. 1972;18:460–473. doi: 10.1109/TIT.1972.1054855. [DOI] [Google Scholar]
  • 74.Arimoto S. An algorithm for computing the capacity of arbitrary discrete memoryless channels. IEEE Trans. Inf. Theory. 1972;18:14–20. doi: 10.1109/TIT.1972.1054753. [DOI] [Google Scholar]
  • 75.Csiszár I., Tusnády G. Information geometry and alternating minimization procedures. Stat. Decis. Suppl. Issue. 1984;1:205–237. [Google Scholar]
  • 76.Shannon C.E. Coding theorems for a discrete source with a fidelity criterion. IRE Int. Conv. Rec. 1959;7:142–163. [Google Scholar]
  • 77.Tsuyoshi A. Majorization, doubly stochastic matrices, and comparison of eigenvalues. Linear Algebra Its Appl. 1989;118:163–248. [Google Scholar]
  • 78.Cohen J.E., Derriennic Y., Zbăganu G. Doeblin and Modern Probability (Blaubeuren, 1991) Volume 149. Amer. Math. Soc.; Providence, RI, USA: 1993. Majorization, monotonicity of relative entropy, and stochastic matrices; pp. 251–259. [Google Scholar]
  • 79.Latif N., Pečarić D., Pečarić J. Majorization, Csiszár divergence and Zipf-Mandelbrot law. J. Inequal. Appl. 2017;2017:197. doi: 10.1186/s13660-017-1472-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 80.Jiao J., Courtade T.A., No A., Venkat K., Weissman T. Information Measures: The Curious Case of the Binary Alphabet. IEEE Trans. Inf. Theory. 2014;60:7616–7626. doi: 10.1109/TIT.2014.2360184. [DOI] [Google Scholar]
  • 81.Amari S., Cichocki A. Information geometry of divergence functions. Bull. Pol. Acad. Sci. 2010;58:183–195. doi: 10.2478/v10175-010-0019-1. [DOI] [Google Scholar]
  • 82.Amari S., Karakida R., Oizumi M. Information geometry connecting Wasserstein distance and Kullback-Leibler divergence via the entropy-relaxed transportation problem. Inf. Geom. 2018;1:13–37. doi: 10.1007/s41884-018-0002-8. [DOI] [Google Scholar]
  • 83.McKelvey R.D., Palfrey T.R. Quantal Response Equilibria for Normal Form Games. Games Econ. Behav. 1995;10:6–38. doi: 10.1006/game.1995.1023. [DOI] [Google Scholar]
  • 84.Todorov E. Efficient computation of optimal actions. Proc. Natl. Acad. Sci. USA. 2009;106:11478–11483. doi: 10.1073/pnas.0710743106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85.Vul E., Goodman N., Griffiths T.L., Tenenbaum J.B. One and Done? Optimal Decisions From Very Few Samples. Cogn. Sci. 2014;38:599–637. doi: 10.1111/cogs.12101. [DOI] [PubMed] [Google Scholar]
  • 86.Ortega P.A., Braun D.A. A Minimum Relative Entropy Principle for Learning and Acting. J. Artif. Int. Res. 2010;38:475–511. doi: 10.1613/jair.3062. [DOI] [Google Scholar]
  • 87.Candeal J.C., De Miguel J.R., Induráin E., Mehta G.B. Utility and entropy. Econ. Theory. 2001;17:233–238. doi: 10.1007/PL00004100. [DOI] [Google Scholar]
  • 88.Keramati M., Dezfouli A., Piray P. Speed/Accuracy Trade-Off between the Habitual and the Goal-Directed Processes. PLoS Comput. Biol. 2011;7:1–21. doi: 10.1371/journal.pcbi.1002055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 89.Keramati M., Smittenaar P., Dolan R.J., Dayan P. Adaptive integration of habits into depth-limited planning defines a habitual-goal–directed spectrum. Proc. Natl. Acad. Sci. USA. 2016;113:12868–12873. doi: 10.1073/pnas.1609094113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 90.Ortega P.A., Braun D.A., Tishby N. Monte Carlo methods for exact amp; efficient solution of the generalized optimality equations; Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA); Hong Kong, China. 31 May–5 June 2014; pp. 4322–4327. [Google Scholar]
  • 91.Laming D.R.J. Information Theory of Choice-Reaction Times. Academic Press; Oxford, UK: 1968. [Google Scholar]
  • 92.Ratcliff R. A theory of memory retrieval. Psychol. Rev. 1978;85:59–108. doi: 10.1037/0033-295X.85.2.59. [DOI] [Google Scholar]
  • 93.Townsend J., Ashby F. The Stochastic Modeling of Elementary Psychological Processes (Part 2) Cambridge University Press; Cambridge, UK: 1983. [Google Scholar]
  • 94.Ratcliff R., Starns J.J. Modeling confidence judgments, response times, and multiple choices in decision making: Recognition memory and motion discrimination. Psychol. Rev. 2013;120:697–719. doi: 10.1037/a0033152. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95.Shadlen M., Hanks T., Churchland A.K., Kiani R., Yang T. Bayesian Brain: Probabilistic Approaches to Neural Coding. MIT Press; Cambridge, MA, USA: 2006. The Speed and Accuracy of a Simple Perceptual Decision: A Mathematical Primer. [Google Scholar]
  • 96.Frazier P.I., Yu A.J. Sequential Hypothesis Testing Under Stochastic Deadlines; Proceedings of the 20th International Conference on Neural Information Processing Systems; Vancouver, BC, Canada. 3–6 December 2007; Red Hook, NY, USA: Curran Associates Inc.; 2007. pp. 465–472. [Google Scholar]
  • 97.Drugowitsch J., Moreno-Bote R., Churchland A.K., Shadlen M.N., Pouget A. The cost of accumulating evidence in perceptual decision making. J. Neurosci. 2012;32:3612–3628. doi: 10.1523/JNEUROSCI.4010-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 98.Simon H.A. Models of Bounded Rationality. MIT Press; Cambridge, MA, USA: 1982. [Google Scholar]
  • 99.Pezzulo G., Rigoli F., Chersi F. The Mixed Instrumental Controller: Using Value of Information to Combine Habitual Choice and Mental Simulation. Front. Psychol. 2013;4:92. doi: 10.3389/fpsyg.2013.00092. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Viejo G., Khamassi M., Brovelli A., Girard B. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning. Front. Behav. Neurosci. 2015;9:225. doi: 10.3389/fnbeh.2015.00225. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.Aczel J., Daroczy Z. On Measures of Information and Their Characterizations. Academic Press; New York, NY, USA: 1975. [Google Scholar]
