Entropy. 2022 Mar 13;24(3):403. doi: 10.3390/e24030403

A Novel Approach to the Partial Information Decomposition

Artemy Kolchinsky
Editor: Eckehard Olbrich
PMCID: PMC8947370  PMID: 35327914

Abstract

We consider the “partial information decomposition” (PID) problem, which aims to decompose the information that a set of source random variables provide about a target random variable into separate redundant, synergistic, union, and unique components. In the first part of this paper, we propose a general framework for constructing a multivariate PID. Our framework is defined in terms of a formal analogy with intersection and union from set theory, along with an ordering relation which specifies when one information source is more informative than another. Our definitions are algebraically and axiomatically motivated, and can be generalized to domains beyond Shannon information theory (such as algorithmic information theory and quantum information theory). In the second part of this paper, we use our general framework to define a PID in terms of the well-known Blackwell order, which has a fundamental operational interpretation. We demonstrate our approach on numerous examples and show that it overcomes many drawbacks associated with previous proposals.

Keywords: partial information decomposition, redundancy, synergy

1. Introduction

Understanding how information is distributed in multivariate systems is an important problem in many scientific fields. In the context of neuroscience, for example, one may wish to understand how information about an external stimulus is encoded in the activity of different brain regions. In computer science, one might wish to understand how the output of a logic gate reflects the information present in different inputs to that gate. Numerous other examples abound in biology, physics, machine learning, cryptography, and other fields [1,2,3,4,5,6,7,8,9,10].

Formally, suppose that we are provided with a random variable Y which we call the “target”, as well as a set of n random variables X1, …, Xn which we call the “sources”. The partial information decomposition (PID), first proposed by Williams and Beer in 2010 [11], aims to quantify how information about the target is distributed among the different sources. In particular, the PID seeks to decompose the mutual information provided jointly by all sources into a set of nonnegative terms, such as redundancy (information present in each individual source), synergy (information only provided by the sources jointly, not individually), union information (information provided by at least one individual source), and unique information (information provided by only one individual source).

As discussed in detail below, the PID is inspired by an analogy between information theory and set theory. In this analogy, the information that each source provides about the target is imagined as a set, while PID terms such as redundancy, union information, and synergy are imagined as the sizes of intersections, unions, and complements. While the analogy between information-theoretic and set-theoretic quantities is suggestive, it does not specify how to actually define the PID. Moreover, it has also been shown that existing measures from information theory (such as mutual information and conditional mutual information) cannot be used directly to construct the PID, since these measures conflate contributions from different terms like synergy and redundancy [11,12]. In response, many proposals for how to define PID terms have been advanced [5,13,14,15,16,17,18,19,20,21]. However, existing proposals suffer from various drawbacks, such as behaving counterintuitively on simple examples, being limited to only two sources, or lacking a clear operational interpretation. Today there is no generally agreed-upon way of defining the PID.

In this paper, we propose a new and principled approach to the PID which addresses these drawbacks. Our approach can handle any number of sources and can be justified in algebraic, axiomatic, and operational terms. We present our approach in two parts.

In part I (Section 4), we propose a general framework for defining the PID. Our framework does not prescribe specific definitions, but instead shows how an information-theoretic decomposition can be grounded in a formal analogy with set theory. Specifically, we consider the definitions of “set intersection” and “set union” in set theory: the intersection of sets S1, S2, … is the largest set that is contained in all of the Si, while the union of sets S1, S2, … is the smallest set that contains all of the Si. As we show, these set-theoretic definitions can be mapped into information-theoretic terms by treating “sets” as random variables, “set size” as mutual information between a random variable and the target Y, and “set inclusion” as some externally specified ordering relation ⊏, which specifies when one information source is more informative than another. Using this mapping, we define information-theoretic redundancy and union information in the same way that the sizes of intersections and unions are defined in set theory (other PID terms, such as synergy and unique information, can be computed in a straightforward way from redundancy and union information). Moreover, while our approach is motivated by set-theoretic intuitions, as we show in Section 4.2, it can also be derived from an alternative axiomatic foundation. We finish part I by reviewing relevant prior work in information theory and the PID literature. We also discuss how our framework can be generalized beyond the standard setting of the PID and even beyond Shannon information theory, to domains like algorithmic information theory and quantum information theory.

One unusual aspect of our framework is that it provides independent definitions of union information and redundancy. Most prior work on the PID has focused exclusively on the definition of redundancy, because it assumed that union information can be determined from redundancy using the so-called “inclusion-exclusion principle”. In Section 4.3, we argue that the inclusion-exclusion principle should not be expected to hold in the context of the PID.

Part I provides a general framework. Concrete definitions of the PID can be derived from this general framework by choosing a specific “more informative” ordering relation ⊏. In fact, the study of ordering relations between information sources has a long history in statistics and information theory [22,23,24,25,26,27]. One particularly important relation is the so-called “Blackwell order” [13,28], which has a fundamental operational interpretation in terms of utility maximization in decision theory.

In part II of this paper (Section 5), we combine the general framework developed in part I with the Blackwell order. This gives rise to concrete definitions of redundancy and union information. We show that our measures behave intuitively and have simple operational interpretations in terms of decision theory. Interestingly, while our measure of redundancy is novel, our measure of union information has previously appeared in the literature under a different guise [13,17].

In Section 6, we compare our redundancy measure to previous proposals, and illustrate it with various bivariate and multivariate examples. We finish the paper with a discussion and proposals for future work in Section 7.

We introduce some necessary notation and preliminaries in the next section. In addition, we provide background regarding the PID in Section 3. All proofs, as well as some additional results, are found in the appendix.

2. Notation and Preliminaries

We use uppercase letters (Y, X, Q, …) to indicate random variables over some underlying probability space. We use lowercase letters (y, x, q, …) to indicate specific outcomes of random variables, and calligraphic letters (𝒴, 𝒳, 𝒬) to indicate sets of outcomes. We often index random variables with a subscript, e.g., the random variable Xi with outcomes xi ∈ 𝒳i (so xi does not refer to the ith outcome of random variable X, but rather to some generic outcome of random variable Xi). We use notation like A − B − C to indicate that A is conditionally independent of C given B. Except where otherwise noted, we assume that all random variables have a finite number of outcomes.

We use notation like PX(x) to indicate the probability distribution associated with random variable X, PXY(x,y) to indicate the joint probability distribution associated with random variables X and Y, and PX|Y(x|y) to indicate the conditional probability distribution of X given Y. Given two random variables X and Y with outcome sets 𝒳 and 𝒴, we use notation like κX|Y(x|y) to indicate some stochastic channel of outputs x ∈ 𝒳 given inputs y ∈ 𝒴. In general, a channel κX|Y specifies some arbitrary conditional distribution of X given Y, which can be different from PX|Y, the actual conditional distribution of X given Y (as determined by the underlying probability space).
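
All of these objects are finite-dimensional and easy to manipulate numerically. As a minimal illustration (the array conventions and helper name are our own, not part of the original notation), a joint distribution can be stored as a 2D array, with marginals, conditionals, and mutual information obtained as follows:

```python
import numpy as np

# A joint distribution P_XY stored as a 2D array indexed as p_xy[x, y].
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_x = p_xy.sum(axis=1)    # marginal P_X(x)
p_y = p_xy.sum(axis=0)    # marginal P_Y(y)
p_x_given_y = p_xy / p_y  # conditional P_{X|Y}(x|y); each column y sums to 1

def mutual_information(p_xy):
    """I(X;Y) in bits for a joint distribution given as a 2D array."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x * p_y)[mask])))
```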

As described above, we consider the information that a set of “source” random variables X1, …, Xn provide about a “target” random variable Y. Without loss of generality, we assume that the marginal distributions PY and PXi for all i have full support (if they do not, one can restrict Y and/or Xi to outcomes that have strictly positive probability).

Finally, note that despite our use of the terms “source” and “target”, we do not assume any causal directionality between the sources and target (see also discussion in [29]). For example, in neuroscience, Y might be an external stimulus which causes the activity of brain regions X1, …, Xn, while in computer science Y might represent the output of a logic gate caused by inputs X1, …, Xn (so the causal direction is reversed). In yet other contexts, there could be other causal relationships among X1, …, Xn and Y, or they might not be causally related at all.

3. Background on the Partial Information Decomposition (PID)

Given a set of sources X1, …, Xn and a target Y, the PID aims to decompose I(Y; X1, …, Xn), the total mutual information provided by all sources about the target, into a set of nonnegative terms such as [11,12]:

  • Redundancy I∩(X1; …; Xn → Y), the information present in each individual source. Redundancy can be considered as the intersection of the information provided by different sources and is sometimes called “intersection information” in the literature [16,18].

  • Union information I∪(X1; …; Xn → Y), the information provided by at least one individual source [12,17].

  • Synergy S(X1; …; Xn → Y), the information found in the joint outcome of all sources, but not in any of their individual outcomes. Synergy is defined as [17]
    S(X1; …; Xn → Y) = I(Y; X1, …, Xn) − I∪(X1; …; Xn → Y). (1)
  • Unique information in source Xi, U(Xi → Y | X1; …; Xn), the non-redundant information in each particular source. Unique information is defined as
    U(Xi → Y | X1; …; Xn) = I(Y; Xi) − I∩(X1; …; Xn → Y). (2)

In addition to the above terms, one can also define excluded information,

E(Xi → Y | X1; …; Xn) = I∪(X1; …; Xn → Y) − I(Y; Xi), (3)

as the information in the union of the sources which is not in a particular source Xi. To our knowledge, excluded information has not been previously considered in the PID literature, although it is the natural “dual” of unique information as defined in Equation (2).

Given the definitions above, once a measure of redundancy I∩ is chosen, unique information is determined by Equation (2). Similarly, once a measure of union information I∪ is chosen, synergy and excluded information are determined by Equations (1) and (3). In Figure 1, we illustrate the relationships between these different PID terms for the simple case of two sources, X1 and X2. We show two different decompositions of the information provided by the sources jointly, I(X1, X2; Y), and individually, I(X1; Y) and I(X2; Y). The diagram on the left shows the decomposition defined in terms of redundancy I∩, while the diagram on the right shows the decomposition defined in terms of union information I∪.

Figure 1.
Partial information decomposition of the information provided by two sources about a target. On the left, we show the decomposition induced by redundancy I∩, which leads to measures of unique information U. On the right, we show the decomposition induced by union information I∪, which leads to measures of synergy S and excluded information E.
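
To make the bookkeeping of Equations (1)-(3) concrete, the following sketch (the function name and argument conventions are ours) shows how the derived PID terms follow from chosen values of redundancy and union information together with the relevant mutual informations:

```python
def derived_pid_terms(mi_joint, mi_sources, redundancy, union_info):
    """Compute the derived PID terms of Equations (1)-(3).

    mi_joint   : I(Y; X1,...,Xn)
    mi_sources : list of the individual I(Y; Xi)
    redundancy : a chosen value of the redundancy I_cap(X1;...;Xn -> Y)
    union_info : a chosen value of the union information I_cup(X1;...;Xn -> Y)
    """
    synergy = mi_joint - union_info                        # Equation (1)
    unique = [mi_i - redundancy for mi_i in mi_sources]    # Equation (2), one term per source
    excluded = [union_info - mi_i for mi_i in mi_sources]  # Equation (3), one term per source
    return synergy, unique, excluded
```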

When more than two sources are present, the PID can be used to define additional terms, beyond the ones shown in Figure 1. For example, for three sources, one can define redundancy terms like I∩(X1; X2; X3 → Y) (representing the information found in all individual sources) as well as redundancy terms like I∩((X1,X2); (X1,X3); (X2,X3) → Y) (representing the information found in all pairs of sources), and similarly for union information.

The idea that redundancy and union information lead to two different information decompositions is rarely discussed in the literature. In fact, the very concept of union information is rarely discussed in the literature explicitly (although it often appears in an implicit form via measures of synergy, since synergy is related to union information through Equation (1)). As we discuss below in Section 4.3, the reason for this omission is that most existing work assumes (whether implicitly or explicitly) that redundancy and union information are not independent measures, but are instead related via the so-called “inclusion-exclusion principle”. If the inclusion-exclusion principle is assumed to hold, then the distinction between the two decompositions disappears. We discuss this issue in greater detail below, where we also argue that the inclusion-exclusion principle should not be expected to hold in the context of the PID.

We have not yet described how the redundancy and union information measures I∩ and I∪ are defined. In fact, this remains an open research question in the field (and one which this paper will address). When they first introduced the idea of the PID, Williams and Beer proposed a set of intuitive axioms that any measure of redundancy should satisfy [11,12], which we summarize in Appendix A. In later work, Griffith and Koch [17] proposed a similar set of axioms that union information should satisfy, which are also summarized in Appendix A. However, these axioms do not uniquely identify a particular measure of redundancy or union information.

Williams and Beer also proposed a particular redundancy measure which satisfies their axioms, which we refer to as IWB [11,12]. Unfortunately, IWB has been shown to behave counterintuitively in some simple cases [19,20]. For example, consider the so-called “COPY gate”, where there are two sources X1 and X2 and the target is a copy of their joint outcomes, Y = (X1, X2). If X1 and X2 are statistically independent, I(X1; X2) = 0, then intuition suggests that the two sources provide independent information about Y and therefore that redundancy should be 0. In general, however, IWB(X1; X2 → Y) does not vanish in this case. To avoid this issue, Ince [20] proposed that any valid redundancy measure should obey the following property:

If I(X1; X2) = 0, then I∩(X1; X2 → (X1, X2)) = 0, (4)

which is called the Independent identity property.

In recent years, many other redundancy measures have been proposed [13,15,16,18,19,20,21]. However, while some of these proposals satisfy the Independent identity property, they suffer various other drawbacks, such as exhibiting other types of counterintuitive behavior, being limited to two sources, and/or lacking a clear operational motivation. We discuss some of these previously proposed measures in Section 4.4, Section 5.4 and Section 6.

In contrast to the many proposed redundancy measures, to our knowledge only two measures of union information have been advanced. The first one appeared in the original work on the PID [12], and was derived from IWB using the inclusion-exclusion principle. The second one appeared more recently [13,17] and is discussed in Section 5.4 below.

4. Part I: Redundancy and Union Information from an Ordering Relation

4.1. Introduction

As mentioned above, the PID is motivated by an informal analogy with set theory [12]. In particular, redundancy is interpreted analogously to the size of the intersection of the sources X1, …, Xn, while union information is interpreted analogously to the size of their union.

We propose to define the PID by making this analogy formal, and in particular by going back to the algebraic definitions of intersection and union in set theory. In pursuing this direction, we build on a line of previous work in information theory and PID, which we discuss in Section 4.4.

Recall that in set theory, the intersection of sets S1, …, Sn ⊆ U (where U is some universal set) is the largest set that is contained in all Si (Section 7.2, [30]). This means that the size of the intersection can be written as

|⋂i Si| = sup_{T⊆U} |T|  such that ∀i : T ⊆ Si, (5)

Similarly, the union of sets S1, …, Sn ⊆ U is the smallest set that contains all Si (Section 7.2, [30]), so the size of the union can be written as

|⋃i Si| = inf_{T⊆U} |T|  such that ∀i : Si ⊆ T. (6)

Equations (5) and (6) are useful because they express the size of the intersection and union via an optimization over simpler terms (the size of individual sets, |T|, and the subset inclusion relation, ⊆).
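
As a toy illustration of Equations (5) and (6), one can carry out these optimizations literally by brute force over the subsets of a small universal set (our own sketch, intended only to make the optimization framing explicit):

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of a finite universal set."""
    items = list(universe)
    return (set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

def intersection_size(sets, universe):
    """Equation (5): the size of the largest T contained in every S_i."""
    return max(len(t) for t in subsets(universe) if all(t <= s for s in sets))

def union_size(sets, universe):
    """Equation (6): the size of the smallest T containing every S_i."""
    return min(len(t) for t in subsets(universe) if all(s <= t for s in sets))

# Example: S1 = {1, 2}, S2 = {2, 3} inside U = {1, 2, 3, 4}.
# intersection_size(...) returns 1 (= |{2}|); union_size(...) returns 3 (= |{1, 2, 3}|).
```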

We translate these definitions to the information-theoretic setting of the PID. We take the analogue of a “set” to be some random variable A that provides information about the target Y, and the analogue of “set size” to be the mutual information I(A; Y). In addition, we assume that there is some ordering relation ⊏ between random variables analogous to set inclusion ⊆. Given such a relation, the expression A ⊏ B means that random variable B is “more informative” than A, in the sense that the information that A provides about Y is contained within the information that B provides about Y.

At this point, we leave the ordering relation ⊏ unspecified. In general, we believe that the choice of ⊏ will not be determined from purely information-theoretic considerations, but may instead depend on the operational setting and scientific domain in which the PID is applied. At the same time, there has been a great deal of research on ordering relations in statistics and information theory. In part II of this paper, Section 5, we will combine our general framework with a particular ordering relation, the so-called “Blackwell order”, which has a fundamental interpretation in terms of decision theory.

We now provide formal definitions of redundancy and union information, relative to the choice of ordering relation ⊏. In analogy to Equation (5), we define redundancy as

I∩(X1; …; Xn → Y) := sup_Q I(Q; Y)  such that ∀i : Q ⊏ Xi, (7)

where the maximization is over all random variables with a finite number of outcomes. Thus, redundancy I∩ is the maximum information about Y in any random variable that is less informative than all of the sources. In analogy with Equation (6), we define union information as

I∪(X1; …; Xn → Y) := inf_Q I(Q; Y)  such that ∀i : Xi ⊏ Q. (8)

Thus, union information I∪ is the minimum information about Y in any random variable that is more informative than all of the sources. Given these definitions, other elements of the PID (such as unique information, synergy, and excluded information) can be defined using the expressions found in Section 3. Note that I∩ and I∪ depend on the choice of ordering relation ⊏, although for convenience we leave this dependence implicit in our notation.

One of the attractive aspects of our definitions is that they do not simply quantify the amount of redundancy and union information, but also specify the “content” of that redundant and union information. In particular, the random variable Q that achieves the optimum in Equation (7) specifies the content of the redundant information via the joint distribution PYQ. Similarly, the random variable Q which achieves the optimum in Equation (8) specifies the content of the union information via the joint distribution PYQ. Note that these optimizing Q may not be unique, reflecting the fact that there may be different ways to represent the redundancy or union information. (Note also that the supremum or infimum may not be achieved in Equations (7) and (8), in which case one can consider Q that achieve the optimal values to any desired precision ϵ>0.)

So far we have not made any assumptions about the ordering relation ⊏. However, we can derive some useful bounds by introducing three weak assumptions:

  I. Monotonicity of mutual information: A ⊏ B ⟹ I(A; Y) ≤ I(B; Y) (less informative sources have less mutual information).

  II. Reflexivity: A ⊏ A for all A (each source is at least as informative as itself).

  III. For all sources Xi, O ⊏ Xi ⊏ (X1, …, Xn), where O indicates a constant random variable with a single outcome and (X1, …, Xn) indicates all sources considered jointly (each source is more informative than a trivial source and less informative than all sources jointly).

Assumptions I and II imply that the redundancy and union information of a single source are equal to the mutual information in that source:

I∩(X1 → Y) = I∪(X1 → Y) = I(X1; Y).

Assumptions I and III imply the following bounds on redundancy and union information:

0 ≤ I∩(X1; …; Xn → Y) ≤ min_i I(Y; Xi). (9)
max_i I(Y; Xi) ≤ I∪(X1; …; Xn → Y) ≤ I(Y; X1, …, Xn). (10)

Equation (9) in turn implies that the unique information in each source Xi, as defined in Equation (2), is bounded between 0 and I(Y;Xi). Similarly, Equation (10) implies that the synergy, as defined in Equation (1), obeys

0 ≤ S(X1; …; Xn → Y) ≤ min_i I(Y; X1, …, Xn | Xi),

where we have used the chain rule I(Y; X1, …, Xn) = I(Y; Xi) + I(Y; X1, …, Xn | Xi). Equation (10) also implies that the excluded information in each source Xi, as defined in Equation (3), is bounded between 0 and I(Y; X1, …, Xn | Xi).

Note that in general, stronger orders give smaller values of redundancy and larger values of union information. Consider two orders ⊏ and ⊏′, where the first one is stronger than the second: A ⊏ B ⟹ A ⊏′ B for all A and B. Then, any Q in the feasible set of Equation (7) under ⊏ will also be in the feasible set under ⊏′, and similarly for Equation (8). Therefore, I∩ defined relative to ⊏ will have a lower value than I∩ defined relative to ⊏′, and vice versa for I∪.

In the rest of this section, we discuss alternative axiomatic justifications for our general framework, the role of the inclusion-exclusion principle, relation to prior work, and further generalizations. Readers who are more interested in the use of our framework to define concrete measures of redundancy and union information may skip to Section 5.

4.2. Axiomatic Derivation

In Section 4.1, we defined the PID in terms of an algebraic analogy with intersection and union in set theory. This definition can be considered as the primary one in our framework. At the same time, the same definitions can also be derived in an alternative manner from a set of axioms, as commonly sought after in the PID literature. In particular, in Appendix B, we prove the following result regarding redundancy.

Theorem 1. 

Any redundancy measure that satisfies the following five axioms is equal to I∩(X1; …; Xn → Y) as defined in Equation (7).

  1. Symmetry: I∩(X1; …; Xn → Y) is invariant to the permutation of X1, …, Xn.

  2. Self-redundancy: I∩(X1 → Y) = I(Y; X1).

  3. Monotonicity: I∩(X1; …; Xn → Y) ≤ I∩(X1; …; Xn−1 → Y).

  4. Order equality: I∩(X1; …; Xn → Y) = I∩(X1; …; Xn−1 → Y) if Xi ⊏ Xn for some i < n.

  5. Existence: There is some Q such that I∩(X1; …; Xn → Y) = I(Y; Q) and Q ⊏ Xi for all i.

While the Symmetry, Self-redundancy, and Monotonicity axioms are standard in the PID literature (see Appendix A), the last two axioms require some explanation. Order equality is a generalization of the previously proposed Deterministic equality axiom, described in Appendix A, where the condition Xi = f(Xn) (a deterministic relationship) is generalized to the “more informative” relation Xi ⊏ Xn. This axiom reflects the idea that if a new source Xn is more informative than an existing source Xi, then redundancy should not decrease when Xn is added.

Existence is the most novel of our proposed axioms. It says that for any set of sources X1, …, Xn, there exists some random variable which captures the redundant information. It is similar to the statement in axiomatic set theory that the intersection of a collection of sets is itself a set (note that in Zermelo-Fraenkel set theory, this statement is derived from the Axiom of Separation).

We can derive a similar result for union information (proof in Appendix B).

Theorem 2. 

Any union information measure that satisfies the following five axioms is equal to I∪(X1; …; Xn → Y) as defined in Equation (8).

  1. Symmetry: I∪(X1; …; Xn → Y) is invariant to the permutation of X1, …, Xn.

  2. Self-union: I∪(X1 → Y) = I(Y; X1).

  3. Monotonicity: I∪(X1; …; Xn → Y) ≥ I∪(X1; …; Xn−1 → Y).

  4. Order equality: I∪(X1; …; Xn → Y) = I∪(X1; …; Xn−1 → Y) if Xn ⊏ Xi for some i < n.

  5. Existence: There is some Q such that I∪(X1; …; Xn → Y) = I(Y; Q) and Xi ⊏ Q for all i.

These axioms are dual to the redundancy axioms outlined above. Compared to previously proposed axioms for union information, as described in Appendix A, the most unusual of our axioms is Existence. It says that given a set of sources X1, …, Xn, there exists some random variable which captures the union information. It is similar in spirit to the “Axiom of Union” in axiomatic set theory [31].

Finally, note that for some choices of ⊏, there may not exist measures of redundancy and/or union information that satisfy the axioms in Theorems 1 and 2, in which case these theorems still hold but are trivial. However, even in such “pathological” cases, I∩ and I∪ can still be defined via Equations (7) and (8), as long as ⊏ has a “least informative” and a “most informative” element (e.g., as provided by Assumption III above), so that the feasible sets are not empty. In this sense, the definitions in Equations (7) and (8) are more general than the axiomatic derivations provided by Theorems 1 and 2.

4.3. Inclusion-Exclusion Principle

One unusual aspect of our approach is that, unlike most previous work, we propose separate measures of redundancy and union information.

Recall that in set theory, the size of the intersection and the union are not independent of each other, but are instead related by the inclusion-exclusion principle (IEP). For example, given any two sets S and T, the IEP states that the size of the union of S and T is given by the sum of their individual sizes minus the intersection,

|S ∪ T| = |S| + |T| − |S ∩ T|. (11)

More generally, the IEP relates the sizes of intersection and unions for any number of sets, via the following inclusion-exclusion formulas:

|⋃_{i=1}^{n} Si| = ∑_{∅ ≠ J ⊆ {1,…,n}} (−1)^{|J|−1} |⋂_{i∈J} Si|. (12)
|⋂_{i=1}^{n} Si| = ∑_{∅ ≠ J ⊆ {1,…,n}} (−1)^{|J|−1} |⋃_{i∈J} Si|. (13)

Historically, the IEP has played an important role in analogies between set theory and information theory, which began to be explored in the 1950s and 1960s [32,33,34,35,36]. Recall that the entropy H(X) quantifies the amount of information gained by learning the outcome of random variable X. It has been observed that, for a set of random variables X1, …, Xn, the joint entropy H(X1, …, Xn) behaves somewhat like the size of the union of the information in the individual variables. For instance, like the size of the union, joint entropy is subadditive (H(X1) + H(X2) ≥ H(X1, X2)) and increases with additional random variables (H(X1, X2) ≥ H(X1)). Moreover, for two random variables X1 and X2, the mutual information I(X1; X2) = H(X1) + H(X2) − H(X1, X2) acts like the size of the intersection of the information provided by X1 and X2, once intersection is defined analogously to the IEP expression in Equation (11) [35,36]. Given the general IEP formula in Equation (13), this can be used to define the size of the intersection between any number of random variables. For instance, the size of a three-way intersection is

I(X1; X2; X3) = H(X1) + H(X2) + H(X3) − H(X1, X2) − H(X1, X3) − H(X2, X3) + H(X1, X2, X3),

a quantity called co-information or interaction information in the literature [32,33,35,36,37].
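
For concreteness, interaction information can be computed directly from the entropies in this expression. The sketch below (our own helper, assuming the joint distribution of three finite variables is stored as a 3D numpy array) does so; applied to an XOR gate with two independent uniform input bits, it returns −1 bit, illustrating the negativity discussed next.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def interaction_information(p_x1x2x3):
    """Co-information I(X1; X2; X3) from the joint 3D array p[x1, x2, x3]."""
    h = lambda summed_axes: entropy(p_x1x2x3.sum(axis=summed_axes).ravel())
    h1, h2, h3 = h((1, 2)), h((0, 2)), h((0, 1))
    h12, h13, h23 = h((2,)), h((1,)), h((0,))
    h123 = entropy(p_x1x2x3.ravel())
    return h1 + h2 + h3 - h12 - h13 - h23 + h123

# XOR example: X3 = X1 XOR X2 with independent uniform bits.
p_xor = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p_xor[x1, x2, x1 ^ x2] = 0.25
print(interaction_information(p_xor))  # approximately -1.0
```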

Unfortunately, interaction information, as well as other higher-order interaction terms defined via the IEP, can take negative values [32,35,37]. This conflicts with the intuition that information measures should always be non-negative, in the same way that set size is always non-negative.

One of the primary motivations for the PID, as originally proposed by Williams and Beer [11,12], was to solve the problem of negativity encountered by interaction information. To develop a non-negative information decomposition, Williams and Beer took two steps. First, they considered the information that a set of sources X1, …, Xn provide about some target random variable Y. Second, they developed a non-negative measure of redundancy (IWB) which leads to a non-negative union information once an IEP formula like Equation (12) is applied (Theorem 4.7, [12]). For example, in the original proposal, union information and redundancy are related via

I∪(X1; X2 → Y) =? I(Y; X1) + I(Y; X2) − I∩(X1; X2 → Y), (14)

which is the analogue of Equation (11). This can be plugged into expressions like Equation (1), so as to express synergy in terms of redundancy as

S(X1; X2 → Y) =? I(Y; X1, X2) − I(Y; X1) − I(Y; X2) + I∩(X1; X2 → Y). (15)

The meaning of IEP-based identities such as Equations (14) and (15) can be illustrated using the Venn diagrams in Figure 1. In particular, they imply that the pink region in the right diagram is equal in size to the pink region in the left diagram, and that the grey region in the left diagram is equal in size to the grey region in the right diagram. More generally, IEP implies an equivalence between the information decomposition based on redundancy and the one based on union information.

As mentioned in Section 3, due to shortcomings in the original redundancy measure IWB, numerous other proposals for the PID have been advanced. Most of these proposals introduce new measures of redundancy, while keeping the general structure of the PID as introduced by Williams and Beer. In particular, most of these proposals assume that the IEP holds, so that union information can be derived from a measure of redundancy. While the assumption of the IEP is sometimes stated explicitly, more frequently it is implicit in the definitions used. For example, many proposals assume that synergy is related to redundancy via an expression like Equation (15), although (as shown above) this implicitly assumes that the IEP holds. In general, the IEP has been largely an unchallenged and unexamined assumption in the PID field. It is easy to see the appeal of the IEP: it builds on deep-seated intuitions about intersection/union from set theory and Venn diagrams, it has a long history in the information-theoretic literature, and it simplifies the problem of defining the PID since it only requires a measure of redundancy to be defined—rather than a measure of redundancy and a measure of union information. (Note that one can also start from union information and then derive redundancy via the IEP formula in Equation (13), as in Appendix B of Ref. [17], although this is much less common in the literature.)

However, there is a different way to define a non-negative PID, which is still grounded in a formal analogy with set theory but does not assume the IEP. Here, one defines measures of redundancy and union information based on the underlying algebra of intersection and union: the intersection of X1, …, Xn is the largest element that is less than each Xi, while the union is the smallest element that is greater than each Xi. Given these definitions, intersections and unions are not necessarily related to each other numerically, as in the IEP, but are instead related by an algebraic duality.

This latter approach is the one we pursue in our definitions (it has also appeared in some prior work, which we review in the next subsection). In general, the IEP will not hold for redundancy and union information as defined in Equations (7) and (8). (To emphasize this point, we put a question mark in Equations (14) and (15), and made the sizes of the pink and grey regions visibly different in Figure 1). However, given the algebraic and axiomatic justifications for I∩ and I∪, we do not see the violation of the IEP as a fatal issue. In fact, there are many domains where generalizations of intersections and unions do not obey the IEP. For example, it is well-known that the IEP is violated in the domain of vector spaces, once the size of a vector space is measured in terms of its dimension [38]. The PID is simply another domain where the IEP should not be expected to hold.

We believe that many problems encountered in previous work on the PID—such as the failure of certain redundancy measures to generalize to more than two sources, or the appearance of uninterpretable negative synergy values—are artifacts of the IEP assumption. In fact, the following result shows that any measures of redundancy and union information which satisfy several reasonable assumptions must violate the IEP as soon as 3 or more sources are present (the proof, in Appendix I, is based on a construction from [39,40]).

Lemma 1. 

Let I∩ be any nonnegative redundancy measure which obeys Symmetry, Self-redundancy, Monotonicity, and Independent identity. Let I∪ be any union information measure which obeys I∪(X1; …; Xn → Y) ≤ I(Y; X1, …, Xn). Then, I∩ and I∪ cannot be related by the inclusion-exclusion principle for 3 or more sources.

The idea that different information decompositions may arise from redundancy versus synergy (and therefore union information) has recently appeared in the PID literature [15,40,41,42,43]. In particular, Chicharro and Panzeri proposed a PID that involves two decompositions: an “information gain” decomposition based on redundancy and an “information loss” decomposition based on synergy [41]. These decompositions correspond to the two Venn diagrams shown in Figure 1.

4.4. Relation to Prior Work

Here we discuss prior work which is relevant to our algebraic approach to the PID.

First, note that our definitions of redundancy and union information in Equations (7) and (8) are closely related to notions of “meet” and “join” in a field of algebra called order theory, which generalize intersections and unions to domains beyond set theory [44]. Given a set of objects S and an order ⊏, the meet of a, b ∈ S is the unique largest c ∈ S that is smaller than both a and b: c ⊏ a, c ⊏ b, and d ⊏ c for any d that obeys d ⊏ a, d ⊏ b. Similarly, the join of a, b ∈ S is the unique smallest c that is larger than both a and b: a ⊏ c, b ⊏ c, and c ⊏ d for any d that obeys a ⊏ d, b ⊏ d. Note that meets and joins are only defined when ⊏ is a special type of partial order called a lattice. This is a strict requirement, and many important ordering relations in information theory are not lattices (this includes the “Blackwell order”, which we will consider in part II of this paper [45]).

In our approach, we do not require the ordering relation ⊏ to be a lattice, or even a partial order. We do not require these properties because we do not aim to find the unique union random variable or the unique redundancy random variable. Instead, we aim to quantify the size of the intersection and the size of the union, which we do by optimizing mutual information subject to constraints, as in Equations (7) and (8). These definitions are well-defined even when ⊏ is not a lattice, which allows us to consider a much broader set of ordering relations.

We mention three important precursors of our approach that have been proposed in the PID literature. First, Griffith et al. [16] considered the following order between random variables:

A ⊲ B iff A = f(B) for some deterministic function f. (16)

This ordering relation ⊲ was first considered in a 1953 paper by Shannon [22], who showed that it defines a lattice over random variables. That paper was the first to introduce the algebraic idea of meets and joins into information theory, leading to an important line of subsequent research [46,47,48,49,50]. Using this order, Ref. [16] defined redundancy as the maximum mutual information in any random variable that is a deterministic function of all of the sources,

I∩⊲(X1; …; Xn → Y) := max_Q I(Q; Y)  such that ∀i : Q ⊲ Xi, (17)

which is clearly a special case of Equation (7). Unfortunately, in practice, I∩⊲ is not a useful redundancy measure, as it tends to give very small values and is highly discontinuous. For example, I∩⊲(X1; …; Xn → Y) = 0 whenever the joint distribution PX1…XnY has full support, meaning that it vanishes on almost all joint distributions [16,18,47]. The reason for this counterintuitive behavior is that the order ⊲ formalizes an extremely strict notion of “more informative”, which is not robust to noise.

Given the deficiencies of I∩⊲, Griffith and Ho [18] proposed another measure of redundancy (also discussed as I2 in Ref. [49]),

IGH(X1; …; Xn → Y) := max_Q I(Q; Y)  such that ∀i : Q − Xi − Y. (18)

This measure is also a special case of Equation (7), where the more informative relation A ⊏ B is formalized via the conditional independence condition A − B − Y. This measure is similar to the redundancy measure we propose in part II of this paper, and we discuss it in more detail in Section 5.4. (Note that there are some incorrect claims about IGH in the literature: Lemmas 6 and 7 of Ref. [49] incorrectly state that IGH(X1; X2 → Y) = 0 whenever X1 and X2 are independent—see the AND gate counterexample in Section 6—while Ref. [18] incorrectly states that IGH obeys a property called Target Monotonicity).

Finally, we mention the so-called “minimum mutual information” redundancy IMMI [51]. This is perhaps the simplest redundancy measure, being equal to the minimal mutual information in any source: IMMI(X1; …; Xn → Y) := min_i I(Xi; Y). It can be written in the form of Equation (7) as

IMMI(X1; …; Xn → Y) := max_Q I(Q; Y)  such that ∀i : I(Q; Y) ≤ I(Xi; Y). (19)

This redundancy measure has been criticized for depending only on the amount of information provided by the different sources, being completely insensitive to the content of that information. Nonetheless, IMMI can be useful in some settings, and it plays an important role in the context of Gaussian random variables [51].

Interestingly, unlike IMMI, the original redundancy measure proposed by Williams and Beer [11], IWB, does not appear to be a special case of Equation (7) (at least not under the natural definition of the ordering relation ⊏). We demonstrate this using a counter-example in Appendix H.

As mentioned in Section 4.1, stronger ordering relations give smaller values of redundancy. For the orders considered above, it is easy to show that

A ⊲ B ⟹ A − B − Y ⟹ I(A; Y) ≤ I(B; Y). (20)

This implies that I∩⊲ ≤ IGH ≤ IMMI. In fact, IMMI is the largest measure that is compatible with the monotonicity of mutual information (Assumption I in Section 4.1).

4.5. Further Generalizations

We finish part I of this paper by noting that one can further generalize our approach, by considering other analogues of “set”, “set size”, and “set inclusion” beyond the ones considered in Section 4.1. Such generalizations allow one to analyze notions of information intersection and union in a wide variety of domains, including setups different from the standard one considered in the PID, and domains not based on Shannon information theory.

At a general level, consider a set of objects Ω that represents possible “sources”, which may be random variables, as in Section 4.1, or otherwise. Assume there is some function ϕ : Ω → ℝ that quantifies the “amount of information” in a given source, and some relation ⊏ on Ω that indicates which sources are more informative than others. Then, in analogy to Equations (5) and (6), for any set of sources {b1, …, bn} ⊆ Ω, one can define redundancy and union information as

I∩(b1; …; bn) := sup_{a∈Ω} ϕ(a)  such that ∀i : a ⊏ bi, (21)
I∪(b1; …; bn) := inf_{a∈Ω} ϕ(a)  such that ∀i : bi ⊏ a. (22)

Synergy, unique, and excluded information can then be defined via Equations (1) to (3).

There are many possible examples of such generalizations, of which we mention a few as illustrations.

  • Shannon information theory (beyond mutual information). In Section 4.1, ϕ was the mutual information between each random variable and some target Y. This can be generalized by choosing a different “amount of information” function ϕ, so that redundancy and union information are quantified in terms of other measures of statistical dependence. Among many other options, possible choices of ϕ include Pearson’s correlation (for continuous random variables) and measures of statistical dependency based on f-divergences [52], Bregman divergences [53], and Fisher information [54].

  • Shannon information theory (without a fixed target). The PID can also be defined for a different setup than the typical one considered in the literature. For example, consider a situation where the sources are channels κX1|Y, …, κXn|Y, while the marginal distribution over the target Y is left unspecified. Here one may take Ω as the set of channels, ϕ as the channel capacity ϕ(κA|Y) := max_{PY} I_{PY κA|Y}(A; Y), and ⊏ as some ordering relation on channels [24].

  • Algorithmic information theory. The PID can be defined for other notions of information, such as the ones used in Algorithmic Information Theory (AIT) [55]. In AIT, “information” is not defined in terms of statistical uncertainty, but rather in terms of the program length necessary to generate strings. For example, one may take Ω as the set of finite strings, ⊏ as algorithmic conditional independence (a ⊏ b iff K(y|b) − K(y|b,a) ≤ const, where K(·|·) is conditional Kolmogorov complexity), and ϕ(a) := K(y) − K(y|a) as the “algorithmic mutual information” with some target string y. (This setup is closely related to the notion of algorithmic “common information” [47]).

  • Quantum information theory. As a final example, the PID can be defined in the context of quantum information theory. For example, one may take Ω as the set of quantum channels, ⊏ as the quantum Blackwell order [56,57,58], and ϕ(Φ) = I(ρ, Φ), where I is the Ohya mutual information for some target density matrix ρ under channel Φ ∈ Ω [59].

5. Part II: Blackwell Redundancy and Union Information

In the first part of this paper, we proposed a general framework for defining PID terms. In this section, which forms part II of this paper, we develop a concrete definition of redundancy and union information by combining our general framework with a particular ordering relation ⊏. This ordering relation is called the “Blackwell order”, and it plays a fundamental role in statistics and decision theory [28,45,60]. We first introduce the Blackwell order, then use it to define measures of redundancy and union information, and finally discuss various properties of our measures.

5.1. The Blackwell Order

We begin by introducing the ordering relation that we use to define our PID. Given three random variables B, C, and Y, the ordering relation B ⊏_Y C is defined as follows:

B ⊏_Y C  iff  PB|Y(b|y) = ∑_c κB|C(b|c) PC|Y(c|y)  for some channel κB|C and all b, y. (23)

We refer to the relation ⊏_Y as the Blackwell order relative to random variable Y. (Note that the Blackwell order and Blackwell’s Theorem are usually formulated in terms of channels—that is, conditional distributions like κB|Y and κC|Y—rather than of random variables as done here. However, these two formulations are equivalent, as shown in [45]).

In words, Equation (23) means that the conditional distribution PB|Y can be generated by first sampling from the conditional distribution PC|Y, and then applying some channel κB|C to the outcome. The relation B ⊏_Y C implies that PB|Y is more noisy than PC|Y and, by the “data processing inequality” [61], B must have less mutual information about Y than C:

B ⊏_Y C ⟹ I(B; Y) ≤ I(C; Y). (24)
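
Whether B ⊏_Y C holds can be decided numerically: Equation (23) asks for a channel κB|C with nonnegative entries and unit column sums that satisfies a set of linear equalities, which is a linear feasibility problem. The following is a minimal sketch using scipy's linear programming routine (the function name and input conventions, with conditional distributions stored as arrays whose columns are indexed by y, are our own):

```python
import numpy as np
from scipy.optimize import linprog

def blackwell_leq(p_b_given_y, p_c_given_y):
    """Check whether B is below C in the Blackwell order relative to Y (Eq. (23)):
    is there a channel kappa[b, c] >= 0 with unit column sums such that
    sum_c kappa[b, c] * P(c|y) = P(b|y) for all b, y?"""
    nb, ny = p_b_given_y.shape
    nc, _ = p_c_given_y.shape
    nvar = nb * nc  # kappa[b, c] flattened at index b * nc + c

    A_eq, b_eq = [], []
    # Consistency constraints: sum_c kappa[b, c] * P(c|y) = P(b|y).
    for b in range(nb):
        for y in range(ny):
            row = np.zeros(nvar)
            row[b * nc:(b + 1) * nc] = p_c_given_y[:, y]
            A_eq.append(row)
            b_eq.append(p_b_given_y[b, y])
    # Channel constraints: each column of kappa sums to one.
    for col in range(nc):
        row = np.zeros(nvar)
        row[col::nc] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

    res = linprog(np.zeros(nvar), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * nvar, method="highs")
    return bool(res.success)
```

On small examples this check can be used to confirm the data processing implication in Equation (24), and, in light of the results below, to test whether a given source is Blackwell-dominated by another.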

Intuition suggests that when B ⊏_Y C, the information that B provides about Y is contained in the information that C provides about Y. This intuition is formalized within a decision-theoretic framework via Blackwell’s Theorem [28,45,60]. To introduce this theorem, imagine a scenario in which Y represents the state of the environment. Imagine also that there is an agent who acquires information about the environment via the conditional distribution PB|Y(b|y), and then uses the outcome B = b to select actions a ∈ 𝒜 according to some “decision rule” given by the channel κA|B. Finally, the agent gains utility according to some utility function u(a, y), which depends on the agent’s action a and the environment’s state y. The maximum expected utility achievable by any decision rule is given by

V^max_Y(B, u) := max_{κA|B} ∑_{y,b,a} PY(y) PB|Y(b|y) κA|B(a|b) u(a, y). (25)
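
Because the objective in Equation (25) is linear in the decision rule κA|B, the maximum is attained by a deterministic rule that selects the best action separately for each observed outcome b. The following short sketch (our own helper) computes V^max_Y(B, u) in this way:

```python
import numpy as np

def max_expected_utility(p_y, p_b_given_y, u):
    """Equation (25): V^max_Y(B, u).

    p_y         : array of shape (|Y|,)
    p_b_given_y : array of shape (|B|, |Y|)
    u           : utility array of shape (|A|, |Y|)
    """
    # expected[b, a] = sum_y P(y) P(b|y) u(a, y)
    expected = np.einsum('y,by,ay->ba', p_y, p_b_given_y, u)
    # Deterministic rule: pick the best action for each observed b, then sum over b.
    return float(expected.max(axis=1).sum())
```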

From an operational perspective, it is natural to say that B is less informative than C about Y if there is no utility function such that an agent with access to B can achieve higher expected utility than an agent with access to C. Blackwell’s Theorem states that this is precisely the case if and only if BYC [28,45]:

B ⊏_Y C  iff  V^max_Y(B, u) ≤ V^max_Y(C, u) for all u. (26)

In some sense, this operational description of the relation ⊏_Y is deeper than the data processing inequality, Equation (24), which says that B ⊏_Y C is sufficient (but not necessary) for I(B; Y) ≤ I(C; Y). In fact, it can happen that I(B; Y) ≤ I(C; Y) even though B ⊏_Y C does not hold [26,60,62].

A connection between PID and Blackwell’s theorem was first proposed in [13], which argued that the PID should be defined in an operational manner (see Section 5.3 for further discussion of [13]).

5.2. Blackwell Redundancy

We now define a measure of redundancy based on the Blackwell order. Specifically, we use our general definition of redundancy, Equation (7), while using the Blackwell order relative to Y as the “more informative” relation ⊏:

I∩(X1; …; Xn → Y) := sup_Q I(Q; Y)  such that ∀i : Q ⊏_Y Xi. (27)

We refer to this measure as Blackwell redundancy.

Given Blackwell’s Theorem, I has a simple operational interpretation. Imagine two agents, Alice and Bob, who can acquire information about Y via different random variables, and then use this information to maximize their expected utility. Suppose that Alice has access to one of the sources Xi. Then, the Blackwell redundancy I is the maximum information that Bob can have about Y without being able to do better than Alice on any utility function, regardless of which source Alice has access to.

Blackwell redundancy can also be used to define a measure of Blackwell unique information, U(Xi → Y | X1; …; Xn) := I(Y; Xi) − I∩(X1; …; Xn → Y), via Equation (2). As we show in Appendix I, U satisfies the following property, which we term the Multivariate Blackwell property.

Theorem 3. 

U(Xi → Y | X1; …; Xn) = 0 if and only if Xi ⊏_Y Xj for all j ≠ i.

Operationally, Theorem 3 means that source Xi has non-zero unique information iff there is some other source Xj and a utility function such that an agent with access to Xi can achieve higher expected utility than an agent with access to Xj.

Computing I∩ involves maximizing a convex function subject to a set of linear constraints. These constraints define a feasible set which is a convex polytope, and the maximum must lie on one of the vertices of this polytope [63]. In Appendix C, we show how to solve this optimization problem. In particular, we use a computational geometry package to enumerate the vertices of the feasible set, and then choose the best vertex (code is available at [64]). In that appendix, we also prove that an optimal solution to Equation (27) can always be achieved by a Q with cardinality |Q| = (∑_i |Xi|) − n + 1. Note that the supremum in Equation (27) is always achieved. Note also that I∩ satisfies the redundancy axioms in Section 4.2.

As discussed above, solving the optimization problem in Equation (27) gives a (possibly non-unique) optimal random variable Q which specifies the content of the redundant information. As shown in Appendix C, solving Equation (27) also provides a set of channels κQ|Xi for each source Xi, which identify the redundant information in each source.

Note that the Blackwell order satisfies Assumptions I–III in Section 4.1, and thus Blackwell redundancy satisfies the bounds derived in that section. Finally, note that, like many other redundancy measures, Blackwell redundancy becomes equivalent to the measure IMMI (as defined in Equation (19)) when applied to Gaussian random variables (for details, see Appendix E).

5.3. Blackwell Union Information

We now define a measure of union information using our general definition in Equation (8), while using the Blackwell order relative to Y as the “more informative” relation:

I∪(X1; …; Xn → Y) := inf_Q I(Q; Y)  such that ∀i : Xi ⊏_Y Q. (28)

We refer to this measure as Blackwell union information.

As for Blackwell redundancy, Blackwell union information can be understood in operational terms. Consider two agents, Alice and Bob, who use information about Y to maximize their expected utility. Suppose that Alice has access to one of the sources Xi. Then, the Blackwell union information I∪ is the minimum information that Bob must have about Y in order to do at least as well as Alice on every utility function, regardless of which source Alice has access to.

Blackwell union information can be used to define measures of synergy and excluded information via Equations (1) and (3). The resulting measure of excluded information E(Xi → Y | X1; …; Xn) := I∪(X1; …; Xn → Y) − I(Y; Xi) satisfies the following property, which is the “dual” of the Multivariate Blackwell property considered in Theorem 3. (See Appendix I for the proof).

Theorem 4. 

E(Xi → Y | X1; …; Xn) = 0 if and only if Xj ⊏_Y Xi for all j ≠ i.

Operationally, Theorem 4 means that there is excluded information for source Xi iff there exists a utility function such that an agent with access to one of the other sources Xj can achieve higher expected utility than an agent with access to Xi.

We discuss the problem of numerically solving the optimization problem in Equation (28) in the next subsection.

5.4. Relation to Prior Work

Our measure of Blackwell redundancy I∩ is new to the PID literature. The most similar existing redundancy measure is IGH [18], which is discussed above in Section 4.4. IGH is a special case of Equation (7), once the “more informative” relation B ⊏ C is defined in terms of the conditional independence B − C − Y. Note that conditional independence is stronger than the Blackwell order: given the definition of ⊏_Y in Equation (23), it is clear that B − C − Y implies B ⊏_Y C (the channel κB|C can be taken to be PB|C), but not vice versa. As discussed in Section 4.1, stronger ordering relations give smaller values of redundancy, so in general IGH ≤ I∩. Note also that B ⊏_Y C depends only on the pairwise marginals PBY and PCY, while conditional independence B − C − Y depends on the joint distribution PBCY. As we discuss in Appendix F, the conditional independence order can be interpreted in decision-theoretic terms, which suggests an operational interpretation for IGH.

Interestingly, Blackwell union information I∪ is equivalent to two measures that have been previously proposed in the PID literature, although they were formulated in a different way. Bertschinger et al. [13] considered the following measure of bivariate redundancy:

I∩BROJA(X1; X2 → Y) := I(Y; X1) + I(Y; X2) − I∪BROJA(X1; X2 → Y), (29)

where I∪BROJA is defined via the optimization problem

I∪BROJA(X1; X2 → Y) = min_{X̃1, X̃2} I(Y; X̃1, X̃2)  such that PX̃1Y = PX1Y, PX̃2Y = PX2Y, (30)

and reflects the minimal mutual information that two random variables can have about Y, given that their pairwise marginals with Y are fixed to be PX1Y and PX2Y. Note that Ref. [13] did not refer to I∪BROJA as a measure of union information (we use our notation in writing it as I∪BROJA). Instead, these measures were derived from an operational motivation, with the goal of deriving a unique information measure that obeys the so-called Blackwell property: I(Y; X1) − I∩BROJA(X1; X2 → Y) = 0 if X1 ⊏_Y X2 (see Theorems 3 and 4 above).

Starting from a different motivation, Griffith and Koch [17] proposed a multivariate version of I∪BROJA,

I∪BROJA(X1; …; Xn → Y) = min_{X̃1, …, X̃n} I(Y; X̃1, …, X̃n)  such that ∀i : PX̃iY = PXiY. (31)

The goal of Ref. [17] was to derive a measure of multivariate synergy from a measure of union information, as in Equation (1). In that paper, I∪BROJA was explicitly defined as a measure of union information. To our knowledge, Ref. [17] was the first (and perhaps only) paper to propose a measure of union information that was not derived from redundancy via the inclusion-exclusion principle.

While I∪BROJA(X1; …; Xn → Y) and I∪(X1; …; Xn → Y) are stated as different optimization problems, we prove in Appendix G that these optimization problems are equivalent, in that they will always achieve the same optimum value. Interestingly, since I∪BROJA and I∪ are equivalent, our measure of Blackwell redundancy I∩ appears as the natural dual to I∪BROJA. Another implication of this equivalence is that Blackwell union information I∪ can be quantified by solving the optimization problem in Equation (31), rather than Equation (28). This is advantageous, because Equation (31) involves the minimization of a convex function over a convex polytope, which can be solved using standard convex optimization techniques [65].
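
To illustrate the structure of this optimization (not the solver used in the paper's own code), a generic way to approximate Equation (31) in the two-source case is to minimize I(Y; X̃1, X̃2) directly over joint distributions with the required pairwise marginals, using an off-the-shelf constrained optimizer. The parameterization, names, and tolerances below are our own, and dedicated solvers for this problem exist:

```python
import numpy as np
from scipy.optimize import minimize

def broja_union_information(p_x1x2y, eps=1e-12):
    """Approximate Eq. (31) for two sources: minimize I(Y; X1, X2) over joints
    q(x1, x2, y) whose (X1, Y) and (X2, Y) marginals match those of p_x1x2y."""
    n1, n2, ny = p_x1x2y.shape
    p_x1y = p_x1x2y.sum(axis=1)   # fixed marginal P_{X1 Y}
    p_x2y = p_x1x2y.sum(axis=0)   # fixed marginal P_{X2 Y}
    p_y = p_x1x2y.sum(axis=(0, 1))

    def mi_sources_vs_y(q_flat):
        q = q_flat.reshape(n1, n2, ny)
        q_x1x2 = q.sum(axis=2, keepdims=True)
        q_y = q.sum(axis=(0, 1), keepdims=True)
        return float(np.sum(q * np.log2((q + eps) / (q_x1x2 * q_y + eps))))

    # Linear equality constraints: both pairwise marginals with Y are preserved.
    constraints = (
        [{"type": "eq",
          "fun": lambda q, i=i, y=y: q.reshape(n1, n2, ny)[i, :, y].sum() - p_x1y[i, y]}
         for i in range(n1) for y in range(ny)]
        + [{"type": "eq",
            "fun": lambda q, j=j, y=y: q.reshape(n1, n2, ny)[:, j, y].sum() - p_x2y[j, y]}
           for j in range(n2) for y in range(ny)]
    )

    # Feasible starting point: the conditionally independent coupling P(x1|y) P(x2|y) P(y).
    q0 = np.einsum('iy,jy,y->ijy',
                   p_x1y / np.maximum(p_y, eps),
                   p_x2y / np.maximum(p_y, eps), p_y)
    res = minimize(mi_sources_vs_y, q0.ravel(), method="SLSQP",
                   bounds=[(0.0, 1.0)] * (n1 * n2 * ny), constraints=constraints)
    return res.fun
```

For the COPY gate, for instance, the marginal constraints pin the joint distribution down completely, so the optimum equals I(Y; X1, X2) = H(X1, X2), consistent with Equation (34) below.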

In Ref. [13], the redundancy measure I∩BROJA in Equation (29) was only defined for the bivariate case. Since then, it has been unclear how to extend this redundancy measure to more than two sources. However, by comparing Equations (14) and (29), we see the root of the problem: I∩BROJA is derived by applying the inclusion-exclusion principle to a measure of union information, I∪BROJA. It cannot be extended to more than two sources because the inclusion-exclusion principle generally leads to counterintuitive results for more than 2 sources, as shown in Lemma 1. Note also that what Ref. [13] called the unique information in X1, I∪BROJA(X1; X2 → Y) − I(Y; X2), in our framework would be considered a measure of the excluded information for X2.

At the same time, the union information measure I∪BROJA, and the corresponding synergy from Equation (1), do not rely on the inclusion-exclusion principle. Therefore, they can be easily extended to any number of sources [17].

5.5. Continuity of Blackwell Redundancy and Union Information

It is often desired that information-theoretic measures are continuous, meaning that small changes in underlying probability distributions lead to small changes in the resulting measures. In this section, we consider the continuity of our proposed measures, I∩ and I∪.

We first consider Blackwell redundancy I∩. It turns out that this measure is not always continuous in the joint probability PX1…XnY (a discontinuous example is provided in Section 5.6). However, the discontinuity of I∩ is not necessarily pathological, and we can derive an interpretable geometric condition that guarantees that I∩ is continuous.

Consider the conditional distribution of the target Y given some source Xi, PY|Xi. Let rank PY|Xi indicate its rank, meaning the dimension of the space spanned by the vectors {PY|Xi=xi}xi∈𝒳i. The rank of PY|Xi quantifies the number of independent directions in which the target distribution PY can be moved by manipulating the source distribution PXi, and it cannot be larger than |Y|. The next theorem shows that I∩ is locally continuous, as long as n−1 or more of the source conditional distributions have this maximal rank.

Theorem 5. 

As a function of the joint distribution PX1,…,Xn,Y, I∩ is locally continuous whenever n−1 or more of the conditional distributions PY|Xi have rank PY|Xi = |Y|.

In proving this result, we also show that I∩ is continuous almost everywhere (see proof in Appendix D). Finally, in that appendix we also use Theorem 5 to show that I∩ is continuous everywhere if Y is a binary random variable.

We illustrate the meaning of Theorem 5 visually in Figure 2. We show two situations, both of which involve two sources X1 and X2 and a target Y with cardinality |Y| = 3. In one situation, both pairwise conditional distributions have rank equal to |Y|, so I∩ is locally continuous. In the other situation, both pairwise conditional distributions are rank deficient (e.g., this might happen because X1 and X2 have cardinality |X1| = |X2| = 2), so I∩ is not guaranteed to be continuous. From the figure it is easy to see how the discontinuity may arise. Given the definition of the Blackwell order and I∩, for any random variable Q in the feasible set of Equation (27), the conditional distributions PY|Q=q must fall within the intersection of the regions spanned by PY|X1 and PY|X2 (the intersection of the red and green shaded regions in Figure 2). On the right, the size of this intersection can discontinuously jump from a line (when PY|X1 and PY|X2 are perfectly aligned) to a point (when PY|X1 and PY|X2 are not perfectly aligned). Thus, the discontinuity of I∩ arises from a geometric phenomenon, which is related to the discontinuity of the intersection of low-dimensional vector subspaces.

Figure 2.
Illustration of Theorem 5, which provides a sufficient condition for the local continuity of I∩. Consider two scenarios, both of which involve two sources X1 and X2 and a target Y with cardinality |Y| = 3. The blue areas indicate the simplex of probability distributions over Y, with the marginal PY and the pairwise conditionals PY|Xi=xi marked. On the left, both sources have rank PY|Xi = 3 = |Y|, so I∩ is locally continuous. On the right, both sources have rank PY|Xi = 2 < |Y|, so I∩ is not necessarily locally continuous. Note that I∩ is also locally continuous if only one of the two sources has rank PY|Xi = 3.

We briefly comment on the continuity of the Blackwell union information I∪. As we described above, this measure turns out to be equivalent to IBROJA. The continuity of IBROJA in the bivariate case was proven in Theorem 35 of Ref. [66]. We believe that the continuity of IBROJA for an arbitrary number of sources can be shown using similar methods, although we leave this for future work.

5.6. Behavior on the COPY Gate

As mentioned in Section 3, the “COPY gate” example is often used to test the behavior of different redundancy measures. The COPY gate has two sources, X1 and X2, and a target Y=(X1,X2) which is a copy of the joint outcome. It is expected that redundancy should vanish if X1 and X2 are statistically independent, as formalized by the Independent identity property in Equation (4).

Blackwell redundancy I∩ satisfies the Independent identity. In fact, we prove a more general result, which shows that I∩(X1;X2 → (X1,X2)) is equal to an information-theoretic measure called the Gács-Körner common information C(X ∧ Y) [16,47,67]. C(X ∧ Y) quantifies the amount of information that can be deterministically extracted from each of the two random variables X and Y individually, and it is closely related to the “deterministic function” order ⊲ defined in Equation (16). Formally, it can be written as

C(X ∧ Y) = supQ H(Q)  such that  Q ⊲ X, Q ⊲ Y, (32)

where H is Shannon entropy. In Appendix I, we prove the following result.

Theorem 6. 

I∩(X1;X2 → (X1,X2)) = C(X1 ∧ X2).

Note that 0 ≤ C(X1 ∧ X2) ≤ I(X1;X2) [47], so I∩ satisfies the Independent identity property. At the same time, C(X1 ∧ X2) can be strictly less than I(X1;X2). For example, if PX1X2 has full support, then I(X1;X2) can be arbitrarily large while C(X1 ∧ X2)=0 (see proof of Theorem 6). This means that I∩ violates a previously proposed property, sometimes called the Identity property, which suggests that redundancy should satisfy I∩(X1;X2 → (X1,X2))=I(X1;X2). However, the validity of the Identity property is not clear, and several papers have argued against it [15,39].

The value of C(X1 ∧ X2) depends on the precise pattern of zeros in the joint distribution PX1X2 and is therefore not continuous. For instance, for the bivariate COPY gate, redundancy can change discontinuously as one goes from the situation where X1=X2 (so that all information is redundant, I∩=I(X1;X2)) to one where X1 and X2 are almost, but not entirely, identical. This discontinuity can be understood in terms of Theorem 5 and Figure 2: in the COPY gate, the cardinality of the target variable |Y|=|X1|×|X2| is larger than the cardinality of the individual sources. In other words, when the sources X1 and X2 are not perfectly correlated, they provide information about different “subspaces” of the target (X1,X2), and so it is possible that very little (or none) of their information is redundant.
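To make the dependence on the pattern of zeros concrete, C(X1 ∧ X2) can be computed from the support of PX1X2 via the connected components of the bipartite support graph, following the construction used in the proof of Theorem 6 (Appendix I). The following is a minimal sketch of that construction in NumPy/SciPy (our own illustration, not the reference code at [64]):

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def gacs_korner(p_x1x2):
    """Gács-Körner common information C(X1 ∧ X2) in bits, via connected
    components of the bipartite support graph (cf. proof of Theorem 6)."""
    n1, n2 = p_x1x2.shape
    # Bipartite adjacency: x1 -- x2 whenever P(x1, x2) > 0.
    adj = np.zeros((n1 + n2, n1 + n2))
    adj[:n1, n1:] = p_x1x2 > 0
    adj = adj + adj.T
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    # Entropy of the distribution over connected components.
    p_comp = np.zeros(n_comp)
    for x1 in range(n1):
        for x2 in range(n2):
            if p_x1x2[x1, x2] > 0:
                p_comp[labels[x1]] += p_x1x2[x1, x2]
    p_comp = p_comp[p_comp > 0]
    return -np.sum(p_comp * np.log2(p_comp))

# X1 = X2 uniform binary: all information is common, C = 1 bit.
print(gacs_korner(np.array([[0.5, 0.0], [0.0, 0.5]])))      # ~1.0
# Independent uniform binaries with full support: C = 0 bits.
print(gacs_korner(np.array([[0.25, 0.25], [0.25, 0.25]])))  # 0.0

For perfectly correlated sources the support splits into separate components and C(X1 ∧ X2)=1 bit, whereas full support collapses the graph to a single component and C(X1 ∧ X2)=0, matching the discontinuity described above.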

At the same time, the Blackwell property, Theorem 3, implies that

I∩(X1;X2 → X1) = I(X1;X2) = I∩(X1;X2 → X2) (33)

In other words, the redundancy in X1 and X2, where either one of the individual sources is taken as the target, is given by the mutual information I(X1;X2). This holds even though the redundancy in the COPY gate can be much lower than I(X1;X2).

It is also interesting to consider how the Blackwell union information I∪ behaves on the COPY gate. Using techniques from [13], it can be shown that the union information is simply the joint entropy,

I∪(X1;X2 → (X1,X2)) = H(X1,X2). (34)

Since H(X1,X2)=I(X1,X2;X1,X2), Equations (1) and (34) together imply that the COPY gate has no synergy.

Note that we can use Theorem 6 and Equation (34) to illustrate that I∩ and I∪ violate the inclusion-exclusion principle, Equation (14). Using Equation (34) and a bit of rearranging, Equation (14) becomes equivalent to I∩(X1;X2 → (X1,X2)) =? I(X1;X2), which is the Identity property mentioned above. I∩ violates this property, since redundancy for the COPY gate can be smaller than I(X1;X2).

6. Examples and Comparisons to Previous Measures

In this section, we compare our proposed measure of Blackwell redundancy I∩ to existing redundancy measures. We focus on redundancy, rather than union information, because redundancy has seen much more development in the literature, and because the Blackwell union information I∪ is equivalent to an existing measure (see Section 5.4).

6.1. Qualitative Comparison

In Table 1, we compare I∩ to six existing measures of multivariate redundancy:

  • IWB, the redundancy measure first proposed by Williams and Beer [11].

  • IMMI, the “minimum mutual information” [51], Equation (19) in Section 4.4.

  • I∧, proposed by Griffith et al. [16], Equation (17) in Section 4.4.

  • IGH, proposed by Griffith and Ho [18], Equation (18) in Section 4.4.

  • IInce, proposed by Ince [20].

  • IFL, proposed by Finn and Lizier [21].

We also compare I∩ to three existing measures of bivariate redundancy (i.e., for 2 sources):

  • IBROJA, proposed by Bertschinger et al. [13], defined in Equation (29).

  • IHarder, proposed by Harder et al. [19].

  • Idep, proposed by James et al. [15].

For I∩ as well as the 9 existing measures, we consider the following properties, which are chosen to highlight differences between our approach and previous proposals:

  1. Has it been defined for more than 2 sources?

  2. Does it obey the Monotonicity axiom from Section 4.2?

  3. Is it compatible with the inclusion-exclusion principle (IEP) for the bivariate case, such that union information as defined in Equation (14) obeys I∪(X1;X2 → Y) ≤ I(X1,X2;Y)?

  4. Does it obey the Independent identity property, Equation (4)?

  5. Does it obey the Blackwell property (possibly in its multivariate form, Theorem 3)?

We also consider two additional properties, which require a bit of introduction.

Table 1.

Comparison of different redundancy measures. A “?” indicates a property that we could not easily establish.

I∩ IWB IMMI I∧ IGH IInce IFL IBROJA IHarder Idep
More than 2 sources
Monotonicity
IEP for bivariate case ? ?
Independent identity
Blackwell property
Pairwise marginals
Target equality

The first property was suggested by Ref. [13], who argued that redundancy should only depend on the pairwise marginal distributions of each source with the target,

If PXiY = PX˜iY˜ for all i, then I∩(X1;…;Xn → Y) = I∩(X˜1;…;X˜n → Y˜). (35)

In Table 1, we term this property Pairwise marginals. We believe that the validity of Equation (35) is not universal, but may depend on the particular setting in which the PID is being used. However, redundancy measures that satisfy this property have one important advantage: they are well-defined not only when the sources are random variables X1,…,Xn, but also in the more general case when the sources are channels κX1|Y,…,κXn|Y.

The second property has not been previously considered in the literature, although it appears to be highly intuitive. Observe that the target random variable Y contains all possible information about itself. Thus, it may be expected that adding the target to the set of sources should not decrease the redundancy:

I∩(X1;…;Xn;Y → Y) = I∩(X1;…;Xn → Y). (36)

In Table 1, we term this property Target equality. Note that for redundancy measures which can be put in the form of Equation (7), Target equality is satisfied if the order ⊏ obeys Xi ⊏ Y for all sources Xi. (Note also that Target equality is unrelated to the previously proposed Strong symmetry property; for instance, it is easy to show that the redundancy measures IWB and IMMI satisfy Target equality, even though they violate Strong symmetry [68].)

6.2. Quantitative Comparison

We now illustrate our proposed measure of redundancy I∩ on some simple examples, and compare its behavior to existing redundancy measures.

The values of I∩ were computed with our code, provided at [64]. The values of all other redundancy measures except IGH were computed using the dit Python package [69]. To our knowledge, there have been no previous proposals for how to compute IGH. In fact, this measure involves maximizing a convex function subject to linear constraints, and it can be computed using methods similar to those used for I∩. We provide code for computing IGH at [64].

We begin by considering some simple bivariate examples. In all cases, the sources X1 and X2 are binary and uniformly distributed. The results are shown in Table 2.

Table 2.

Behavior of I and other redundancy measures on bivariate examples.

Target I∩ IWB IMMI I∧ IGH IInce IFL IBROJA IHarder Idep
Y=X1 AND X2 0.311 0.311 0.311 0 0.123 0.104 0.561 0.311 0.082
Y=X1+X2 0.5 0.5 0.5 0 0 0 0.5 0.5 0.189
Y=X1 I(X1; X2) I(X1; X2) I(X1; X2) C(X1 ∧ X2) I(X1; X2) * 1 I(X1; X2) I(X1; X2)
Y=(X1,X2) C(X1 ∧ X2) 1 1 C(X1 ∧ X2) C(X1 ∧ X2) * 1 I(X1; X2) I(X1; X2)
  1. The AND gate, Y=X1 AND X2, with X1 and X2 independent. (It is incorrectly stated in Refs. [18,49] that IGH vanishes here; in fact IGH(X1;X2 → X1 AND X2) ≈ 0.123, which corresponds to the maximum achieved in Equation (18) by Q = X1 OR X2.)

  2. The SUM gate: Y=X1+X2, with X1 and X2 independent.

  3. The UNQ gate: Y=X1. Here IInce (marked with ∗) gave values that increased with the amount of correlation between X1 and X2 but were typically larger than I(X1; X2).

  4. The COPY gate: Y=(X1,X2). Here, our redundancy measure is equal to the Gács-Körner common information between X1 and X2, as discussed in Section 5.6. The same holds for the redundancy measures IGH and I∧, which can be shown using a slight modification of the proof of Theorem 6. For this gate, IInce (marked with ∗) gave the same values as for the UNQ gate, which increased with the amount of correlation between X1 and X2 but were typically larger than I(X1;X2).
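As a sanity check on the AND-gate row of Table 2, the IMMI value of 0.311 bits (which for this gate coincides with IWB and with I∩) can be reproduced with a few lines of NumPy. This is our own illustration of the IMMI definition, not of the optimization used to compute I∩:

import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits from a joint distribution p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (px * py)[mask]))

# AND gate: X1, X2 independent uniform binary, Y = X1 AND X2.
p = np.zeros((2, 2, 2))   # indices (x1, x2, y)
for x1 in range(2):
    for x2 in range(2):
        p[x1, x2, x1 & x2] = 0.25

i1 = mutual_info(p.sum(axis=1))   # I(X1;Y)
i2 = mutual_info(p.sum(axis=0))   # I(X2;Y)
print(min(i1, i2))                # ~0.311 bits, matching the IMMI (and IWB) entry in Table 2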

We also analyze several examples with three sources, with the results shown in Table 3. We considered those previously proposed measures which can be applied to more than two sources (we do not show IGH, as our implementation was too slow for these examples).

Table 3.

Behavior of I and other redundancy measures on three sources.

Target I∩ IWB IMMI I∧ IInce IFL
Y=X1 AND X2 AND X3 0.138 0.138 0.138 0 0.024 0.294
Y=X1+X2+X3 0.311 0.311 0.311 0 0 0.561
Y=((A,B),(A,C),(A,D)) 1 2 2 1 1 2
  1. Three-way AND gate: Y=X1ANDX2ANDX3, where the sources are binary and uniformly and independently distributed.

  2. Three-way SUM gate: Y=X1+X2+X3, where the sources are binary and uniformly and independently distributed.

  3. “Overlap” gate: we defined four independent uniformly distributed binary random variables, A,B,C,D. These were grouped into three sources X1,X2,X3 as X1=(A,B), X2=(A,C), X3=(A,D). The target was the joint outcome of all three sources, Y=(X1,X2,X3)=((A,B),(A,C),(A,D)). Note that the three sources overlap on a single random variable A, which suggests that the redundancy should be 1 bit.

7. Discussion and Future Work

In this paper, we proposed a new general framework for defining the partial information decomposition (PID). Our framework was motivated in several ways, including a formal analogy with intersections and unions in set theory as well as an axiomatic derivation.

We also used our general framework to propose concrete measures of redundancy and union information, which have clear operational interpretations based on Blackwell’s theorem. Other PID measures, such as synergy and unique information, can be computed from our measures of redundancy and union information via simple expressions.

One unusual aspect of our framework is that it provides separate measures of redundancy and union information. As we discuss above, most prior work on the PID assumed that redundancy and union information are related to each other via the so-called “inclusion-exclusion” principle. We argue that the inclusion-exclusion principle should not be expected to hold in the context of the PID, and in fact that it leads to counterintuitive behavior once 3 or more sources are present. This suggests that different information decompositions should be derived for redundancy vs. union information. This idea is related to a recent proposal in the literature, which argues that two different PIDs are needed, one based on redundancy and one based on synergy [41]. An interesting direction for future work is to relate our framework with the dual decompositions proposed in [41].

From a practical standpoint, an important direction for future work is to develop better schemes for computing our redundancy measure. This measure is defined in terms of a convex maximization problem, which in principle can be NP-hard (a similar convex maximization problem was proven to be NP-hard in [70]). Our current implementation, which enumerates the vertices of the feasible set, works well for relatively small state spaces, but we do not expect it to scale to situations with many sources, or where the sources have large cardinalities. However, the problem of convex maximization with linear constraints is a very active area of optimization research, with many proposed algorithms [63,71,72]. Investigating these algorithms, as well as various approximation schemes such as relaxations and variational bounds, is of interest.

Finally, we showed how our framework can be used to define measures of redundancy and union information in situations that go beyond the standard setting of the PID (e.g., when the probability distribution of the target is not specified). Our framework can even be applied in domains beyond Shannon information theory, such as algorithmic information theory and quantum information theory. Future work may exploit this flexibility to explore various new applications of the PID.

Acknowledgments

We thank Paul Williams, Alexander Gates, Nihat Ay, Bernat Corominas-Murtra, Pradeep Banerjee, and especially Johannes Rauh for helpful discussions and suggestions. We also thank the Santa Fe Institute for helping to support this research.

Appendix A. PID Axioms

In developing the PID framework, Williams and Beer [11,12] proposed that any measure of redundancy should obey a set of axioms. In slightly modified form, these axioms can be written as follows:

  • Symmetry: I∩(X1;…;Xn → Y) is invariant to the permutation of X1,…,Xn.

  • Self-redundancy: I∩(X1 → Y) = I(Y;X1).

  • Monotonicity: I∩(X1;…;Xn → Y) ≤ I∩(X1;…;Xn−1 → Y).

  • Deterministic equality: I∩(X1;…;Xn → Y) = I∩(X1;…;Xn−1 → Y) if Xi = f(Xn) for some i < n and deterministic function f.

These axioms are based on intuitions regarding the behavior of intersection in set theory [12]. The Symmetry axiom is self-explanatory. Self-redundancy states that if only a single source is present, all of its information is redundant. Monotonicity states that redundancy should not increase when an additional source is considered (the size of a set intersection can only decrease as more sets are considered). Deterministic equality states that redundancy should remain the same when an additional source Xn is added that contains all (or more) of the information already contained in an existing source Xi (which is formalized as the condition Xi=f(Xn)).

Union information was considered in the original PID proposal [12,73], as well as in a more recent paper [17]. Ref. [17] proposed that any measure of union information should satisfy the following set of natural axioms, stated here in slightly modified form:

  • Symmetry: I∪(X1;…;Xn → Y) is invariant to the permutation of X1,…,Xn.

  • Self-union: I∪(X1 → Y) = I(Y;X1).

  • Monotonicity: I∪(X1;…;Xn → Y) ≥ I∪(X1;…;Xn−1 → Y).

  • Deterministic equality: I∪(X1;…;Xn → Y) = I∪(X1;…;Xn−1 → Y) if Xn = f(Xi) for some i < n and deterministic function f.

These axioms are based on intuitions concerning the behavior of the union operator in set theory, and are the natural “duals” of the redundancy axioms mentioned above.

Appendix B. Uniqueness Proofs

Proof of Theorem 1. 

Assume there is a redundancy measure, which we denote Ĩ∩, that obeys the five axioms stated in the theorem. We will show that Ĩ∩ = I∩, where I∩ is defined in Equation (7).

Given Equation (7) and the definition of the supremum, for any ϵ>0 there exists a random variable Q such that Q ⊏ Xi for all i∈{1,…,n} and

I(Q;Y) ≥ I∩(X1;…;Xn → Y) − ϵ. (A1)

By Order equality, Ĩ∩(Q;X1;…;Xk → Y) = Ĩ∩(Q;X1;…;Xk−1 → Y). Induction gives

Ĩ∩(Q;X1;…;Xn → Y) = Ĩ∩(Q → Y) = I(Q;Y) ≥ I∩(X1;…;Xn → Y) − ϵ,

where we used Self-redundancy and Equation (A1). We also have Ĩ∩(Q;X1;…;Xn → Y) ≤ Ĩ∩(X1;…;Xn → Y) by Symmetry and Monotonicity. Combining gives

I∩(X1;…;Xn → Y) − ϵ ≤ Ĩ∩(X1;…;Xn → Y).

We now show that I∩ is the largest measure that satisfies Existence. By Existence, let Q′ be a random variable that obeys Q′ ⊏ Xi for all i∈{1,…,n} and Ĩ∩(X1;…;Xn → Y) = I(Y;Q′). Since Q′ falls within the feasible set of the optimization problem in Equation (7),

Ĩ∩(X1;…;Xn → Y) = I(Q′;Y) ≤ I∩(X1;…;Xn → Y).

Combining gives

I∩(X1;…;Xn → Y) − ϵ ≤ Ĩ∩(X1;…;Xn → Y) ≤ I∩(X1;…;Xn → Y).

Since this holds for all ϵ>0, taking the limit ϵ→0 gives Ĩ∩ = I∩. □

Proof of Theorem 2. 

Assume there is a union information measure, which we denote Ĩ∪, that obeys the five axioms stated in the theorem. We will show that Ĩ∪ = I∪, where I∪ is defined in Equation (8).

Given Equation (8) and the definition of the infimum, for any ϵ>0 there exists a random variable Q such that Xi ⊏ Q for all i∈{1,…,n} and

I(Q;Y) ≤ I∪(X1;…;Xn → Y) + ϵ. (A2)

By Order equality, Ĩ∪(Q;X1;…;Xk → Y) = Ĩ∪(Q;X1;…;Xk−1 → Y). Induction gives

Ĩ∪(Q;X1;…;Xn → Y) = Ĩ∪(Q → Y) = I(Q;Y) ≤ I∪(X1;…;Xn → Y) + ϵ,

where we used Self-union and Equation (A2). We also have Ĩ∪(Q;X1;…;Xn → Y) ≥ Ĩ∪(X1;…;Xn → Y) by Symmetry and Monotonicity. Combining gives

Ĩ∪(X1;…;Xn → Y) ≤ I∪(X1;…;Xn → Y) + ϵ.

We now show that I∪ is the smallest measure that satisfies Existence. By Existence, let Q′ be a random variable that obeys Xi ⊏ Q′ for all i∈{1,…,n} and Ĩ∪(X1;…;Xn → Y) = I(Y;Q′). Since Q′ falls within the feasible set of the optimization problem in Equation (8),

Ĩ∪(X1;…;Xn → Y) = I(Q′;Y) ≥ I∪(X1;…;Xn → Y).

Combining gives

I∪(X1;…;Xn → Y) ≤ Ĩ∪(X1;…;Xn → Y) ≤ I∪(X1;…;Xn → Y) + ϵ.

Since this holds for all ϵ>0, taking the limit ϵ→0 gives Ĩ∪ = I∪. □

Appendix C. Computing I

Here we consider the optimization problem that defines our proposed measure of redundancy, Equation (27). We first prove a bound on the required cardinality of Q.

Theorem A1. 

For optimizing Equation (27), it suffices to consider Q with cardinality |Q| = Σi|Xi| − n + 1.

Proof. 

Consider any random variable Q with outcome set Q which satisfies Q ⪯Y Xi for all i. We show that whenever Q has full support on |Q| > Σi|Xi| − n + 1 outcomes, there is another random variable Q̃ which achieves I(Q̃;Y) ≥ I(Q;Y), while satisfying Q̃ ⪯Y Xi for all i and having support on at most Σi|Xi| − n + 1 outcomes.

To begin, let Ω indicate the set of random variables over outcomes Q, such that all Q˜Ω satisfy:

PY|Q˜(y|q)=PY|Q(y|q) for all y,q{qQ:PQ˜(q)>0} (A3)
qPQ˜(q)PXi|Q(xi|q)=PXi(xi)  for all i,xi. (A4)

Since QYXi, by Equation (23) there exist channels κQ|Xi(q|xi) that satisfy PQ|Y(q|y)=xiκQ|Xi(q|xi)PXi|Y(xi|y). Now write the conditional distribution over Q˜ and Y as

PQ˜Y(q,y)=PQ˜(q)PQ(q)PQY(q,y)=xiPQ˜(q)PQ(q)κQ|Xi(q|xi)PXi|Y(xi|y)PY(y)=xiκQ˜|Xi(q|xi)PXi|Y(xi|y)PY(y), (A5)

where we used Equation (A3) and defined the channel κQ˜|Xi as

κQ˜|Xi=PQ˜(q)PXi(xi)PXi(xi)PQ(q)κQ|Xi(q|xi),

(Note this is a kind of double Bayesian inverse, given Equation (A4)). Equation (A5) implies that Q˜YXi for all i.

We now show that there is Q˜Ω that achieves I(Q;Y)I(Q˜;Y) and has support on at most iXin+1 outcomes in Q. Write the mutual information between any Q˜Ω and Y as

I(Q˜;Y)=qPQ˜(q)DKL(PY|Q˜=qPY)=qPQ˜(q)DKL(PY|Q=qPY), (A6)

where DKL is the Kullback-Leibler divergence. We consider the maximum of this mutual information across Ω, I*=maxQ˜ΩI(Q˜;Y). Using Equations (A4) and (A6), this maximum can be written as

I* = maxω∈Δ Σq ω(q) DKL(PY|Q=q ‖ PY)  such that  ∀ i, xi: Σq ω(q) PXi|Q(xi|q) = PXi(xi),

where Δ is the set of all distributions over Q. By conservation of probability, Σxi PXi(xi) = 1, so we can eliminate the constraint for one outcome xi of each source i. Thus, I* is the maximum of a linear function over Δ, subject to Σi(|Xi|−1) = Σi|Xi| − n hyperplane constraints.

The feasible set is compact, and the maximum will be achieved at one of its extreme points. By Dubins' theorem [74], any extreme point of this feasible set can be expressed as a convex combination of at most Σi|Xi| − n + 1 extreme points of Δ. In other words, the maximum in Equation (27) is achieved by a random variable Q̃ with support on at most Σi|Xi| − n + 1 outcomes in Q. This random variable satisfies

I(Q̃;Y) = I* ≥ I(Q;Y),

where the last inequality comes from the fact that Q is an element of Ω. □

We now return to the optimization problem in Equation (27). Given Theorem A1 and the definition of the Blackwell order in Equation (23), it can be rewritten as

I∩(X1;…;Xn → Y) = maxκQ|Y, κQ|X1, …, κQ|Xn Iκ(Q;Y)  such that  ∀ i, q, y: Σxi κQ|Xi(q|xi) PXi|Y(xi|y) = κQ|Y(q|y), (A7)

where the optimization is over channels with Q of cardinality Σi|Xi| − n + 1. The notation Iκ(Q;Y) indicates the mutual information that arises from the marginal distribution PY and the conditional distribution κQ|Y,

Iκ(Q;Y) = Σq,y PY(y) κQ|Y(q|y) ln [ κQ|Y(q|y) / Σy′ κQ|Y(q|y′) PY(y′) ].

Equation (A7) involves maximizing a convex function over the convex polytope defined by the following system of linear (in)equalities:

Λ = {(κQ|Y, κQ|X1, …, κQ|Xn) :
∀ i, xi, q:  κQ|Xi(q|xi) ≥ 0, (A8)
∀ q, y:  κQ|Y(q|y) ≥ 0, (A9)
∀ y:  Σq κQ|Y(q|y) = 1, (A10)
∀ i, xi:  Σq κQ|Xi(q|xi) = 1, (A11)
∀ i, y, q ∈ Q∖{0}:  Σxi κQ|Xi(q|xi) PXiY(xi,y) − κQ|Y(q|y) PY(y) = 0 }, (A12)

We do not place a constraint on q=0 in Equation (A12) because that would be redundant with the constraints Equations (A10) and (A11). Also, note that we replaced the sup in Equation (27) with max in Equation (A7), which is justified since we are optimizing over a finite dimensional, closed, and bounded region (thus the supremum is always achieved).

The maximum of a convex function over a convex polytope is found at one of the vertices of the polytope. To find the solution to Equation (A7), we use a computational geometry package to enumerate the vertices of Λ. We evaluate Iκ(Q;Y) at each vertex and pick the maximum value. This procedure also finds the optimal conditional distributions κQ|Y, κQ|X1, …, κQ|Xn. Code is available at [64].
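For illustration, the following sketch shows the structure of this procedure: it evaluates the objective Iκ(Q;Y) at a list of candidate vertices and returns the largest value. The vertex enumeration itself (which in our implementation is delegated to a computational geometry package, e.g., a cdd-style library) is deliberately left abstract here; the `vertices` argument is simply assumed to contain channels κQ|Y extracted from the enumerated vertices of Λ.

import numpy as np

def mi_from_channel(p_y, k_q_given_y):
    """I_kappa(Q;Y) in bits, from the target marginal P_Y and a channel kappa_{Q|Y}.
    k_q_given_y[q, y] = kappa_{Q|Y}(q|y)."""
    p_qy = k_q_given_y * p_y[None, :]          # joint P(q, y)
    p_q = p_qy.sum(axis=1, keepdims=True)      # marginal P(q)
    mask = p_qy > 0
    return np.sum(p_qy[mask] * np.log2(p_qy[mask] / (p_q @ p_y[None, :])[mask]))

def redundancy_from_vertices(p_y, vertices):
    """Evaluate the objective at each candidate vertex and return the maximum.
    `vertices` must come from enumerating the polytope Lambda (not done here)."""
    return max(mi_from_channel(p_y, k) for k in vertices)

# Example: with the identity channel, I_kappa(Q;Y) reduces to H(Y).
p_y = np.array([0.5, 0.5])
print(mi_from_channel(p_y, np.eye(2)))   # 1.0 bit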

Appendix D. Continuity of I

To prove the continuity of I∩, we begin by considering the feasible set of the optimization problem in Equation (A7), as specified by the system of (in)equalities in Equations (A8) to (A12). For convenience, we write this system in matrix notation,

Λ = { κ ∈ ℝ^(|Q||Y| + Σi|Q||Xi|) : κ ≥ 0, Aκ = a }, (A13)

where κ is a vector representation of (κQ|Y,κQ|X1,,κQ|Xn), the matrix A encodes the left-hand side of Equations (A10) to (A12), and the vector a is filled with 1s and 0s, as appropriate.

We first prove the following lemma.

Lemma A1. 

The matrix A defined in Equation (A13) is full rank if n−1 or more of the pairwise conditional distributions have rank PY|Xi=|Y|.

Proof. 

Without loss of generality, assume that PY has full support (otherwise none of the pairwise marginals PXiY can achieve rank |Y|). Write A in block form as A = [B; C] (B stacked on top of C), where the matrix B has |Y| + Σi|Xi| rows and encodes the constraints of Equations (A10) and (A11), and the matrix C has n|Y|(|Q|−1) rows and encodes the constraints of Equation (A12).

Each row in B has a 1 in some column which is zero in every other row of B and in every row of C. This column corresponds either to κQ|Y(0|y) for a particular y (for constraints of the form of Equation (A10)), or to κQ|Xi(0|xi) for a particular i and xi (for constraints of the form of Equation (A11)). These columns are 0 in C because q=0 is omitted from Equation (A12). This means that no row of B is a linear combination of other rows in B or C, and that no row in C is a linear combination of any set of other rows that includes a row of B. Therefore, if the rows of A are linearly dependent, it must be that the rows of C are linearly dependent.

Next, let ci,y,q indicate the row of C that represents the constraints in Equation (A12) for some source i and outcomes y,q0. Any such row has a column for each xiXi with value PXiY(xi,y) (at the same index as the row in κ that represents κQ|Xi(q|xi)). Since PXiY(xi,y)>0 for at least one xiXi, one of these columns must be non-zero. At the same time, these columns are zero in every row cj,y,q where ji or qq. This means that row ci,y,q can only be a linear combination of other rows in C if, for all xi, PXiY(xi,y) is a linear combination of PXiY(xi,y) for yy. In linear algebra terms, this can be stated as rank PY|Xi<|Y|.

The previous argument shows that if A is linearly dependent, there must be at least one source i with rank PY|Xi<|Y| and some row ci,y,q which is a linear combination of other rows from C. Observe that this row ci,y,q has a column with value PY(y)>0 (at the same index as the row in κ that represents κQ|Y(q|y)). This column is zero in every other row ci,y,q for yy or qq. This means that ci,y,q is a linear combination of a set of other rows in C that include some row cj,y,q for ji. This implies that cj,y,q is also a linear combination of other rows in C, which means that rank PY|Xj<|Y|.

We have shown that if A is linearly dependent, there must be at least two pairwise conditionals with rank PY|Xi<|Y|. □

We are now ready to prove Theorem 5.

Proof of Theorem 5. 

For the case of a single source (n=1), I reduces to the mutual information I=I(Y;X1), which is continuous (Section 2.3, [75]). Thus, without loss of generality, we assume that n2.

Next, we define some notation. Note that the optimum value (I) and the feasible set of the optimization problem in Equation (A7) is a function of the pairwise marginal distributions PX1Y,,PXnY. We write Ω for the set of all pairwise marginal distributions which have the same marginal over Y:

Ω = { (qX1Y, …, qXnY) : Σxi qXiY(xi,y) = Σxj qXjY(xj,y)  ∀ i, j, y }.

For any rΩ, let I(r) indicate the corresponding optimum value in Equation (A7), given the marginals in r, and let Λ(r) indicate the feasible set of the optimization problem, as defined in Equation (A13).

Note that the matrix A in Equation (A13) depends on the choice of r, which we indicate by writing it as the matrix-valued function A(r). Given any r=(qX1Y,,qXnY)Ω and feasible solution κ=(κQ|Y,κQ|X1,,κQ|Xn)Λ(r), let I(r,κ) indicate the corresponding mutual information I(Q;Y), where the marginal distribution over Y is specified by r and the conditional distribution of Q given Y is specified by κQ|Y. Using this notation, I(r)=maxκΛ(r)I(r,κ).

Below, we show that I∩(r) is continuous if r is rank regular [76], which means that there is a neighborhood U⊆Ω of r such that rank A(r′) = rank A(r) for all r′∈U. Then, to prove the theorem, we assume that A(r) is full rank. Given Lemma A1, this is true as long as n−1 or more of the pairwise conditionals PY|Xi have rank PY|Xi=|Y|. Note that a matrix M is full rank iff its singular values σ(M) are all strictly positive. Since A(r) is full rank, and since A(·) and σ(·) are continuous, there is a neighborhood U of r such that the singular values σ(A(r′)) are all strictly positive for all r′∈U, and therefore all A(r′) have full rank. This shows that r is rank regular and so I∩ is continuous at r.

We now prove that I∩(r) is continuous if r is rank regular. To do so, we will use Hoffman's theorem [77,78]. In our case, it states that for any pair of marginals r, r′ ∈ Ω and any feasible solution κ ∈ Λ(r), there exists a feasible solution κ′ ∈ Λ(r′) such that

‖κ − κ′‖ ≤ α ‖A(r) − A(r′)‖, (A14)

where α is a constant that does not depend on r or κ. (In the notation of [78], we take G′=G, g′=g and d′=d, and use that the norm of s=κ is bounded, given that it is finite dimensional and has entries in [0,1].) We will also use Daniel's theorem (Theorem 4.2, [78]), which states that for any r, r′ ∈ Ω such that rank A(r) = rank A(r′), and any feasible solution κ ∈ Λ(r), there exists κ′ ∈ Λ(r′) such that

‖κ − κ′‖ ≤ β ‖A(r) − A(r′)‖, (A15)

where β is a constant that doesn’t depend on r (in the notation of [78], ε=A(r)A(ri) and again use that κ have a bounded norm).

Now consider also any sequence r1, r2, … ∈ Ω that converges to a marginal r ∈ Ω. Let κi ∈ Λ(ri) indicate an optimal solution of Equation (A7) for ri, so that I(ri,κi) = I∩(ri). Given Equation (A14), there is a corresponding sequence κ1′, κ2′, … ∈ Λ(r) such that

‖κi − κi′‖ ≤ α ‖A(r) − A(ri)‖.

Since A(·) is continuous and ri converges to r, we have limi A(ri) = A(r) and therefore limi ‖κi − κi′‖ = 0. This implies

0 = limi [ I(ri,κi) − I(r,κi′) ] ≥ limi [ I∩(ri) − I∩(r) ], (A16)

where we first used the continuity of mutual information, and then that I(ri,κi) = I∩(ri) and I(r,κi′) ≤ I∩(r).

Now assume that r is rank regular. Since ri converges to r, rank A(ri) = rank A(r) for all sufficiently large i. Let κ ∈ Λ(r) be an optimal solution of Equation (A7) for r, so that I(r,κ) = I∩(r). Given Equation (A15), for all sufficiently large i there exists κi′ ∈ Λ(ri) such that

‖κ − κi′‖ ≤ β ‖A(r) − A(ri)‖.

As before, we have limi A(ri) = A(r) and limi ‖κ − κi′‖ = 0, which implies

0 = limi [ I(ri,κi′) − I(r,κ) ] ≤ limi [ I∩(ri) − I∩(r) ], (A17)

where we first used the continuity of mutual information, and then that I(ri,κi′) ≤ I∩(ri) and I(r,κ) = I∩(r).

Combining Equations (A16) and (A17) proves continuity, limi I∩(ri) = I∩(r), under the assumption that r is rank regular.

Finally, note that A(r) is a real analytic function of r. This means that almost all r are rank regular, because those r which are not rank regular form a proper analytic subset of Ω (which has measure zero) [76]. Thus, I∩(r) is continuous almost everywhere. □

We finish our analysis of the continuity of I by showing global continuity when the target is a binary random variable.

Corollary A1. 

I(X1;;Xn  Y) is continuous everywhere when Y is a binary random variable.

Proof. 

In an overloading of notation, let I(r) and Ir(Xi;Y) indicate I(X1;;Xn  Y) and the mutual information I(Xi;Y), respectively, for the joint distribution rX1XnY. By Theorem 5, I can only be discontinuous at the joint distribution PX1XnY if there is a source Xi with rank PY|Xi=1<|Y|. However, if source Xi has rank 1, then the conditional distributions PY|Xi=xi are the same for all xi, so IP(Xi;Y)=0 and I(P)=0 (since 0I(P)IP(Xi;Y)). Finally, consider any sequence of joint distributions sX1XnY(n) for n=1,2, that converges to PX1XnY. We have

0limnI(s(n))limnIs(n)(Xi;Y)=IP(Xi;Y)=0,

where we used the continuity of mutual information. This shows that limnI(s(n))=0=I(P), proving continuity. □

Appendix E. Behavior of I on Gaussian Random Variables

Although in this paper we focused on random variables with finite sets of outcomes, we can briefly comment on the behavior of Blackwell redundancy for Gaussian random variables. Suppose that all sources X1,…,Xn and the target Y are continuous-valued, and that the pairwise marginals PXiY are multivariate Gaussians. In addition, suppose that Y is one-dimensional (the sources Xi can be multi-dimensional). Given these assumptions, Barrett [51] analyzed the IBROJA measure and showed that the corresponding excluded information obeys E(Xj → Y | Xi;Xj) = 0 whenever I(Xi;Y) ≤ I(Xj;Y). Recall that IBROJA is equivalent to the Blackwell union information I∪. Then, given the Blackwell property, Theorem 4, and the data processing inequality, Equation (24), the result in Ref. [51] implies that Xi ⪯Y Xj if and only if I(Xi;Y) ≤ I(Xj;Y). Thus, for Gaussian random variables, Blackwell redundancy I∩ is equivalent to the IMMI redundancy, as defined in Equation (19). This parallels the behavior of most other redundancy measures [51].
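Since Blackwell redundancy reduces to IMMI in this Gaussian setting, it can be computed in closed form from the covariance matrix. A minimal sketch (our own illustration, with an arbitrary example covariance that is not taken from Ref. [51]):

import numpy as np

def gaussian_mi(cov, idx_x, idx_y):
    """Mutual information (in bits) between jointly Gaussian blocks X and Y:
    I(X;Y) = 1/2 log2( det(C_X) det(C_Y) / det(C_XY) )."""
    cx = cov[np.ix_(idx_x, idx_x)]
    cy = cov[np.ix_(idx_y, idx_y)]
    cxy = cov[np.ix_(idx_x + idx_y, idx_x + idx_y)]
    return 0.5 * np.log2(np.linalg.det(cx) * np.linalg.det(cy) / np.linalg.det(cxy))

# Two scalar sources and a scalar target (order of variables: X1, X2, Y).
cov = np.array([[1.0, 0.3, 0.6],
                [0.3, 1.0, 0.4],
                [0.6, 0.4, 1.0]])
i1 = gaussian_mi(cov, [0], [2])
i2 = gaussian_mi(cov, [1], [2])
print(min(i1, i2))   # I_MMI = min_i I(Xi;Y), which equals Blackwell redundancy here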

Appendix F. Operational Interpretation of the IGH

As mentioned in the main text, the redundancy measure IGH is a special case of Equation (7), where the “more informative” order B ⊏ C is defined in terms of the conditional independence (Markov) condition B − C − Y. Here we show that this ordering relation can be given an operational interpretation, which is similar to, but distinct from, the operational interpretation of the Blackwell order ⪯Y discussed in Section 5.1.

To introduce this operational interpretation, let the random variable Y represent the state of the environment, and assume there are two random variables B and C which carry some information about Y. Suppose that an agent tries to maximize an expected utility u(a,y) by using a strategy that depends either on the outcome of B or on the outcome of C. Blackwell's theorem tells us that B ⪯Y C iff an agent with access to C can always achieve an expected utility at least as high as an agent with access to B. It is possible, however, that the agent with access to C does worse than the agent with access to B, conditional on the event that the random variable C has some particular outcome c. In the following theorem, we show that the Markov condition B − C − Y holds iff the agent cannot do better with B than with C, even when conditioning on any particular outcome C=c. (We thank Johannes Rauh for suggesting this simplified proof.)

Theorem A2. 

Given random variables B, C, and Y, the Markov condition B − C − Y holds if and only if

maxκA|B Σy,a,b PYB|C(y,b|c) κA|B(a|b) u(a,y) ≤ maxκA|C Σy,a PY|C(y|c) κA|C(a|c) u(a,y). (A18)

for all utility functions u(a,y) and all cC with PC(c)>0.

Proof. 

Consider any cC with PC(c)>0. By multiplying both sides of Equation (A18) by PC(c) and rearranging, this inequality can be rewritten as

maxκA|By,a,bPY(y)PC|Y(c|y)PB|Y,C(b|y,c)κA|B(a|b)u(a,y)  maxκA|Cy,aPY(y)PC|Y(c|y)κA|C(a|c)u(a,y). (A19)

Note that if Equation (A18) holds for a given c and all utility functions, then it must also hold for the utility function u(a,y):=PC|Y(c|y)u(a,y). Plugging into Equation (A19) gives

maxκA|By,a,bPY(y)PB|Y,C(b|y,c)κA|B(a|b)u(a,y)maxκA|Cy,aPY(y)κA|C(a|c)u(a,y). (A20)

Now define two random variables: a constant random variable C^c with a single outcome c and B^c with the same outcomes as B but having the conditional distribution PB^c|Y=PB|Y,C=c. Then, Equation (A20) can be written in terms of these random variables as

maxκA|By,a,bPY(y)PB^c|Y(b|y)κA|B(a|b)u(a,y)maxκA|Cy,aPY(y)PC^c|Y(c|y)κA|C(a|c)u(a,y). (A21)

Given Equations (25) and (26), Equation (A21) holds for all u iff B^cYC^c. Since C^c has a single outcome, it is independent of Y. That means B^c must be also independent of Y and so PB^c|Y=y=PB|Y=y,C=c is the same for all y, implying that PB|Y=y,C=c=PB|C=c. Since this holds for all cC, PB|YC=PB|C and therefore BCY. □

Given Theorem A2, IGH can be given the following operational interpretation. Imagine two agents, Alice and Bob, who can acquire information about Y via different random variables, and then use this information to maximize their expected utility. Suppose that Alice has access to one of the sources Xi. Then, IGH is the maximum information that Bob can have about Y without being able to do better than Alice on any utility function, regardless of which source Xi Alice has access to, and even when conditioned on Xi having any particular outcome xi.
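The expected-utility maximization that underlies Blackwell's theorem, and hence Theorem A2, is simple to evaluate for finite state spaces: the optimum is attained by a deterministic strategy that, for each observed outcome, picks the action with the highest posterior expected utility. A minimal sketch (our own illustration; the utility function and channels below are made up for the example):

import numpy as np

def max_expected_utility(p_y, p_b_given_y, u):
    """max over strategies kappa_{A|B} of E[u(A,Y)], with u[a, y] the utility matrix.
    The optimum is deterministic: for each b, pick the action maximizing sum_y P(y,b) u(a,y)."""
    p_yb = p_y[:, None] * p_b_given_y          # P(y, b)
    scores = u @ p_yb                          # scores[a, b] = sum_y u(a, y) P(y, b)
    return scores.max(axis=0).sum()

# An agent observing Y directly always does at least as well as one observing
# a garbled version of Y (here, Y passed through a binary symmetric channel).
p_y = np.array([0.5, 0.5])
u = np.array([[1.0, 0.0], [0.0, 1.0]])         # reward for guessing Y correctly
ident = np.eye(2)
noisy = np.array([[0.9, 0.1], [0.1, 0.9]])
print(max_expected_utility(p_y, ident, u))     # 1.0
print(max_expected_utility(p_y, noisy, u))     # 0.9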

Appendix G. Equivalence of I and IBROJA

The following proves that I and IBROJA, as defined via the optimization problems in Equations (28) and (31), are equivalent.

Theorem A3. 

I∪(X1;…;Xn → Y) = IBROJA(X1;…;Xn → Y).

Proof. 

Let X̃1,…,X̃n be a set of random variables that achieve I(Y;X̃1,…,X̃n) = IBROJA(X1;…;Xn → Y). Define the random variable Q := (X̃1,…,X̃n), and note that X̃i ⪯Y Q for all i. Since PX̃iY = PXiY for all i, it must be that Xi ⪯Y Q for all i. Thus Q satisfies the constraints of the optimization problem in Equation (28), so

I∪(X1;…;Xn → Y) ≤ I(Y;Q) = IBROJA(X1;…;Xn → Y). (A22)

Next, consider the optimization in Equation (28). For any ϵ>0, let Q be a random variable that satisfies Xi ⪯Y Q for all i and achieves

I(Y;Q) ≤ I∪(X1;…;Xn → Y) + ϵ. (A23)

For each i, let κXi|Q be a channel that obeys PXi|Y(xi|y)=qκXi|Q(xi|q)PQ|Y(q|y) (such a channel must exist since XiYQ). Define the random variables X˜1,,X˜n with the joint distribution

PYQX˜1X˜n(y,q,x1,,xn)=PY(y)PQ|Y(q|y)iκXi|Q(xi|q). (A24)

Note that the pairwise marginals obey PX˜iY=PXiY. Thus, all of the X˜i satisfy the marginal constraints in the right hand side of Equation (31), so

IBROJA(X1;…;Xn → Y) ≤ I(Y;X̃1,…,X̃n). (A25)

By elementary properties of mutual information, we have

I(Y;X̃1,…,X̃n) ≤ I(Y;Q,X̃1,…,X̃n). (A26)

Given Equation (A24), the Markov condition Y − Q − (X̃1,…,X̃n) holds, so

I(Y;X̃1,…,X̃n) ≤ I(Y;Q) (A27)

by the data processing inequality. Combining Equations (A23) and (A25) to (A27) implies

IBROJA(X1;…;Xn → Y) ≤ I(Y;Q) ≤ I∪(X1;…;Xn → Y) + ϵ.

Since this holds for all ϵ>0, we can take the limit ϵ→0 to give IBROJA(X1;…;Xn → Y) ≤ I∪(X1;…;Xn → Y). The result follows by combining with Equation (A22). □

Appendix H. Relation between IWB and Our General Framework

Here we consider whether the redundancy measure IWB proposed by Williams and Beer [11] can be put in the general form Equation (7). This measure is defined as

IWB(X1;…;Xn → Y) := Σy PY(y) mini I(Xi;Y=y), (A28)

where I(Xi;Y=y) is called the “specific information” between Xi and target outcome y,

I(Xi;Y=y) := DKL(PXi|Y=y ‖ PXi) = Σxi PXi|Y(xi|y) log [ PXi|Y(xi|y) / PXi(xi) ],

and DKL is Kullback-Leibler (KL) divergence.

Specific information obeys I(X;Y) = Σy PY(y) I(X;Y=y). Thus, Equation (A28) looks similar to a mutual information expression, where each specific information term is given by the smallest specific information that y carries about any of the sources. Motivated by this interpretation, one might ask whether there exists a random variable Q whose specific information terms are equal to I(Q;Y=y) = mini I(Xi;Y=y) for each y. If such a random variable existed, then IWB could be written as

IWB(X1;…;Xn → Y) =? maxQ I(Q;Y)  such that  ∀ i, y: I(Q;Y=y) ≤ I(Xi;Y=y), (A29)

which has the form of Equation (7), with the ⊏ order defined as

A ⊏ B iff I(A;Y=y) ≤ I(B;Y=y) for all y ∈ Y. (A30)

Here we provide a counterexample to demonstrate that such a variable does not exist in general, and therefore Equation (A29) is not generally valid. Suppose Y has three outcomes Y={0,1,2} with a uniform distribution, and consider two binary sources X1, X2 with the following conditional distributions,

PX1|Y(x1|y) = δ(x1,y) if y ∈ {0,1},  PX1|Y(x1|2) = ½δ(x1,0) + ½δ(x1,1);
PX2|Y(x2|y) = δ(x2,y) if y ∈ {0,2},  PX2|Y(x2|1) = ½δ(x2,0) + ½δ(x2,2).

In this case, a simple calculation shows that the specific information obeys (in bits)

I(X1;Y=0)=1, I(X2;Y=0)=1, I(X1;Y=1)=1, I(X2;Y=1)=0, I(X1;Y=2)=0, I(X2;Y=2)=1.

Plugging into Equation (A28) gives IWB(X1;X2 → Y) = 1/3.

Now consider the optimization problem in Equation (A29). Since I(X1;Y=2)=I(X2;Y=1)=0, any allowed Q must satisfy I(Q;Y=1)=I(Q;Y=2)=0 and therefore PQ|Y=1=PQ=PQ|Y=2. Combined with the marginalization identity PY(0)PQ|Y=0+PY(1)PQ|Y=1+PY(2)PQ|Y=2=PQ, this implies that PQ|Y=0=PQ and therefore that I(Q;Y=0)=0. Thus, any allowed Q obeys I(Q;Y)=0<IWB. This means that IWB cannot be expressed in the form of Equation (7) when ⊏ is defined as in Equation (A30).
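The specific-information values listed above, and the resulting IWB = 1/3 bit, can be verified numerically; a minimal sketch (our own illustration):

import numpy as np

def specific_info(p_x_given_y, p_x):
    """Specific information I(X;Y=y) = D_KL( P_{X|Y=y} || P_X ), in bits."""
    mask = p_x_given_y > 0
    return np.sum(p_x_given_y[mask] * np.log2(p_x_given_y[mask] / p_x[mask]))

p_y = np.array([1/3, 1/3, 1/3])
# Rows indexed by y, columns by source outcome.
p_x1_given_y = np.array([[1, 0], [0, 1], [0.5, 0.5]])   # X1 with outcomes {0, 1}
p_x2_given_y = np.array([[1, 0], [0.5, 0.5], [0, 1]])   # X2 with outcomes {0, 2}
p_x1 = p_y @ p_x1_given_y
p_x2 = p_y @ p_x2_given_y

i_wb = sum(p_y[y] * min(specific_info(p_x1_given_y[y], p_x1),
                        specific_info(p_x2_given_y[y], p_x2))
           for y in range(3))
print(i_wb)   # 1/3 bit, as stated in the text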

Appendix I. Miscellaneous Derivations

Proof of Lemma 1. 

We use a modified version of the example in [39,68]. Consider a set of n3 sources. The inclusion-exclusion principle states that

I∪(X1;…;Xn → Y) = ΣJ⊆{1,…,n}, J≠∅ (−1)^(|J|−1) I∩(XJ1;…;XJ|J| → Y). (A31)

Now, let X1,…,Xn−1 be uniformly distributed and statistically independent binary random variables, and take Xn = X1 XOR X2 and Y=(X1,X2,Xn). Note that I(Y;Xi)=1 bit for i∈{1,2,n} and I(Y;Xi)=0 for i∈{3,…,n−1}, and that I(Y;X1,…,Xn)=2 bits. Thus, I∩(Xi;Xj → Y)=0 whenever i∈{3,…,n−1} or j∈{3,…,n−1}, as follows from Symmetry, Self-redundancy, and Monotonicity. Note also that the outcomes of Y are simply a relabelling of (X1,X2), and similarly of (X1,Xn) and of (X2,Xn). Then, by the Independent identity property, I∩(Xi;Xj → Y)=0 for i≠j with i,j∈{1,2,n}. Thus, I∩(Xi;Xj → Y)=0 for all pairs i≠j. By Monotonicity, redundancy is 0 for any set of 2 or more sources.

Plugging this into Equation (A31) gives

I∪(X1;…;Xn → Y) = Σi I∩(Xi → Y) = Σi I(Y;Xi) = 3 bits.

Note that this exceeds the total amount of information about the target provided jointly by all sources, which is only 2 bits, so I∪(X1;…;Xn → Y) ≰ I(Y;X1,…,Xn). □
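The mutual-information values used in this argument are easy to verify numerically; a minimal sketch for n = 3 (our own illustration):

import numpy as np
from collections import Counter
from itertools import product

def H(samples):
    """Entropy in bits of the (here uniform) distribution over `samples`."""
    c = Counter(samples)
    p = np.array(list(c.values()), dtype=float) / len(samples)
    return -np.sum(p * np.log2(p))

# n = 3: X1, X2 independent uniform binary, X3 = X1 XOR X2, Y = (X1, X2, X3).
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in product(range(2), repeat=2)]  # each w.p. 1/4
y = [tuple(o) for o in outcomes]
for i in range(3):
    xi = [o[i] for o in outcomes]
    print(f"I(Y;X{i+1}) =", H(xi) + H(y) - H(list(zip(xi, y))), "bits")   # 1 bit each
print("I(Y;X1,X2,X3) =", H(y), "bits")                                    # 2 bits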

Proof of Theorem 3. 

Without loss of generality, let i=1. We will use that U(X1 → Y | X1;…;Xn) = 0 is equivalent to

I∩(X1;…;Xn → Y) = I(X1;Y). (A32)

We will also use that, by the monotonicity of mutual information with respect to the Blackwell order ⪯Y (see Section 4.1),

I(X1;Y) ≥ I∩(X1;…;Xn → Y). (A33)

We first prove the “if” direction. Since Q=X1 is in the feasible set of Equation (27), I(X1;;Xn  Y)I(X1;Y). Combining with Equation (A33) gives Equation (A32).

We now prove the “only if” direction. As described in Appendix C, I can be expressed as an optimization over a finite dimensional, closed, and bounded region, so the supremum in Equation (27) is achieved. Thus, there is some Q such that QYXi for all i and

I(Y;Q)=I(X1;;Xn  Y). (A34)

Since QYX1, there is a conditional probability distribution κQ|X1 such that PQ|Y(q|y)=x1κQ|X1(q|x1)PX1|Y(x1|y). Define a random variable Q˜ with the joint distribution

PYX1Q˜(y,x1,q)=κQ|X1(q|x1)PYX1(y,x1).

We will use that PQY=PQ˜Y. Then, the chain rule for mutual information gives

I(Y;X1,Q˜)=I(Y;Q˜)+I(Y;X1|Q˜)=I(Y;X1)+I(Y;Q˜|X1)=I(Y;X1),

where we used the Markov condition YX1Q˜. Combining and rearranging gives

I(Y;Q˜)=I(Y;X1)I(Y;X1|Q˜). (A35)

Now assume that Equation (A32) holds. Combining with Equation (A34) and PQY=PQ˜Y gives I(Y;X1)=I(Y;Q)=I(Y;Q˜). Combining with Equation (A35) gives I(Y;X1|Q˜)=0, meaning that the Markov condition YQ˜X1 holds and therefore X1YQ˜. Since QYXi for all i and PQY=PQ˜Y, it also the case that Q˜YXi for all i. Finally, since Y is transitive, X1YXi for all i, which is the desired result. □

Proof of Theorem 4. 

Without loss of generality, let i=1. We will use that E(X1 → Y | X1;…;Xn) = 0 is equivalent to

I∪(X1;…;Xn → Y) = I(X1;Y). (A36)

We will also use that, by the monotonicity of mutual information with respect to the Blackwell order ⪯Y (see Section 4.1),

I(X1;Y) ≤ I∪(X1;…;Xn → Y). (A37)

We first prove the “if” direction. Since Q=X1 is in the feasible set of Equation (28), I(X1;;Xn  Y)I(X1;Y). Combining with Equation (A37) gives Equation (A36).

We now prove the “only if” direction. As we show in Appendix G, I is equivalent to IBROJA, which is defined as an optimization over a finite dimensional, closed, and bounded region. Thus the infimum in Equation (28) is always achieved, so there is some Q such that XiYQ for all i and

I(Y;Q)=I(X1;;Xn  Y). (A38)

Moreover, since X1YQ, there is a conditional probability distribution κX1|Q such that PX1|Y(x1|y)=qκX1|Q(x1|q)PQ|Y(q|y). Define a random variable X˜1 with the joint distribution

PYX˜1Q(y,x1,q)=κX1|Q(x1|q)PQY(q,y).

We will use that PX1Y=PX˜1Y. Then, using the chain rule for mutual information,

I(Y;X˜1,Q)=I(Y;X˜1)+I(Y;Q|X˜1)=I(Y;Q)+I(Y;X˜1|Q)=I(Y;Q),

where we used the Markov condition YQX˜1. Combining and rearranging gives

I(Y;X˜1)=I(Y;Q)I(Y;Q|X˜1). (A39)

Now assume that Equation (A36) holds. Combining with Equation (A38) and PX1Y=PX˜1Y gives I(Y;X1)=I(Y;X˜1)=I(Y;Q). Combining with Equation (A39) gives I(Y;Q|X˜1)=0, meaning that the Markov condition YX˜1Q holds and therefore QYX˜1. Since PX1Y=PX˜1Y, it is also the case that QYX1. Finally, since XiYQ for all i and Y is transitive, XiYX1 for all i, which is the desired result. □

Proof of Theorem 6. 

Consider any random variable Q which achieves the maximum in Equation (27). This implies there are channels κQ|X1 and κQ|X2 such that for any qQ and (x1,x2)X1×X2 with pX1X2(x1,x2)>0,

PQ|X1X2(q|x1,x2) = Σx1′ κQ|X1(q|x1′) PX1|X1X2(x1′|x1,x2),
PQ|X1X2(q|x1,x2) = Σx2′ κQ|X2(q|x2′) PX2|X1X2(x2′|x1,x2).

We now equate the above two expressions, while using that PX1|X1X2(x1′|x1,x2)=δ(x1′,x1) and PX2|X1X2(x2′|x1,x2)=δ(x2′,x2) (where δ(·,·) is the Kronecker delta). This gives

κQ|X1(q|x1)=κQ|X2(q|x2) (A40)

for all q and any (x1,x2) where pX1X2(x1,x2)>0.

Now consider a bipartite graph with vertex set X1 ∪ X2 and an edge between vertex x1 and vertex x2 whenever PX1X2(x1,x2)>0. Define Π to be the set of connected components of this bipartite graph, and let f1:X1→Π be the function that maps each x1 to its connected component (for any x1 with PX1(x1)=0, f1(x1) can be chosen arbitrarily). Equation (A40) implies that if x1 and x1′ belong to the same connected component, then the constraint in Equation (A40) “propagates” from x1 to x1′, so that κQ|X1(q|x1)=κQ|X1(q|x1′). In other words, κQ|X1(q|x1) depends on x1 only through f1(x1), and the Markov condition (X1,X2) − X1 − f1(X1) − Q holds. This gives

I(X1,X2;Q) ≤ I(X1,X2;f1(X1)) = H(f1(X1)), (A41)

where the first inequality uses the data processing inequality, and the second equality uses that f1(X1) is a deterministic function of (X1,X2). The upper bound in Equation (A41) is achieved when Q=f1(X1); thus Q ⊲ X1. A similar argument shows that Q ⊲ X2.

We have shown that the constraints in Equation (27) can be replaced by Q ⊲ X1, Q ⊲ X2. It is also clear that any Q which is a deterministic function of either X1 or X2 must also be a deterministic function of the target Y=(X1,X2), hence I(Y;Q)=H(Q). Combining these results shows that Equation (27) is equivalent to Equation (32) for the COPY gate. □

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Schneidman E., Bialek W., Berry M.J. Synergy, Redundancy, and Independence in Population Codes. J. Neurosci. 2003;23:11539–11553. doi: 10.1523/JNEUROSCI.23-37-11539.2003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Daniels B.C., Ellison C.J., Krakauer D.C., Flack J.C. Quantifying collectivity. Curr. Opin. Neurobiol. 2016;37:106–113. doi: 10.1016/j.conb.2016.01.012. [DOI] [PubMed] [Google Scholar]
  • 3.Tax T., Mediano P., Shanahan M. The partial information decomposition of generative neural network models. Entropy. 2017;19:474. doi: 10.3390/e19090474. [DOI] [Google Scholar]
  • 4.Amjad R.A., Liu K., Geiger B.C. Understanding individual neuron importance using information theory. arXiv. 2018 doi: 10.1109/TNNLS.2021.3088685.1804.06679 [DOI] [PubMed] [Google Scholar]
  • 5.Lizier J., Bertschinger N., Jost J., Wibral M. Information decomposition of target effects from multi-source interactions: Perspectives on previous, current and future work. Entropy. 2018;20:307. doi: 10.3390/e20040307. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Wibral M., Priesemann V., Kay J.W., Lizier J.T., Phillips W.A. Partial information decomposition as a unified approach to the specification of neural goal functions. Brain Cogn. 2017;112:25–38. doi: 10.1016/j.bandc.2015.09.004. [DOI] [PubMed] [Google Scholar]
  • 7.Timme N., Alford W., Flecker B., Beggs J.M. Synergy, redundancy, and multivariate information measures: An experimentalist’s perspective. J. Comput. Neurosci. 2014;36:119–140. doi: 10.1007/s10827-013-0458-4. [DOI] [PubMed] [Google Scholar]
  • 8.Chan C., Al-Bashabsheh A., Ebrahimi J.B., Kaced T., Liu T. Multivariate Mutual Information Inspired by Secret-Key Agreement. Proc. IEEE. 2015;103:1883–1913. doi: 10.1109/JPROC.2015.2458316. [DOI] [Google Scholar]
  • 9.Rosas F.E., Mediano P.A., Jensen H.J., Seth A.K., Barrett A.B., Carhart-Harris R.L., Bor D. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS Comput. Biol. 2020;16:e1008289. doi: 10.1371/journal.pcbi.1008289. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Cang Z., Nie Q. Inferring spatial and signaling relationships between cells from single cell transcriptomic data. Nat. Commun. 2020;11:2084. doi: 10.1038/s41467-020-15968-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Williams P.L., Beer R.D. Nonnegative decomposition of multivariate information. arXiv. 20101004.2515 [Google Scholar]
  • 12.Williams P.L. Ph.D. Thesis. Indiana University; Bloomington, IN, USA: 2011. Information dynamics: Its theory and application to embodied cognitive systems. [Google Scholar]
  • 13.Bertschinger N., Rauh J., Olbrich E., Jost J., Ay N. Quantifying unique information. Entropy. 2014;16:2161–2183. doi: 10.3390/e16042161. [DOI] [Google Scholar]
  • 14.Quax R., Har-Shemesh O., Sloot P. Quantifying synergistic information using intermediate stochastic variables. Entropy. 2017;19:85. doi: 10.3390/e19020085. [DOI] [Google Scholar]
  • 15.James R.G., Emenheiser J., Crutchfield J.P. Unique information via dependency constraints. J. Phys. Math. Theor. 2018;52:014002. doi: 10.1088/1751-8121/aaed53. [DOI] [Google Scholar]
  • 16.Griffith V., Chong E.K., James R.G., Ellison C.J., Crutchfield J.P. Intersection information based on common randomness. Entropy. 2014;16:1985–2000. doi: 10.3390/e16041985. [DOI] [Google Scholar]
  • 17.Griffith V., Koch C. Guided Self-Organization: Inception. Springer; Berlin/Heidelberg, Germany: 2014. Quantifying synergistic mutual information; pp. 159–190. [Google Scholar]
  • 18.Griffith V., Ho T. Quantifying redundant information in predicting a target random variable. Entropy. 2015;17:4644–4653. doi: 10.3390/e17074644. [DOI] [Google Scholar]
  • 19.Harder M., Salge C., Polani D. Bivariate measure of redundant information. Phys. Rev. 2013;87:012130. doi: 10.1103/PhysRevE.87.012130. [DOI] [PubMed] [Google Scholar]
  • 20.Ince R. Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal. Entropy. 2017;19:318. doi: 10.3390/e19070318. [DOI] [Google Scholar]
  • 21.Finn C., Lizier J. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices. Entropy. 2018;20:297. doi: 10.3390/e20040297. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.Shannon C. The lattice theory of information. Trans. Ire Prof. Group Inf. Theory. 1953;1:105–107. doi: 10.1109/TIT.1953.1188572. [DOI] [Google Scholar]
  • 23.Shannon C.E. A note on a partial ordering for communication channels. Inf. Control. 1958;1:390–397. doi: 10.1016/S0019-9958(58)90239-0. [DOI] [Google Scholar]
  • 24.Cohen J., Kempermann J.H., Zbaganu G. Comparisons of Stochastic Matrices with Applications in Information Theory, Statistics, Economics and Population. Springer Science & Business Media; Berlin/Heidelberg, Germany: 1998. [Google Scholar]
  • 25.Le Cam L. Sufficiency and approximate sufficiency. Ann. Math. Stat. 1964;35:1419–1455. doi: 10.1214/aoms/1177700372. [DOI] [Google Scholar]
  • 26.Korner J., Marton K. Comparison of two noisy channels. Top. Inf. Theory. 1977;16:411–423. [Google Scholar]
  • 27.Torgersen E. Comparison of Statistical Experiments. Volume 36 Cambridge University Press; Cambridge, UK: 1991. [Google Scholar]
  • 28.Blackwell D. Equivalent comparisons of experiments. Ann. Math. Stat. 1953;24:265–272. doi: 10.1214/aoms/1177729032. [DOI] [Google Scholar]
  • 29.James R., Emenheiser J., Crutchfield J. Unique information and secret key agreement. Entropy. 2019;21:12. doi: 10.3390/e21010012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Whitelaw T.A. Introduction to Abstract Algebra. 2nd ed. Blackie & Son; London, UK: 1988. OCLC: 17440604. [Google Scholar]
  • 31.Halmos P.R. Naive Set Theory. Courier Dover Publications; Mineola, NY, USA: 2017. [Google Scholar]
  • 32.McGill W. Multivariate information transmission. Trans. Ire Prof. Group Inf. Theory. 1954;4:93–111. doi: 10.1109/TIT.1954.1057469. [DOI] [Google Scholar]
  • 33.Fano R.M. The Transmission of Information: A Statistical Theory of Communications. Massachusetts Institute of Technology; Cambridge, MA, USA: 1961. [Google Scholar]
  • 34.Reza F.M. An Introduction to Information Theory. Dover Publications, Inc.; Mineola, NY, USA: 1961. [Google Scholar]
  • 35.Ting H.K. On the amount of information. Theory Probab. Its Appl. 1962;7:439–447. doi: 10.1137/1107041. [DOI] [Google Scholar]
  • 36.Yeung R.W. A new outlook on Shannon’s information measures. IEEE Trans. Inf. Theory. 1991;37:466–474. doi: 10.1109/18.79902. [DOI] [Google Scholar]
  • 37.Bell A.J. The co-information lattice; Proceedings of the Fifth International Workshop on Independent Component Analysis and Blind Signal Separation: ICA; Nara, Japan. 1–4 April 2003. [Google Scholar]
  • 38.Tilman Examples of Common False Beliefs in Mathematics (Dimensions of Vector Spaces). MathOverflow. 2010. [(accessed on 4 January 2022)]. Available online: https://mathoverflow.net/q/23501.
  • 39.Rauh J., Bertschinger N., Olbrich E., Jost J. Reconsidering unique information: Towards a multivariate information decomposition; Proceedings of the 2014 IEEE International Symposium on Information Theory; Honolulu, HI, USA. 29 June–4 July 2014; pp. 2232–2236. [Google Scholar]
  • 40.Rauh J. Secret Sharing and Shared Information. Entropy. 2017;19:601. doi: 10.3390/e19110601. [DOI] [Google Scholar]
  • 41.Chicharro D., Panzeri S. Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss. Entropy. 2017;19:71. doi: 10.3390/e19020071. [DOI] [Google Scholar]
  • 42.Ay N., Polani D., Virgo N. Information decomposition based on cooperative game theory. arXiv. 2019 doi: 10.14736/kyb-2020-5-0979.1910.05979 [DOI] [Google Scholar]
  • 43.Rosas F.E., Mediano P.A., Rassouli B., Barrett A.B. An operational information decomposition via synergistic disclosure. J. Phys. A Math. Theor. 2020;53:485001. doi: 10.1088/1751-8121/abb723. [DOI] [Google Scholar]
  • 44.Davey B.A., Priestley H.A. Introduction to Lattices and Order. Cambridge University Press; Cambridge, UK: 2002. [Google Scholar]
  • 45.Bertschinger N., Rauh J. The Blackwell relation defines no lattice; Proceedings of the 2014 IEEE International Symposium on Information Theory; Honolulu, HI, USA. 29 June–4 July 2014; Piscataway, NJ, USA: IEEE; 2014. pp. 2479–2483. [Google Scholar]
  • 46.Li H., Chong E.K. On a connection between information and group lattices. Entropy. 2011;13:683–708. doi: 10.3390/e13030683. [DOI] [Google Scholar]
  • 47.Gács P., Körner J. Common information is far less than mutual information. Probl. Control Inf. Theory. 1973;2:149–162. [Google Scholar]
  • 48.Aumann R.J. Agreeing to disagree. Ann. Stat. 1976;4:1236–1239. doi: 10.1214/aos/1176343654. [DOI] [Google Scholar]
  • 49.Banerjee P.K., Griffith V. Synergy, Redundancy and Common Information. arXiv. 20151509.03706v1 [Google Scholar]
  • 50.Hexner G., Ho Y. Information structure: Common and private (Corresp.) IEEE Trans. Inf. Theory. 1977;23:390–393. doi: 10.1109/TIT.1977.1055722. [DOI] [Google Scholar]
  • 51.Barrett A.B. Exploration of synergistic and redundant information sharing in static and dynamical Gaussian systems. Phys. Rev. E. 2015;91:052802. doi: 10.1103/PhysRevE.91.052802. [DOI] [PubMed] [Google Scholar]
  • 52.Pluim J.P., Maintz J.A., Viergever M.A. F-information measures in medical image registration. IEEE Trans. Med. Imaging. 2004;23:1508–1516. doi: 10.1109/TMI.2004.836872. [DOI] [PubMed] [Google Scholar]
  • 53.Banerjee A., Merugu S., Dhillon I.S., Ghosh J., Lafferty J. Clustering with Bregman divergences. J. Mach. Learn. Res. 2005;6:1705–1749. [Google Scholar]
  • 54.Brunel N., Nadal J.P. Mutual information, Fisher information, and population coding. Neural Comput. 1998;10:1731–1757. doi: 10.1162/089976698300017115. [DOI] [PubMed] [Google Scholar]
  • 55.Li M., Vitányi P. An Introduction to Kolmogorov Complexity and Its Applications. Volume 3 Springer; Berlin/Heidelberg, Germany: 2008. [Google Scholar]
  • 56.Shmaya E. Comparison of information structures and completely positive maps. J. Phys. A Math. Gen. 2005;38:9717. doi: 10.1088/0305-4470/38/44/008. [DOI] [Google Scholar]
  • 57.Chefles A. The quantum Blackwell theorem and minimum error state discrimination. arXiv. 20090907.0866 [Google Scholar]
  • 58.Buscemi F. Comparison of quantum statistical models: Equivalent conditions for sufficiency. Commun. Math. Phys. 2012;310:625–647. doi: 10.1007/s00220-012-1421-3. [DOI] [Google Scholar]
  • 59.Ohya M., Watanabe N. Quantum entropy and its applications to quantum communication and statistical physics. Entropy. 2010;12:1194–1245. doi: 10.3390/e12051194. [DOI] [Google Scholar]
  • 60.Rauh J., Banerjee P.K., Olbrich E., Jost J., Bertschinger N., Wolpert D. Coarse-Graining and the Blackwell Order. Entropy. 2017;19:527. doi: 10.3390/e19100527. [DOI] [Google Scholar]
  • 61.Cover T.M., Thomas J.A. Elements of Information Theory. John Wiley & Sons; Hoboken, NJ, USA: 2006. [Google Scholar]
  • 62.Makur A., Polyanskiy Y. Comparison of channels: Criteria for domination by a symmetric channel. IEEE Trans. Inf. Theory. 2018;64:5704–5725. doi: 10.1109/TIT.2018.2839743. [DOI] [Google Scholar]
  • 63.Benson H.P. Handbook of Global Optimization. Springer; Berlin/Heidelberg, Germany: 1995. Concave minimization: Theory, applications and algorithms; pp. 43–148. [Google Scholar]
  • 64.Kolchinsky A. Code for Computing I∩≺. 2022. [(accessed on 3 January 2022)]. Available online: https://github.com/artemyk/redundancy.
  • 65.Banerjee P.K., Rauh J., Montúfar G. Computing the unique information; Proceedings of the 2018 IEEE International Symposium on Information Theory (ISIT); Vail, CO, USA. 17–22 June 2018; Piscataway, NJ, USA: IEEE; 2018. pp. 141–145. [Google Scholar]
  • 66.Banerjee P.K., Olbrich E., Jost J., Rauh J. Unique informations and deficiencies; Proceedings of the 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton); Monticello, IL, USA. 2–5 October 2018; Piscataway, NJ, USA: IEEE; 2018. pp. 32–38. [Google Scholar]
  • 67.Wolf S., Wultschleger J. Zero-error information and applications in cryptography; Proceedings of the Information Theory Workshop; San Antonio, TX, USA. 24–29 October 2004; Piscataway, NJ, USA: IEEE; 2004. pp. 1–6. [Google Scholar]
  • 68.Bertschinger N., Rauh J., Olbrich E., Jost J. Proceedings of the European Conference on Complex Systems 2012. Springer; Berlin/Heidelberg, Germany: 2013. Shared information - new insights and problems in decomposing information in complex systems; pp. 251–269. [Google Scholar]
  • 69.James R.G., Ellison C.J., Crutchfield J.P. dit: A Python package for discrete information theory. J. Open Source Softw. 2018;3:738. doi: 10.21105/joss.00738. [DOI] [Google Scholar]
  • 70.Kovačević M., Stanojević I., Šenk V. On the entropy of couplings. Inf. Comput. 2015;242:369–382. doi: 10.1016/j.ic.2015.04.003. [DOI] [Google Scholar]
  • 71.Horst R. On the global minimization of concave functions. Oper.-Res.-Spektrum. 1984;6:195–205. doi: 10.1007/BF01720068. [DOI] [Google Scholar]
  • 72.Pardalos P.M., Rosen J.B. Methods for global concave minimization: A bibliographic survey. Siam Rev. 1986;28:367–379. doi: 10.1137/1028106. [DOI] [Google Scholar]
  • 73.Williams P.L., Beer R.D. Generalized measures of information transfer. arXiv. 20111102.1507 [Google Scholar]
  • 74.Dubins L.E. On extreme points of convex sets. J. Math. Anal. Appl. 1962;5:237–244. doi: 10.1016/S0022-247X(62)80007-9. [DOI] [Google Scholar]
  • 75.Yeung R.W. A First Course in Information Theory. Springer Science & Business Media; Berlin/Heidelberg, Germany: 2012. [Google Scholar]
  • 76.Lewis A.D. Semicontinuity of Rank and Nullity and Some Consequences. 2009. [(accessed on 3 January 2022)]. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.709.7290&rep=rep1&type=pdf.
  • 77.Hoffman A.J. On Approximate Solutions of Systems of Linear Inequalities. J. Res. Natl. Bur. Stand. 1952;49:174–176. doi: 10.6028/jres.049.027. [DOI] [Google Scholar]
  • 78.Daniel J.W. On Perturbations in Systems of Linear Inequalities. SIAM J. Numer. Anal. 1973;10:299–307. doi: 10.1137/0710029. [DOI] [Google Scholar]
