Entropy. 2018 Sep 6;20(9):679. doi: 10.3390/e20090679

Logical Entropy: Introduction to Classical and Quantum Logical Information Theory

David Ellerman 1
PMCID: PMC7513204  PMID: 33265768

Abstract

Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences and distinguishability and is formalized using the distinctions (“dits”) of a partition (a pair of points distinguished by the partition). All the definitions of simple, joint, conditional and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates (cohered together in the pure superposition state being measured) that are distinguished by the measurement (decohered in the post-measurement mixed state). Both the classical and quantum versions of logical entropy have simple interpretations as “two-draw” probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.

Keywords: logical entropy, partition logic, qudits of an observable

1. Introduction

The formula for “classical” logical entropy goes back to the early Twentieth Century [1]. It is the derivation of the formula from basic logic that is new and accounts for the name. The ordinary Boolean logic of subsets has a dual logic of partitions [2] since partitions (=equivalence relations = quotient sets) are category-theoretically dual to subsets. Just as the quantitative version of subset logic is the notion of logical finite probability, so the quantitative version of partition logic is logical information theory using the notion of logical entropy [3]. This paper generalizes that “classical” (i.e., non-quantum) logical information theory to the quantum version. The classical logical information theory is briefly developed before turning to the quantum version. Applications of logical entropy have already been developed in several special mathematical settings; see [4] and the references cited therein.

2. Duality of Subsets and Partitions

The foundations for classical and quantum logical information theory are built on the logic of partitions ([2,5]), which is dual (in the category-theoretic sense) to the usual Boolean logic of subsets. This duality can be most simply illustrated using a set function $f:X\rightarrow Y$. The image $f(X)$ is a subset of the codomain $Y$, and the inverse-image or coimage $\{f^{-1}(y)\}_{y\in f(X)}$ is a partition on the domain $X$, where a partition $\pi=\{B_1,\dots,B_I\}$ on a set $U$ is a set of subsets or blocks $B_i$ that are mutually disjoint and jointly exhaustive ($\cup_i B_i=U$). In category theory, the duality between subobject-type constructions (e.g., limits) and quotient-object-type constructions (e.g., colimits) is often indicated by adding the prefix "co-" to the latter. Hence, the usual Boolean logic of "images" has the dual logic of "coimages". However, the duality runs deeper than between subsets and partitions. The dual to the notion of an "element" (an "it") of a subset is the notion of a "distinction" (a "dit") of a partition, where $(u,u')\in U\times U$ is a distinction or dit of $\pi$ if the two elements are in different blocks. Let $\operatorname{dit}(\pi)\subseteq U\times U$ be the set of distinctions or ditset of $\pi$. Similarly, an indistinction or indit of $\pi$ is a pair $(u,u')\in U\times U$ in the same block of $\pi$. Let $\operatorname{indit}(\pi)\subseteq U\times U$ be the set of indistinctions or inditset of $\pi$. Then, $\operatorname{indit}(\pi)$ is the equivalence relation associated with $\pi$, and $\operatorname{dit}(\pi)=U\times U-\operatorname{indit}(\pi)$ is the complementary binary relation that might be called a partition relation or an apartness relation. The notions of a distinction and indistinction of a partition are illustrated in Figure 1.

Figure 1. Distinctions and indistinctions of a partition.

The relationships between Boolean subset logic and partition logic are summarized in Figure 2, which illustrates the dual relationship between the elements (“its”) of a subset and the distinctions (“dits”) of a partition.

Figure 2. Dual logics: the Boolean logic of subsets and partition logic.

3. From the Logic of Partitions to Logical Information Theory

In Gian-Carlo Rota's Fubini Lectures [6] (and in his lectures at MIT), he remarked, in view of the duality between partitions and subsets, that, quantitatively, the "lattice of partitions plays for information the role that the Boolean algebra of subsets plays for size or probability" ([7] p. 30) or symbolically:

$$\frac{\text{Logical Probability Theory}}{\text{Boolean Logic of Subsets}}=\frac{\text{Logical Information Theory}}{\text{Logic of Partitions}}.$$

Andrei Kolmogorov has suggested that information theory should start with sets, not probabilities.

Information theory must precede probability theory, and not be based on it. By the very essence of this discipline, the foundations of information theory have a finite combinatorial character.

([8] p. 39)

The notion of information-as-distinctions does start with the set of distinctions, the information set, of a partition $\pi=\{B_1,\dots,B_I\}$ on a finite set $U$, where that set of distinctions (dits) is:

$$\operatorname{dit}(\pi)=\{(u,u'):\exists B_i,B_{i'}\in\pi,\ B_i\neq B_{i'},\ u\in B_i,\ u'\in B_{i'}\}.$$

The normalized size of a subset is the logical probability of the event, and the normalized size of the ditset of a partition is, in the sense of measure theory, the measure of the amount of information in a partition. Thus, we define the logical entropy of a partition $\pi=\{B_1,\dots,B_I\}$, denoted $h(\pi)$, as the size of the ditset $\operatorname{dit}(\pi)\subseteq U\times U$ normalized by the size of $U\times U$:

$$h(\pi)=\frac{|\operatorname{dit}(\pi)|}{|U\times U|}=\frac{|U\times U|-\sum_{i=1}^{I}|B_i\times B_i|}{|U\times U|}=1-\sum_{i=1}^{I}\left(\frac{|B_i|}{|U|}\right)^2=1-\sum_{i=1}^{I}\Pr(B_i)^2.$$

In two independent draws from U, the probability of getting a distinction of π is the probability of not getting an indistinction.

Given any probability measure $p:U\rightarrow[0,1]$ on $U=\{u_1,\dots,u_n\}$, which defines $p_i=p(u_i)$ for $i=1,\dots,n$, the product measure $p\times p:U\times U\rightarrow[0,1]$ has, for any binary relation $R\subseteq U\times U$, the value:

$$p\times p(R)=\sum_{(u_i,u_j)\in R}p(u_i)p(u_j)=\sum_{(u_i,u_j)\in R}p_ip_j.$$

The logical entropy of $\pi$ in general is the product-probability measure of its ditset $\operatorname{dit}(\pi)\subseteq U\times U$, where $\Pr(B)=\sum_{u\in B}p(u)$:

$$h(\pi)=p\times p(\operatorname{dit}(\pi))=\sum_{(u_i,u_j)\in\operatorname{dit}(\pi)}p_ip_j=1-\sum_{B\in\pi}\Pr(B)^2.$$

The standard interpretation of $h(\pi)$ is the two-draw probability of getting a distinction of the partition $\pi$, just as $\Pr(S)$ is the one-draw probability of getting an element of the subset-event $S$.
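As a concrete check on these definitions, here is a minimal sketch in Python (not from the paper; the set $U$, the partition and the point probabilities are illustrative choices) that computes $h(\pi)$ both as the product measure of the ditset and from the block probabilities:

```python
# Minimal sketch: logical entropy of a partition computed two ways.
from itertools import product

U = ['a', 'b', 'c', 'd']
p = {'a': 0.1, 'b': 0.2, 'c': 0.3, 'd': 0.4}   # illustrative point probabilities
pi = [{'a', 'b'}, {'c'}, {'d'}]                # partition pi = {B_1, B_2, B_3}

def block_of(u, partition):
    # the unique block containing u
    return next(B for B in partition if u in B)

# dit(pi): ordered pairs of elements lying in different blocks
ditset = {(u, v) for u, v in product(U, U)
          if block_of(u, pi) != block_of(v, pi)}

h_ditset = sum(p[u] * p[v] for u, v in ditset)              # p x p(dit(pi))
h_blocks = 1 - sum(sum(p[u] for u in B)**2 for B in pi)     # 1 - sum Pr(B)^2
print(h_ditset, h_blocks)   # both ~0.66: the two-draw distinction probability
```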

4. Compound Logical Entropies

The compound notions of logical entropy are also developed in two stages, first as sets and then, given a probability distribution, as two-draw probabilities. After observing the similarity between the formulas holding for the compound Shannon entropies and the Venn diagram formulas that hold for any measure (in the sense of measure theory), the information theorist, Lorne L. Campbell, remarked in 1965 that the similarity:

suggests the possibility that $H(\alpha)$ and $H(\beta)$ are measures of sets, that $H(\alpha,\beta)$ is the measure of their union, that $I(\alpha,\beta)$ is the measure of their intersection, and that $H(\alpha|\beta)$ is the measure of their difference. The possibility that $I(\alpha,\beta)$ is the entropy of the "intersection" of two partitions is particularly interesting. This "intersection," if it existed, would presumably contain the information common to the partitions $\alpha$ and $\beta$.

([9] p. 113)

Yet, there is no such interpretation of the Shannon entropies as measures of sets, but the logical entropies precisely fulfill Campbell’s suggestion (with the “intersection” of two partitions being the intersection of their ditsets). Moreover, there is a uniform requantifying transformation (see the next section) that obtains all the Shannon definitions from the logical definitions and explains how the Shannon entropies can satisfy the Venn diagram formulas (e.g., as a mnemonic) while not being defined by a measure on sets.

Given partitions $\pi=\{B_1,\dots,B_I\}$ and $\sigma=\{C_1,\dots,C_J\}$ on $U$, the joint information set is the union of the ditsets, which is also the ditset for their join: $\operatorname{dit}(\pi)\cup\operatorname{dit}(\sigma)=\operatorname{dit}(\pi\vee\sigma)\subseteq U\times U$. Given probabilities $p=(p_1,\dots,p_n)$ on $U$, the joint logical entropy is the product probability measure on the union of ditsets:

$$h(\pi,\sigma)=h(\pi\vee\sigma)=p\times p(\operatorname{dit}(\pi)\cup\operatorname{dit}(\sigma))=1-\sum_{i,j}\Pr(B_i\cap C_j)^2.$$

The information set for the conditional logical entropy $h(\pi|\sigma)$ is the difference of ditsets, and thus, that logical entropy is:

$$h(\pi|\sigma)=p\times p(\operatorname{dit}(\pi)-\operatorname{dit}(\sigma))=h(\pi,\sigma)-h(\sigma).$$

The information set for the logical mutual information $m(\pi,\sigma)$ is the intersection of ditsets, so that logical entropy is:

$$m(\pi,\sigma)=p\times p(\operatorname{dit}(\pi)\cap\operatorname{dit}(\sigma))=h(\pi,\sigma)-h(\pi|\sigma)-h(\sigma|\pi)=h(\pi)+h(\sigma)-h(\pi,\sigma).$$

Since all the logical entropies are the values of the measure $p\times p$ on subsets of $U\times U$, they automatically satisfy the usual Venn diagram relationships, as in Figure 3.

Figure 3. Venn diagram for logical entropies as values of the probability measure $p\times p$ on $U\times U$.
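Continuing the sketch above (again with illustrative data), the compound logical entropies are literally the product measure applied to unions, differences and intersections of ditsets, so the Venn diagram identities hold automatically:

```python
# Sketch: compound logical entropies as measures on ditsets.
sigma = [{'a', 'c'}, {'b', 'd'}]   # a second illustrative partition on U

def dit(partition):
    return {(u, v) for u, v in product(U, U)
            if block_of(u, partition) != block_of(v, partition)}

def measure(R):                    # product measure p x p of a binary relation
    return sum(p[u] * p[v] for u, v in R)

h_pi, h_sigma = measure(dit(pi)), measure(dit(sigma))
h_joint = measure(dit(pi) | dit(sigma))       # h(pi, sigma)
h_cond = measure(dit(pi) - dit(sigma))        # h(pi | sigma)
m_mut = measure(dit(pi) & dit(sigma))         # m(pi, sigma)

assert abs(h_cond - (h_joint - h_sigma)) < 1e-12
assert abs(m_mut - (h_pi + h_sigma - h_joint)) < 1e-12
```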

At the level of information sets (without probabilities), we have the information algebra $\mathcal{I}(\pi,\sigma)$, which is the Boolean subalgebra of $\wp(U\times U)$ generated by the ditsets and their complements.

5. Deriving the Shannon Entropies from the Logical Entropies

Instead of being defined as the values of a measure, the usual notions of simple and compound entropy ‘burst forth fully formed from the forehead’ of Claude Shannon [10] already satisfying the standard Venn diagram relationships (one author surmised that “Shannon carefully contrived for this ‘accident’ to occur” ([11] p. 153)). Since the Shannon entropies are not the values of a measure, many authors have pointed out that these Venn diagram relations for the Shannon entropies can only be taken as “analogies” or “mnemonics” ([9,12]). Logical information theory explains this situation since all the Shannon definitions of simple, joint, conditional and mutual information can be obtained by a uniform requantifying transformation from the corresponding logical definitions, and the transformation preserves the Venn diagram relationships.

This transformation is possible since the logical and Shannon notions of entropy can be seen as two different ways to quantify distinctions; and thus, both theories are based on the foundational idea of information-as-distinctions.

Consider the canonical case of $n$ equiprobable elements, $p_i=\frac{1}{n}$. The logical entropy of the discrete partition $\mathbf{1}=\{B_1,\dots,B_n\}$, where $B_i=\{u_i\}$, with $p=(\frac{1}{n},\dots,\frac{1}{n})$, is:

$$\frac{|U\times U|-|\Delta|}{|U\times U|}=\frac{n^2-n}{n^2}=1-\frac{1}{n}=1-\Pr(B_i),$$

where $\Delta$ is the diagonal of $U\times U$.

The normalized number of distinctions or 'dit-count' of the discrete partition $\mathbf{1}$ is $1-\frac{1}{n}=1-\Pr(B_i)$. The general case of logical entropy for any $\pi=\{B_1,\dots,B_I\}$ is the average of the dit-counts $1-\Pr(B_i)$ for the canonical cases:

$$h(\pi)=\sum_i\Pr(B_i)\left(1-\Pr(B_i)\right).$$

In the canonical case of $2^n$ equiprobable elements, the minimum number of binary partitions ("yes-or-no questions" or "bits") whose join is the discrete partition $\mathbf{1}=\{B_1,\dots,B_{2^n}\}$ with $\Pr(B_i)=\frac{1}{2^n}$, i.e., the number it takes to uniquely encode each distinct element, is $n$, so the Shannon–Hartley entropy [13] is the canonical bit-count:

$$n=\log_2(2^n)=\log_2\left(\frac{1}{1/2^n}\right)=\log_2\left(\frac{1}{\Pr(B_i)}\right).$$

The general case of Shannon entropy is the average of these canonical bit-counts $\log_2(1/\Pr(B_i))$:

$$H(\pi)=\sum_i\Pr(B_i)\log_2\left(\frac{1}{\Pr(B_i)}\right).$$

The dit-bit transform essentially replaces the canonical dit-counts by the canonical bit-counts. First, express any logical entropy concept (simple, joint, conditional or mutual) as an average of canonical dit-counts $1-\Pr(B_i)$, and then substitute the canonical bit-count $\log_2(1/\Pr(B_i))$ to obtain the corresponding formula as defined by Shannon. Figure 4 gives examples of the dit-bit transform.

Figure 4. Summary of the dit-bit transform.

For instance,

$$h(\pi|\sigma)=h(\pi,\sigma)-h(\sigma)=\sum_{i,j}\Pr(B_i\cap C_j)\left(1-\Pr(B_i\cap C_j)\right)-\sum_j\Pr(C_j)\left(1-\Pr(C_j)\right)$$

is the expression for $h(\pi|\sigma)$ as an average over $1-\Pr(B_i\cap C_j)$ and $1-\Pr(C_j)$, so applying the dit-bit transform gives:

$$\sum_{i,j}\Pr(B_i\cap C_j)\log_2\left(\frac{1}{\Pr(B_i\cap C_j)}\right)-\sum_j\Pr(C_j)\log_2\left(\frac{1}{\Pr(C_j)}\right)=H(\pi,\sigma)-H(\sigma)=H(\pi|\sigma).$$

The dit-bit transform is linear in the sense of preserving plus and minus, so the Venn diagram formulas, e.g., $h(\pi,\sigma)=h(\sigma)+h(\pi|\sigma)$, which are automatically satisfied by logical entropy since it is a measure, carry over to Shannon entropy, e.g., $H(\pi,\sigma)=H(\sigma)+H(\pi|\sigma)$, as in Figure 5, in spite of it not being a measure (in the sense of measure theory).

Figure 5. Venn diagram mnemonic for Shannon entropies.
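A small sketch of the dit-bit transform (the joint block probabilities are illustrative): the same averaging template yields $h(\pi|\sigma)$ with the dit-count $1-\Pr(\cdot)$ and $H(\pi|\sigma)$ with the bit-count $\log_2(1/\Pr(\cdot))$:

```python
# Sketch: the dit-bit transform applied to the conditional entropy.
import math

# illustrative joint block probabilities Pr(B_i & C_j)
PrBC = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
PrC = {j: sum(v for (_, j2), v in PrBC.items() if j2 == j) for j in (0, 1)}

dit_count = lambda q: 1 - q               # canonical dit-count
bit_count = lambda q: math.log2(1 / q)    # canonical bit-count

def conditional(count):
    # h(pi|sigma) resp. H(pi|sigma) as the same average over counts
    return (sum(v * count(v) for v in PrBC.values())
            - sum(v * count(v) for v in PrC.values()))

print(conditional(dit_count))   # logical h(pi|sigma)
print(conditional(bit_count))   # Shannon H(pi|sigma) via the dit-bit transform
```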

6. Logical Entropy via Density Matrices

The transition to quantum logical entropy is facilitated by reformulating the classical logical theory in terms of density matrices. Let $U=\{u_1,\dots,u_n\}$ be the sample space with the point probabilities $p=(p_1,\dots,p_n)$. An event $S\subseteq U$ has the probability $\Pr(S)=\sum_{u_j\in S}p_j$.

For any event $S$ with $\Pr(S)>0$, let:

$$|S\rangle=\frac{1}{\sqrt{\Pr(S)}}\left(\chi_S(u_1)\sqrt{p_1},\dots,\chi_S(u_n)\sqrt{p_n}\right)^t$$

(the superscript $t$ indicates transpose), which is a normalized column vector in $\mathbb{R}^n$, where $\chi_S:U\rightarrow\{0,1\}$ is the characteristic function for $S$, and let $\langle S|$ be the corresponding row vector. Since $|S\rangle$ is normalized, $\langle S|S\rangle=1$. Then, the density matrix representing the event $S$ is the $n\times n$ symmetric real matrix:

$$\rho(S)=|S\rangle\langle S|\quad\text{so that}\quad\rho(S)_{jk}=\begin{cases}\frac{\sqrt{p_jp_k}}{\Pr(S)}&\text{for }u_j,u_k\in S\\0&\text{otherwise.}\end{cases}$$

Then, $\rho(S)^2=|S\rangle\langle S|S\rangle\langle S|=\rho(S)$, so borrowing language from quantum mechanics, $\rho(S)$ is said to be a pure state density matrix.

Given any partition $\pi=\{B_1,\dots,B_I\}$ on $U$, its density matrix is the average of the block density matrices:

$$\rho(\pi)=\sum_i\Pr(B_i)\,\rho(B_i).$$

Then, $\rho(\pi)$ represents the mixed state, experiment or lottery where the event $B_i$ occurs with probability $\Pr(B_i)$. A little calculation connects the logical entropy $h(\pi)$ of a partition with the density matrix treatment:

$$h(\pi)=1-\sum_{i=1}^{I}\Pr(B_i)^2=1-\operatorname{tr}[\rho(\pi)^2]=h(\rho(\pi)),$$

where $\operatorname{tr}[\rho(\pi)^2]$ is substituted for $\sum_i\Pr(B_i)^2$, the trace replacing the summation.
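The density matrix formulation is easy to check numerically; here is a sketch (assuming numpy, with the same illustrative probabilities as before, restated as index blocks):

```python
# Sketch: rho(pi) as a mix of block density matrices, and h(pi) = 1 - tr[rho^2].
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])      # illustrative point probabilities
blocks = [np.array([0, 1]), np.array([2]), np.array([3])]

def rho_event(S):
    v = np.zeros(len(p))
    v[S] = np.sqrt(p[S])                # chi_S(u_j) * sqrt(p_j)
    v /= np.sqrt(p[S].sum())            # divide by sqrt(Pr(S))
    return np.outer(v, v)               # |S><S|

rho_pi = sum(p[S].sum() * rho_event(S) for S in blocks)
print(1 - np.trace(rho_pi @ rho_pi))            # h(rho(pi))
print(1 - sum(p[S].sum()**2 for S in blocks))   # h(pi); both ~0.66
```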

For the throw of a fair die, $U=\{u_1,u_3,u_5,u_2,u_4,u_6\}$ (note the odd faces ordered before the even faces in the matrix rows and columns), where $u_j$ represents the number $j$ coming up, the density matrix $\rho(\mathbf{0})$ is the "pure state" $6\times 6$ matrix with each entry being $\frac{1}{6}$:

$$\rho(\mathbf{0})=\frac{1}{6}\begin{pmatrix}1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\\1&1&1&1&1&1\end{pmatrix}\begin{matrix}u_1\\u_3\\u_5\\u_2\\u_4\\u_6\end{matrix}.$$

The nonzero off-diagonal entries represent indistinctions or indits of the partition $\mathbf{0}$ or, in quantum terms, "coherences", where all six "eigenstates" cohere together in a pure "superposition" state. All pure states have a logical entropy of zero, i.e., $h(\mathbf{0})=0$ (i.e., no dits), since $\operatorname{tr}[\rho]=1$ for any density matrix, so if $\rho(\mathbf{0})^2=\rho(\mathbf{0})$, then $\operatorname{tr}[\rho(\mathbf{0})^2]=\operatorname{tr}[\rho(\mathbf{0})]=1$ and $h(\mathbf{0})=1-\operatorname{tr}[\rho(\mathbf{0})^2]=0$.

The logical operation of classifying undistinguished entities (like the six faces of the die before a throw to determine a face up) by a numerical attribute makes distinctions between the entities with different numerical values of the attribute. Classification (also called sorting, fibering or partitioning ([14] Section 6.1)) is the classical operation corresponding to the quantum operation of “measurement” of a superposition state by an observable to obtain a mixed state.

Now classify or "measure" the die-faces by the parity-of-the-face-up (odd or even) partition (observable) $\pi=\{B_{odd},B_{even}\}=\{\{u_1,u_3,u_5\},\{u_2,u_4,u_6\}\}$. Mathematically, this is done by the Lüders mixture operation ([15] p. 279), i.e., pre- and post-multiplying the density matrix $\rho(\mathbf{0})$ by $P_{odd}$ and by $P_{even}$, the projection matrices to the odd or even components, and summing the results:

$$P_{odd}\,\rho(\mathbf{0})\,P_{odd}+P_{even}\,\rho(\mathbf{0})\,P_{even}=\frac{1}{6}\begin{pmatrix}1&1&1&0&0&0\\1&1&1&0&0&0\\1&1&1&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\end{pmatrix}+\frac{1}{6}\begin{pmatrix}0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&1&1&1\\0&0&0&1&1&1\\0&0&0&1&1&1\end{pmatrix}=\frac{1}{2}\rho(B_{odd})+\frac{1}{2}\rho(B_{even})=\rho(\pi).$$

Theorem 1 (Fundamental(classical)).

The increase in logical entropy, $h(\rho(\pi))-h(\rho(\mathbf{0}))$, due to a Lüders mixture operation is the sum of the absolute squares of the nonzero off-diagonal entries of the beginning density matrix that are zeroed in the final density matrix.

Proof. 

Since, for any density matrix $\rho$, $\operatorname{tr}[\rho^2]=\sum_{i,j}|\rho_{ij}|^2$ ([16] p. 77), we have: $h(\rho(\pi))-h(\rho(\mathbf{0}))=\left(1-\operatorname{tr}[\rho(\pi)^2]\right)-\left(1-\operatorname{tr}[\rho(\mathbf{0})^2]\right)=\operatorname{tr}[\rho(\mathbf{0})^2]-\operatorname{tr}[\rho(\pi)^2]=\sum_{i,j}\left(|\rho_{ij}(\mathbf{0})|^2-|\rho_{ij}(\pi)|^2\right)$. If $(u_i,u_{i'})\in\operatorname{dit}(\pi)$, then and only then are the off-diagonal terms corresponding to $u_i$ and $u_{i'}$ zeroed by the Lüders operation. ☐

The classical fundamental theorem connects the concept of information-as-distinctions to the process of “measurement” or classification, which uses some attribute (like parity in the example) or “observable” to make distinctions.

In the comparison with the matrix $\rho(\mathbf{0})$ of all entries $\frac{1}{6}$, the entries that got zeroed in the Lüders operation $\rho(\mathbf{0})\rightsquigarrow\rho(\pi)$ correspond to the distinctions created in the transition $\mathbf{0}=\{\{u_1,\dots,u_6\}\}\rightsquigarrow\pi=\{\{u_1,u_3,u_5\},\{u_2,u_4,u_6\}\}$, i.e., the odd-numbered faces were distinguished from the even-numbered faces by the parity attribute. The increase in logical entropy = sum of the squares of the off-diagonal elements that were zeroed = $h(\pi)-h(\mathbf{0})=2\times 9\times\left(\frac{1}{6}\right)^2=\frac{18}{36}=\frac{1}{2}$. The usual calculations of the two logical entropies are: $h(\pi)=1-2\times\left(\frac{1}{2}\right)^2=\frac{1}{2}$ and $h(\mathbf{0})=1-1=0$.

Since, in quantum mechanics, a projective measurement’s effect on a density matrix is the Lüders mixture operation, that means that the effects of the measurement are the above-described “making distinctions” by decohering or zeroing certain coherence terms in the density matrix, and the sum of the absolute squares of the coherences that were decohered is the increase in the logical entropy.
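Before moving to the quantum case, here is a sketch of the die example: the Lüders mixture operation and the Theorem 1 check that the entropy increase equals the sum of the absolute squares of the decohered entries:

```python
# Sketch: parity "measurement" of the fair-die pure state via the Lüders mixture.
import numpy as np

rho0 = np.full((6, 6), 1/6)                  # basis order u1,u3,u5,u2,u4,u6
P_odd = np.diag([1., 1., 1., 0., 0., 0.])    # projection to odd faces
P_even = np.eye(6) - P_odd

rho_pi = P_odd @ rho0 @ P_odd + P_even @ rho0 @ P_even

h = lambda rho: 1 - np.trace(rho @ rho)
zeroed = np.abs(rho0[rho_pi == 0])**2        # coherences that were decohered
print(h(rho_pi) - h(rho0), zeroed.sum())     # both 1/2, as computed in the text
```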

7. Quantum Logical Information Theory: Commuting Observables

The idea of information-as-distinctions carries over to quantum mechanics.

[Information] is the notion of distinguishability abstracted away from what we are distinguishing, or from the carrier of information. …And we ought to develop a theory of information which generalizes the theory of distinguishability to include these quantum properties…

([17] p. 155)

Let $F:V\rightarrow V$ be a self-adjoint operator (observable) on an $n$-dimensional Hilbert space $V$ with the real eigenvalues $\phi_1,\dots,\phi_I$, and let $U=\{u_1,\dots,u_n\}$ be an orthonormal (ON) basis of eigenvectors of $F$. The quantum version of a dit, a qudit, is a pair of states definitely distinguishable by some observable. Any nondegenerate self-adjoint operator, such as $\sum_{k=1}^{n}kP_{u_k}$, where $P_{u_k}$ is the projection to the one-dimensional subspace generated by $u_k$, will distinguish all the vectors in the orthonormal basis $U$, which is analogous classically to a pair $(u,u')$ of distinct elements of $U$ that are distinguishable by some partition (i.e., by the discrete partition $\mathbf{1}$). In general, a qudit is relativized to an observable, just as classically a distinction is a distinction of a partition. Then, there is a set partition $\pi=\{B_i\}_{i=1,\dots,I}$ on the ON basis $U$ so that $B_i$ is a basis for the eigenspace of the eigenvalue $\phi_i$, and $|B_i|$ is the "multiplicity" (dimension of the eigenspace) of the eigenvalue $\phi_i$ for $i=1,\dots,I$. Note that the real-valued function $f:U\rightarrow\mathbb{R}$ that takes each eigenvector $u_j\in B_i\subseteq U$ to its eigenvalue $\phi_i$, so that $f^{-1}(\phi_i)=B_i$, contains all the information in the self-adjoint operator $F:V\rightarrow V$, since $F$ can be reconstructed by defining it on the basis $U$ as $Fu_j=f(u_j)u_j$.

The generalization of "classical" logical entropy to quantum logical entropy is straightforward using the usual ways that set-concepts generalize to vector-space concepts: subsets → subspaces; set partitions → direct-sum decompositions of subspaces (hence, the "classical" logic of partitions on a set will generalize to the quantum logic of direct-sum decompositions [18], which is the dual to the usual quantum logic of subspaces); Cartesian products of sets → tensor products of vector spaces; and ordered pairs $(u_k,u_{k'})\in U\times U$ → basis elements $u_k\otimes u_{k'}\in V\otimes V$. The eigenvalue function $f:U\rightarrow\mathbb{R}$ determines a partition $\{f^{-1}(\phi_i)\}_{i\in I}$ on $U$, and the blocks in that partition generate the eigenspaces of $F$, which form a direct-sum decomposition of $V$.

Classically, a dit of the partition $\{f^{-1}(\phi_i)\}_{i\in I}$ on $U$ is a pair $(u_k,u_{k'})$ of points in distinct blocks of the partition, i.e., $f(u_k)\neq f(u_{k'})$. Hence, a qudit of $F$ is a pair $(u_k,u_{k'})$ (interpreted as $u_k\otimes u_{k'}$ in the context of $V\otimes V$) of vectors in the eigenbasis definitely distinguishable by $F$, i.e., $f(u_k)\neq f(u_{k'})$: distinct $F$-eigenvalues. Let $G:V\rightarrow V$ be another self-adjoint operator on $V$ that commutes with $F$, so that we may then assume that $U$ is an orthonormal basis of simultaneous eigenvectors of $F$ and $G$. Let $\{\gamma_j\}_{j\in J}$ be the set of eigenvalues of $G$, and let $g:U\rightarrow\mathbb{R}$ be the eigenvalue function, so a pair $(u_k,u_{k'})$ is a qudit of $G$ if $g(u_k)\neq g(u_{k'})$, i.e., if the two eigenvectors have distinct eigenvalues of $G$.

As in classical logical information theory, information is represented by certain subsets (or, in the quantum case, subspaces) prior to the introduction of any probabilities. Since the transition from classical to quantum logical information theory is straightforward, it will be presented in table form in Figure 6 (which does not involve any probabilities), where the qudits $(u_k,u_{k'})$ are interpreted as $u_k\otimes u_{k'}$.

Figure 6. The parallel development of classical and quantum logical information prior to probabilities.

The information subspace associated with $F$ is the subspace $[\operatorname{qudit}(F)]\subseteq V\otimes V$ generated by the qudits $u_k\otimes u_{k'}$ of $F$. If $F=\lambda I$ is a scalar multiple of the identity $I$, then it has no qudits, so its information space $[\operatorname{qudit}(\lambda I)]$ is the zero subspace. It is an easy implication of the common-dits theorem of classical logical information theory (([19] Proposition 1) or ([5] Theorem 1.4)) that any two nonzero information spaces $[\operatorname{qudit}(F)]$ and $[\operatorname{qudit}(G)]$ have a nonzero intersection, i.e., have a nonzero mutual information space. That is, there are always two eigenvectors $u_k$ and $u_{k'}$ that have different eigenvalues both by $F$ and by $G$.

In a measurement, the observables do not provide the point probabilities; they come from the pure (normalized) state $\psi$ being measured. Let $|\psi\rangle=\sum_{j=1}^{n}\langle u_j|\psi\rangle|u_j\rangle=\sum_{j=1}^{n}\alpha_j|u_j\rangle$ be the resolution of $|\psi\rangle$ in terms of the orthonormal basis $U=\{u_1,\dots,u_n\}$ of simultaneous eigenvectors for $F$ and $G$. Then, $p_j=\alpha_j\alpha_j^*$ (where $\alpha_j^*$ is the complex conjugate of $\alpha_j$) for $j=1,\dots,n$ are the point probabilities on $U$, and the pure state density matrix $\rho(\psi)=|\psi\rangle\langle\psi|$ (where $\langle\psi|=|\psi\rangle^{\dagger}$ is the conjugate-transpose) has the entries $\rho_{jk}(\psi)=\alpha_j\alpha_k^*$, so the diagonal entries $\rho_{jj}(\psi)=\alpha_j\alpha_j^*=p_j$ are the point probabilities. Figure 7 gives the remaining parallel development with the probabilities provided by the pure state $\psi$, where we write $\rho(\psi)\otimes\rho(\psi)$ as $\rho(\psi)^{\otimes 2}$.

Figure 7. The development of classical and quantum logical entropies for commuting $F$ and $G$.
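A sketch of how the measured state supplies the probabilities (the amplitudes and eigenvalues below are illustrative choices): the quantum logical entropy $h(F:\psi)$ is the two-measurement probability of getting a qudit of $F$:

```python
# Sketch: h(F:psi) from state amplitudes and an eigenvalue function.
import numpy as np

alpha = np.array([0.5, 0.5j, -0.5, 0.5])   # illustrative amplitudes in the eigenbasis U
phi = [1, 1, 2, 3]                         # illustrative eigenvalue phi(u_j) of each u_j

p = (alpha * alpha.conj()).real            # point probabilities p_j = alpha_j alpha_j*
h_F_psi = sum(p[j] * p[k] for j in range(4) for k in range(4)
              if phi[j] != phi[k])
print(h_F_psi)   # 0.625: probability two F-measurements give distinct eigenvalues
```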

The formula $h(\rho)=1-\operatorname{tr}[\rho^2]$ is hardly new. Indeed, $\operatorname{tr}[\rho^2]$ is usually called the purity of the density matrix, since a state $\rho$ is pure if and only if $\operatorname{tr}[\rho^2]=1$, so $h(\rho)=0$; otherwise, $\operatorname{tr}[\rho^2]<1$, so $h(\rho)>0$, and the state is said to be mixed. Hence, the complement $1-\operatorname{tr}[\rho^2]$ has been called the "mixedness" ([20] p. 5) or "impurity" of the state $\rho$. The seminal paper of Manfredi and Feix [21] approaches the same formula $1-\operatorname{tr}[\rho^2]$ (which they denote as $S_2$) from the viewpoint of Wigner functions, and they present strong arguments for this notion of quantum entropy (thanks to a referee for this important reference to the Manfredi and Feix paper). This notion of quantum entropy is also called by the misnomer "linear entropy", even though it is quadratic in $\rho$, so we will not continue that usage. See [22] or [23] for references to that literature. The logical entropy is also the quadratic special case of the Tsallis–Havrda–Charvát entropy ([24,25]) and the logical distance special case [19] of C. R. Rao's quadratic entropy [26].

What is new here is not the formula, but the whole back story of partition logic outlined above, which gives the logical notion of entropy arising out of partition logic as the normalized counting measure on ditsets of partitions; just as logical probability arises out of Boolean subset logic as the normalized counting measure on subsets. The basic idea of information is differences, distinguishability and distinctions ([3,19]), so the logical notion of entropy is the measure of the distinctions or dits of a partition, and the corresponding quantum version is the measure of the qudits of an observable.

8. Two Theorems about Quantum Logical Entropy

Classically, a pair of elements $(u_j,u_k)$ either "cohere" together in the same block of a partition on $U$, i.e., are an indistinction of the partition, or they do not, i.e., they are a distinction of the partition. In the quantum case, the nonzero off-diagonal entries $\alpha_j\alpha_k^*$ in the pure state density matrix $\rho(\psi)=|\psi\rangle\langle\psi|$ are called quantum "coherences" ([27] p. 303; [15] p. 177) because they give the amplitude of the eigenstates $u_j$ and $u_k$ "cohering" together in the coherent superposition state vector $|\psi\rangle=\sum_{j=1}^{n}\langle u_j|\psi\rangle|u_j\rangle=\sum_j\alpha_j|u_j\rangle$. The coherences are classically modeled by the nonzero off-diagonal entries $\sqrt{p_jp_k}$ for the indistinctions $(u_j,u_k)\in B_i\times B_i$, i.e., coherences ≈ indistinctions.

For an observable $F$, let $\phi:U\rightarrow\mathbb{R}$ be the $F$-eigenvalue function assigning its eigenvalue to each eigenvector in the ON basis $U=\{u_1,\dots,u_n\}$ of $F$-eigenvectors. The range of $\phi$ is the set of $F$-eigenvalues $\{\phi_1,\dots,\phi_I\}$. Let $P_{\phi_i}:V\rightarrow V$ be the projection matrix in the $U$-basis to the eigenspace of $\phi_i$. The projective $F$-measurement of the state $\psi$ transforms the pure state density matrix $\rho(\psi)$ (represented in the ON basis $U$ of $F$-eigenvectors) to yield the Lüders mixture density matrix $\hat{\rho}(\psi)=\sum_{i=1}^{I}P_{\phi_i}\rho(\psi)P_{\phi_i}$ ([15] p. 279). The off-diagonal elements of $\rho(\psi)$ that are zeroed in $\hat{\rho}(\psi)$ are the coherences (quantum indistinctions or quindits) that are turned into "decoherences" (quantum distinctions or qudits of the observable being measured).

For any observable $F$ and a pure state $\psi$, a quantum logical entropy was defined as $h(F:\psi)=\operatorname{tr}[P_{[\operatorname{qudit}(F)]}\rho(\psi)\otimes\rho(\psi)]$. That definition was the quantum generalization of the "classical" logical entropy defined as $h(\pi)=p\times p(\operatorname{dit}(\pi))$. When a projective $F$-measurement is performed on $\psi$, the pure state density matrix $\rho(\psi)$ is transformed into the mixed state density matrix by the quantum Lüders mixture operation, which then defines the quantum logical entropy $h(\hat{\rho}(\psi))=1-\operatorname{tr}[\hat{\rho}(\psi)^2]$. The first test of how the quantum logical entropy notions fit together is showing that these two entropies are the same: $h(F:\psi)=h(\hat{\rho}(\psi))$. The proof shows that they are both equal to the classical logical entropy of the partition $\pi(F:\psi)$ defined on the ON basis $U=\{u_1,\dots,u_n\}$ of $F$-eigenvectors by the $F$-eigenvalues with the point probabilities $p_j=\alpha_j\alpha_j^*$. That is, the inverse images $B_i=\phi^{-1}(\phi_i)$ of the eigenvalue function $\phi:U\rightarrow\mathbb{R}$ define the eigenvalue partition $\pi(F:\psi)=\{B_1,\dots,B_I\}$ on the ON basis $U$ with the point probabilities $p_j=\alpha_j\alpha_j^*$ provided by the state $\psi$ for $j=1,\dots,n$. The classical logical entropy of that partition is $h(\pi(F:\psi))=1-\sum_{i=1}^{I}p(B_i)^2$, where $p(B_i)=\sum_{u_j\in B_i}p_j$.

We first show that $h(F:\psi)=\operatorname{tr}[P_{[\operatorname{qudit}(F)]}\rho(\psi)\otimes\rho(\psi)]=h(\pi(F:\psi))$. Now, $\operatorname{qudit}(F)=\{u_j\otimes u_k:\phi(u_j)\neq\phi(u_k)\}$, and $[\operatorname{qudit}(F)]$ is the subspace of $V\otimes V$ generated by it. The $n\times n$ pure state density matrix $\rho(\psi)$ has the entries $\rho_{jk}(\psi)=\alpha_j\alpha_k^*$, and $\rho(\psi)\otimes\rho(\psi)$ is an $n^2\times n^2$ matrix. The projection matrix $P_{[\operatorname{qudit}(F)]}$ is an $n^2\times n^2$ diagonal matrix with the diagonal entries, indexed by pairs $(j,k)$ for $j,k=1,\dots,n$: $(P_{[\operatorname{qudit}(F)]})_{(j,k),(j,k)}=1$ if $\phi(u_j)\neq\phi(u_k)$ and zero otherwise. Thus, in the product $P_{[\operatorname{qudit}(F)]}\rho(\psi)\otimes\rho(\psi)$, the nonzero diagonal elements are the $p_jp_k$ where $\phi(u_j)\neq\phi(u_k)$, and so, the trace is $\sum_{j,k=1}^{n}\{p_jp_k:\phi(u_j)\neq\phi(u_k)\}$, which, by definition, is $h(F:\psi)$. Since $\sum_{j=1}^{n}p_j=\sum_{i=1}^{I}p(B_i)=1$, $\left(\sum_{i=1}^{I}p(B_i)\right)^2=1=\sum_{i=1}^{I}p(B_i)^2+\sum_{i\neq i'}p(B_i)p(B_{i'})$. By grouping the $p_jp_k$ in the trace according to the blocks of $\pi(F:\psi)$, we have:

$$h(F:\psi)=\operatorname{tr}[P_{[\operatorname{qudit}(F)]}\rho(\psi)\otimes\rho(\psi)]=\sum_{j,k=1}^{n}\{p_jp_k:\phi(u_j)\neq\phi(u_k)\}=\sum_{i\neq i'}\{p_jp_k:u_j\in B_i,u_k\in B_{i'}\}=\sum_{i\neq i'}p(B_i)p(B_{i'})=1-\sum_{i=1}^{I}p(B_i)^2=h(\pi(F:\psi)).$$

To show that $h(\hat{\rho}(\psi))=1-\operatorname{tr}[\hat{\rho}(\psi)^2]=h(\pi(F:\psi))$ for $\hat{\rho}(\psi)=\sum_{i=1}^{I}P_{\phi_i}\rho(\psi)P_{\phi_i}$, we need to compute $\operatorname{tr}[\hat{\rho}(\psi)^2]$. An off-diagonal element $\rho_{jk}(\psi)=\alpha_j\alpha_k^*$ of $\rho(\psi)$ survives the Lüders operation (i.e., is not zeroed and keeps the same value) if and only if $\phi(u_j)=\phi(u_k)$. Hence, the $j$-th diagonal element of $\hat{\rho}(\psi)^2$ is:

$$\sum_{k=1}^{n}\{\alpha_j^*\alpha_k\alpha_j\alpha_k^*:\phi(u_j)=\phi(u_k)\}=\sum_{k=1}^{n}\{p_jp_k:\phi(u_j)=\phi(u_k)\}=p_jp(B_i),$$

where $u_j\in B_i$. Then, grouping the $j$-th diagonal elements for $u_j\in B_i$ gives $\sum_{u_j\in B_i}p_jp(B_i)=p(B_i)^2$. Hence, the whole trace is $\operatorname{tr}[\hat{\rho}(\psi)^2]=\sum_{i=1}^{I}p(B_i)^2$, and thus:

$$h(\hat{\rho}(\psi))=1-\operatorname{tr}[\hat{\rho}(\psi)^2]=1-\sum_{i=1}^{I}p(B_i)^2=h(F:\psi).$$

This completes the proof of the following theorem.

Theorem 2.

$h(F:\psi)=h(\pi(F:\psi))=h(\hat{\rho}(\psi))$.

Measurement creates distinctions, i.e., turns coherences into "decoherences", which, classically, is the operation of distinguishing elements by classifying them according to some attribute, like classifying the faces of a die by their parity. The fundamental theorem about quantum logical entropy and projective measurement, in the density matrix version, shows how the quantum logical entropy created (starting with $h(\rho(\psi))=0$ for the pure state $\psi$) by the measurement can be computed directly from the coherences of $\rho(\psi)$ that are decohered in $\hat{\rho}(\psi)$.

Theorem 3 (Fundamental (quantum)).

The increase in quantum logical entropy, $h(F:\psi)=h(\hat{\rho}(\psi))$, due to the $F$-measurement of the pure state $\psi$, is the sum of the absolute squares of the nonzero off-diagonal terms (coherences) in $\rho(\psi)$ (represented in an ON basis of $F$-eigenvectors) that are zeroed ('decohered') in the post-measurement Lüders mixture density matrix $\hat{\rho}(\psi)=\sum_{i=1}^{I}P_{\phi_i}\rho(\psi)P_{\phi_i}$.

Proof. 

$h(\hat{\rho}(\psi))-h(\rho(\psi))=\left(1-\operatorname{tr}[\hat{\rho}(\psi)^2]\right)-\left(1-\operatorname{tr}[\rho(\psi)^2]\right)=\sum_{j,k}\left(|\rho_{jk}(\psi)|^2-|\hat{\rho}_{jk}(\psi)|^2\right)$. If $u_j$ and $u_k$ are a qudit of $F$, then and only then are the corresponding off-diagonal terms zeroed by the Lüders mixture operation $\sum_{i=1}^{I}P_{\phi_i}\rho(\psi)P_{\phi_i}$ to obtain $\hat{\rho}(\psi)$ from $\rho(\psi)$. ☐

Density matrices have long been a standard part of the machinery of quantum mechanics. The fundamental theorem for logical entropy and measurement shows there is a simple, direct and quantitative connection between density matrices and logical entropy. The theorem directly connects the changes in the density matrix due to a measurement (the sum of the absolute squares of the zeroed off-diagonal terms) with the increase in logical entropy due to the $F$-measurement: $h(F:\psi)=\operatorname{tr}[P_{[\operatorname{qudit}(F)]}\rho(\psi)\otimes\rho(\psi)]=h(\hat{\rho}(\psi))$ (where $h(\rho(\psi))=0$ for the pure state $\psi$).
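A sketch verifying Theorems 2 and 3 on the same illustrative state as before: the Lüders mixture, the entropy increase and the sum of the absolute squares of the decohered coherences all agree:

```python
# Sketch: Lüders mixture of rho(psi) and the fundamental theorem check.
import numpy as np

alpha = np.array([0.5, 0.5j, -0.5, 0.5])     # same illustrative state
rho_psi = np.outer(alpha, alpha.conj())      # pure state density matrix
blocks = [[0, 1], [2], [3]]                  # eigenvalue partition pi(F:psi)

P = [np.diag([1. if j in B else 0. for j in range(4)]) for B in blocks]
rho_hat = sum(Pi @ rho_psi @ Pi for Pi in P) # post-measurement mixed state

h = lambda r: 1 - np.trace(r @ r).real
decohered = np.abs(rho_psi[rho_hat == 0])**2
print(h(rho_hat) - h(rho_psi), decohered.sum())   # both 0.625
```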

This direct quantitative connection between state discrimination and quantum logical entropy reinforces the judgment of Boaz Tamir and Eliahu Cohen ([28,29]) that quantum logical entropy is a natural and informative entropy concept for quantum mechanics.

We find this framework of partitions and distinction most suitable (at least conceptually) for describing the problems of quantum state discrimination, quantum cryptography and in general, for discussing quantum channel capacity. In these problems, we are basically interested in a distance measure between such sets of states, and this is exactly the kind of knowledge provided by logical entropy [Reference to [19]].

([28] p. 1)

Moreover, the quantum logical entropy has a simple "two-draw probability" interpretation, i.e., $h(F:\psi)=h(\hat{\rho}(\psi))$ is the probability that two independent $F$-measurements of $\psi$ will yield distinct $F$-eigenvalues, i.e., will yield a qudit of $F$. In contrast, the von Neumann entropy has no such simple interpretation, and there seems to be no such intuitive connection between pre- and post-measurement density matrices and von Neumann entropy, although von Neumann entropy also increases in a projective measurement ([30] Theorem 11.9, p. 515).

The development of the quantum logical concepts for two non-commuting operators (see Appendix B) is the straightforward vector space version of the classical logical entropy treatment of partitions on two sets $X$ and $Y$ (see Appendix A).

9. Quantum Logical Entropies of Density Operators

The extension of the classical logical entropy $h(p)=1-\sum_{i=1}^{n}p_i^2$ of a probability distribution $p=(p_1,\dots,p_n)$ to the quantum case is $h(\rho)=1-\operatorname{tr}[\rho^2]$, where a density matrix $\rho$ replaces the probability distribution $p$ and the trace replaces the summation.

An arbitrary density operator ρ, representing a pure or mixed state on V, is also a self-adjoint operator on V, so quantum logical entropies can be defined where density operators play the double role of providing the measurement basis (as self-adjoint operators), as well as the state being measured.

Let $\rho$ and $\tau$ be two non-commuting density operators on $V$. Let $X=\{u_i\}_{i=1,\dots,n}$ be an orthonormal (ON) basis of $\rho$-eigenvectors, and let $\{\lambda_i\}_{i=1,\dots,n}$ be the corresponding eigenvalues, which must be non-negative and sum to one, so they can be interpreted as probabilities. Let $Y=\{v_j\}_{j=1,\dots,n}$ be an ON basis of eigenvectors for $\tau$, and let $\{\mu_j\}_{j=1,\dots,n}$ be the corresponding eigenvalues, which are also non-negative and sum to one.

Each density operator plays a double role. For instance, $\rho$ acts as the observable to supply the measurement basis $\{u_i\}_i$ and the eigenvalues $\{\lambda_i\}_i$, as well as being the state to be measured, supplying the probabilities $\{\lambda_i\}_i$ for the measurement outcomes. In this section, we define quantum logical entropies using the discrete partition $\mathbf{1}_X$ on the set of "index" states $X=\{u_i\}_i$ and similarly for the discrete partition $\mathbf{1}_Y$ on $Y=\{v_j\}_j$, the ON basis of eigenvectors for $\tau$.

The qudit sets of $(V\otimes V)^{\otimes 2}$ are then defined according to the identity and difference on the index sets and independently of the eigenvalue-probabilities, e.g., $\operatorname{qudit}(\mathbf{1}_X)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:i\neq i'\}$. Then, the qudit subspaces are the subspaces of $(V\otimes V)^{\otimes 2}$ generated by the qudit sets of generators:

  • $\operatorname{qudit}(\mathbf{1}_X)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:i\neq i'\}$;

  • $\operatorname{qudit}(\mathbf{1}_Y)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:j\neq j'\}$;

  • $\operatorname{qudit}(\mathbf{1}_X,\mathbf{1}_Y)=\operatorname{qudit}(\mathbf{1}_X)\cup\operatorname{qudit}(\mathbf{1}_Y)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:i\neq i'\text{ or }j\neq j'\}$;

  • $\operatorname{qudit}(\mathbf{1}_X|\mathbf{1}_Y)=\operatorname{qudit}(\mathbf{1}_X)-\operatorname{qudit}(\mathbf{1}_Y)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:i\neq i'\text{ and }j=j'\}$;

  • $\operatorname{qudit}(\mathbf{1}_Y|\mathbf{1}_X)=\operatorname{qudit}(\mathbf{1}_Y)-\operatorname{qudit}(\mathbf{1}_X)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:i=i'\text{ and }j\neq j'\}$; and

  • $\operatorname{qudit}(\mathbf{1}_Y\&\mathbf{1}_X)=\operatorname{qudit}(\mathbf{1}_Y)\cap\operatorname{qudit}(\mathbf{1}_X)=\{u_i\otimes v_j\otimes u_{i'}\otimes v_{j'}:i\neq i'\text{ and }j\neq j'\}$.

Then, as qudit sets: $\operatorname{qudit}(\mathbf{1}_X,\mathbf{1}_Y)=\operatorname{qudit}(\mathbf{1}_X|\mathbf{1}_Y)\sqcup\operatorname{qudit}(\mathbf{1}_Y|\mathbf{1}_X)\sqcup\operatorname{qudit}(\mathbf{1}_Y\&\mathbf{1}_X)$, and the corresponding qudit subspaces stand in the same relation, where the disjoint union is replaced by the direct sum.

The density operator $\rho$ is represented by the diagonal density matrix $\rho_X$ in its own ON basis $X$ with $(\rho_X)_{ii}=\lambda_i$, and similarly for the diagonal density matrix $\tau_Y$ with $(\tau_Y)_{jj}=\mu_j$. The density operators $\rho,\tau$ on $V$ define a density operator $\rho\otimes\tau$ on $V\otimes V$ with the ON basis of eigenvectors $\{u_i\otimes v_j\}_{i,j}$ and the eigenvalue-probabilities $\{\lambda_i\mu_j\}_{i,j}$. The operator $\rho\otimes\tau$ is represented in its ON basis by the diagonal density matrix $\rho_X\otimes\tau_Y$ with diagonal entries $\lambda_i\mu_j$, where $1=(\lambda_1+\dots+\lambda_n)(\mu_1+\dots+\mu_n)=\sum_{i,j=1}^{n}\lambda_i\mu_j$. The probability measure $p(u_i\otimes v_j)=\lambda_i\mu_j$ on $V\otimes V$ defines the product measure $p\times p$ on $(V\otimes V)^{\otimes 2}$, where it can be applied to the qudit subspaces to define the quantum logical entropies as usual.

In the first instance, we have:

$$h(\mathbf{1}_X:\rho\otimes\tau)=p\times p([\operatorname{qudit}(\mathbf{1}_X)])=\sum\{\lambda_i\mu_j\lambda_{i'}\mu_{j'}:i\neq i'\}=\sum_{i\neq i'}\lambda_i\lambda_{i'}\sum_{j,j'}\mu_j\mu_{j'}=\sum_{i\neq i'}\lambda_i\lambda_{i'}=1-\sum_i\lambda_i^2=1-\operatorname{tr}[\rho^2]=h(\rho),$$

and similarly, $h(\mathbf{1}_Y:\rho\otimes\tau)=h(\tau)$. Since all the data are supplied by the two density operators, we can use simplified notation to define the corresponding joint, conditional and mutual entropies:

  • $h(\rho,\tau)=h(\mathbf{1}_X,\mathbf{1}_Y:\rho\otimes\tau)=p\times p([\operatorname{qudit}(\mathbf{1}_X)\cup\operatorname{qudit}(\mathbf{1}_Y)])$;

  • $h(\rho|\tau)=h(\mathbf{1}_X|\mathbf{1}_Y:\rho\otimes\tau)=p\times p([\operatorname{qudit}(\mathbf{1}_X)-\operatorname{qudit}(\mathbf{1}_Y)])$;

  • $h(\tau|\rho)=h(\mathbf{1}_Y|\mathbf{1}_X:\rho\otimes\tau)=p\times p([\operatorname{qudit}(\mathbf{1}_Y)-\operatorname{qudit}(\mathbf{1}_X)])$; and

  • $m(\rho,\tau)=h(\mathbf{1}_Y\&\mathbf{1}_X:\rho\otimes\tau)=p\times p([\operatorname{qudit}(\mathbf{1}_Y)\cap\operatorname{qudit}(\mathbf{1}_X)])$.

Then, the usual Venn diagram relationships hold for the probability measure $p\times p$ on $(V\otimes V)^{\otimes 2}$, e.g.,

$$h(\rho,\tau)=h(\rho|\tau)+h(\tau|\rho)+m(\rho,\tau),$$

and probability interpretations are readily available. There are two probability distributions $\lambda=\{\lambda_i\}_i$ and $\mu=\{\mu_j\}_j$ on the sample space $\{1,\dots,n\}$. Two pairs $(i,j)$ and $(i',j')$ are drawn with replacement; the first entry in each pair is drawn according to $\lambda$ and the second entry according to $\mu$. Then, $h(\rho,\tau)$ is the probability that $i\neq i'$ or $j\neq j'$ (or both); $h(\rho|\tau)$ is the probability that $i\neq i'$ and $j=j'$, and so forth. Note that this interpretation assumes no special significance to a $\lambda_i$ and $\mu_i$ having the same index, since we are drawing a pair of pairs.
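A sketch of the pair-of-pairs interpretation with illustrative eigenvalue distributions $\lambda$ and $\mu$, checking the Venn identity above:

```python
# Sketch: quantum logical entropies of two density operators from their spectra.
import numpy as np
from itertools import product

lam = np.array([0.7, 0.2, 0.1])    # illustrative eigenvalues of rho
mu = np.array([0.5, 0.3, 0.2])     # illustrative eigenvalues of tau

pairs = list(product(range(3), range(3)))          # (i, j) drawn with prob lam_i mu_j
prob = {(i, j): lam[i] * mu[j] for i, j in pairs}

def two_draw(pred):
    # probability of the predicate over two independent pair-draws
    return sum(prob[a] * prob[b] for a in pairs for b in pairs
               if pred(a[0], a[1], b[0], b[1]))

h_rho = two_draw(lambda i, j, i2, j2: i != i2)                # = 1 - sum(lam**2)
h_joint = two_draw(lambda i, j, i2, j2: i != i2 or j != j2)   # h(rho, tau)
h_cond = two_draw(lambda i, j, i2, j2: i != i2 and j == j2)   # h(rho | tau)
h_cond2 = two_draw(lambda i, j, i2, j2: i == i2 and j != j2)  # h(tau | rho)
m = two_draw(lambda i, j, i2, j2: i != i2 and j != j2)        # m(rho, tau)

assert abs(h_rho - (1 - np.sum(lam**2))) < 1e-12
assert abs(h_joint - (h_cond + h_cond2 + m)) < 1e-12
```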

In the classical case of two probability distributions $p=(p_1,\dots,p_n)$ and $q=(q_1,\dots,q_n)$ on the same index set, the logical cross-entropy is defined as $h(p\|q)=1-\sum_ip_iq_i$ and is interpreted as the probability of getting different indices in drawing a single pair, one index from $p$ and the other from $q$. This cross-entropy, however, does assume some special significance to $p_i$ and $q_i$ having the same index, whereas in our current quantum setting, there is no correlation between the two sets of "index" states $\{u_i\}_{i=1,\dots,n}$ and $\{v_j\}_{j=1,\dots,n}$. However, when the two density operators commute, $\tau\rho=\rho\tau$, then we can take $\{u_i\}_{i=1,\dots,n}$ as an ON basis of simultaneous eigenvectors for the two operators with respective eigenvalues $\lambda_i$ and $\mu_i$ for $u_i$ with $i=1,\dots,n$. In that special case, we can meaningfully define the quantum logical cross-entropy as $h(\rho\|\tau)=1-\sum_{i=1}^{n}\lambda_i\mu_i$, but the general case awaits further analysis below.

10. The Logical Hamming Distance between Two Partitions

The development of logical quantum information theory in terms of some given commuting or non-commuting observables gives an analysis of the distinguishability of quantum states using those observables. Without any given observables, there is still a natural logical analysis of the distance between quantum states that generalizes the "classical" logical distance $h(\pi|\sigma)+h(\sigma|\pi)$ between partitions on a set. In the classical case, we have the logical entropy $h(\pi)$ of a partition, where the partition plays the role of the direct-sum decomposition of eigenspaces of an observable in the quantum case. However, we also have just the logical entropy $h(p)$ of a probability distribution $p=(p_1,\dots,p_n)$ and the related compound notions of logical entropy given another probability distribution $q=(q_1,\dots,q_n)$ indexed by the same set.

First, we review that classical treatment to motivate the quantum version of the logical Hamming distance between states. A binary relation $R\subseteq U\times U$ on $U=\{u_1,\dots,u_n\}$ can be represented by an $n\times n$ incidence matrix $I(R)$, where:

$$I(R)_{ij}=\begin{cases}1&\text{if }(u_i,u_j)\in R\\0&\text{if }(u_i,u_j)\notin R.\end{cases}$$

Taking $R$ as the equivalence relation $\operatorname{indit}(\pi)$ associated with a partition $\pi=\{B_1,\dots,B_I\}$, the density matrix $\rho(\pi)$ of the partition $\pi$ (with equiprobable points) is just the incidence matrix $I(\operatorname{indit}(\pi))$ rescaled to be of trace one (i.e., the sum of diagonal entries is one):

$$\rho(\pi)=\frac{1}{|U|}I(\operatorname{indit}(\pi)).$$

From coding theory ([31] p. 66), we have the notion of the Hamming distance between two $\{0,1\}$-vectors or matrices (of the same dimensions), which is the number of places where they differ. Since logical information theory is about distinctions and differences, it is important to have a classical and quantum logical notion of Hamming distance. The powerset $\wp(U\times U)$ can be viewed as a vector space over $\mathbb{Z}_2$, where the sum of two binary relations $R,R'\subseteq U\times U$ is the symmetric difference, symbolized as $R\Delta R'=(R-R')\cup(R'-R)=(R\cup R')-(R\cap R')$, which is the set of elements (i.e., ordered pairs $(u_i,u_j)\in U\times U$) that are in one set or the other, but not both. Thus, the Hamming distance $D_H(I(R),I(R'))$ between the incidence matrices of two binary relations is just the cardinality of their symmetric difference: $D_H(I(R),I(R'))=|R\Delta R'|$. Moreover, the size of the symmetric difference does not change if the binary relations are replaced by their complements: $|R\Delta R'|=|(U^2-R)\Delta(U^2-R')|$.

Hence, given two partitions $\pi=\{B_1,\dots,B_I\}$ and $\sigma=\{C_1,\dots,C_J\}$ on $U$, the unnormalized Hamming distance between the two partitions is naturally defined as (this distance is investigated in Rossi [32]):

$$D(\pi,\sigma)=D_H\left(I(\operatorname{indit}(\pi)),I(\operatorname{indit}(\sigma))\right)=|\operatorname{indit}(\pi)\Delta\operatorname{indit}(\sigma)|=|\operatorname{dit}(\pi)\Delta\operatorname{dit}(\sigma)|,$$

and the Hamming distance between $\pi$ and $\sigma$ is defined as the normalized $D(\pi,\sigma)$:

$$d(\pi,\sigma)=\frac{D(\pi,\sigma)}{|U\times U|}=\frac{|\operatorname{dit}(\pi)\Delta\operatorname{dit}(\sigma)|}{|U\times U|}=\frac{|\operatorname{dit}(\pi)-\operatorname{dit}(\sigma)|}{|U\times U|}+\frac{|\operatorname{dit}(\sigma)-\operatorname{dit}(\pi)|}{|U\times U|}=h(\pi|\sigma)+h(\sigma|\pi).$$

This motivates the general case of point probabilities $p=(p_1,\dots,p_n)$, where we define the Hamming distance between the two partitions as the sum of the two logical conditional entropies:

$$d(\pi,\sigma)=h(\pi|\sigma)+h(\sigma|\pi)=2h(\pi\vee\sigma)-h(\pi)-h(\sigma).$$

To motivate the bridge to the quantum version of the Hamming distance, we need to calculate it using the density matrices $\rho(\pi)$ and $\rho(\sigma)$ of the two partitions. To compute the trace $\operatorname{tr}[\rho(\pi)\rho(\sigma)]$, we compute the diagonal elements in the product $\rho(\pi)\rho(\sigma)$ and add them up: $(\rho(\pi)\rho(\sigma))_{kk}=\sum_l\rho(\pi)_{kl}\rho(\sigma)_{lk}=\sum_l\sqrt{p_kp_l}\sqrt{p_lp_k}$, where the only nonzero terms are those with $u_k,u_l\in B\cap C$ for some $B\in\pi$ and $C\in\sigma$. Thus, if $u_k\in B\cap C$, then $(\rho(\pi)\rho(\sigma))_{kk}=\sum_{u_l\in B\cap C}p_kp_l$. Therefore, the diagonal element for $u_k$ is the sum of the $p_kp_l$ for $u_l$ in the same intersection $B\cap C$ as $u_k$, so it is $p_k\Pr(B\cap C)$. Then, when we sum over the diagonal elements for all the $u_k\in B\cap C$ for any given $B,C$, we just sum $\sum_{u_k\in B\cap C}p_k\Pr(B\cap C)=\Pr(B\cap C)^2$, so that $\operatorname{tr}[\rho(\pi)\rho(\sigma)]=\sum_{B\in\pi,C\in\sigma}\Pr(B\cap C)^2=1-h(\pi\vee\sigma)$.

Hence, if we define the logical cross-entropy of $\pi$ and $\sigma$ as:

$$h(\pi\|\sigma)=1-\operatorname{tr}[\rho(\pi)\rho(\sigma)],$$

then, for partitions on $U$ with the point probabilities $p=(p_1,\dots,p_n)$, the logical cross-entropy $h(\pi\|\sigma)$ of two partitions is the same as the logical joint entropy, which is also the logical entropy of the join:

$$h(\pi\|\sigma)=h(\pi,\sigma)=h(\pi\vee\sigma).$$

Thus, we can also express the logical Hamming distance between two partitions entirely in terms of density matrices:

$$d(\pi,\sigma)=2h(\pi\|\sigma)-h(\pi)-h(\sigma)=\operatorname{tr}[\rho(\pi)^2]+\operatorname{tr}[\rho(\sigma)^2]-2\operatorname{tr}[\rho(\pi)\rho(\sigma)].$$
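A sketch computing the logical Hamming distance between two illustrative partitions both from the ditsets and from the trace formula:

```python
# Sketch: d(pi, sigma) via conditional entropies and via density matrices.
import numpy as np
from itertools import product

p = np.array([0.1, 0.2, 0.3, 0.4])
pi_blocks = [[0, 1], [2, 3]]          # illustrative partitions on four points
sigma_blocks = [[0, 2], [1, 3]]

def rho_of(blocks):
    r = np.zeros((4, 4))
    for B in blocks:
        for k in B:
            for l in B:
                r[k, l] = np.sqrt(p[k] * p[l])   # sqrt(p_k p_l) on indit pairs
    return r

def dit(blocks):
    blk = lambda k: next(i for i, B in enumerate(blocks) if k in B)
    return {(k, l) for k, l in product(range(4), range(4)) if blk(k) != blk(l)}

mu = lambda R: sum(p[k] * p[l] for k, l in R)
d_dits = (mu(dit(pi_blocks) - dit(sigma_blocks))
          + mu(dit(sigma_blocks) - dit(pi_blocks)))   # h(pi|sigma) + h(sigma|pi)

r_pi, r_sig = rho_of(pi_blocks), rho_of(sigma_blocks)
tr = lambda A: float(np.trace(A))
d_trace = tr(r_pi @ r_pi) + tr(r_sig @ r_sig) - 2 * tr(r_pi @ r_sig)
assert abs(d_dits - d_trace) < 1e-12
```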

11. The Quantum Logical Hamming Distance

The quantum logical entropy $h(\rho)=1-\operatorname{tr}[\rho^2]$ of a density matrix $\rho$ generalizes the classical $h(p)=1-\sum_ip_i^2$ for a probability distribution $p=(p_1,\dots,p_n)$. As a self-adjoint operator, a density matrix has a spectral decomposition $\rho=\sum_{i=1}^{n}\lambda_i|u_i\rangle\langle u_i|$, where $\{u_i\}_{i=1,\dots,n}$ is an orthonormal basis for $V$ and where all the eigenvalues $\lambda_i$ are real, non-negative and $\sum_{i=1}^{n}\lambda_i=1$. Then, $h(\rho)=1-\sum_i\lambda_i^2$, so $h(\rho)$ can be interpreted as the probability of getting distinct indices $i\neq i'$ in two independent measurements of the state $\rho$ with $\{u_i\}$ as the measurement basis. Classically, it is the two-draw probability of getting distinct indices in two independent samples of the probability distribution $\lambda=(\lambda_1,\dots,\lambda_n)$, just as $h(p)$ is the probability of getting distinct indices in two independent draws on $p$. For a pure state $\rho$, there is one $\lambda_i=1$ with the others zero, and $h(\rho)=0$ is the probability of getting distinct indices in two independent draws on $\lambda=(0,\dots,0,1,0,\dots,0)$.

In the classical case of the logical entropies, we worked with the ditsets or sets of distinctions of partitions. However, everything could also be expressed in terms of the complementary sets of indits or indistinctions of partitions (ordered pairs of elements in the same block of the partition), since $\operatorname{dit}(\pi)\sqcup\operatorname{indit}(\pi)=U\times U$. When we switch to the density matrix treatment of "classical" partitions, then the focus shifts to the indistinctions. For a partition $\pi=\{B_1,\dots,B_I\}$, the logical entropy is the sum of the distinction probabilities: $h(\pi)=\sum_{(u_k,u_l)\in\operatorname{dit}(\pi)}p_kp_l$, which in terms of indistinctions is:

$$h(\pi)=1-\sum_{(u_k,u_l)\in\operatorname{indit}(\pi)}p_kp_l=1-\sum_{i=1}^{I}\Pr(B_i)^2.$$

When expressed in the density matrix formulation, $\operatorname{tr}[\rho(\pi)^2]$ is the sum of the indistinction probabilities:

$$\operatorname{tr}[\rho(\pi)^2]=\sum_{(u_k,u_l)\in\operatorname{indit}(\pi)}p_kp_l=\sum_{i=1}^{I}\Pr(B_i)^2.$$

The nonzero entries in $\rho(\pi)$ have the form $\sqrt{p_kp_l}$ for $(u_k,u_l)\in\operatorname{indit}(\pi)$; their squares are the indistinction probabilities. That provides the interpretive bridge to the quantum case.

The quantum analogue of an indistinction probability is the absolute square $|\rho_{kl}|^2$ of a nonzero entry $\rho_{kl}$ in a density matrix $\rho$, and $\operatorname{tr}[\rho^2]=\sum_{k,l}|\rho_{kl}|^2$ is the sum of those "indistinction" probabilities. The nonzero entries in the density matrix $\rho$ might be called "coherences", so that $\rho_{kl}$ may be interpreted as the amplitude for the states $u_k$ and $u_l$ to cohere together in the state $\rho$; thus, $\operatorname{tr}[\rho^2]$ is the sum of the coherence probabilities, just as $\operatorname{tr}[\rho(\pi)^2]=\sum_{(u_k,u_l)\in\operatorname{indit}(\pi)}p_kp_l$ is the sum of the indistinction probabilities. The quantum logical entropy $h(\rho)=1-\operatorname{tr}[\rho^2]$ may then be interpreted as the sum of the decoherence probabilities, just as $h(\rho(\pi))=h(\pi)=1-\sum_{(u_k,u_l)\in\operatorname{indit}(\pi)}p_kp_l$ is the sum of the distinction probabilities.

This motivates the quantum generalization of the joint entropy $h(\pi,\sigma)=h(\pi\vee\sigma)=h(\pi\|\sigma)$, which is the quantum logical cross-entropy:

$$h(\rho\|\tau)=1-\operatorname{tr}[\rho\tau].$$

To work out its interpretation, we again take ON eigenvector bases $\{u_i\}_{i=1}^{n}$ for $\rho$ and $\{v_j\}_{j=1}^{n}$ for $\tau$ with $\lambda_i$ and $\mu_j$ as the respective eigenvalues and compute the operator $\tau\rho:V\rightarrow V$. Now, $u_i=\sum_j\langle v_j|u_i\rangle v_j$, so $\rho u_i=\lambda_iu_i=\sum_j\lambda_i\langle v_j|u_i\rangle v_j$, and then, for $\tau=\sum_j|v_j\rangle\langle v_j|\mu_j$, $\tau\rho u_i=\sum_j\lambda_i\mu_j\langle v_j|u_i\rangle v_j$. Thus, $\tau\rho$ in the $\{u_i\}_i$ basis has the diagonal entries $\langle u_i|\tau\rho|u_i\rangle=\sum_j\lambda_i\mu_j\langle v_j|u_i\rangle\langle u_i|v_j\rangle$, so the trace is:

$$\operatorname{tr}[\tau\rho]=\sum_i\langle u_i|\tau\rho|u_i\rangle=\sum_{i,j}\lambda_i\mu_j\langle v_j|u_i\rangle\langle u_i|v_j\rangle=\operatorname{tr}[\rho\tau],$$

which is symmetrical. The other information we have is that $\sum_i\lambda_i=1=\sum_j\mu_j$, and the eigenvalues are non-negative. The classical logical cross-entropy of two probability distributions is $h(p\|q)=1-\sum_{i,j}p_iq_j\delta_{ij}$, where two indices $i$ and $j$ are either identical or totally distinct. However, in the quantum case, the "index" states $u_i$ and $v_j$ have an "overlap" measured by the inner product $\langle u_i|v_j\rangle$. The trace $\operatorname{tr}[\rho\tau]$ is real since $\langle v_j|u_i\rangle=\langle u_i|v_j\rangle^*$ and $\langle v_j|u_i\rangle\langle u_i|v_j\rangle=|\langle u_i|v_j\rangle|^2=|\langle v_j|u_i\rangle|^2$ is the probability of getting $\lambda_i$ when measuring $v_j$ in the $\{u_i\}$ basis and the probability of getting $\mu_j$ when measuring $u_i$ in the $\{v_j\}$ basis. The twofold nature of density matrices as states and as observables then allows $\operatorname{tr}[\rho\tau]$ to be interpreted as the average value of the observable $\rho$ when measuring the state $\tau$, or vice versa.

We may call $\langle v_j|u_i\rangle\langle u_i|v_j\rangle$ the proportion or extent of overlap for those two index states. Thus, $\operatorname{tr}[\rho\tau]$ is the sum of all the probability combinations $\lambda_i\mu_j$ weighted by the overlaps $\langle v_j|u_i\rangle\langle u_i|v_j\rangle$ for the index states $u_i$ and $v_j$. The quantum logical cross-entropy can be written in a number of ways:

$$h(\rho\|\tau)=1-\operatorname{tr}[\rho\tau]=1-\sum_{i,j}\lambda_i\mu_j\langle v_j|u_i\rangle\langle u_i|v_j\rangle=\operatorname{tr}[\tau(I-\rho)]=\sum_{i,j}(1-\lambda_i)\mu_j\langle v_j|u_i\rangle\langle u_i|v_j\rangle=\operatorname{tr}[\rho(I-\tau)]=\sum_{i,j}\lambda_i(1-\mu_j)\langle v_j|u_i\rangle\langle u_i|v_j\rangle.$$

Classically, the "index state" $i$ completely overlaps with $j$ when $i=j$ and has no overlap with any other index from $1,\dots,n$, so the "overlaps" are, as it were, $\langle j|i\rangle\langle i|j\rangle=\delta_{ij}$, the Kronecker delta. Hence, the classical analogue formulas are:

$$h(p\|q)=1-\sum_{i,j}p_iq_j\delta_{ij}=\sum_{i,j}(1-p_i)q_j\delta_{ij}=\sum_{i,j}p_i(1-q_j)\delta_{ij}.$$

The quantum logical cross-entropy $h(\rho\|\tau)$ can be interpreted by considering two measurements, one of $\rho$ with the $\{u_i\}_i$ measurement basis and the other of $\tau$ with the $\{v_j\}_j$ measurement basis. If the outcome of the $\rho$ measurement was $u_i$ with probability $\lambda_i$, then the outcome of the $\tau$ measurement is different from $v_j$ with probability $1-\mu_j$; however, that distinction probability $\lambda_i(1-\mu_j)$ is only relevant to the extent that $u_i$ and $v_j$ are the "same state" or overlap, and that extent is $\langle v_j|u_i\rangle\langle u_i|v_j\rangle$. Hence, the quantum logical cross-entropy is the sum of those two-measurement distinction probabilities weighted by the extent that the states overlap. The interpretation of $h(\rho)$ and $h(\tau\|\rho)$, as well as the later development of the quantum logical conditional entropy $h(\rho|\tau)$ and the quantum Hamming distance $d(\rho,\tau)$, are all based on using the eigenvectors and eigenvalues of density matrices, which Michael Nielsen and Isaac Chuang seem to prematurely dismiss as having little or no "special significance" ([30] p. 103).

When the two density matrices commute, $\rho\tau=\tau\rho$, then (as noted above) we have the essentially classical situation of one set of index states $\{u_i\}_i$, which is an orthonormal basis of simultaneous eigenvectors for both $\rho$ and $\tau$ with the respective eigenvalues $\{\lambda_i\}_i$ and $\{\mu_i\}_i$. Then, $\langle u_j|u_i\rangle\langle u_i|u_j\rangle=\delta_{ij}$, so $h(\rho\|\tau)=\sum_{i,j}\lambda_i(1-\mu_j)\delta_{ij}$ is the probability of getting two distinct index states $u_i$ and $u_j$ for $i\neq j$ in two independent measurements, one of $\rho$ and one of $\tau$, in the same measurement basis $\{u_i\}_i$. This interpretation includes the special case when $\tau=\rho$ and $h(\rho\|\rho)=h(\rho)$.

We saw that, classically, the logical Hamming distance between two partitions could be defined as:

$$d(\pi,\sigma)=2h(\pi\|\sigma)-h(\pi)-h(\sigma)=\operatorname{tr}[\rho(\pi)^2]+\operatorname{tr}[\rho(\sigma)^2]-2\operatorname{tr}[\rho(\pi)\rho(\sigma)],$$

so this motivates the quantum definition. Nielsen and Chuang suggest the idea of a Hamming distance between quantum states, only to then dismiss it. "Unfortunately, the Hamming distance between two objects is simply a matter of labeling, and a priori there are not any labels in the Hilbert space arena of quantum mechanics!" ([30] p. 399). They are right that there is no correlation, say, between the vectors in the two ON bases $\{u_i\}_i$ and $\{v_j\}_j$ for $V$, but the cross-entropy $h(\rho\|\tau)$ uses all possible combinations in the terms $\lambda_i(1-\mu_j)\langle v_j|u_i\rangle\langle u_i|v_j\rangle$; thus, the definition of the Hamming distance given below does not use any arbitrary labeling or correlations:

$$d(\rho,\tau)=2h(\rho\|\tau)-h(\rho)-h(\tau)=\operatorname{tr}[\rho^2]+\operatorname{tr}[\tau^2]-2\operatorname{tr}[\rho\tau].$$

This is the definition of the quantum logical Hamming distance between two quantum states.

There is another distance measure between quantum states, namely the Hilbert–Schmidt norm, which has recently been investigated in [29] (with an added $\frac{1}{2}$ factor). It is the square of the Euclidean distance between the quantum states, and, ignoring the $\frac{1}{2}$ factor, it is the square of the "trace distance" ([30] Chapter 9) between the states:

$$\operatorname{tr}[(\rho-\tau)^2]\quad\text{Hilbert–Schmidt norm},$$

where we write $A^2$ for $AA$. Then, the naturalness of this norm as a "distance" is enhanced by the fact that it is the same as the quantum logical Hamming distance:

Theorem 4.

Hilbert–Schmidt norm = quantum logical Hamming distance.

Proof. 

$\operatorname{tr}[(\rho-\tau)^2]=\operatorname{tr}[\rho^2]+\operatorname{tr}[\tau^2]-2\operatorname{tr}[\rho\tau]=2h(\rho\|\tau)-h(\rho)-h(\tau)=d(\rho,\tau)$. ☐

Hence, the information inequality holds trivially for the quantum logical Hamming distance:

$$d(\rho,\tau)\geq 0\ \text{with equality iff}\ \rho=\tau.$$
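A sketch verifying Theorem 4 on two random mixed states (the construction of the random density matrices is an illustrative choice):

```python
# Sketch: Hilbert-Schmidt norm equals the quantum logical Hamming distance.
import numpy as np

rng = np.random.default_rng(0)

def random_density(n=3):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T                 # Hermitian, positive semi-definite
    return rho / np.trace(rho).real     # normalize to trace one

rho, tau = random_density(), random_density()
tr = lambda M: np.trace(M).real
h = lambda r: 1 - tr(r @ r)
h_cross = 1 - tr(rho @ tau)             # quantum logical cross-entropy

lhs = tr((rho - tau) @ (rho - tau))     # Hilbert-Schmidt norm
rhs = 2 * h_cross - h(rho) - h(tau)     # quantum logical Hamming distance
assert abs(lhs - rhs) < 1e-12
```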

The fundamental theorem can be usefully restated in this broader context of density operators instead of in terms of the density matrix represented in the ON basis of $F$-eigenvectors. Let $\rho$ be any state to be measured by an observable $F$, and let $\hat{\rho}=\sum_{i=1}^{I}P_{\phi_i}\rho P_{\phi_i}$ be the result of applying the Lüders mixture operation (where the $P_{\phi_i}$ are the projection operators to the eigenspaces of $F$). Then, a natural question to ask is the Hilbert–Schmidt norm or quantum logical Hamming distance between the pre- and post-measurement states. It might be noted that the Hilbert–Schmidt norm and the Lüders mixture operation are defined independently of any considerations of logical entropy.

Theorem 5 (Fundamental (quantum)).

$\operatorname{tr}[(\rho-\hat{\rho})^2]=h(\hat{\rho})-h(\rho)$.

Proof. 

$\operatorname{tr}[(\rho-\hat{\rho})^2]=\operatorname{tr}[(\rho-\hat{\rho})(\rho-\hat{\rho})]=\operatorname{tr}[\rho^2]+\operatorname{tr}[\hat{\rho}^2]-\operatorname{tr}[\rho\hat{\rho}]-\operatorname{tr}[\hat{\rho}\rho]$, where $\rho^{\dagger}=\rho$, $\hat{\rho}^{\dagger}=\hat{\rho}$, $\operatorname{tr}[\rho\hat{\rho}]=\operatorname{tr}[\hat{\rho}\rho]$ and $\operatorname{tr}[\rho\hat{\rho}]=\operatorname{tr}[\rho\sum_{i=1}^{I}P_{\phi_i}\rho P_{\phi_i}]=\sum_i\operatorname{tr}[\rho P_{\phi_i}\rho P_{\phi_i}]$. Also, $\operatorname{tr}[\hat{\rho}^2]=\operatorname{tr}[\sum_iP_{\phi_i}\rho P_{\phi_i}\sum_jP_{\phi_j}\rho P_{\phi_j}]=\operatorname{tr}[\sum_iP_{\phi_i}\rho P_{\phi_i}^2\rho P_{\phi_i}]$, using the orthogonality of the distinct projection operators. Then, using the idempotency of the projections and the cyclicity of the trace, $\operatorname{tr}[\hat{\rho}^2]=\sum_i\operatorname{tr}[\rho P_{\phi_i}\rho P_{\phi_i}]$, so $\operatorname{tr}[\rho\hat{\rho}]=\operatorname{tr}[\hat{\rho}\rho]=\operatorname{tr}[\hat{\rho}^2]$, and hence, $\operatorname{tr}[(\rho-\hat{\rho})^2]=\operatorname{tr}[\rho^2]-\operatorname{tr}[\hat{\rho}^2]=h(\hat{\rho})-h(\rho)$. ☐
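A sketch verifying Theorem 5 for a random state and an illustrative projective measurement:

```python
# Sketch: tr[(rho - rho_hat)^2] = h(rho_hat) - h(rho) for a Lüders mixture.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real                 # random density matrix

P = [np.diag([1., 1., 0., 0.]),           # illustrative eigenspace projections
     np.diag([0., 0., 1., 0.]),
     np.diag([0., 0., 0., 1.])]
rho_hat = sum(Pi @ rho @ Pi for Pi in P)  # Lüders mixture

tr = lambda M: np.trace(M).real
h = lambda r: 1 - tr(r @ r)
diff = rho - rho_hat
assert abs(tr(diff @ diff) - (h(rho_hat) - h(rho))) < 1e-12
```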

12. Results

Logical information theory arises as the quantitative version of the logic of partitions just as logical probability theory arises as the quantitative version of the dual Boolean logic of subsets. Philosophically, logical information is based on the idea of information-as-distinctions. The Shannon definitions of entropy arise naturally out of the logical definitions by replacing the counting of distinctions by the counting of the minimum number of binary partitions (bits) that are required, on average, to make all the same distinctions, i.e., to encode the distinguished elements uniquely, which is why the Shannon theory is so well adapted for the theory of coding and communication.

This “classical” logical information theory may be developed with the data of two partitions on a set with point probabilities. Section 7 gives the generalization to the quantum case where the partitions are provided by two commuting observables (the point set is an ON basis of simultaneous eigenvectors), and the point probabilities are provided by the state to be measured. In Section 8, the fundamental theorem for quantum logical entropy and measurement established a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates (cohered together in the pure superposition state being measured) that are distinguished by the measurement (decohered in the post-measurement mixed state). This theorem establishes quantum logical entropy as a natural notion for a quantum information theory focusing on distinguishing states.

The classical theory might also start with partitions on two different sets and a probability distribution on the product of the sets (see Appendix A). Appendix B gives the quantum generalization of that case, with the two sets being two ON bases for two non-commuting observables and the probabilities provided by a state to be measured. The classical theory may also be developed just using two probability distributions indexed by the same set, and this is generalized to the quantum case where we are just given two density matrices representing two states in a Hilbert space. Section 10 and Section 11 carry over the Hamming distance measure from the classical to the quantum case, where it is equal to the Hilbert–Schmidt distance (square of the trace distance). The general fundamental theorem relating measurement and logical entropy is that the Hilbert–Schmidt distance (=quantum logical Hamming distance) between any pre-measurement state $\rho$ and the state $\hat{\rho}$ resulting from a projective measurement of the state is the difference in their logical entropies, $h(\hat{\rho})-h(\rho)$.

13. Discussion

The overall argument is that quantum logical entropy is the simple and natural notion of information-as-distinctions for quantum information theory focusing on the distinguishing of quantum states. These results add to the arguments already presented by Manfredi and Feix [21] and many others (see [23]) for this notion of quantum entropy.

There are two related classical theories of information, classical logical information theory (focusing on information-as-distinctions and analyzing classification) and the Shannon theory (focusing on coding and communications theory). Generalizing to the quantum case, there are also two related quantum theories of information, the logical theory (using quantum logical entropy to focus on distinguishing quantum states and analyzing measurement as the quantum version of classification) and the conventional quantum information theory (using von Neumann entropy to develop a quantum version of the Shannon treatment of coding and communications).

Appendix A. Classical Logical Information Theory with Two Sets X and Y

The usual ("classical") logical information theory for a probability distribution $p(x,y)$ on $X\times Y$ (finite) in effect uses the discrete partitions on $X$ and $Y$ [3]. For the general case of quantum logical entropy for not-necessarily-commuting observables, we need to first briefly develop the classical case with general partitions on $X$ and $Y$.

Given two finite sets $X$ and $Y$ and real-valued functions $f:X\rightarrow\mathbb{R}$ with values $\{\phi_i\}_{i=1,\dots,I}$ and $g:Y\rightarrow\mathbb{R}$ with values $\{\gamma_j\}_{j=1,\dots,J}$, each function induces a partition on its domain:

$$\pi=\{f^{-1}(\phi_i)\}_{i\in I}=\{B_1,\dots,B_I\}\ \text{on}\ X,\quad\text{and}\quad\sigma=\{g^{-1}(\gamma_j)\}_{j\in J}=\{C_1,\dots,C_J\}\ \text{on}\ Y.$$

We need to define logical entropies on $X\times Y$, but first, we need the ditsets or information sets.

A partition $\pi=\{B_1,\dots,B_I\}$ on $X$ and a partition $\sigma=\{C_1,\dots,C_J\}$ on $Y$ define a product partition $\pi\times\sigma$ on $X\times Y$, whose blocks are $\{B_i\times C_j\}_{i,j}$. Then, $\pi$ induces $\pi\times\mathbf{0}_Y$ on $X\times Y$ (where $\mathbf{0}_Y$ is the indiscrete partition on $Y$), and $\sigma$ induces $\mathbf{0}_X\times\sigma$ on $X\times Y$. The corresponding ditsets or information sets are:

  • $\operatorname{dit}(\pi\times\mathbf{0}_Y)=\{((x,y),(x',y')):f(x)\neq f(x')\}\subseteq(X\times Y)^2$;

  • $\operatorname{dit}(\mathbf{0}_X\times\sigma)=\{((x,y),(x',y')):g(y)\neq g(y')\}\subseteq(X\times Y)^2$;

  • $\operatorname{dit}(\pi\times\sigma)=\operatorname{dit}(\pi\times\mathbf{0}_Y)\cup\operatorname{dit}(\mathbf{0}_X\times\sigma)$; and so forth.

Given a joint probability distribution $p:X\times Y\rightarrow[0,1]$, the product probability distribution is $p\times p:(X\times Y)^2\rightarrow[0,1]$.

All the logical entropies are just the product probabilities of the ditsets and their unions, differences and intersections:

  • $h(\pi\times\mathbf{0}_Y)=p\times p(\operatorname{dit}(\pi\times\mathbf{0}_Y))$;

  • $h(\mathbf{0}_X\times\sigma)=p\times p(\operatorname{dit}(\mathbf{0}_X\times\sigma))$;

  • $h(\pi\times\sigma)=p\times p(\operatorname{dit}(\pi\times\sigma))=p\times p(\operatorname{dit}(\pi\times\mathbf{0}_Y)\cup\operatorname{dit}(\mathbf{0}_X\times\sigma))$;

  • $h(\pi\times\mathbf{0}_Y|\mathbf{0}_X\times\sigma)=p\times p(\operatorname{dit}(\pi\times\mathbf{0}_Y)-\operatorname{dit}(\mathbf{0}_X\times\sigma))$;

  • $h(\mathbf{0}_X\times\sigma|\pi\times\mathbf{0}_Y)=p\times p(\operatorname{dit}(\mathbf{0}_X\times\sigma)-\operatorname{dit}(\pi\times\mathbf{0}_Y))$;

  • $m(\pi\times\mathbf{0}_Y,\mathbf{0}_X\times\sigma)=p\times p(\operatorname{dit}(\pi\times\mathbf{0}_Y)\cap\operatorname{dit}(\mathbf{0}_X\times\sigma))$.

All the logical entropies have the usual two-draw probability interpretation, where the two independent draws from $X\times Y$ are $(x,y)$ and $(x',y')$, and can be interpreted in terms of the $f$-values and $g$-values, as illustrated in the sketch following this list:

  • $h(\pi\times\mathbf{0}_Y)$ = probability of getting distinct $f$-values;

  • $h(\mathbf{0}_X\times\sigma)$ = probability of getting distinct $g$-values;

  • $h(\pi\times\sigma)$ = probability of getting distinct $f$- or $g$-values;

  • $h(\pi\times\mathbf{0}_Y|\mathbf{0}_X\times\sigma)$ = probability of getting distinct $f$-values, but the same $g$-values;

  • $h(\mathbf{0}_X\times\sigma|\pi\times\mathbf{0}_Y)$ = probability of getting distinct $g$-values, but the same $f$-values;

  • $m(\pi\times\mathbf{0}_Y,\mathbf{0}_X\times\sigma)$ = probability of getting distinct $f$- and distinct $g$-values.
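A sketch of these two-draw probabilities for an illustrative joint distribution $p(x,y)$ with value functions $f$ on $X$ and $g$ on $Y$:

```python
# Sketch: two-set logical entropies as two-draw probabilities over (X x Y)^2.
from itertools import product

X, Y = ['x0', 'x1', 'x2'], ['y0', 'y1']
f = {'x0': 1, 'x1': 1, 'x2': 2}                 # f-values induce pi on X
g = {'y0': 10, 'y1': 20}                        # g-values induce sigma on Y
p = {(x, y): 1/6 for x, y in product(X, Y)}     # illustrative joint distribution

draws = list(product(X, Y))
def two_draw(pred):
    return sum(p[a] * p[b] for a in draws for b in draws if pred(a, b))

h_pi = two_draw(lambda a, b: f[a[0]] != f[b[0]])        # distinct f-values
h_sigma = two_draw(lambda a, b: g[a[1]] != g[b[1]])     # distinct g-values
h_joint = two_draw(lambda a, b: f[a[0]] != f[b[0]] or g[a[1]] != g[b[1]])
m = two_draw(lambda a, b: f[a[0]] != f[b[0]] and g[a[1]] != g[b[1]])
assert abs(h_joint - (h_pi + h_sigma - m)) < 1e-12
```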

We have defined all the logical entropies by the general method of the product probabilities on the ditsets. In the first three cases, $h(\pi\times\mathbf{0}_Y)$, $h(\mathbf{0}_X\times\sigma)$ and $h(\pi\times\sigma)$, they were the logical entropies of partitions on $X\times Y$, so they could equivalently be defined using density matrices. The case of $h(\pi\times\sigma)$ illustrates the general case. If $\rho(\pi)$ is the density matrix defined for $\pi$ on $X$ and $\rho(\sigma)$ the density matrix for $\sigma$ on $Y$, then $\rho(\pi\times\sigma)=\rho(\pi)\otimes\rho(\sigma)$ is the density matrix for $\pi\times\sigma$ defined on $X\times Y$, and:

$$h(\pi\times\sigma)=1-\operatorname{tr}[\rho(\pi\times\sigma)^2].$$

The marginal distributions are: $p_X(x)=\sum_yp(x,y)$ and $p_Y(y)=\sum_xp(x,y)$. Since $\pi$ is a partition on $X$, there is also the usual logical entropy $h(\pi)=p_X\times p_X(\operatorname{dit}(\pi))=1-\operatorname{tr}[\rho(\pi)^2]=h(\pi\times\mathbf{0}_Y)$, where $\operatorname{dit}(\pi)\subseteq X\times X$, and similarly for $p_Y$.

Since the context should be clear, we may henceforth adopt the old notation from the case where $\pi$ and $\sigma$ were partitions on the same set $U$, i.e., $h(\pi)=h(\pi\times\mathbf{0}_Y)$, $h(\sigma)=h(\mathbf{0}_X\times\sigma)$, $h(\pi,\sigma)=h(\pi\times\sigma)$, etc.

Since the logical entropies are the values of a probability measure, all the usual identities hold, where the underlying set is now $(X\times Y)^2$ instead of $U^2$, as illustrated in Figure A1.

Figure A1. Venn diagram for logical entropies as values of the probability measure $p\times p$ on $(X\times Y)^2$.

The previous treatment of $h(X)$, $h(Y)$, $h(X,Y)$, $h(X|Y)$, $h(Y|X)$ and $m(X,Y)$ in [3] was just the special case where $\pi=\mathbf{1}_X$ and $\sigma=\mathbf{1}_Y$.

Appendix B. Quantum Logical Entropies with Non-Commuting Observables

As before in the case of commuting observables, the quantum case can be developed in close analogy with the previous classical case. Given a finite-dimensional Hilbert space $V$ and not-necessarily-commuting observables $F,G:V\rightarrow V$, let $X$ be an orthonormal basis of $V$ of $F$-eigenvectors, and let $Y$ be an orthonormal basis of $V$ of $G$-eigenvectors (so $|X|=|Y|$).

Let $f:X\rightarrow\mathbb{R}$ be the eigenvalue function for $F$ with values $\{\phi_i\}_{i=1,\dots,I}$, and let $g:Y\rightarrow\mathbb{R}$ be the eigenvalue function for $G$ with values $\{\gamma_j\}_{j=1,\dots,J}$.

Each eigenvalue function induces a partition on its domain:

$$\pi=\{f^{-1}(\phi_i)\}=\{B_1,\dots,B_I\}\ \text{on}\ X,\quad\text{and}\quad\sigma=\{g^{-1}(\gamma_j)\}=\{C_1,\dots,C_J\}\ \text{on}\ Y.$$

We associate with the ordered pair $(x,y)$ the basis element $x\otimes y$ in the basis $\{x\otimes y\}_{x\in X,y\in Y}$ for $V\otimes V$. Then, each pair of pairs $((x,y),(x',y'))$ is associated with the basis element $x\otimes y\otimes x'\otimes y'$ in $V\otimes V\otimes V\otimes V=(V\otimes V)^{\otimes 2}$.

Instead of ditsets or information sets, we now have qudit sets and information subspaces. For $R\subseteq(V\otimes V)^{\otimes 2}$, let $[R]$ be the subspace generated by $R$. We simplify the notation of $\operatorname{qudit}(\pi\times\mathbf{0}_Y)$ to $\operatorname{qudit}(\pi)=\{x\otimes y\otimes x'\otimes y':f(x)\neq f(x')\}$, etc.:

  • $\operatorname{qudit}(\pi)=\{x\otimes y\otimes x'\otimes y':f(x)\neq f(x')\}$;

  • $\operatorname{qudit}(\sigma)=\{x\otimes y\otimes x'\otimes y':g(y)\neq g(y')\}$;

  • $\operatorname{qudit}(\pi,\sigma)=\operatorname{qudit}(\pi)\cup\operatorname{qudit}(\sigma)$; and so forth.

It is again an easy implication of the aforementioned common-dits theorem that any two nonzero information spaces $[\operatorname{qudit}(\pi)]$ and $[\operatorname{qudit}(\sigma)]$ have a nonzero intersection, so the mutual information space $[\operatorname{qudit}(\pi)]\cap[\operatorname{qudit}(\sigma)]$ is not the zero space.

A normalized state $\psi$ on $V\otimes V$ defines a pure state density matrix $\rho(\psi)=|\psi\rangle\langle\psi|$. Let $\alpha_{x,y}=\langle x\otimes y|\psi\rangle$, so if $P_{x\otimes y}$ is the projection to the subspace (ray) generated by $x\otimes y$ in $V\otimes V$, then a probability distribution on $X\times Y$ is defined by:

$$p(x,y)=\alpha_{x,y}\alpha_{x,y}^*=\operatorname{tr}[P_{x\otimes y}\rho(\psi)],$$

or, more generally, for a subspace $T\subseteq V\otimes V$, a probability distribution is defined on the subspaces by:

$$\Pr(T)=\operatorname{tr}[P_T\rho(\psi)].$$

Then, the product probability distribution p×p on the subspaces of VV2 defines the quantum logical entropies when applied to the information subspaces:

  • $h(F:\psi)=p\times p(\operatorname{qudit}(\pi))=\operatorname{tr}[P_{\operatorname{qudit}(\pi)}(\rho(\psi)\otimes\rho(\psi))]$;

  • $h(G:\psi)=p\times p(\operatorname{qudit}(\sigma))=\operatorname{tr}[P_{\operatorname{qudit}(\sigma)}(\rho(\psi)\otimes\rho(\psi))]$;

  • $h(F,G:\psi)=p\times p(\operatorname{qudit}(\pi)+\operatorname{qudit}(\sigma))=\operatorname{tr}[P_{\operatorname{qudit}(\pi)+\operatorname{qudit}(\sigma)}(\rho(\psi)\otimes\rho(\psi))]$;

  • $h(F|G:\psi)=p\times p(\operatorname{qudit}(\pi)\setminus\operatorname{qudit}(\sigma))=\operatorname{tr}[P_{\operatorname{qudit}(\pi)\setminus\operatorname{qudit}(\sigma)}(\rho(\psi)\otimes\rho(\psi))]$;

  • $h(G|F:\psi)=p\times p(\operatorname{qudit}(\sigma)\setminus\operatorname{qudit}(\pi))=\operatorname{tr}[P_{\operatorname{qudit}(\sigma)\setminus\operatorname{qudit}(\pi)}(\rho(\psi)\otimes\rho(\psi))]$;

  • $m(F,G:\psi)=p\times p(\operatorname{qudit}(\pi)\cap\operatorname{qudit}(\sigma))=\operatorname{tr}[P_{\operatorname{qudit}(\pi)\cap\operatorname{qudit}(\sigma)}(\rho(\psi)\otimes\rho(\psi))]$.
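The following sketch computes all six quantum logical entropies as traces against $\rho(\psi)\otimes\rho(\psi)$; it reuses the illustrative two-dimensional setup above, and the eigenvalue assignments are hypothetical and injective, so $\pi$ and $\sigma$ are the discrete partitions:

```python
import itertools
import numpy as np

X = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]                             # F-eigenbasis
Y = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]  # G-eigenbasis
f = {0: 0.0, 1: 1.0}   # hypothetical F-eigenvalues phi_i
g = {0: 2.0, 1: 3.0}   # hypothetical G-eigenvalues gamma_j

rng = np.random.default_rng(1)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho2 = np.kron(np.outer(psi, psi.conj()), np.outer(psi, psi.conj()))  # rho(psi) (x) rho(psi)

def qudit_projector(pred):
    """Projector onto the span of basis elements (x(x)y)(x)(x'(x)y') selected by pred."""
    P = np.zeros((16, 16), dtype=complex)
    for (i, j), (k, l) in itertools.product(itertools.product(range(2), range(2)), repeat=2):
        if pred((i, j), (k, l)):
            v = np.kron(np.kron(X[i], Y[j]), np.kron(X[k], Y[l]))
            P += np.outer(v, v.conj())
    return P

ent = lambda pred: np.trace(qudit_projector(pred) @ rho2).real

h_F       = ent(lambda a, b: f[a[0]] != f[b[0]])
h_G       = ent(lambda a, b: g[a[1]] != g[b[1]])
h_FG      = ent(lambda a, b: f[a[0]] != f[b[0]] or g[a[1]] != g[b[1]])
h_FgivenG = ent(lambda a, b: f[a[0]] != f[b[0]] and g[a[1]] == g[b[1]])
h_GgivenF = ent(lambda a, b: g[a[1]] != g[b[1]] and f[a[0]] == f[b[0]])
m_FG      = ent(lambda a, b: f[a[0]] != f[b[0]] and g[a[1]] != g[b[1]])

assert np.isclose(h_FG, h_FgivenG + m_FG + h_GgivenF)  # additivity (cf. Figure A2)
```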

The observable $F:V\to V$ defines an observable $F\otimes I:V\otimes V\to V\otimes V$ with the eigenvectors $x\otimes v$ for any nonzero $v\in V$ and with the same eigenvalues $\phi_1,\dots,\phi_I$ (the context should suffice to distinguish the identity operator $I:V\to V$ from the index set $I$ for the $F$-eigenvalues). Then, in two independent measurements of $\psi$ by the observable $F\otimes I$, we have:

$h(F:\psi)$ = probability of getting distinct eigenvalues $\phi_i$ and $\phi_{i'}$, i.e., of getting a qudit of $F$.

In a similar manner, $G:V\to V$ defines the observable $I\otimes G:V\otimes V\to V\otimes V$ with the eigenvectors $v\otimes y$ and with the same eigenvalues $\gamma_1,\dots,\gamma_J$. Then, in two independent measurements of $\psi$ by the observable $I\otimes G$, we have:

$h(G:\psi)$ = probability of getting distinct eigenvalues $\gamma_j$ and $\gamma_{j'}$.

The two observables $F,G:V\to V$ define an observable $F\otimes G:V\otimes V\to V\otimes V$ with the eigenvectors $x\otimes y$ for $(x,y)\in X\times Y$ and eigenvalues $f(x)g(y)=\phi_i\gamma_j$. To interpret the compound logical entropies cleanly, we assume there is no accidental degeneracy, i.e., $\phi_i\gamma_j\neq\phi_{i'}\gamma_{j'}$ whenever $(i,j)\neq(i',j')$. Then, for two independent measurements of $\psi$ by $F\otimes G$, the compound quantum logical entropies can be interpreted as the following “two-measurement” probabilities:

  • $h(F,G:\psi)$ = probability of getting distinct eigenvalues $\phi_i\gamma_j\neq\phi_{i'}\gamma_{j'}$ where $i\neq i'$ or $j\neq j'$;

  • $h(F|G:\psi)$ = probability of getting distinct eigenvalues $\phi_i\gamma_j\neq\phi_{i'}\gamma_{j'}$ where $i\neq i'$ but $j=j'$;

  • $h(G|F:\psi)$ = probability of getting distinct eigenvalues $\phi_i\gamma_j\neq\phi_{i'}\gamma_{j'}$ where $j\neq j'$ but $i=i'$;

  • $m(F,G:\psi)$ = probability of getting distinct eigenvalues $\phi_i\gamma_j\neq\phi_{i'}\gamma_{j'}$ where $i\neq i'$ and $j\neq j'$.

All the quantum logical entropies have been defined by the general method using the information subspaces, but in the first three cases, $h(F:\psi)$, $h(G:\psi)$ and $h(F,G:\psi)$, the density matrix method of defining logical entropies could also be used. Then, the fundamental theorem could be applied, relating the quantum logical entropies to the zeroed entries in the density matrices that indicate the eigenstates distinguished by the measurements.

The previous set identities for disjoint unions now become subspace identities for direct sums, such as:

$\operatorname{qudit}(\pi)+\operatorname{qudit}(\sigma)=(\operatorname{qudit}(\pi)\setminus\operatorname{qudit}(\sigma))\oplus(\operatorname{qudit}(\pi)\cap\operatorname{qudit}(\sigma))\oplus(\operatorname{qudit}(\sigma)\setminus\operatorname{qudit}(\pi))$.

Hence, the probabilities are additive on those subspaces, as shown in Figure A2:

$h(F,G:\psi)=h(F|G:\psi)+m(F,G:\psi)+h(G|F:\psi)$.
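A short numerical confirmation of this decomposition (same illustrative setup as the earlier sketch, condensed; the predicates use basis indices since the assumed eigenvalue functions are injective): the three component projectors are mutually orthogonal and sum to the projector of the subspace sum, so the traces against $\rho(\psi)\otimes\rho(\psi)$ add as stated:

```python
import itertools
import numpy as np

X = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Y = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

def proj(pred):
    """Projector onto the span of the basis elements selected by pred."""
    P = np.zeros((16, 16))
    for (i, j), (k, l) in itertools.product(itertools.product(range(2), range(2)), repeat=2):
        if pred((i, j), (k, l)):
            v = np.kron(np.kron(X[i], Y[j]), np.kron(X[k], Y[l]))
            P += np.outer(v, v)
    return P

P_sum  = proj(lambda a, b: a[0] != b[0] or a[1] != b[1])   # qudit(pi) + qudit(sigma)
P_FgG  = proj(lambda a, b: a[0] != b[0] and a[1] == b[1])  # qudit(pi) \ qudit(sigma)
P_meet = proj(lambda a, b: a[0] != b[0] and a[1] != b[1])  # qudit(pi) ∩ qudit(sigma)
P_GgF  = proj(lambda a, b: a[1] != b[1] and a[0] == b[0])  # qudit(sigma) \ qudit(pi)

assert np.allclose(P_sum, P_FgG + P_meet + P_GgF)          # direct-sum decomposition
for A, B in [(P_FgG, P_meet), (P_FgG, P_GgF), (P_meet, P_GgF)]:
    assert np.allclose(A @ B, np.zeros((16, 16)))          # mutual orthogonality
```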

Figure A2. Venn diagram for quantum logical entropies as probabilities on $(V\otimes V)^2$.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  • 1.Gini C. Variabilità e Mutabilità. Tipografia di Paolo Cuppini; Bologna, Italy: 1912. (In Italian) [Google Scholar]
  • 2.Ellerman D. An Introduction to Partition Logic. Logic J. IGPL. 2014;22:94–125. doi: 10.1093/jigpal/jzt036. [DOI] [Google Scholar]
  • 3.Ellerman D. Logical Information Theory: New Foundations for Information Theory. Logic J. IGPL. 2017;25:806–835. doi: 10.1093/jigpal/jzx022. [DOI] [Google Scholar]
  • 4.Markechová D., Mosapour B., Ebrahimzadeh A. Logical Divergence, Logical Entropy, and Logical Mutual Information in Product MV-Algebras. Entropy. 2018;20:129. doi: 10.3390/e20020129. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Ellerman D. The Logic of Partitions: Introduction to the Dual of the Logic of Subsets. Rev. Symb. Logic. 2010;3:287–350. doi: 10.1017/S1755020310000018. [DOI] [Google Scholar]
  • 6.Rota G.-C. Twelve Problems in Probability No One Likes to Bring up. In: Crapo H., Senato D., editors. Algebraic Combinatorics and Computer Science. Springer; Milan, Italy: 2001. pp. 57–93. [Google Scholar]
  • 7.Kung J.P., Rota G.C., Yan C.H. Combinatorics: The Rota Way. Cambridge University Press; New York, NY, USA: 2009. [Google Scholar]
  • 8.Kolmogorov A. Combinatorial Foundations of Information Theory and the Calculus of Probabilities. Russ. Math. Surv. 1983;38:29–40. doi: 10.1070/RM1983v038n04ABEH004203. [DOI] [Google Scholar]
  • 9.Campbell L.L. Entropy as a Measure. IEEE Trans. Inf. Theory. 1965;11:112–114. doi: 10.1109/TIT.1965.1053712. [DOI] [Google Scholar]
  • 10.Shannon C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948;27:379–423, 623–656. doi: 10.1002/j.1538-7305.1948.tb01338.x. [DOI] [Google Scholar]
  • 11.Rozeboom W.W. The Theory of Abstract Partials: An Introduction. Psychometrika. 1968;33:133–167. doi: 10.1007/BF02290150. [DOI] [PubMed] [Google Scholar]
  • 12.Abramson N. Information Theory and Coding. McGraw-Hill; New York, NY, USA: 1963. [Google Scholar]
  • 13.Hartley R.V. Transmission of Information. Bell Syst. Tech. J. 1928;7:553–563. doi: 10.1002/j.1538-7305.1928.tb01236.x. [DOI] [Google Scholar]
  • 14.Lawvere F.W., Schanuel S.H. Conceptual Mathematics: A First Introduction to Categories. Cambridge University Press; New York, NY, USA: 1997. [Google Scholar]
  • 15.Auletta G., Fortunato M., Parisi G. Quantum Mechanics. Cambridge University Press; Cambridge, UK: 2009. [Google Scholar]
  • 16.Fano U. Description of States in Quantum Mechanics by Density Matrix and Operator Techniques. Rev. Mod. Phys. 1957;29:74–93. doi: 10.1103/RevModPhys.29.74. [DOI] [Google Scholar]
  • 17.Bennett C.H. Quantum Information: Qubits and Quantum Error Correction. Int. J. Theor. Phys. 2003;42:153–176. doi: 10.1023/A:1024439131297. [DOI] [Google Scholar]
  • 18.Ellerman D. The Quantum Logic of Direct-Sum Decompositions: The Dual to the Quantum Logic of Subspaces. Logic J. IGPL. 2018;26:1–13. doi: 10.1093/jigpal/jzx026. [DOI] [Google Scholar]
  • 19.Ellerman D. Counting Distinctions: On the Conceptual Foundations of Shannon’s Information Theory. Synthese. 2009;168:119–149. doi: 10.1007/s11229-008-9333-7. [DOI] [Google Scholar]
  • 20.Jaeger G. Quantum Information: An Overview. Springer; New York, NY, USA: 2007. [Google Scholar]
  • 21.Manfredi G., Feix M.R. Entropy and Wigner Functions. Phys. Rev. E. 2000;62:4665. doi: 10.1103/PhysRevE.62.4665; arXiv:quant-ph/0203102. [DOI] [PubMed] [Google Scholar]
  • 22.Buscemi F., Bordone P., Bertoni A. Linear Entropy as an Entanglement Measure in Two-Fermion Systems. Phys. Rev. A. 2007;75:032301. doi: 10.1103/PhysRevA.75.032301; arXiv:quant-ph/0611223v2. [DOI] [Google Scholar]
  • 23.Włodarz J.J. Entropy and Wigner Distribution Functions Revisited. Int. J. Theor. Phys. 2003;42:1075–1084. doi: 10.1023/A:1025439010479. [DOI] [Google Scholar]
  • 24.Havrda J., Charvát F. Quantification Methods of Classification Processes: Concept of Structural Alpha-Entropy. Kybernetika. 1967;3:30–35. [Google Scholar]
  • 25.Tsallis C. Possible Generalization of Boltzmann-Gibbs Statistics. J. Stat. Phys. 1988;52:479–487. doi: 10.1007/BF01016429. [DOI] [Google Scholar]
  • 26.Rao C.R. Diversity and Dissimilarity Coefficients: A Unified Approach. Theor. Popul. Biol. 1982;21:24–43. doi: 10.1016/0040-5809(82)90004-1. [DOI] [Google Scholar]
  • 27.Cohen-Tannoudji C., Diu B., Laloë F. Quantum Mechanics. Volume 1. John Wiley and Sons; New York, NY, USA: 2005. [Google Scholar]
  • 28.Tamir B., Cohen E. Logical Entropy for Quantum States. arXiv. 2014. arXiv:1412.0616v2. [Google Scholar]
  • 29.Tamir B., Cohen E. A Holevo-Type Bound for a Hilbert Schmidt Distance Measure. J. Quantum Inf. Sci. 2015;5:127–133. doi: 10.4236/jqis.2015.54015. [DOI] [Google Scholar]
  • 30.Nielsen M., Chuang I. Quantum Computation and Quantum Information. Cambridge University Press; Cambridge, UK: 2000. [Google Scholar]
  • 31.McEliece R.J. The Theory of Information and Coding: A Mathematical Framework for Communication. Encyclopedia of Mathematics and Its Applications, Volume 3. Addison-Wesley; Reading, MA, USA: 1977. [Google Scholar]
  • 32.Rossi G. Partition Distances. arXiv. 2011. arXiv:1106.4579v1. [Google Scholar]
