Entropy. 2024 Mar 1;26(3):225. doi: 10.3390/e26030225

Maximum Geometric Quantum Entropy

Fabio Anza 1,2,*, James P Crutchfield 2
Editor: Carlo Cafaro
PMCID: PMC10969449  PMID: 38539737

Abstract

Any given density matrix can be represented as an infinite number of ensembles of pure states. This leads to the natural question of how to uniquely select one out of the many, apparently equally-suitable, possibilities. Following Jaynes’ information-theoretic perspective, this can be framed as an inference problem. We propose the Maximum Geometric Quantum Entropy Principle to exploit the notions of Quantum Information Dimension and Geometric Quantum Entropy. These allow us to quantify the entropy of fully arbitrary ensembles and select the one that maximizes it. After formulating the principle mathematically, we give the analytical solution to the maximization problem in a number of cases and discuss the physical mechanism behind the emergence of such maximum entropy ensembles.

Keywords: quantum mechanics, geometric quantum mechanics, maximum entropy estimation, density matrix

PACS: 05.45.-a, 89.75.Kd, 89.70.+c, 05.45.Tp

1. Introduction

1.1. Background

Quantum mechanics defines a system’s state |ψ⟩ as an element of a Hilbert space ℋ. These are the pure states. To account for uncertainties in a system’s actual state |ψ⟩, one extends the definition to density operators ρ that act on ℋ. These operators are linear, positive semidefinite (ρ ≥ 0), self-adjoint (ρ = ρ†), and normalized (Tr ρ = 1). ρ is then a pure state when it is also a projector: ρ² = ρ.

The spectral theorem guarantees that one can always decompose a density operator as ρ = ∑_i λ_i |λ_i⟩⟨λ_i|, where λ_i ∈ [0, 1] are its eigenvalues and |λ_i⟩ its eigenvectors. Ensemble theory [1,2] gives the decomposition its statistical meaning: λ_i is the probability that the system is in the pure state |λ_i⟩. Together, they form ρ’s eigenensemble L(ρ) := {(λ_j, |λ_j⟩)}_j, which, putting degeneracies aside for a moment, is unique. L(ρ), however, is not the only ensemble compatible with the measurement statistics given by ρ. Indeed, there is an infinite number of different ensembles that give the same density matrix: {(p_k, |ψ_k⟩)}_k such that ∑_k p_k |ψ_k⟩⟨ψ_k| = ∑_j λ_j |λ_j⟩⟨λ_j|. Throughout the following, E(ρ) identifies the set of all ensembles of pure states consistent with a given density matrix.
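As a concrete illustration (a minimal numerical sketch with illustrative numbers, not taken from the text), two manifestly different pure-state ensembles can reproduce the same qubit density matrix:

```python
import numpy as np

# Two different pure-state ensembles for the same qubit density matrix
# rho = 0.7|0><0| + 0.3|1><1|  (illustrative numbers).
a, b = np.sqrt(0.7), np.sqrt(0.3)

# Eigenensemble: |0> with probability 0.7, |1> with probability 0.3.
eig_states = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
eig_probs = [0.7, 0.3]

# A second ensemble: (a|0> +/- b|1>), each with probability 1/2.
alt_states = [np.array([a, b]), np.array([a, -b])]
alt_probs = [0.5, 0.5]

def mix(probs, states):
    """Density matrix of an ensemble: sum_k p_k |psi_k><psi_k|."""
    return sum(p * np.outer(s, s.conj()) for p, s in zip(probs, states))

rho1 = mix(eig_probs, eig_states)
rho2 = mix(alt_probs, alt_states)
print(np.allclose(rho1, rho2))  # True: same rho, different ensembles
```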

1.2. Motivation

Since the association ρ ↦ E(ρ) is one-to-many, it is natural to ask whether a meaningful criterion exists to uniquely select an element of E(ρ). This is a typical inference problem, and a principled answer is given by the maximum entropy principle (MEP) [3,4,5]. Indeed, when addressing inference given only partial knowledge, maximum entropy methods have enjoyed marked empirical success. They are broadly exploited in science and engineering.

Following this lead, the following answers the question of uniquely selecting an ensemble for a given density matrix by adapting the maximum entropy principle. We also argue in favor of this choice by studying the dynamical emergence of these ensembles in a number of cases.

The development is organized as follows. Section 2 discusses the relevant literature on this problem. It also sets up language and notation. Section 3 gives a brief summary of Geometric Quantum Mechanics: a differential-geometric language to describe the states and dynamics of quantum systems [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Then, Section 4 introduces the technically pertinent version of MEP—the Maximum Geometric Entropy Principle (MaxGEP). Section 5 discusses two mechanisms that can lead to the MaxGEP and identifies different physical situations in which the ensemble can emerge. Eventually, Section 6 summarizes what this accomplishes and draws several forward-looking conclusions.

2. Existing Results

The properties and characteristics of pure-state ensembles constitute a vast and rich research area, one whose results are useful across a large number of fields, from quantum information and quantum optics to quantum thermodynamics and quantum computing, to mention only a few. This section discusses four sets of results relevant to our purposes. This also allows introducing language and notation.

First, recall Ref. [24], where Hughston, Jozsa, and Wootters gave a constructive characterization of all possible ensembles behind a given density matrix, assuming an ensemble with a finite number of elements. Second, Wiseman and Vaccaro, in Ref. [25], then argued for a preferred ensemble via the dynamically motivated criterion of a Physically Realizable ensemble. Third, Goldstein, Lebowitz, Tumulka, and Zanghì singled out the Gaussian Adjusted Projected (GAP) measure as a preferred ensemble behind a density matrix in a thermodynamic and statistical mechanics setting [26]. Fourth, Brody and Hughston used one form of maximum entropy within geometric quantum mechanics [27].

2.1. HJW Theorem

At the technical level, one of the most important results for our purposes is the Hughston–Jozsa–Wootters (HJW) theorem, proved in Ref. [24], which we now summarize.

Consider a system with finite-dimensional Hilbert space ℋ_S described by a density matrix ρ with rank r: ρ = ∑_{j=1}^r λ_j |λ_j⟩⟨λ_j|. We assume dim ℋ_S := d_S = r, since the case d_S > r is easily handled by restricting ℋ_S to the r-dimensional subspace defined by the image of ρ. Then, a generic ensemble e_ρ ∈ E(ρ) with d ≥ d_S elements can be generated from L(ρ) via linear remixing with a d × d_S matrix M whose columns are d_S orthonormal vectors. Then, e_ρ = {(p_k, |ψ_k⟩)} is given by the following:

√p_k |ψ_k⟩ = ∑_{j=1}^{d_S} M_{kj} √λ_j |λ_j⟩.

Equivalently, one can generate ensembles by applying a generic d × d unitary matrix U to a list of d non-normalized d_S-dimensional states in which the first d_S, {√λ_j |λ_j⟩}_{j=1}^{d_S}, are proportional to the eigenvectors of ρ, while the remaining d − d_S are simply null vectors:

√p_k |ψ_k⟩ = ∑_{j=1}^{d_S} U_{kj} √λ_j |λ_j⟩.

Here, we must remember that U is not an operator acting on ℋ_S but a unitary matrix mixing weighted eigenvectors into d non-normalized vectors.

The power of the HJW theorem is not only that it introduces a constructive way to build E(ρ) ensembles but that this way is complete. Namely, all ensembles can be built in this way. This is a remarkable fact, which the following sections rely heavily on.
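The HJW construction can be checked numerically. The following sketch (matrix sizes and eigenvalues are illustrative, assumed for the example) builds a d × d_S matrix M with orthonormal columns and verifies that the remixed ensemble reproduces ρ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy density matrix (d_S = 2); the numbers are illustrative.
rho = np.diag([0.7, 0.3]).astype(complex)
lam, V = np.linalg.eigh(rho)      # columns of V are the eigenvectors |lambda_j>
d_S, d = rho.shape[0], 5          # generate an ensemble with d = 5 >= d_S elements

# d x d_S matrix M with orthonormal columns (M^dag M = 1), via QR decomposition.
A = rng.normal(size=(d, d_S)) + 1j * rng.normal(size=(d, d_S))
M, _ = np.linalg.qr(A)

# Row k holds sqrt(p_k)|psi_k> = sum_j M_kj sqrt(lambda_j) |lambda_j>.
unnorm = M @ (np.sqrt(lam)[:, None] * V.T)

p = np.sum(np.abs(unnorm) ** 2, axis=1)   # ensemble probabilities p_k
psi = unnorm / np.sqrt(p)[:, None]        # normalized ensemble members

# The remixed ensemble reproduces rho: sum_k p_k |psi_k><psi_k| = rho.
rho_rec = unnorm.T @ unnorm.conj()
print(np.allclose(rho_rec, rho), np.isclose(p.sum(), 1.0))
```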

2.2. Physically Realizable Ensembles

For our purposes, a particularly relevant result is that of Wiseman and Vaccaro [25]. (See also subsequent results by Wiseman and collaborators on the same topic [28].) The authors argue for a Physically Realizable ensemble, implicitly selected by the requirement that, if a system is in a stationary state ρ_ss, the ensemble should be stable under the dynamics generated by monitoring the environment. This is clearly desirable in experiments in which one monitors an environment to infer properties of the system. While this is an interesting way to answer the same question we tackle here, their answer is based on dynamics and limited to stationary states. The approach we propose here is very different, being based on an inference principle. This opens interesting questions related to understanding the conditions under which the two approaches provide compatible answers. Work in this direction is ongoing and will be reported elsewhere.

2.3. Gaussian Adjusted Projected Measure

Reference [26] asks a question similar to ours but in a statistical mechanics and thermodynamics context. Namely, viewing pure states as points on a high-dimensional sphere, ψ ∈ S^{2d_S−1}, which probability measure μ on S^{2d_S−1}, interpreted as a smooth ensemble, leads to a thermal density matrix:

ρ_th = ∫ dψ μ(ψ) |ψ⟩⟨ψ| ?

Here, ρ_th could be the microcanonical or the canonical density matrix. Starting with Schrödinger’s [29,30] and Bloch’s [31] early work, the authors argue in favor of the Gaussian Adjusted Projected (GAP) measure. This is essentially a Gaussian measure, adjusted and projected to live on the sphere ψ ∈ S^{2d_S−1}:

GAP(σ) ∝ e^{−⟨ψ|σ^{−1}|ψ⟩}.

Written explicitly in terms of complex coordinates ψ_j, it is clear that this is a Gaussian measure with vanishing mean, E[ψ_j] = 0, and covariance specified by E[ψ_j^* ψ_k] = σ_{jk}. In particular, σ = ρ guarantees that GAP(ρ) has ρ as its density matrix.
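A quick Monte Carlo sanity check of this last statement can be sketched as follows (illustrative numbers, not from the text): sample a complex Gaussian with covariance ρ, apply the adjustment weight ⟨ψ|ψ⟩, project onto the unit sphere, and verify that the weighted ensemble of projectors averages to ρ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative qubit density matrix.
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)

# Sample a complex Gaussian with covariance E[psi_j^* psi_k] = rho_jk.
L = np.linalg.cholesky(rho)
n = 200_000
z = (rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))) / np.sqrt(2)
psi = z @ L.T                          # rows: Gaussian vectors with covariance rho

# Adjustment weight <psi|psi>; projection to the sphere rescales each
# projector by 1/<psi|psi>, so the two factors cancel analytically and the
# weighted average of unit projectors reduces to E[psi_j^* psi_k]/E[<psi|psi>].
w = np.sum(np.abs(psi) ** 2, axis=1)
rho_gap = (psi.conj().T @ psi) / w.sum()
print(np.allclose(rho_gap, rho, atol=1e-2))  # True within Monte Carlo error
```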

The GAP measure has some interesting properties [26,32,33] and, as we see in Section 4, it is also closely related to one of our results in a particular case. Our results can therefore be understood as a generalization of the GAP measure. We will not delve deeper into this matter now but comment on it later.

2.4. Geometric Approach

In 2000, Brody and Hughston performed the first maximum entropy analysis for the ensemble behind the density matrix [27], in a language and spirit quite close to those we use here. Their result came before the definition of the GAP measure, but it is essentially identical to it: μ(ψ) ∝ exp(∑_{j,k} L_{jk} ψ_j^* ψ_k). Their perspective, however, is very different from that of Ref. [26], which focuses on thermal equilibrium phenomenology. The work we perform here, and our results, can also be understood as a generalization of Ref. [27]. Indeed, as we argued in Ref. [34] (and will show again in Section 4), the definition of entropy used there (see Equation (10) in Ref. [27]) is meaningful only in certain cases, in particular, when the ensemble has support with dimension equal to the dimension of the state space of the system of interest. In general, more care is required.

2.5. Summary

We summarized four relevant sets of results on selecting one ensemble among the infinitely many that are generally compatible with a density matrix. Our work relies heavily on the HJW theorem [24], and it is quite different from the approach by Wiseman and Vaccaro [25]. Moreover, it constitutes a strong generalization with respect to the results on the GAP measure [26] in a thermal equilibrium context and with respect to the analysis by Brody and Hughston in [27].

3. Geometric Quantum States

Our maximum geometric entropy principle relies on a differential-geometric approach to quantum mechanics called Geometric Quantum Mechanics (GQM). The following gives a quick summary of GQM and how its notion of Geometric Quantum State [6,7,34] can be elegantly used to study physical and information-theoretic aspects of ensembles. More complete discussions are found in the relevant literature [8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23].

3.1. Quantum State Space

The state space of a finite-dimensional quantum system with Hilbert space ℋ_S is the projective Hilbert space P(ℋ_S), which is isomorphic to a complex projective space: P(ℋ_S) ≅ CP^{d_S−1} := {Z ∈ ℂ^{d_S} : Z ∼ λZ, λ ∈ ℂ∖{0}}. Pure states are thus in one-to-one correspondence with points Z ∈ CP^{d_S−1}. Using a computational basis {|j⟩}_{j=1}^{d_S} as reference basis, Z has homogeneous coordinates Z = (Z^1, …, Z^{d_S}), where |Z⟩ = ∑_{j=1}^{d_S} Z^j |j⟩ ∈ ℋ_S. One of the advantages of the geometric approach is that one can exploit the symplectic character of the state space. Indeed, this implies that the quantum state space CP^n can essentially be considered a classical, although curved, phase space. With probabilities and phases being canonically conjugate coordinates, Z^j = √p_j e^{iφ_j}, we have {p_j, φ_k} = δ_{jk}. The intuition from classical mechanics can then be used to understand the phenomenology of quantum systems.

3.2. Observables

Within GQM, observables are Hermitian functions from CP^{d_S−1} to the reals:

f_O(Z) := ∑_{j,k=1}^{d_S} Z_j^* Z_k O_{jk} / ∑_{h=1}^{d_S} |Z_h|²,

where O_{jk} = ⟨j|O|k⟩ are the matrix elements of the self-adjoint Hilbert space operator O. An analogous relation holds for Positive Operator-Valued Measures (POVMs).
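For instance (an illustrative sketch, not code from the text), the Hermitian function f_O can be evaluated directly on homogeneous coordinates, and it is invariant under projective rescaling Z → λZ:

```python
import numpy as np

rng = np.random.default_rng(2)

d = 3
# Random self-adjoint observable O and random homogeneous coordinates Z.
O = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O = (O + O.conj().T) / 2
Z = rng.normal(size=d) + 1j * rng.normal(size=d)

def f_O(Z, O):
    """Hermitian function on CP^{d-1}: sum_jk Z_j^* Z_k O_jk / sum_h |Z_h|^2."""
    return (Z.conj() @ O @ Z).real / np.sum(np.abs(Z) ** 2)

# Well-defined on the projective space: invariant under Z -> lambda Z.
print(np.isclose(f_O(Z, O), f_O(2.7j * Z, O)))  # True
```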

3.3. Geometric Quantum States

The quantum state space P(ℋ_S) has a preferred metric g_FS and a related volume element dV_FS, the Fubini–Study volume element. The details surrounding these go beyond our present purposes. It is sufficient to give dV_FS’s explicit form in the coordinate system we use for concrete calculations. This is the “probability + phase” coordinate system, given by Z ↦ {(p_j, φ_j)}_{j=1}^{d_S}:

dV_FS = √(det g_FS) ∏_{j=1}^{d_S} dZ_j dZ_j^* = ∏_{j=1}^{d_S} (dp_j dφ_j)/2.

This volume element can be used to define integration. Indeed, calling Vol[B] the volume of a set B ⊆ P(ℋ_S), we have Vol[B] := ∫_B dV_FS. In turn, this provides the fundamental, unitarily invariant notion of a uniform measure on the quantum state space. This is the normalized Haar measure μ_Haar:

μ_Haar[B] = Vol[B] / Vol[P(ℋ_S)].

μ_Haar is a probability measure that weights all pure states uniformly, normalized by the total Fubini–Study volume of the quantum state space. Probability measures [35,36] are the appropriate mathematical formalization of the physical notion of ensembles, and they formalize the concept of a Geometric Quantum State (GQS): a probability measure on the (complex projective) quantum state space. For example, a pure state corresponds to a Dirac measure δ_ψ with support on a single point ψ ∈ P(ℋ_S), with Hilbert space representation |ψ⟩.
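A standard way to sample μ_Haar numerically, assumed here for illustration, is to normalize isotropic complex Gaussian vectors; as a sanity check, the resulting ensemble's density matrix approaches the maximally mixed state:

```python
import numpy as np

rng = np.random.default_rng(3)

# Haar (Fubini-Study uniform) pure states: normalize isotropic complex
# Gaussian vectors.  Parameters are illustrative.
d, n = 4, 100_000
z = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
psi = z / np.linalg.norm(z, axis=1, keepdims=True)

# The density matrix of the uniform ensemble is maximally mixed: 1/d.
rho_haar = (psi.conj().T @ psi) / n
print(np.allclose(rho_haar, np.eye(d) / d, atol=1e-2))  # True
```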

3.4. GQS as Conditional Probability Measures

One way to embed the HJW theorem in this geometric context is the following.

Any density matrix can be purified in an infinite number of different ways. A purification |ψ(ρ)⟩ of ρ is a pure state in a larger Hilbert space, |ψ(ρ)⟩ ∈ ℋ_S ⊗ ℋ_E, such that Tr_E |ψ(ρ)⟩⟨ψ(ρ)| = ρ, where Tr_E is the partial trace over the additional Hilbert space ℋ_E. It is known that, for the purification to be achievable, d_E ≥ r. Since we assume r = d_S, we have d_E ≥ d_S. Any purification of ρ has a Schmidt decomposition of a specific type:

|ψ(ρ)⟩ = ∑_{j=1}^{d_S} √λ_j |λ_j⟩ ⊗ |SP_j⟩,

where {|SP_j⟩ ∈ ℋ_E}_{j=1}^{d_S} are the d_S orthonormal “Schmidt partners”. These can be extended to a full orthonormal basis of ℋ_E by adding d_E − d_S orthonormal vectors orthogonal to span({|SP_j⟩}_j).

L(ρ) is therefore understood as the ensemble resulting from conditioning on the Schmidt partners. Namely, when measuring the environment in the basis {|SP_j⟩}_{j=1}^{d_E}, the state of the system after the measurement will be |λ_j⟩ with probability λ_j. Its GQS is μ_L = ∑_{j=1}^{d_S} λ_j δ_{λ_j}. If we instead measure the environment in a generic basis, rather than the Schmidt partners, we generate a different ensemble. Calling {|v_α⟩}_{α=1}^{d_E} one such basis, we have:

∑_{j=1}^{d_S} √λ_j |λ_j⟩ ⊗ |SP_j⟩ = ∑_{α=1}^{d_E} √p_α |χ_α⟩ ⊗ |v_α⟩,

with √p_α |χ_α⟩ = (I_S ⊗ ⟨v_α|) |ψ(ρ)⟩, {(p_α, |χ_α⟩)}_{α=1}^{d_E} ∈ E(ρ), and GQS μ = ∑_{α=1}^{d_E} p_α δ_{χ_α}.

Starting from the Schmidt partners, these bases are in one-to-one correspondence with d_E × d_E unitary matrices acting on ℋ_E: |v_α⟩ := U |SP_α⟩. And these, in turn, are in one-to-one correspondence with the unitary matrices in the HJW theorem. Therefore, they provide an analogously complete classification of ensembles. The reason for this slight rearrangement with respect to the HJW theorem is that we now have an interpretation of |χ_α⟩ as the conditional pure state of the system: the state conditioned on a projective measurement {|v_α⟩}_{α=1}^{d_E} on the environment in which outcome α occurs with probability p_α.

3.5. Quantifying Quantum Entropy

To develop entropy, the following uses the setup in Refs. [6,7,34] to study the physics of ensembles using geometric quantum mechanics. Since the focus here is a maximum entropy approach to select an ensemble behind a density matrix, it is important to have a proper understanding of how to quantify the entropy of an ensemble or, equivalently, of the GQS.

First, we look at the statistical interpretation of the pure states which participate in the conditional ensembles {(p_α, |χ_α⟩)}. The corresponding kets |χ_α⟩ are not necessarily orthogonal, ⟨χ_α|χ_β⟩ ≠ δ_{αβ}, so the states are not mutually exclusive or distinguishable in the Fubini–Study sense. However, these states come with the classical labels α ↦ |χ_α⟩ associated with the outcomes of projective measurements on the environment. In this sense, if α ≠ β, we have a classical way to distinguish them, and thus we can understand how to interpret expressions like −∑_α p_α log p_α.

Then, we highlight that the correct functional to use to evaluate the entropy of μ is not always the same. It depends on another feature of the ensemble, the quantum information dimension, which is conceptually related to the dimension of its support in quantum state space. To illustrate the concept, consider the following four GQSs of a qubit:

μ_1 = δ_ψ,  μ_2 = ∑_k p_k δ_{ψ_k},  μ_3 = (1/T) ∫_0^T dt δ_{ψ(t)},  μ_4 = μ_Haar.

Naturally, the entropy of μ_1 vanishes, since there is no uncertainty. The system inhabits only one pure state, ψ. The entropy of μ_2 is already nontrivial to evaluate. Indeed, while one obvious choice is the functional −∑_k p_k log p_k, it is also clear that this notion of entropy does not take into account the location of the points ψ_k ∈ P(ℋ_S). Intuitively, if all these points are close to each other, we would like our entropy to be smaller than in the case in which the points are spread uniformly over P(ℋ_S).

The entropy of μ_3 is perhaps the most peculiar, but it illustrates the point best. Let us assume that our qubit evolves with a Hamiltonian H such that E_1 − E_0 = ω. Then, ψ(t) = (√(1 − p_0), √p_0 e^{i(φ_0 − ωt)}). Taking the time average and looking at the resulting statistics, it is clear that p is a conserved quantity, p(t) = p_0, while φ(t) = φ_0 − ωt is an angular variable that, over a long time, becomes uniformly distributed in [0, 2π]. This means lim_{T→∞} μ_3 = δ_{p_0}^p ⊗ dφ/2π, where δ_{p_0}^p is a Dirac measure over the first variable p with support on p = p_0. How do we evaluate the entropy of μ_3?

While to evaluate the entropy of μ_4 = μ_Haar we simply integrate over the whole state space and obtain log Vol(CP¹), this does not work for μ_3. Indeed, with respect to the full, 2D quantum state space (p, φ) ∈ [0, 1] × [0, 2π], the distribution clearly lives on a 1D line, which is a measure-zero subset.

To properly address all these different cases, a more general approach is needed. Reference [34] adapted previous work by Rényi to probability measures on a quantum state space. This led to the notions of Quantum Information Dimension D and Geometric Quantum Entropy h_D that address these issues and properly evaluate the entropy in all these cases. We now give a quick summary of the results in Ref. [34].
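The coarse-graining idea behind these notions can be previewed numerically. The sketch below (illustrative parameters) estimates the information dimension of the time-averaged qubit ensemble μ_3 from the scaling of the coarse-grained Shannon entropy; the estimate is close to D = 1, as expected for a measure supported on a 1D circle:

```python
import numpy as np

rng = np.random.default_rng(4)

# Samples from mu_3: p is frozen at p_0 while phi is uniform on [0, 2*pi).
p0, n = 0.3, 500_000
pts_p = np.full(n, p0)
pts_phi = rng.uniform(0, 2 * np.pi, size=n)

def coarse_entropy(N):
    """Shannon entropy of the measure coarse-grained on an N x N grid."""
    hist, _, _ = np.histogram2d(pts_p, pts_phi, bins=N,
                                range=[[0, 1], [0, 2 * np.pi]])
    q = hist.ravel() / n
    q = q[q > 0]
    return -np.sum(q * np.log(q))

# Slope of H[eps] vs -log(eps) between two resolutions estimates D.
N1, N2 = 32, 64
D_est = (coarse_entropy(N2) - coarse_entropy(N1)) / np.log(N2 / N1)
print(round(D_est, 2))  # close to 1: the ensemble lives on a 1D circle
```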

3.6. Quantum Information Dimension and Geometric Entropy

Thanks to the symplectic nature of P(ℋ_S), the quantum state space is essentially a curved, compact, classical phase space. We can therefore apply classical statistical mechanics to it, using {(p_j, φ_j)}_{j=1}^{d_S} as canonical coordinates. Since the Fubini–Study volume is dV_FS ∝ ∏_j dp_j dφ_j, we can coarse-grain P(ℋ_S) by partitioning it into phase-space cells C_{ab}:

C_{ab} = ∏_{j=1}^{d_S−1} [a_j/N, (a_j+1)/N] × [2π b_j/N, 2π (b_j+1)/N]

of equal Fubini–Study volume Vol[C_{ab}] = Vol[P(ℋ_S)]/N^{2(d_S−1)} = ε^{2(d_S−1)}, where a = (a_1, …, a_{d_S−1}), b = (b_1, …, b_{d_S−1}), and a_j, b_j = 0, 1, …, N − 1.

The coarse-graining procedure produces a discrete probability distribution q_{ab} := μ[C_{ab}], for which we can compute the Shannon entropy:

H[ε] := −∑_{a,b} q_{ab} log q_{ab}.

As we change ε = 1/N → 0, the degree of coarse-graining changes accordingly. The scaling behavior of H[ε] provides structural information about the underlying ensemble. Indeed, since one can prove that for ε → 0, H[ε] has asymptotics

H[ε] ≈ −D log ε + h_D,

two quantities define its scaling behavior: D is the quantum information dimension and h_D is the geometric quantum entropy. Their explicit definitions are:

D := lim_{ε→0} H[ε] / log(1/ε), (1a)
h_D := lim_{ε→0} ( H[ε] + D log ε ). (1b)

Note how this keeps the dependence of the entropy on the information dimension explicit. This clarifies how, only in certain cases, one can use the continuous counterpart of Shannon’s discrete entropy. In general, its exact form depends on the value of D and it cannot be written as an integral on the full quantum state space with the Fubini–Study volume form.

4. Principle of Maximum Geometric Quantum Entropy

This section presents a fine-grained characterization of selecting an ensemble behind a given density matrix. This leverages both the HJW theorem and previous results by the authors. First, we note that D foliates E(ρ) into non-overlapping subsets E_D(ρ), each collecting all ensembles μ with given density matrix ρ and information dimension D:

E_D(ρ) ∩ E_{D′}(ρ) = δ_{D,D′} E_D(ρ),  E(ρ) = ∪_D E_D(ρ).

As argued above, ensembles with different D pertain to different physical situations. These can be wildly different. Therefore, we often want to first select the D of the ensemble we will end up with and then choose that with the maximum geometric entropy. Thus, here we introduce the principle of maximum geometric entropy at fixed information dimension.

Proposition 1

(Maximum Geometric Entropy Principle). Given a system with density matrix ρ, the ensemble μ_ME^D that makes the fewest possible assumptions about our knowledge of the ensemble, among all elements of E(ρ) with fixed information dimension D, is given by:

μ_ME^D := arg max_{μ ∈ E_D(ρ)} h_D[μ].

Several general comments are in order. First, we note that μ_ME^D might not be unique. This should not come as a surprise. For example, with degeneracies, even the eigenensemble is not unique. Second, the optimization problem defined above is clearly constrained: the resulting ensemble has to be normalized and the average of (Z^j)^* Z^k must equal ρ_{jk}. Calling E_μ[A] the state-space average of a function A performed with the GQS μ, these two constraints can be written as C_1 := E_μ[1] − 1 = 0 and C_{jk}^ρ := E_μ[(Z^j)^* Z^k] − ρ_{jk} = 0. Using Lagrange multipliers, we optimize Λ[μ, γ_1, γ_{jk}] defined as:

Λ[μ, γ_1, γ_{jk}] := h_D[μ] + γ_1 C_1 + ∑_{j,k} γ_{jk} C_{jk}^ρ.

While the vanishing of Λ’s derivatives with respect to the Lagrange multipliers γ_1, γ_{jk} enforces the constraints C_1 = C_{jk}^ρ = 0, derivatives with respect to μ give the equation whose solution is the desired ensemble μ_ME^D. We also note that the γ_{jk} are not all independent. This is due to the fact that ρ is not an arbitrary matrix: Tr ρ = 1, ρ ≥ 0, and ρ = ρ†. A similar relation holds for the γ_{jk}.

To illustrate its use, we now solve this optimization problem in a number of relevant cases. In discussing them, it is worth introducing additional notation. Since we often use canonically conjugate coordinates {(p_j, φ_j)}_{j=1}^{d_S}, we introduce the vector notation (p, φ), with p ∈ Δ^{d_S−1} and φ ∈ T^{d_S−1}, where Δ^{d_S−1} is the (d_S−1)-dimensional probability simplex and T^{d_S−1} is the (d_S−1)-dimensional torus. Analogously, we introduce the Dirac measures δ_x^p and δ_θ^φ with support on x ∈ Δ^{d_S−1} and θ ∈ T^{d_S−1}, respectively.

4.1. Finite Environments: D=0

If D = 0, then the support of the ensemble consists of a finite number of points. That is, there exists N ∈ ℕ such that μ_ME^{D=0} = ∑_{α=1}^N p_α δ_{χ_α}, with h_0 = −∑_{α=1}^N p_α log p_α. Note that this is the HJW theorem’s domain of applicability, which allows us to give a constructive solution.

We start by noting that N further foliates E_{D=0}(ρ) into non-overlapping sets in which the ensemble consists of exactly N elements. We call such a set E_{0,N}(ρ); it satisfies E_{0,N}(ρ) ∩ E_{0,N′}(ρ) = δ_{N,N′} E_{0,N}(ρ), with E_{D=0}(ρ) = ∪_{N≥d_S} E_{0,N}(ρ). Within E_{0,N}(ρ), we can use the HJW theorem with the interpretation in which the ensemble is the conditional ensemble. Here, p_α and χ_α are generated by creating a purification with an environment of dimension N, in which the first d_S basis elements {|SP_j⟩}_{j=1}^{d_S} are fixed and the remaining N − d_S are free. We denote the entire basis of this type with the same symbol but a different label: {|SP_α⟩}_{α=1}^N. The ensemble we obtain if we measure in this basis is the eigenensemble L(ρ).

However, measuring in a different basis yields a general ensemble, with probabilities p_α = ⟨ψ(ρ)| (I_S ⊗ |v_α⟩⟨v_α|) |ψ(ρ)⟩ = ∑_{j=1}^{d_S} λ_j |⟨SP_j|v_α⟩|² and states |χ_α⟩ = ∑_{j=1}^{d_S} (√λ_j ⟨v_α|SP_j⟩/√p_α) |λ_j⟩. With h_0 = −∑_α p_α log p_α, the absolute maximum is attained at p_α = 1/N. We now show, constructively, that this is always achievable while still satisfying the constraints C_1 = C_{jk}^ρ = 0, thus solving the maximization problem.

This is achieved by measuring the environment in a basis that is unbiased with respect to the Schmidt partner basis:

|v_α⟩ : ⟨v_α|SP_β⟩ = e^{iθ_{αβ}}/√N,  α, β = 1, …, N. (2)

One such basis can always be built starting from {|SP_α⟩}_α by exploiting the properties of the Weyl–Heisenberg matrices via the clock-and-shift construction [37]. This is true for all N ∈ ℕ. When N = ∏_k n_k^{N_k} with n_k primes and N_k some integers, the finite-field algorithm [38,39] can be used to build a whole suite of bases that are unbiased with respect to the Schmidt-partner basis. This leads to |χ_α⟩ = ∑_{j=1}^{d_S} √λ_j e^{iθ_{αj}} |λ_j⟩ and to:

μ_ME^{D=0} = δ_λ^p ⊗ (1/N) ∑_{α=1}^N δ_{θ_α}^φ,  h_0 = log N,

with λ = (λ_1, …, λ_{d_S}) and θ_α = (θ_{α1}, …, θ_{α d_S}).

To conclude this subsection, we simply have to show that this ensemble satisfies the constraints: C_1 = 0 and that the density matrix given by μ_ME^{D=0} is ρ, giving C_{jk}^ρ = 0:

σ_ME^{jk} := E_{μ_ME^{D=0}}[(Z^j)^* Z^k] = (√(λ_j λ_k)/N) ∑_{α=1}^N e^{i(θ_{αj} − θ_{αk})} = √(λ_j λ_k) δ_{jk} = λ_k δ_{jk} = ρ_{jk}.

Here, the key property used is that (1/N) ∑_{α=1}^N e^{i(θ_{αγ} − θ_{αβ})} = ⟨SP_β| (∑_{α=1}^N |v_α⟩⟨v_α|) |SP_γ⟩ = δ_{βγ}, which follows from Equation (2) and the fact that {|v_α⟩}_α is a basis.
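This verification can also be performed numerically. The sketch below uses Fourier phases θ_{αβ} = 2παβ/N (a clock-and-shift-type choice, assumed for illustration) and checks that the ensemble members are normalized, so that p_α = 1/N, while the ensemble still reconstructs ρ:

```python
import numpy as np

# Illustrative eigenvalues of rho; the reference basis is the eigenbasis.
lam = np.array([0.7, 0.3])
d_S, N = 2, 4                    # environment of dimension N >= d_S

# Fourier phases theta_ab = 2*pi*a*b/N give a basis unbiased to the
# Schmidt partners: |<v_a|SP_b>| = 1/sqrt(N).
alpha = np.arange(N)[:, None]
j = np.arange(d_S)[None, :]
theta = 2 * np.pi * alpha * j / N

# |chi_alpha> = sum_j sqrt(lambda_j) e^{i theta_alpha_j} |lambda_j>.
chi = np.sqrt(lam)[None, :] * np.exp(1j * theta)

# Each |chi_alpha> is normalized, so p_alpha = 1/N.
print(np.allclose(np.sum(np.abs(chi) ** 2, axis=1), 1.0))  # True

# The uniform mixture (1/N) sum_alpha |chi_alpha><chi_alpha| equals rho.
rho_rec = (chi.T @ chi.conj()) / N
print(np.allclose(rho_rec, np.diag(lam)))  # True
```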

4.2. Full Support: D = 2(d_S − 1)

The second case of interest is the one in which the quantum information dimension takes the maximum possible value, namely D = 2(d_S − 1). Then, the GQS’s support has the same dimension as the full quantum state space and the optimization problem is also tractable. This is indeed the case solved by Brody and Hughston [27]. We do not reproduce their treatment, which is almost identical in the language of GQM. Rather, we discuss some of its physical aspects from the perspective of conditional ensembles.

If D = 2(d_S − 1) and there are no constraints other than C_1 and C_{jk}, the measure μ_ME^{2(d_S−1)} can be expressed as an integral with a density q_ME with respect to the uniform, normalized Fubini–Study measure dV_FS:

μ_ME^{2(d_S−1)}[A] = ∫_A dV_FS q_ME(Z).

And its geometric entropy h_{2(d_S−1)} is the continuous counterpart of Shannon’s functional on the quantum state space:

h_{2(d_S−1)} = −∫_{P(ℋ_S)} dV_FS q(Z) log q(Z).

This was proven in Ref. [34]. Hereafter, with a slight abuse of language, we refer to both μ_ME^{2(d_S−1)} and the density q_ME(Z) as an ensemble or the GQS.

The maximization problem leads to:

q_ME(Z) = (1/Q_{2(d_S−1)}(ρ)) e^{∑_{j,k} γ_{jk} (Z^j)^* Z^k},  Q_{2(d_S−1)}(ρ) = ∫_{P(ℋ_S)} dV_FS e^{∑_{j,k} γ_{jk} (Z^j)^* Z^k},

and the Lagrange multipliers γ_{jk} are the solution of the nonlinear equations ∂ log Q/∂γ_{jk} = ρ_{jk}. We note how using as reference basis the eigenbasis {|λ_j⟩}_{j=1}^{d_S} of ρ and Z ↦ (p, φ) as coordinate system reveals that ∂ log Q/∂γ_{jk} = 0 when j ≠ k and ∂ log Q/∂γ_{jj} = λ_j. Thus, in this coordinate system the dependence of μ_ME^{2(d_S−1)} on the off-diagonal Lagrange multipliers disappears and we retain only the diagonal ones, γ_{jj}.

Moving to a single label γ_{jj} → γ_j and using vector notation:

q_ME^{2(d_S−1)}(p, φ) = (1/Q_{2(d_S−1)}(τ)) e^{−τ·p},  Q_{2(d_S−1)}(τ) = ∫_{Δ^{d_S−1}} dp e^{−τ·p},  τ := (γ_{d_S} − γ_1, γ_{d_S} − γ_2, …, γ_{d_S} − γ_{d_S−1}).

Here, Q(τ) is the normalization function (a partition function). Its exact expression can be derived analytically and it is given in Appendix A.

We can see how μ_ME^{2(d_S−1)} is the product of an exponential measure on the probability simplex Δ^{d_S−1} and the uniform measure on the high-dimensional torus of phases T^{d_S−1}. This leads to the following geometric entropy h_{2(d_S−1)}:

h_{2(d_S−1)}(τ) = log Q_{2(d_S−1)}(τ) + τ·λ. (3)

In this case the explicit expression of the Lagrange multipliers τ satisfying the constraints, which was previously unknown, can be found analytically. This is reported in Appendix B.

We note that this exponential distribution on the probability simplex was recently proposed within the context of statistics and data analysis in Ref. [40]. Moreover, the exponential form associated to the maximum entropy principle is reminiscent of thermal behavior. Indeed, the shape of this distribution is closely related to the geometric canonical ensemble; see Refs. [7,14,27]. However, the value of the Lagrange multipliers is set by a different constraint, in which we fix the average energy rather than the whole density matrix.
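For a qubit (d_S = 2), the constraint equation for the single Lagrange multiplier τ can be solved with a few lines of numerics. The sketch below (illustrative eigenvalue λ_1 = 0.7; bisection instead of the closed form of Appendix B, which is not reproduced here) finds τ such that E[p] = λ_1 under q(p) ∝ e^{−τp} on Δ¹:

```python
import numpy as np

lam1 = 0.7  # illustrative eigenvalue of rho

def mean_p(tau):
    """E[p] under q(p) = e^{-tau p}/Q(tau) on [0, 1], Q(tau) = (1-e^{-tau})/tau."""
    if abs(tau) < 1e-9:
        return 0.5
    return 1 / tau - np.exp(-tau) / (1 - np.exp(-tau))

# mean_p is monotone decreasing in tau, so bisection solves E[p] = lam1.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if mean_p(mid) < lam1 else (mid, hi)
tau = (lo + hi) / 2

# Geometric entropy of the simplex factor, following Eq. (3):
Q = (1 - np.exp(-tau)) / tau
h = np.log(Q) + tau * lam1
print(np.isclose(mean_p(tau), lam1))  # True: constraint satisfied
```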

4.3. Integer, but Otherwise Arbitrary, D

While one expects D to be an integer, there are GQSs that have fractal support, thus exhibiting a noninteger D. This was shown in Ref. [34]. This section discusses the generic case in which D ∉ {0, 2(d_S−1)} but is still an integer. Within E_D(ρ), our ensemble μ_ME^D has support on a D-dimensional submanifold of the full 2(d_S−1)-dimensional quantum state space, where it has a density. Reference [34] discusses, in detail, the case in which D = 1 and d_S = 2. Here, we generalize the procedure to arbitrary D and d_S.

If the support of μ_ME^D is contained in a submanifold of dimension D < 2(d_S−1), which we call S_D, we can project the Fubini–Study metric g_FS of P(ℋ_S) down to S_D to get g_S^FS. Let us call X^j : ξ ∈ S_D ↦ X^j(ξ) ∈ P(ℋ_S) the functions which embed S_D into the full quantum state space P(ℋ_S). Then, the metric induced on S_D is g_{ab}^S = ∑_{j,k} ∂_a X^j ∂_b X^k g_{jk}^FS, where ∂_a := ∂/∂ξ^a. Note that here we are using the “real index” notation even for coordinates X^j on P(ℋ_S). While P(ℋ_S) is a complex manifold, admitting complex homogeneous coordinates, we can always use real coordinates on it. Then, g^S induces a volume form dω_S(ξ) = √(det g^S) dξ, where dξ is the Lebesgue measure on the ℝ^D which coordinatizes S_D. Then, μ_ME^D can be written as:

μ_ME^D[A ⊆ S_D] = ∫_A dω_S(ξ) f(ξ).

Eventually, this leads to:

h_D = −∫_{S_D} dω_S(ξ) f(ξ) log f(ξ).

This allows rewriting the constraints explicitly in a form that involves only probability densities on SD:

C_1 = μ_ME^D[S_D] − 1 = ∫_{S_D} dω_S(ξ) f(ξ) − 1,  C_{jk} = E_{μ_ME^D}[(Z^j)^* Z^k] − ρ_{jk} = ∫_{S_D} dω_S(ξ) f(ξ) (Z^j)^*(ξ) Z^k(ξ) − ρ_{jk},

where Z(ξ) : ξ ∈ S_D ↦ Z(ξ) ∈ P(ℋ_S) is the homogeneous-coordinate representation of the embedding functions X^j of S_D into P(ℋ_S).

The solution of the optimization problem leads to the Gaussian form, in homogeneous coordinates, with support on SD:

q_ME^D(ξ) = (1/Q_D) e^{∑_{j,k=1}^{d_S} γ_{jk} (Z^j)^*(ξ) Z^k(ξ)},  Q_D = ∫_{S_D} dω_S(ξ) e^{∑_{j,k=1}^{d_S} γ_{jk} (Z^j)^*(ξ) Z^k(ξ)}.

Again, we can move from the homogeneous representation to the symplectic one, Z(ξ) ↦ (p(ξ), φ(ξ)), in which the reference basis is the eigenbasis of ρ. This gives ρ_{jk} = λ_j δ_{jk}. This, in turn, means we only need the diagonal Lagrange multipliers γ_{jj}. As in the previous case, we move to the single-label notation γ_{jj} → γ_j:

q_ME^D(ξ) = (1/Q_D(τ)) e^{−τ·p(ξ)},  Q_D(τ) = ∫_{S_D} dω_S(ξ) e^{−τ·p(ξ)},  τ := (γ_{d_S} − γ_1, γ_{d_S} − γ_2, …, γ_{d_S} − γ_{d_S−1}),

with an analytical expression for the entropy:

h_D(τ) = log Q_D(τ) + τ·λ.

While this solution appears to have much in common with the D = 2(d_S−1) case, there are profound differences. Indeed, the functions p(ξ) can be highly degenerate, since we are embedding a low-dimensional manifold, S_D, into a higher-dimensional one, P(ℋ_S). The coordinates ξ emerge from coordinatizing a submanifold of dimension D within one of dimension 2(d_S−1). This means that S_D is specified by 2(d_S−1) − D independent equations of the type K_n(Z) = 0, with n = 1, …, 2(d_S−1) − D. In general, we expect these to be highly nonlinear functions of their arguments. While choosing an appropriate coordinate system can simplify matters, this choice has to be made on a case-by-case basis. In specific cases, discussed in the next section, several exact solutions can be found analytically.

4.4. Noninteger D: Fractal Ensembles

As Ref. [34] showed, even measuring the environment in a local basis can lead to GQSs with noninteger D. For example, if we explicitly break the translational invariance of the spin-1/2 Heisenberg model in 1D by changing the local magnetic field of one spin, the GQS of one of its spins is described, in the thermodynamic limit of an infinite environment, by a fractal resembling Cantor’s set. Its quantum information dimension has been estimated numerically as D ≈ 0.83 ± 0.02, while its geometric entropy h_{0.83} grows linearly with N_E, the size of the environment: h_{0.83} ≈ 0.66 N_E. The existence of such ensembles gives physical meaning to the question of finding the maximum geometric entropy ensemble with noninteger D.

Providing concrete solutions to this problem is quite complex, as it requires a generic parametrization of an ensemble with arbitrary fractional D. As far as we know, this is currently not available. While we do know that certain ensembles have a noninteger D, there is no guarantee that fixing the value of the information dimension, e.g., D = N/M with N, M ∈ ℕ relatively prime, translates into an explicit way of parametrizing the ensemble. We leave this problem open for future work.

5. How Does μ_ME Emerge?

While the previous section gave the technical details regarding ensembles resulting from the proposed maximum geometric quantum entropy principle, the following identifies the mechanisms for their emergence in a number of cases of physical interest.

5.1. Emergence of μ_ME^0

As partly discussed in the previous section, μ_ME^0 can emerge naturally as a conditional ensemble when our system of interest interacts with a finite-dimensional environment (dimension N). If the environment is probed with projective measurements in a basis that is unbiased with respect to the Schmidt-partner basis {|SP_α⟩}_{α=1}^N, we reach the absolute maximum of the geometric entropy, log N. The resulting GQS is μ_ME^0 = δ_λ^p ⊗ (1/N) ∑_{α=1}^N δ_{θ_α}^φ, with members of the ensemble being |χ_α⟩ = ∑_{j=1}^{d_S} √λ_j e^{iθ_{αj}} |λ_j⟩ and p_α = 1/N.

As argued in Ref. [41], the notion of unbiasedness is typical. Physically, this is interpreted as follows. Imagine someone gives us $|\psi(\rho)\rangle$, a purification of $\rho$, without telling us anything about how the purification was performed. This means we know nothing about the way $\rho$ has been encoded into $|\psi(\rho)\rangle$; equivalently, we do not know what the $\{|SP_j\rangle\}_{j=1}^{d_S}$ are. If we now choose a basis of the environment $\{|v_\alpha\rangle\}_{\alpha=1}^{N}$ with which to study the conditional ensemble, it will carry very little information about the $\{|SP_j\rangle\}_{j=1}^{d_S}$: there is a very high chance that we will end up very close to the unbiasedness condition.

The mathematically rigorous version of "very high chance" and "very close" is given in Ref. [41] and is not needed here. The only fact we use is that this behavior is exponential in the size of the environment: the fraction of bases for which $|\langle v_\alpha|SP_j\rangle|^2 \approx 1/N$ behaves as $1 - 2^{-N}$. Therefore, statistically speaking, in the absence of meaningful information about what the $\{|SP_j\rangle\}_{j=1}^{d_S}$ are, it is extremely likely that the conditional ensemble we see is $\mu_{ME}^{0}$.
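A quick numerical illustration of this typicality (our sketch; the rigorous statement is in Ref. [41]): for Haar-random environment basis vectors in dimension $N$, the squared overlap with any fixed Schmidt-partner state concentrates around the unbiased value $1/N$.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_random_state(N, rng):
    """Haar-random pure state: a normalized complex Gaussian vector."""
    v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    return v / np.linalg.norm(v)

# Squared overlap of random basis vectors with a fixed Schmidt-partner state
# (taken as the first computational basis vector, w.l.o.g. by unitary invariance).
N = 256
overlaps = np.array([abs(haar_random_state(N, rng)[0])**2 for _ in range(2000)])

print(overlaps.mean())   # ≈ 1/N ≈ 0.0039: unbiasedness is typical
print(overlaps.max())    # still only a few times 1/N: large deviations are rare
```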

5.2. Emergence of $\mu_{ME}^{2(d_S-1)}$

For $\mu_{ME}^{2(d_S-1)}$ to emerge as a conditional ensemble, our $d_S$-dimensional quantum system must interact with an environment that is probed with measurements whose outcomes are parametrized by $2(d_S-1)$ continuous variables, each with the cardinality of the reals. This is necessary to guarantee that $D = 2(d_S-1)$; conditioning on projective measurements on a finite environment is therefore insufficient. One possibility is a finite environment measured in an overcomplete basis, such as coherent states. A second possibility is a genuinely infinite-dimensional environment on which we perform projective measurements. For example, we could have $2(d_S-1)/3$ quantum particles in 3D, measured in the position basis $\bigotimes_{n=1}^{2(d_S-1)/3}|x_n, y_n, z_n\rangle$. All the needed details were given in Ref. [6], where we studied the properties of a GQS emerging from a finite-dimensional quantum system interacting with a continuous-variable one.

We stress that this is only a necessary condition, not a sufficient one. Indeed, we can have an infinite environment probed with projective measurements on variables with the right properties and still obtain an ensemble that is not $\mu_{ME}^{2(d_S-1)}$. An interesting example is given by the continuous generalization of the notion of an unbiased basis. We illustrate this in a simple example of a purification built with a set of $2(d_S-1)$ real continuous variables, realized by $2(d_S-1)$ non-interacting particles in a 1D box $[0, L]$.

In this setting, the unbiasedness condition is satisfied by position and momentum eigenstates: $\langle x|k\rangle = e^{ik\cdot x}/\sqrt{V}$. Thus, if our Schmidt partners are momentum eigenstates, $\{|SP_j\rangle = |k_j\rangle\}_{j=1}^{d_S}$, and we measure the environment in the position basis, we do not obtain a GQS with the required $D = 2(d_S-1)$. Indeed, while we do obtain $q(x) = \langle\psi(\rho)|\,\mathbb{I}_S \otimes |x\rangle\langle x|\,|\psi(\rho)\rangle = 1/V$, the members of the ensemble $|\chi(x)\rangle = \sum_{j=1}^{d_S}\sqrt{\lambda_j}\,e^{ik_j\cdot x}|\lambda_j\rangle$ are not distributed in the appropriate way.

This leads to the ensemble $\delta(\lambda - p)\,\frac{1}{(2\pi)^{d_S-1}}$, which has the wrong information dimension: $D = d_S-1$, not $D = 2(d_S-1)$. This clarifies why, in order to have $D = 2(d_S-1)$, an environmental basis that is unbiased with respect to the Schmidt partners is not enough. Specifically, the probabilities $p_j(x) = |\langle\lambda_j|\chi(x)\rangle|^2 = \lambda_j$ do not depend on $x$: they are not redistributed by the unbiasedness condition and always remain equal to the eigenvalues of $\rho$.
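A toy numerical check of this last point, with hypothetical eigenvalues and momenta: plane-wave phases rotate the components of $|\chi(x)\rangle$ but leave the weights pinned at the eigenvalues of $\rho$.

```python
import numpy as np

# Hypothetical eigenvalues of rho and momenta k_j of the Schmidt partners.
lam = np.array([0.5, 0.3, 0.2])
ks = np.array([1.0, 2.0, 5.0])

for x in np.linspace(0.0, 10.0, 7):
    amps = np.sqrt(lam) * np.exp(1j * ks * x)   # components of |chi(x)>
    p = np.abs(amps)**2                          # already normalized: sum(lam) = 1
    assert np.allclose(p, lam)                   # p_j(x) = lambda_j for every x
print("p_j(x) = lambda_j at every sampled x")
```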

If we instead measure in a different basis $|l\rangle := \int_V dx\; u_l^*(x)\,|x\rangle$, we obtain a different GQS, since $\langle l|SP_j\rangle = \int_V dx\; u_l(x)\,e^{ik_j\cdot x} = F_l(k_j)$ is essentially the Fourier transform of $u_l$:

$q(l) = \sum_{j=1}^{d_S}\lambda_j\,|F_l(k_j)|^2,$ (4)
$|\chi(l)\rangle = \sum_{j=1}^{d_S}\sqrt{\lambda_j}\;\frac{F_l(k_j)}{\sqrt{q(l)}}\,|\lambda_j\rangle.$ (5)

Equation (5) gives the functions $\{p_j(l), \phi_j(l)\}_{j=1}^{d_S-1}$:

$p_j(l) = \frac{\lambda_j\,|F_l(k_j)|^2}{\sum_{n=1}^{d_S}\lambda_n\,|F_l(k_n)|^2},$ (6)
$\phi_j(l) = \mathrm{Arg}\,F_l(k_j).$ (7)

This, together with the density $q(l)$, specifies the ensemble via $\mu = \int dl\; q(l)\,\delta(p(l) - p)\,\delta(\phi(l) - \phi)$.
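A finite-dimensional sketch of this conditional-ensemble machinery (dimensions and eigenvalues are illustrative): discretizing $l$ to a finite measurement basis, the weights $q(l)$ and members $|\chi(l)\rangle$ built from the overlaps $F_l(k_j)$ always remix into the original density matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
dS, M = 3, 8                                  # illustrative dimensions
lam = np.array([0.5, 0.3, 0.2])               # eigenvalues of rho

def random_unitary(n, rng):
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

SP = random_unitary(M, rng)[:, :dS]           # orthonormal Schmidt partners |SP_j>
L = random_unitary(M, rng)                    # measurement basis {|l>} on the environment

F = L.conj().T @ SP                           # overlaps F[l, j] = <l|SP_j>
q = np.abs(F)**2 @ lam                        # q(l) = sum_j lam_j |F_l(k_j)|^2
chi = np.sqrt(lam) * F / np.sqrt(q)[:, None]  # rows are the members |chi(l)>

# Mixing the conditional states with weights q(l) recovers rho = diag(lam).
rho = sum(q[l] * np.outer(chi[l], chi[l].conj()) for l in range(M))
print(np.allclose(rho, np.diag(lam)), np.isclose(q.sum(), 1.0))   # True True
```

Which ensemble emerges depends on the basis $\{|l\rangle\}$, but the compatibility constraint with $\rho$ holds for every choice.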

Finding the exact conditions that lead to $\mu = \mu_{ME}^{2(d_S-1)}$ involves solving a complex inverse problem. However, what we have accomplished so far allows us to understand the real mechanism behind its emergence. First, the $\phi_j(l)$ must be uniformly distributed: they must be random phases. Second, the distribution of $p$ must be of exponential form. The first condition can always be ensured by choosing some $\langle l|x\rangle = u_l(x)$ and then multiplying it by pseudo-random phases, generated in a way that is completely independent of $p$. This can always be achieved without breaking the unitarity of $\langle l|x\rangle$ via $u_l(x) \to u_l(x)\,e^{i\theta_l}$. This guarantees that the marginal distribution over the phases is uniform and that the density $q(p,\phi)$ becomes a product of its marginals, since the distribution of the $\phi$ has been built to be independent of everything else: $q(p,\phi) = f(p)\cdot\mathrm{unif}(\phi)$. Then, in order for $q(p,\phi)$ to be the maximum entropy one, we need $f(p) = \frac{1}{Q(\tau)}e^{\tau\cdot p}$.
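The exponential-family density $f(p) = e^{\tau\cdot p}/Q(\tau)$ on the simplex (the "continuous categorical" of Ref. [40]) has a closed-form normalizer on the 1-simplex, which a simple quadrature confirms (the value of $\tau$ below is an arbitrary test choice):

```python
import numpy as np

# On the 1-simplex, p = (1 - t, t) with t in [0, 1], the normalizer
# Q(tau) = int e^{tau . p} dp has the closed form (e^{tau_1} - e^{tau_0}) / (tau_1 - tau_0).
tau = np.array([0.7, -1.3])   # arbitrary test values

n = 200000
t = (np.arange(n) + 0.5) / n                      # midpoint rule on [0, 1]
Q_numeric = np.mean(np.exp(tau[0] * (1 - t) + tau[1] * t))
Q_closed = (np.exp(tau[1]) - np.exp(tau[0])) / (tau[1] - tau[0])

print(Q_numeric, Q_closed)   # agree to ~1e-10
```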

Given a nondegenerate $p(l)$, this can be ensured by a specific form of $q(l)$, since $f(p) = \int dl\; q(l)\,\delta(p(l) - p)$:

$q(l) = |\det J(l)|\;\frac{e^{\tau\cdot p(l)}}{Q(\tau)} \quad\Longrightarrow\quad f(p) = \frac{e^{\tau\cdot p}}{Q(\tau)},$

where $J$ is the Jacobian matrix of the coordinate change $l \to p(l)$. Checking that this form leads to the right distribution is simply a matter of coordinate changes. Alternatively, it can be seen by repeated use of the Laplace transform on the simplex, together with the result $\frac{1}{Q(\tau)}\int_{\Delta^{d_S-1}} dp\; e^{-a\cdot p}\,e^{\tau\cdot p} = Q(\tau - a)/Q(\tau)$. We now see the mechanism at play in a concrete way and how it leads to the maximum entropy GQS $\mu_{ME}^{2(d_S-1)}$.

First, let us take the label $l = (l_1, \ldots, l_{2(d_S-1)})$ and split it into two parts, $l = (a, b)$, with $a = (a_1, \ldots, a_{d_S-1})$ and $b = (b_1, \ldots, b_{d_S-1})$. Then, $dl = da\, db$. At this stage, the choice of $l$, the splitting, and the $|SP_j\rangle$ are arbitrary. Next, we make the choice $\langle a, b|SP_j\rangle = A_j(a)\,e^{iB_j(b)}$. The only property we need to check is that $\{|a,b\rangle\}_{a,b}$ can be a complete set:

$\int da\; A_j(a) A_k(a) \int db\; e^{i(B_j(b) - B_k(b))} = \delta_{jk}.$

We can choose the $B_j(b)$ such that $\int db\; e^{i(B_j(b) - B_k(b))} = M\,\delta_{jk}$, for example, by choosing the $B_j(b)$ to be linear functions. Then, choosing the $A_j(a)$ such that $\int da\; A_j(a)^2 = 1/M$ guarantees completeness. With this choice, we obtain:

$q(a,b) = \sum_{j=1}^{d_S}\lambda_j A_j(a)^2, \qquad p_j(a,b) = \frac{\lambda_j A_j(a)^2}{\sum_{n=1}^{d_S}\lambda_n A_n(a)^2} \equiv p_j(a), \qquad \phi_j(a,b) = B_j(b) \equiv \phi_j(b).$

The probability density $q(a,b)$ can be written as a product of two probability densities: $q(a,b) = f(a)\,\frac{1}{M}$. Here, $1/M$ is the uniform density for $b$ and $f(a) = M\sum_{j=1}^{d_S}\lambda_j A_j(a)^2$ is a probability density for $a$. The GQS then becomes a product of two densities: one over the probability simplex (for $p$) and another over the phases (for $\phi$):

$\mu = \int da\, db\; q(a,b)\,\delta(p(a,b) - p)\,\delta(\phi(a,b) - \phi) = \int da\; f(a)\,\delta(p(a) - p)\;\cdot\;\frac{1}{M}\int db\;\delta(\phi(b) - \phi) = f_p(p)\,f_\phi(\phi).$ (8)

These are the formulas for two changes of variable in integrals: $a \to p(a)$ and $b \to \phi(b)$. Since these are invertible, we can confirm what we understood before: $f_\phi(\phi) = \mathrm{unif}[\phi]$ when the phases $\phi(b)$ are uniformly distributed, and $f_p(p) = \frac{e^{\tau\cdot p}}{Q(\tau)}$ when the $p(a)$ are exponentially distributed, i.e., $f(a) = |\det J_{a\to p}(a)|\;\frac{e^{\tau\cdot p(a)}}{Q(\tau)}$, with $J_{a\to p}$ the Jacobian matrix of the change of variables $a \to p(a)$.
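The $(a, b)$ construction can be checked in a discretized toy model (grid sizes and eigenvalues below are illustrative): with linear phases $B_j(b)$ and amplitudes normalized so that $\sum_a A_j(a)^2 = 1/M$, completeness holds and the density factorizes into a $b$-independent part times the uniform density $1/M$.

```python
import numpy as np

rng = np.random.default_rng(2)
dS, Na, M = 3, 16, 8                    # system levels; grid sizes for a and b
lam = np.array([0.5, 0.3, 0.2])         # eigenvalues of rho (hypothetical)

# <a,b|SP_j> = A_j(a) e^{i B_j(b)} with linear phases B_j(b) = 2*pi*j*b/M,
# so that sum_b e^{i(B_j(b) - B_k(b))} = M delta_jk automatically.
B = 2 * np.pi * np.outer(np.arange(dS), np.arange(M)) / M

# Positive amplitudes normalized so that sum_a A_j(a)^2 = 1/M (completeness).
A = np.abs(rng.standard_normal((dS, Na)))
A /= np.sqrt(M * np.sum(A**2, axis=1, keepdims=True))

SP = A[:, :, None] * np.exp(1j * B)[:, None, :]       # SP[j, a, b]
gram = np.einsum('jab,kab->jk', SP.conj(), SP)
print(np.allclose(gram, np.eye(dS)))                  # completeness holds: True

# q(a,b) = sum_j lam_j A_j(a)^2 does not depend on b: it factorizes as f(a)/M,
# while the phases phi_j(a,b) = B_j(b) depend only on b -- a product GQS.
q_a = lam @ A**2
print(np.isclose(M * q_a.sum(), 1.0))                 # normalization: True
```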

5.3. Stationary Distribution of Some Dynamics

A second mechanism that can lead to the emergence of an ensemble with $D = 2(d_S-1)$ is time averaging. Indeed, in a nonequilibrium dynamical situation in which the system and its environment jointly evolve with a dynamical (possibly unitary) law, the conditional ensembles $\mu(t)$ depend on time.

To study stationary behavior from dynamics, one looks at the time-averaged ensemble $\overline{\mu(t)} = \lim_{T\to\infty}\frac{1}{T}\int_0^T \mu(t)\,dt$, which, in this case, has a certain stationary density matrix $\rho_{ss} = \overline{\rho(t)}$. Unless something peculiar happens, we expect the ensemble to cover large regions of the full state space, leading to a stationary GQS with $D = 2(d_S-1)$ and density matrix $\rho_{ss}$.

Intuitively, we expect dynamics that are chaotic in quantum state space to lead to ensembles described by $\mu_{ME}^{2(d_S-1)}$. This is because the ensemble that emerges must be compatible with a density matrix $\rho_{ss}$ while still exhibiting nontrivial dynamics due to the action of the environment. A simple example comes from Geometric Quantum Thermodynamics [7], where we studied a qubit with a Caldeira–Leggett-like environment. The resulting evolution of the qubit can be described by a stochastic Schrödinger equation which, as shown in Ref. [7], leads to a maximum entropy ensemble (see Equation (3)) of the required type.
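As a minimal caricature of this mechanism (random unitary kicks, not the Caldeira–Leggett dynamics of Ref. [7]), a qubit whose state is repeatedly scrambled wanders over the whole state space, while its time-averaged density matrix settles to a stationary $\rho_{ss}$:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(n, rng):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

psi = np.array([1.0 + 0.0j, 0.0 + 0.0j])
T = 20000
rho_bar = np.zeros((2, 2), dtype=complex)
for _ in range(T):
    psi = haar_unitary(2, rng) @ psi              # random "environmental kick"
    rho_bar += np.outer(psi, psi.conj()) / T      # accumulate the time average

print(np.round(rho_bar.real, 2))   # ≈ I/2: the stationary density matrix
```

Here the time-averaged states fill the whole Bloch sphere, so the stationary GQS has the full dimension $D = 2(d_S-1) = 2$.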

5.4. Emergence of $\mu_{ME}^{d_S-1}$

Among all possible values of D, a third one that is particularly relevant is $D = d_S-1$, half the maximum value. Its importance stems from the symplectic nature of the quantum state space and, ultimately, from dynamics. One physical situation in which $\mu_{ME}^{d_S-1}$ emerges naturally is the dynamics of pure, isolated quantum systems. The phenomenology we discuss here is known, being intimately related to thermalization and equilibration studies. We discuss it only in connection with the maximum geometric entropy principle introduced in Section 4.

Imagine an isolated quantum system in a pure state $|\psi_0\rangle$ evolving unitarily under a time-independent Hamiltonian $H = \sum_{n=1}^{D} E_n |E_n\rangle\langle E_n|$. Assuming a nondegenerate energy spectrum, the dynamics is $|\psi_t\rangle = \sum_{n=1}^{D}\sqrt{p_n^0}\,e^{i(\phi_n^0 - E_n t)}|E_n\rangle$, where $\sqrt{p_n^0}\,e^{i\phi_n^0} \equiv \langle E_n|\psi_0\rangle$ and we have used symplectic coordinates in the energy eigenbasis. The $p \in \Delta^{D-1}$ are conserved quantities, $p_n^t = p_n^0$, while the $\phi \in \mathbb{T}^{D-1}$ evolve independently and linearly, $\phi_n^t = \phi_n^0 - E_n t$, on a high-dimensional torus. A sufficient condition for the emergence of ergodicity on $\mathbb{T}^{D-1}$ is the so-called non-resonance condition: the energy gaps must be non-degenerate, namely $E_n - E_k = E_a - E_b$ if and only if ($n=k$ and $a=b$) or ($n=a$ and $k=b$).

This condition is usually true for interacting many-body quantum systems. When it holds, the evolution of the phases is ergodic on $\mathbb{T}^{D-1}$; this was first proven by von Neumann [42] in 1929. Calling $(p(t), \phi(t))$ the instantaneous state and $\delta(p(t) - p)\,\delta(\phi(t) - \phi)$ the corresponding Dirac measure on the quantum state space, we have:

$\lim_{T\to\infty}\frac{1}{T}\int_0^T dt\;\delta(p(t) - p)\,\delta(\phi(t) - \phi) = \delta(p(0) - p)\cdot\mathrm{unif}_\phi\big[\mathbb{T}^{D-1}\big],$

where $\mathrm{unif}_\phi[\mathbb{T}^{D-1}]$ is the uniform measure on $\mathbb{T}^{D-1}$, in which all $\phi_n$ are uniformly and independently distributed on the circle. It is not too hard to see that this is the maximum geometric entropy ensemble with $D = d_S-1$, compatible with the fact that the occupations of the energy eigenstates are all conserved quantities: $p_n^t = p_n^0$.
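A direct numerical illustration, with an incommensurate spectrum chosen by hand: the linearly evolving phases time-average to the uniform product measure on the torus, as seen from the vanishing of the first and cross moments.

```python
import numpy as np

# Hand-picked non-resonant spectrum: square roots of distinct primes give
# incommensurate energies with non-degenerate gaps.
E = np.sqrt(np.array([2.0, 3.0, 5.0, 7.0]))
rng = np.random.default_rng(4)
phi0 = rng.uniform(0.0, 2.0 * np.pi, size=E.size)

ts = np.linspace(0.0, 5000.0, 200001)
phases = phi0[:, None] - np.outer(E, ts)     # phi_n(t) = phi_n(0) - E_n t

# For the uniform product measure on the torus, every first moment e^{i phi_n}
# and every cross moment e^{i(phi_n - phi_m)}, n != m, time-averages to zero.
first = np.abs(np.mean(np.exp(1j * phases), axis=1))
cross = np.abs(np.mean(np.exp(1j * (phases[0] - phases[1]))))
print(first.max(), cross)   # both ≈ 0, bounded by ~2 / (gap * T)
```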

Indeed, these $d_S-1$ constraints provide $d_S-1$ independent equations, thus reducing the state space that the system explores to the high-dimensional torus $\mathbb{T}^{D-1}$. On this, however, the dynamics is ergodic and the resulting stationary measure is the uniform one. By definition, this is the measure with the highest possible value of geometric entropy, since its density is uniform and equal to $q_{ME}^\phi(\phi) = \frac{1}{\mathrm{Vol}[\mathbb{T}^{D-1}]}$, where $\mathrm{Vol}[\mathbb{T}^{D-1}] = \int_{\mathbb{T}^{D-1}} d\omega_{FS}^\phi = (2\pi)^{D-1}$ is the volume of $\mathbb{T}^{D-1}$, computed with the Fubini–Study volume element projected onto $\mathbb{T}^{D-1}$, that is, $\prod_{k=1}^{D-1} d\phi_k$:

$h_{d_S-1} = -\int_{\mathbb{T}^{D-1}} d\omega_{FS}^\phi\;\frac{1}{\mathrm{Vol}[\mathbb{T}^{D-1}]}\,\log\frac{1}{\mathrm{Vol}[\mathbb{T}^{D-1}]} = \log\mathrm{Vol}\big[\mathbb{T}^{D-1}\big].$

5.5. Comment on the Generic $\mu_{ME}^{D}$

To have a GQS with generic information dimension $D$ result from a conditional measurement on an environment, we must condition on at least $D$ continuous variables with the cardinality of the reals. This can be achieved either via measurements in an overcomplete basis, such as coherent states, or via projective measurements on an infinite-dimensional environment with at least $D$ real coordinates. This condition is necessary, but not sufficient, to guarantee the emergence of the corresponding maximum entropy ensemble $\mu_{ME}^{D}$. While we have seen that the notion of an unbiased basis is relevant when $D = 0$, we also saw how it falls short in the generic $D > 0$ case. Understanding this general condition is a nontrivial task that requires a much deeper understanding of how systems encode quantum information in their environment and how this information is extracted by means of quantum measurements. Further work in this direction is ongoing and will be reported elsewhere.

For a GQS with arbitrary dimension $D$ to emerge as a stationary distribution on a quantum state space of dimension $2(d_S-1)$, it is likely that we need $2(d_S-1) - D$ independent equations constraining the dynamics, at least when $D$ is an integer. Indeed, due to the continuity of time and the smoothness of time evolution in quantum state space, we expect $D \in \mathbb{N}$ in the vast majority of cases. If the equations constraining the motion on the quantum state space are linear, then having $2(d_S-1) - D$ independent equations is both necessary and sufficient for $D$ to be the quantum information dimension. This, however, says virtually nothing about the maximization of the relevant geometric entropy $h_D$. Moreover, constraints on an open quantum system can take very generic forms, and the relevant equations will not always be linear.

An explicit example where such an ensemble can be found constructively is the case of an isolated quantum system. The conditions for the emergence of a maximum entropy $\mu_{ME}^{d_S-1}$ are then known, being equivalent to the known conditions for the ergodicity of quasi-periodic (linear) dynamics on a high-dimensional torus.

6. Conclusions

While a density matrix encodes all the statistics available from measurements on a system, it does not reveal how those statistics were created, and infinitely many possibilities are available. A natural way to select a unique ensemble behind a given density matrix is to approach the problem from the perspective of information theory. The issue then becomes a standard inference problem, to which we can apply the maximum entropy principle. Formulating the problem this way requires a proper way to compute the entropy of an ensemble. While this is trivial for ensembles with a finite number of elements, it is not for continuous ensembles. The correct answer, the notion of Geometric Quantum Entropy $h_D$, was given in Ref. [34]. This, however, depends strongly on another quantity that characterizes the ensemble: the quantum information dimension $D$. Consequently, we formulated the maximum geometric entropy principle at fixed quantum information dimension. This is a one-parameter class of maximum entropy principles, labeled by $D$, that can be used to explore the various ways ensembles give rise to a specific density matrix.

As often happens with inference principles, the generic optimization problem can be hard to solve. Here, however, we solved a number of cases in which the ensemble can be found analytically. We also explored the physical mechanisms responsible for the emergence of $\mu_{ME}^{D}$. Two classes of situations were considered: (i) conditional ensembles, resulting from measuring the environment of our system of interest, and (ii) stationary distributions, in which the statistics arise from aggregating data over time. We also identified and discussed various instances where both mechanisms lead to a maximum entropy ensemble.

Acknowledgments

F.A. thanks Pawel Horodecki, Karol Zyckzowski, Marina Radulaski, Davide Pastorello, Akram Touil, Sebastian Deffner and Davide Girolami for discussions on geometric quantum mechanics. F.A. and J.P.C. thank David Gier and Ariadna Venegas-Li for helpful discussions and the Telluride Science Research Center for its hospitality during visits.

Appendix A. Calculating the Partition Function

Recall:

$Z[\lambda_{\alpha\beta}] = \int_{P(\mathcal{H})} e^{-\sum_{\alpha,\beta}\lambda_{\alpha\beta}\, Z^\alpha \bar{Z}^\beta}\; dV_{FS}.$

Since the $\lambda_{\alpha\beta}$ are the Lagrange multipliers of the constraints $C_{\alpha\beta}$, we chose them to form a Hermitian matrix, as they are not all independent. Thus, we can always diagonalize them with a unitary matrix:

$\sum_{\alpha\beta} U_{\gamma\alpha}\,\lambda_{\alpha\beta}\,U^\dagger_{\beta\epsilon} = l_\gamma\,\delta_{\gamma\epsilon}.$

This allows us to define auxiliary integration variables $X_\gamma = \sum_\alpha U_{\gamma\alpha} Z^\alpha$. Thanks to these, and using $(U^\dagger U)_{\alpha\beta} = \delta_{\alpha\beta}$, we can express the quadratic form in the exponent of the integrand as:

$Z\lambda\bar{Z} = \sum_{\alpha\beta} Z^\alpha\,\lambda_{\alpha\beta}\,\bar{Z}^\beta = \sum_{\alpha\beta}\sum_{\tilde\alpha\tilde\beta} Z^\alpha\,\delta_{\alpha\tilde\alpha}\,\lambda_{\tilde\alpha\tilde\beta}\,\delta_{\beta\tilde\beta}\,\bar{Z}^{\tilde\beta} = \sum_{\alpha\beta}\sum_{\tilde\alpha\tilde\beta}\sum_{ab} Z^\alpha\, U^\dagger_{\alpha a} U_{a\tilde\alpha}\,\lambda_{\tilde\alpha\tilde\beta}\, U^\dagger_{\tilde\beta b} U_{b\beta}\,\bar{Z}^\beta = \sum_a |X_a|^2\, l_a.$

Moreover, recalling that the Fubini–Study volume element is invariant under unitary transformations, we can adapt our coordinate system to the $X_a$, writing $X_a = \sqrt{q_a}\,e^{i\nu_a}$. This gives $dV_{FS} = \prod_{a=1}^{D-1}\frac{dq_a\, d\nu_a}{2}$. We arrive at the following simpler functional:

$Z[\lambda_{\alpha\beta}] = \int_{P(\mathcal{H})} e^{-\sum_{a=0}^{D-1} l_a q_a}\;\prod_{a=1}^{D-1}\frac{dq_a\, d\nu_a}{2} = \frac{(2\pi)^{D-1}}{2^{D-1}}\int_{\Delta^{D-1}} e^{-\sum_a l_a q_a}\;\prod_a dq_a.$

Now, we are left with an integral of an exponential function over the $(D-1)$-simplex. We can use a Laplace-transform trick to solve this kind of integral:

$I_{D-1}(r) \equiv \int_{\mathbb{R}_+^{D}} \prod_{k=0}^{D-1} e^{-l_k q_k}\;\delta\Big(\sum_{k=0}^{D-1} q_k - r\Big)\; dq_0\cdots dq_{D-1}, \qquad \tilde{I}_{D-1}(z) \equiv \int_0^\infty e^{-zr}\, I_{D-1}(r)\, dr,$

$\tilde{I}_n(z) = \prod_{k=0}^{n}\frac{1}{l_k + z} = \prod_{k=0}^{n}\frac{1}{z - z_k},$

with $z_k = -l_k \in \mathbb{R}$.

The function $\tilde{I}_n(z)$ has $n+1$ real and distinct poles, $z = z_k = -l_k$. Hence, we exploit the partial fraction decomposition of $\tilde{I}_n(z)$:

$\prod_{k=0}^{n}\frac{1}{z - z_k} = \sum_{k=0}^{n}\frac{R_k}{z - z_k},$

where:

$R_k = (z - z_k)\,\tilde{I}_n(z)\Big|_{z=z_k} = \prod_{j=0,\, j\neq k}^{n}\frac{1}{z_k - z_j}.$

Exploiting the linearity of the inverse Laplace transform plus the basic result:

$\mathcal{L}^{-1}\Big[\frac{1}{s+a}\Big](t) = e^{-at}\,\Theta(t),$

where:

$\Theta(t) = 1$ for $t \geq 0$ and $\Theta(t) = 0$ for $t < 0$.

we then have:

$I_n(r) = \mathcal{L}^{-1}\big[\tilde{I}_n(z)\big](r) = \Theta(r)\sum_{k=0}^{n} R_k\, e^{z_k r}.$

And so:

$Z = I_{D-1}(1) = \sum_{k=0}^{D-1}\frac{e^{-l_k}}{\prod_{j=0,\, j\neq k}^{D-1}(l_j - l_k)}.$
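This closed form for the simplex integral can be cross-checked by Monte Carlo integration (shown for $D = 3$ with arbitrary test values of the $l_k$; for odd $D$ each denominator product has an even number of factors, so its sign convention is immaterial):

```python
import numpy as np

rng = np.random.default_rng(5)
l = np.array([0.3, 1.1, 2.4])        # distinct test values l_k, D = 3

# Closed form from the inverse Laplace transform.
Z_closed = sum(np.exp(-l[k]) / np.prod([l[j] - l[k] for j in range(3) if j != k])
               for k in range(3))

# Monte Carlo over the 2-simplex: Dirichlet(1,1,1) samples uniformly, and the
# simplex has coordinate volume 1/2! in the (q_1, q_2) chart.
q = rng.dirichlet(np.ones(3), size=400000)
Z_mc = np.mean(np.exp(-q @ l)) / 2.0

print(Z_closed, Z_mc)   # both ≈ 0.154, agreeing within Monte Carlo error
```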

Now, consider that la are linear functions of the true matrix elements:

$l_a = f_a(\lambda_{\alpha\beta}) = \sum_{\alpha\beta} U_{a\alpha}\,\lambda_{\alpha\beta}\,U^\dagger_{\beta a}.$

We arrive at:

$Z(\lambda_{\alpha\beta}) = \sum_{k=0}^{D-1}\frac{e^{-f_k(\lambda_{\alpha\beta})}}{\prod_{j=0,\, j\neq k}^{D-1}\big(l_j(\lambda_{\alpha\beta}) - l_k(\lambda_{\alpha\beta})\big)}.$

Appendix B. Calculating Lagrange Multipliers

Given the expression of the partition function, we now show that the values of the Lagrange multipliers $\gamma_{jk}$ can be given analytically by extending the Laplace-transform technique exploited in Appendix A.

The nonlinear equation that fixes $\gamma_{jk}$ is:

$-\frac{\partial \log Q}{\partial \gamma_{jk}} = \frac{1}{Q}\int_{P(\mathcal{H}_S)} dV_{FS}\;(Z^j)^* Z^k\; e^{-\sum_{a,b}\gamma_{ab}(Z^a)^* Z^b} = \rho_{jk}.$

We now use the eigenbasis of $\rho$ as the reference basis and $(p, \phi)$ as the coordinate system. This means only the diagonal multipliers $\gamma_{jj}$ enter the equation:

$-\frac{\partial \log Q}{\partial \gamma_{kk}} = \frac{1}{Q}\int_{\Delta^{d_S-1}}\prod_{n=1}^{d_S-1} dp_n\; p_k\; e^{-\sum_a \gamma_{aa} p_a} = \frac{1}{Q}\int_{\mathbb{R}_+^{d_S}}\prod_{n=1}^{d_S} dp_n\; p_k\; e^{-\sum_{a=1}^{d_S}\gamma_{aa} p_a}\;\delta\Big(\sum_j p_j - 1\Big) = \lambda_k.$

To compute this, we use the same Laplace-transform technique as before, with a minor adaptation. First, we adopt the single-label notation $\gamma_{jj} \to l_j$; then we define:

$J_{D-1}^{(k)}(r) \equiv \int_{\mathbb{R}_+^{D}}\prod_{n=1}^{D} dp_n\; p_k\;\prod_{j=1}^{D} e^{-l_j p_j}\;\delta\Big(\sum_{j=1}^{D} p_j - r\Big).$

Its Laplace transform is:

$\tilde{J}_{D-1}^{(k)}(z) = \int_0^\infty e^{-zr}\, J_{D-1}^{(k)}(r)\, dr = \int_{\mathbb{R}_+^{D}}\prod_{n=1}^{D} dp_n\; p_k\;\prod_{j=1}^{D} e^{-(l_j+z)p_j} = \prod_{n\neq k}\int_0^\infty dp_n\, e^{-(l_n+z)p_n}\times\int_0^\infty dp_k\; p_k\, e^{-(l_k+z)p_k} = \prod_{n=1}^{D} G_n(z)\times\Big(-\frac{\partial \log G_k(z)}{\partial l_k}\Big),$

where:

$G_k(z) \equiv \int_0^\infty dp_k\; e^{-(l_k+z)p_k} = \frac{1}{l_k+z}, \qquad -\frac{\partial \log G_k(z)}{\partial l_k} = G_k(z).$

Therefore, we obtain:

$\tilde{J}_{D-1}^{(k)}(z) = \prod_{n\neq k}^{D} G_n(z)\times G_k(z)^2 = \prod_{n\neq k}\frac{1}{z - z_n}\cdot\frac{1}{(z - z_k)^2}, \qquad \text{where } z_n = -l_n.$

This can again be written as a sum using the partial fraction decomposition. Note that the double pole at $z_k$ contributes both a simple-pole and a double-pole term:

$\tilde{J}_{D-1}^{(k)}(z) = \prod_{n\neq k}\frac{1}{z-z_n}\,\frac{1}{(z-z_k)^2} = \sum_{n\neq k}\frac{R_n}{z-z_n} + \frac{R_k^{(0)}}{z-z_k} + \frac{R_k^{(1)}}{(z-z_k)^2},$

where:

$R_n = (z-z_n)\,\tilde{J}_{D-1}^{(k)}(z)\Big|_{z=z_n} = \prod_{j\neq n,k}\frac{1}{z_n-z_j}\,\frac{1}{(z_n-z_k)^2} = \prod_{j\neq n}\frac{1}{z_n-z_j}\,\frac{1}{z_n-z_k},$
$R_k^{(1)} = (z-z_k)^2\,\tilde{J}_{D-1}^{(k)}(z)\Big|_{z=z_k} = \prod_{j\neq k}\frac{1}{z_k-z_j}, \qquad R_k^{(0)} = \frac{d}{dz}\Big[(z-z_k)^2\,\tilde{J}_{D-1}^{(k)}(z)\Big]\Big|_{z=z_k} = -R_k^{(1)}\sum_{m\neq k}\frac{1}{z_k-z_m}.$

Exploiting the basic Laplace transform result:

$\mathcal{L}^{-1}\Big[\frac{1}{(s+a)^n}\Big](t) = \frac{t^{n-1}e^{-at}}{\Gamma(n)}\,\Theta(t),$

we can then invert the relation to compute $J_{D-1}^{(k)}(r=1)$:

$J_{D-1}^{(k)}(r) = \mathcal{L}^{-1}\big[\tilde{J}_{D-1}^{(k)}(z)\big](r) = \Theta(r)\Big[\sum_{j\neq k}R_j\, e^{z_j r} + \big(R_k^{(0)} + r\,R_k^{(1)}\big)\, e^{z_k r}\Big].$

Eventually, remembering that $z_k = -l_k = -\gamma_{kk}$, we obtain:

$J_{D-1}^{(k)}(1) = \sum_{j\neq k}R_j\, e^{-\gamma_{jj}} + \big(R_k^{(0)} + R_k^{(1)}\big)\, e^{-\gamma_{kk}},$

with the residues expressed in terms of the $\gamma_{jj}$ through $z_j = -\gamma_{jj}$.
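As a numerical cross-check of this inverse-Laplace evaluation (for $D = 3$, with arbitrary test values of the $\gamma_{aa}$), the residue sum can be compared against direct Monte Carlo integration of $\int p_k\, e^{-\gamma\cdot p}$ over the simplex; note that the double pole at $z_k$ contributes both a simple- and a double-pole residue:

```python
import numpy as np

rng = np.random.default_rng(6)
g = np.array([0.3, 1.1, 2.4])    # test values of the gamma_kk, D = 3
k = 0
z = -g                            # pole locations z_k = -gamma_kk
others = [n for n in range(3) if n != k]

# Residues of prod_{n != k} (z - z_n)^{-1} * (z - z_k)^{-2}.
Rk1 = 1.0 / np.prod([z[k] - z[j] for j in others])
Rk0 = -Rk1 * sum(1.0 / (z[k] - z[m]) for m in others)
J = (Rk0 + Rk1) * np.exp(z[k])
for n in others:
    Rn = 1.0 / (np.prod([z[n] - z[j] for j in others if j != n]) * (z[n] - z[k])**2)
    J += Rn * np.exp(z[n])

# Direct Monte Carlo evaluation of int_simplex p_k e^{-gamma . p} dp.
p = rng.dirichlet(np.ones(3), size=400000)
J_mc = np.mean(p[:, k] * np.exp(-p @ g)) / 2.0   # 1/2 = simplex coordinate volume

print(J, J_mc)   # both ≈ 0.064, agreeing within Monte Carlo error
```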

The Lagrange’s multipliers γj can then be fixed by solving:

$J_{D-1}^{(k)}(1) = \lambda_k,$

where the $\lambda_k$ are the eigenvalues of the density matrix: $\rho_{jk} = \delta_{jk}\lambda_k$.

Author Contributions

Both authors contributed equally to the conceptualization of the work and writing of the manuscript. F.A. did the calculation while J.P.C. provided guidance and contextualization. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This material is based upon work supported by, or in part by, a Templeton World Charity Foundation Power of Information Fellowship, FQXi Grant FQXi-RFP-IPW-1902, and U.S. Army Research Laboratory and the U.S. Army Research Office under contracts W911NF-21-1-0048 and W911NF-18-1-0028.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1.Pathria R.K., Beale P.D. Statistical Mechanics. Elsevier B.V.; Amsterdam, The Netherlands: 2011. [Google Scholar]
  • 2.Greiner W., Neise L., Stöcker H. Thermodynamics and Statistical Mechanics. Springer; New York, NY, USA: 1995. [DOI] [Google Scholar]
  • 3.Jaynes E.T. Information Theory and Statistical Mechanics. Phys. Rev. 1957;106:620. doi: 10.1103/PhysRev.106.620. [DOI] [Google Scholar]
  • 4.Jaynes E.T. Information Theory and Statistical Mechanics. II. Phys. Rev. 1957;108:171–190. doi: 10.1103/PhysRev.108.171. [DOI] [Google Scholar]
  • 5.Cover T.M., Thomas J.A. Elements of Information Theory. Wiley-Interscience; New York, NY, USA: 2006. [Google Scholar]
  • 6.Anza F., Crutchfield J.P. Beyond Density Matrices: Geometric Quantum States. Phys. Rev. A. 2020;103:062218. doi: 10.1103/PhysRevA.103.062218. [DOI] [Google Scholar]
  • 7.Anza F., Crutchfield J.P. Geometric Quantum Thermodynamics. Phys. Rev. E. 2022;106:054102. doi: 10.1103/PhysRevE.106.054102. [DOI] [PubMed] [Google Scholar]
  • 8.Strocchi F. Complex Coordinates and Quantum Mechanics. Rev. Mod. Phys. 1966;38:36–40. doi: 10.1103/RevModPhys.38.36. [DOI] [Google Scholar]
  • 9.Kibble T.W.B. Geometrization of quantum mechanics. Comm. Math. Phys. 1979;65:189–201. doi: 10.1007/BF01225149. [DOI] [Google Scholar]
  • 10.Heslot A. Quantum mechanics as a classical theory. Phys. Rev. D. 1985;31:1341–1348. doi: 10.1103/PhysRevD.31.1341. [DOI] [PubMed] [Google Scholar]
  • 11.Gibbons G.W. Typical states and density matrices. J. Geom. Phys. 1992;8:147–162. doi: 10.1016/0393-0440(92)90046-4. [DOI] [Google Scholar]
  • 12.Ashtekar A., Schilling T.A. AIP Conference Proceedings. Volume 342. AIP; Melville, NY, USA: 1995. Geometry of quantum mechanics; pp. 471–478. [DOI] [Google Scholar]
  • 13.Ashtekar A., Schilling T.A. On Einstein’s Path. Springer; New York, NY, USA: 1999. Geometrical formulation of quantum mechanics; pp. 23–65. [DOI] [Google Scholar]
  • 14.Brody D.C., Hughston L.P. Geometric quantum mechanics. J. Geom. Phys. 2001;38:19–53. doi: 10.1016/S0393-0440(00)00052-8. [DOI] [Google Scholar]
  • 15.Bengtsson I., Zyczkowski K. Geometry of Quantum States. Cambridge University Press; Cambridge, UK: 2017. p. 419. [DOI] [Google Scholar]
  • 16.Cariñena J.F., Clemente-Gallardo J., Marmo G. Geometrization of quantum mechanics. Theoret. Math. Phys. 2007;152:894–903. doi: 10.1007/s11232-007-0075-3. [DOI] [Google Scholar]
  • 17.Chruściński D. Geometric aspects of quantum mechanics and quantum entanglement. J. Phys. Conf. Ser. 2006;30:9–16. doi: 10.1088/1742-6596/30/1/002. [DOI] [Google Scholar]
  • 18.Marmo G., Volkert G.F. Geometrical description of quantum mechanics—transformations and dynamics. Phys. Scr. 2010;82:038117. doi: 10.1088/0031-8949/82/03/038117. [DOI] [Google Scholar]
  • 19.Avron J., Kenneth O. An elementary introduction to the geometry of quantum states with pictures. Rev. Math. Phys. 2020;32:2030001. doi: 10.1142/S0129055X20300010. [DOI] [Google Scholar]
  • 20.Pastorello D. A geometric Hamiltonian description of composite quantum systems and quantum entanglement. Intl. J. Geom. Meth. Mod. Phys. 2015;12:1550069. doi: 10.1142/S0219887815500693. [DOI] [Google Scholar]
  • 21.Pastorello D. Geometric Hamiltonian formulation of quantum mechanics in complex projective spaces. Intl. J. Geom. Meth. Mod. Phys. 2015;12:1560015. doi: 10.1142/S0219887815600154. [DOI] [Google Scholar]
  • 22.Pastorello D. Geometric Hamiltonian quantum mechanics and applications. Int. J. Geom. Methods Mod. Phys. 2016;13:1630017. doi: 10.1142/S0219887816300178. [DOI] [Google Scholar]
  • 23.Clemente-Gallardo J., Marmo G. The Ehrenfest picture and the geometry of quantum mechanics. Il Nuovo C. C. 2013;3:35–52. doi: 10.1393/ncc/i2013-11522-6. [DOI] [Google Scholar]
  • 24.Hughston L.P., Jozsa R., Wootters W.K. A complete classification of quantum ensembles having a given density matrix. Phys. Lett. A. 1993;183:14–18. doi: 10.1016/0375-9601(93)90880-9. [DOI] [Google Scholar]
  • 25.Wiseman H.M., Vaccaro J.A. Inequivalence of pure state ensembles for Open Quantum Systems: The preferred ensembles are those that are physically realizable. Phys. Rev. Lett. 2001;87:240402. doi: 10.1103/PhysRevLett.87.240402. [DOI] [PubMed] [Google Scholar]
  • 26.Goldstein S., Lebowitz J.L., Tumulka R., Zanghi N. On the Distribution of the Wave Function for Systems in Thermal Equilibrium. J. Stat. Phys. 2006;125:1193–1221. doi: 10.1007/s10955-006-9210-z. [DOI] [Google Scholar]
  • 27.Brody D.C., Hughston L.P. Information content for quantum states. J. Math. Phys. 2000;41:2586–2592. doi: 10.1063/1.533260. [DOI] [Google Scholar]
  • 28.Karasik R.I., Wiseman H.M. How Many Bits Does It Take to Track an Open Quantum System? Phys. Rev. Lett. 2011;106:020406. doi: 10.1103/PhysRevLett.106.020406. [DOI] [PubMed] [Google Scholar]
  • 29.Schrödinger E. The exchange of energy in wave mechanics. Ann. Der Phys. 1927;387:257–264. [Google Scholar]
  • 30.Schrödinger E. Statistical Thermodynamics. Dover Publications; New York, NY, USA: 1989. p. 95. [Google Scholar]
  • 31.Walecka J.D. Fundamentals of Statistical Mechanics. Manuscript and Notes by Felix Bloch. Stanford University Press; Stanford, CA, USA: 1989. [Google Scholar]
  • 32.Goldstein S., Lebowitz J.L., Mastrodonato C., Tumulka R., Zanghi N. Universal Probability Distribution for the Wave Function of a Quantum System Entangled with its Environment. Commun. Math. Phys. 2016;342:965–988. doi: 10.1007/s00220-015-2536-0. [DOI] [Google Scholar]
  • 33.Reimann P. Typicality of Pure States Randomly Sampled According to the Gaussian Adjusted Projected Measure. J. Stat. Phys. 2008;132:921–935. doi: 10.1007/s10955-008-9576-1. [DOI] [Google Scholar]
  • 34.Anza F., Crutchfield J.P. Quantum Information Dimension and Geometric Entropy. PRX Quantum. 2022;3:020355. doi: 10.1103/PRXQuantum.3.020355. [DOI] [Google Scholar]
  • 35.Pollard D. A User’s Guide to Measure Theoretic Probability. Cambridge University Press; Cambridge, UK: 2002. [Google Scholar]
  • 36.Kolmogorov A. Foundations of the Theory of Probability. Chelsea Publishing Company; Chelsea, MI, USA: 1933. [Google Scholar]
  • 37.Vourdas A. Quantum systems with finite Hilbert spaces. Rep. Prog. Phys. 2004;67:267. doi: 10.1088/0034-4885/67/3/R03. [DOI] [Google Scholar]
  • 38.Bengtsson I. Three ways to look at Mutually Unbiased Bases. AIP Conf. Proc. 2007;889:40–51. [Google Scholar]
  • 39.Lawrence J., Brukner Č., Zeilinger A. Mutually unbiased binary observable sets on N qubits. Phys. Rev. A. 2002;65:032320. doi: 10.1103/PhysRevA.65.032320. [DOI] [Google Scholar]
  • 40.Gordon-Rodriguez E., Loaiza-Ganem G., Cunningham J. The continuous categorical: A novel simplex-valued exponential family; Proceedings of the 37th International Conference on Machine Learning; Vienna, Austria. 8 June 2020; pp. 3637–3647. [Google Scholar]
  • 41.Anza F., Gogolin C., Huber M. Eigenstate Thermalization for Degenerate Observables. Phys. Rev. Lett. 2018;120:150603. doi: 10.1103/PhysRevLett.120.150603. [DOI] [PubMed] [Google Scholar]
  • 42.Neumann J.v. Beweis des Ergodensatzes und des H-Theorems in der neuen Mechanik. Z. Phys. 1929;57:30–70. doi: 10.1007/BF01339852. [DOI] [Google Scholar]



Articles from Entropy are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI)
