Entropy. 2024 Mar 26;26(4):286. doi: 10.3390/e26040286

The $f \mapsto \tilde{f}$ Correspondence and Its Applications in Quantum Information Geometry

Paolo Gibilisco 1
Editor: Carlo Cafaro
PMCID: PMC11048813  PMID: 38667840

Abstract

Due to the classifying theorems of Petz and Kubo–Ando, we know that there are bijective correspondences between Quantum Fisher Information(s), operator means, and the class of symmetric, normalized operator monotone functions on the positive half-line; this last class is usually denoted as $\mathcal{F}_{op}$. This class of operator monotone functions has a significant structure, which is worthy of study; indeed, any step in understanding $\mathcal{F}_{op}$, besides being interesting per se, immediately translates into a property of the classes of operator means and therefore of Quantum Fisher Information(s). In recent years, the $f \mapsto \tilde{f}$ correspondence has been introduced, which associates a non-regular element of $\mathcal{F}_{op}$ to any regular element of the same set. In terms of operator means, this amounts to associating a mean with a multiplicative character to a mean that has an additive character. In this paper, we survey a number of different settings where this technique has proven useful in Quantum Information Geometry. In Sections 1–4, all the needed background is provided. In Sections 5–14, we describe the main applications of the $f \mapsto \tilde{f}$ correspondence.

Keywords: operator monotone functions, operator means, quantum Fisher information

1. Introduction: The Chentsov Uniqueness Theorem and the Chentsov–Morozova Problem

The basic theorems of classical and quantum information geometry are categorical in character: one is the Chentsov theorem and the other one is its quantum counterpart, the Petz–Kubo–Ando (PKA) theorem.

Let us first describe the structure of the Chentsov theorem. The idea is as follows: Imagine that we want to use a family of Riemannian metrics on the family of the simplexes of probability vectors to distinguish the states (namely the probability vectors themselves). It would be natural that these metrics should contract under the effect of noise, namely under the effect of the morphisms of such structures, which are the stochastic maps; the distance between states could shrink if we muddy the waters. Let us translate this into a formal mathematical structure.

If $N$ is a differentiable manifold, let us denote by $T_\rho N$ the tangent space to $N$ at the point $\rho \in N$. In the present commutative case, we define a Markov morphism as a stochastic map $T: \mathbb{R}^n \to \mathbb{R}^k$. Let

$$\mathcal{P}_n^1 := \Big\{\rho \in \mathbb{R}^n \ \Big|\ \textstyle\sum_i \rho_i = 1,\ \rho_i > 0\Big\}. \qquad (1)$$

The tangent space of $\mathcal{P}_n^1$ can be naturally represented as

$$T_\rho \mathcal{P}_n^1 = \Big\{v \in \mathbb{R}^n \ \Big|\ \textstyle\sum_i v_i = 0\Big\}. \qquad (2)$$

A monotone metric will be any family of Riemannian metrics $g = \{g^n\}$ on $\{\mathcal{P}_n^1\}_{n \in \mathbb{N}}$, such that

$$g^m_{T(\rho)}(TX, TX) \le g^n_\rho(X, X)$$

holds for every Markov morphism $T: \mathbb{R}^n \to \mathbb{R}^m$, every $\rho \in \mathcal{P}_n^1$, and every $X \in T_\rho \mathcal{P}_n^1$.

Let us recall that the Fisher information is the Riemannian metric on $\mathcal{P}_n^1$ defined as

$$\langle u, v \rangle_{\rho, F} := \sum_i \frac{u_i v_i}{\rho_i}, \qquad u, v \in T_\rho \mathcal{P}_n^1.$$

Rao was the first to realize that Fisher Information is indeed a Riemannian metric on statistical models. The surprising result, as proven by Chentsov (see [1]), is as follows:

Theorem 1. 

There exists a unique monotone metric on $\mathcal{P}_n^1$ (up to scalars), given by the Fisher information.

How do we generalize this in the quantum setting? Chentsov himself and Morozova were the ones to correctly formalize this new categorical problem (see [2]).

We denote by $M_n$ (resp. $M_{n,sa}$) the space of complex (resp. self-adjoint) $n \times n$ matrices, and define $\mathcal{D}_n^1$ as the space of faithful states. This means

$$\mathcal{D}_n^1 := \{\rho \in M_{n,sa} \mid \rho > 0,\ \mathrm{Tr}(\rho) = 1\}. \qquad (3)$$

Due to the needs of quantum dynamics, in the non-commutative case a Markov morphism should be defined as a completely positive, trace-preserving map $T: M_n \to M_k$. There exists a straightforward identification of $T_\rho \mathcal{D}_n^1$ with the space of self-adjoint traceless matrices; namely, for any $\rho \in \mathcal{D}_n^1$,

$$T_\rho \mathcal{D}_n^1 = \{A \in M_{n,sa} \mid \mathrm{Tr}(A) = 0\}. \qquad (4)$$

Emphasizing the perfect analogy with the classical case, a monotone metric, or Quantum Fisher Information, in the non-commutative case is defined as a family of Riemannian metrics $g = \{g^n\}$ on $\{\mathcal{D}_n^1\}_{n \in \mathbb{N}}$, such that

$$g^m_{T(\rho)}(TX, TX) \le g^n_\rho(X, X)$$

holds for every Markov morphism $T: M_n \to M_m$, every $\rho \in \mathcal{D}_n^1$, and every $X \in T_\rho \mathcal{D}_n^1$.

Again, we see that distances become shorter under the effect of noise. It is now time to see whether non-commutative monotone metrics exist and how to classify them. The Fisher metric comes from division by $\rho$, but there is no canonical division by $\rho$ in the quantum setting.

To solve this problem, we need more sophisticated mathematical instruments. This is the subject of the following sections.

2. Means for Positive Numbers and the $\tilde{f}$ Function

A basic ingredient to answer the Chentsov–Morozova problem is the notion of operator means. To introduce this, we first need to understand the notion of numerical means.

Definition 1. 

Let $\mathbb{R}^+ = (0, +\infty)$. A mean for pairs of positive numbers is a function $m(\cdot,\cdot): \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+$ such that (see [3]):

(1) $m(x, x) = x$;

(2) $m(x, y) = m(y, x)$;

(3) $x < y \implies x < m(x, y) < y$;

(4) $x < x'$ and $y < y'$ $\implies$ $m(x, y) < m(x', y')$;

(5) $m(\cdot,\cdot)$ is continuous;

(6) $m(tx, ty) = t \cdot m(x, y)$ for $t > 0$.

Definition 2. 

$\mathcal{M}_{nu} := \{ m(\cdot,\cdot) : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+ \mid m \text{ is a mean} \}$

Definition 3. 

$\mathcal{F}_{nu}$ is the class of functions $f(\cdot): \mathbb{R}^+ \to \mathbb{R}^+$ such that

(i) $f$ is continuous;

(ii) $f$ is monotone increasing;

(iii) $f(1) = 1$;

(iv) $t f(t^{-1}) = f(t)$.

Proposition 1. 

There is a bijection between $\mathcal{M}_{nu}$ and $\mathcal{F}_{nu}$ given by the formulas

$$m_f(x, y) := y\, f(x y^{-1}), \qquad f_m(t) := m(1, t).$$

Proof. 

Straightforward.

Here we have some examples of means and of the associated representing functions (Table 1).

Table 1. Means and representing functions.

Name of the mean | $f$ | $m_f$
Arithmetic | $\frac{1+x}{2}$ | $\frac{x+y}{2}$
Heinz, $\beta \in (0, 1/2)$ | $\frac{1}{2}\big(x^\beta + x^{1-\beta}\big)$ | $\frac{1}{2}\big(x^\beta y^{1-\beta} + x^{1-\beta} y^\beta\big)$
Geometric | $\sqrt{x}$ | $\sqrt{xy}$
Logarithmic | $\frac{x-1}{\log x}$ | $\frac{x-y}{\log x - \log y}$
Harmonic | $\frac{2x}{x+1}$ | $\frac{2}{\frac{1}{x} + \frac{1}{y}}$
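As a quick numerical sanity check (ours, not from the paper), the correspondence $m_f(x,y) = y\,f(xy^{-1})$ of Proposition 1 can be verified against the entries of Table 1; all names below are our own:

```python
import numpy as np

# Representing functions from Table 1 (beta = 0.25 for the Heinz mean).
beta = 0.25
funcs = {
    "arithmetic":  lambda x: (1 + x) / 2,
    "heinz":       lambda x: (x**beta + x**(1 - beta)) / 2,
    "geometric":   np.sqrt,
    "logarithmic": lambda x: (x - 1) / np.log(x),
    "harmonic":    lambda x: 2 * x / (x + 1),
}

def mean(f, x, y):
    """Mean induced by its representing function: m_f(x, y) = y * f(x / y)."""
    return y * f(x / y)

x, y = 2.0, 8.0
assert np.isclose(mean(funcs["arithmetic"], x, y), (x + y) / 2)
assert np.isclose(mean(funcs["geometric"], x, y), np.sqrt(x * y))
assert np.isclose(mean(funcs["harmonic"], x, y), 2 / (1 / x + 1 / y))
assert np.isclose(mean(funcs["logarithmic"], x, y), (x - y) / (np.log(x) - np.log(y)))
# Symmetry m(x, y) = m(y, x) is exactly the condition t * f(1/t) = f(t):
for f in funcs.values():
    assert np.isclose(mean(f, x, y), mean(f, y, x))
```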

Remark 1. 

It is possible to prove that, in the above table, the representing functions are concave, and more: they are all operator concave. However, in $\mathcal{F}_{nu}$ we also have convex functions, such as the following piecewise affine function (see [4]):

$$f(x) = \begin{cases} \dfrac{x+3}{4}, & 0 \le x \le 1, \\[6pt] \dfrac{3x+1}{4}, & x \ge 1. \end{cases}$$

Setting $f(0) = \lim_{x \to 0} f(x)$, it is straightforward to verify that each mean $m_f(\cdot,\cdot)$ has a continuous extension to $[0, +\infty) \times [0, +\infty)$, provided by

$$m_f(0, y) = f(0)\cdot y, \qquad m_f(x, 0) = f(0)\cdot x, \qquad m_f(0, 0) = 0, \qquad x, y > 0.$$

We call the functions with $f(0) > 0$ regular and all the others non-regular. Indeed, the associated regular means have an additive character: if $x, y > 0$, then $m_f(x, 0) > 0$ and $m_f(0, y) > 0$. On the contrary, if $f(0) = 0$, the mean has a multiplicative character, that is, $m_f(x, 0) = m_f(0, y) = 0$.

Note that no negative connotation should be associated with the denomination non-regular.

Definition 4. 

For $f \in \mathcal{F}_{nu}$ such that $f(0) > 0$, we set

$$\tilde{f}(x) = \frac{1}{2}\left[(x+1) - (x-1)^2\, \frac{f(0)}{f(x)}\right], \qquad x > 0.$$

It is not difficult to see that this definition associates a non-regular element $\tilde{f}$ to a regular one, $f$.
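A quick numerical illustration of Definition 4 (a sketch of ours, not from the paper): the arithmetic function is sent to the harmonic one, and the Wigner–Yanase function to the geometric one.

```python
import numpy as np

def f_tilde(f, f0, x):
    """Definition 4: f~(x) = ((x + 1) - (x - 1)^2 * f(0) / f(x)) / 2."""
    return ((x + 1) - (x - 1)**2 * f0 / f(x)) / 2

xs = np.linspace(0.1, 10, 200)

# Arithmetic f(x) = (1+x)/2, with f(0) = 1/2, is sent to the harmonic 2x/(x+1):
arith = lambda x: (1 + x) / 2
assert np.allclose(f_tilde(arith, 0.5, xs), 2 * xs / (xs + 1))

# Wigner-Yanase f(x) = ((1+sqrt(x))/2)^2, with f(0) = 1/4, is sent to sqrt(x):
wy = lambda x: ((1 + np.sqrt(x)) / 2)**2
assert np.allclose(f_tilde(wy, 0.25, xs), np.sqrt(xs))

# f~ is non-regular: f~(x) -> 0 as x -> 0.
assert f_tilde(arith, 0.5, 1e-9) < 1e-8
```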

3. Operator Means, Operator Monotone Functions, Quantum Fisher Information: The Petz–Kubo–Ando Theorem

We are now ready to introduce operator means. As previously, let $M_n := M_n(\mathbb{C})$ be the set of all $n \times n$ complex matrices and $M_{n,sa}$ the set of all self-adjoint elements of $M_n$. We shall denote general matrices by $X, Y, \dots$, while the letters $A, B, \dots$ will be used for self-adjoint matrices. The Hilbert–Schmidt scalar product will be denoted by $\langle X, Y \rangle = \mathrm{Tr}(X^* Y)$, where $X^*$ denotes the adjoint of $X$. Let $\mathcal{D}_n$ be the set of strictly positive elements of $M_{n,sa}$ and $\mathcal{D}_n^1 \subset \mathcal{D}_n$ the set of strictly positive density matrices introduced previously, namely $\mathcal{D}_n^1 = \{\rho \in M_{n,sa} \mid \mathrm{Tr}\rho = 1,\ \rho > 0\}$. If not otherwise specified, from now on we shall only consider faithful states ($\rho > 0$).

A function $f: (0, +\infty) \to \mathbb{R}$ is said to be operator monotone (increasing) if, for any $n \in \mathbb{N}$ and $A, B \in M_{n,sa}$ such that $0 < A \le B$, the inequality $f(A) \le f(B)$ holds. A positive operator monotone function $f$ is said to be symmetric if $f(x) = x f(x^{-1})$ and normalized if $f(1) = 1$.

Definition 5. 

$\mathcal{F}_{op}$ is the class of functions $f: (0, +\infty) \to (0, +\infty)$ such that

(i) $f(1) = 1$;

(ii) $x f(x^{-1}) = f(x)$ for $x > 0$;

(iii) $f$ is operator monotone.

Note that all the functions in Section 2 (except for the counterexample in Remark 1) belong to $\mathcal{F}_{op}$.

Proposition 2. 

All the functions in $\mathcal{F}_{op}$ are operator concave.

The Kubo–Ando theory of operator means [3,5,6] can be seen as the matrix version of Section 2.

Definition 6. 

A bivariate mean for pairs of positive operators is a function

$$(A, B) \mapsto m(A, B),$$

defined on, and with values in, the positive definite operators on a Hilbert space, and satisfying, mutatis mutandis, conditions (1) to (5) of Definition 1. In addition, the transformer inequality (see [6])

$$C\, m(A, B)\, C^* \le m(C A C^*,\, C B C^*)$$

holds for positive definite $A, B$ and arbitrary $C$.

Notice that the transformer inequality replaces condition (6) of Definition 1. We denote the set of matrix means by $\mathcal{M}_{op}$ and the set of Quantum Fisher Information(s) by $\mathcal{Q}_{op}$.

The fundamental result is as follows.

Theorem 2 

(Petz, Kubo, Ando in [6,7]). There are two bijections linking $\mathcal{F}_{op}$, $\mathcal{M}_{op}$, and $\mathcal{Q}_{op}$, provided by the following formulas:

$$f \longmapsto m_f, \qquad m_f(A, B) = A^{1/2}\, f\big(A^{-1/2} B A^{-1/2}\big)\, A^{1/2},$$

$$m_f \longmapsto \langle \cdot, \cdot \rangle_{\rho,f}, \qquad \langle A, B \rangle_{\rho,f} = \mathrm{Tr}\big(A \cdot m_f(L_\rho, R_\rho)^{-1}(B)\big),$$

where $L_\rho(A) := \rho A$ and $R_\rho(A) := A \rho$ are the left and right multiplication operators.

Let us rephrase the Petz–Kubo–Ando classification theorem: any operator monotone function $f \in \mathcal{F}_{op}$ generates an operator mean $m_f(A, B)$, which in turn produces a Quantum Fisher Information $\langle A, B \rangle_{\rho,f}$ via the above formulas. There are no other operator means or QFIs: they all come from some $f \in \mathcal{F}_{op}$, according to the procedure described above.
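The Kubo–Ando formula above can be evaluated numerically by spectral calculus; the following NumPy sketch (ours, not from the paper) checks two textbook cases.

```python
import numpy as np

def mat_fun(f, A):
    """Apply a scalar function to a positive matrix via its spectral decomposition."""
    w, U = np.linalg.eigh(A)
    return (U * f(w)) @ U.conj().T

def operator_mean(f, A, B):
    """Kubo-Ando mean m_f(A, B) = A^{1/2} f(A^{-1/2} B A^{-1/2}) A^{1/2}."""
    A_half = mat_fun(np.sqrt, A)
    A_mhalf = mat_fun(lambda t: 1 / np.sqrt(t), A)
    return A_half @ mat_fun(f, A_mhalf @ B @ A_mhalf) @ A_half

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)); A = X @ X.T + np.eye(3)   # positive definite
Y = rng.normal(size=(3, 3)); B = Y @ Y.T + np.eye(3)   # positive definite

# f(x) = (1+x)/2 yields the arithmetic operator mean (A + B)/2:
M = operator_mean(lambda x: (1 + x) / 2, A, B)
assert np.allclose(M, (A + B) / 2)

# On commuting matrices, f(x) = sqrt(x) reduces to the eigenvalue-wise geometric mean:
D1, D2 = np.diag([1.0, 4.0, 9.0]), np.diag([4.0, 1.0, 1.0])
G = operator_mean(np.sqrt, D1, D2)
assert np.allclose(G, np.diag([2.0, 2.0, 3.0]))
```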

This explains why it is so interesting to study the structure of Fop: any understanding in this field necessarily provides us with more insight into operator means and Quantum Fisher Information(s).

This is exactly what the $f \mapsto \tilde{f}$ correspondence will produce.

4. The $f \mapsto \tilde{f}$ Bijection for Operator Monotone Functions

As in Section 2, we divide the representing functions for operator means into two parts.

Definition 7. 

For $f \in \mathcal{F}_{op}$, we define $f(0) = \lim_{x \to 0} f(x)$. We say that a function $f \in \mathcal{F}_{op}$ is regular if $f(0) \neq 0$ and non-regular if $f(0) = 0$; cf. [8,9].

We introduce the sets of regular and non-regular functions,

$$\mathcal{F}_{op}^{\,r} := \{f \in \mathcal{F}_{op} \mid f(0) \neq 0\}, \qquad \mathcal{F}_{op}^{\,n} := \{f \in \mathcal{F}_{op} \mid f(0) = 0\},$$

and notice that, trivially, $\mathcal{F}_{op}$ is the disjoint union of $\mathcal{F}_{op}^{\,r}$ and $\mathcal{F}_{op}^{\,n}$.

Definition 8. 

For $f \in \mathcal{F}_{op}^{\,r}$, we set

$$\tilde{f}(x) = \frac{1}{2}\left[(x+1) - (x-1)^2\, \frac{f(0)}{f(x)}\right], \qquad x > 0.$$

Set $G(f) = \tilde{f}$; cf. [5].

Theorem 3. 

The correspondence $f \mapsto \tilde{f}$ is a bijection between $\mathcal{F}_{op}^{\,r}$ and $\mathcal{F}_{op}^{\,n}$.

5. The Inversion Formula and Wigner–Yanase–Dyson Information

Definition 9. 

For $g \in \mathcal{F}_{op}^{\,n}$, we set

$$\check{g}(x) = \begin{cases} g''(1)\, \dfrac{(x-1)^2}{2 g(x) - (x+1)}, & x \in (0,1) \cup (1, \infty), \\[6pt] 1, & x = 1. \end{cases} \qquad (5)$$

Define $H(g) = \check{g}$.

Proposition 3. 

If $g$ is non-regular, then $\check{g}$ is regular; namely, $\check{g} \in \mathcal{F}_{op}^{\,r}$. Moreover, if $f \in \mathcal{F}_{op}^{\,r}$ and $g \in \mathcal{F}_{op}^{\,n}$, then

$$H(G(f)) = f \qquad \text{and} \qquad G(H(g)) = g.$$
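Proposition 3 can be checked numerically; in this sketch (ours, not from the paper), $g''(1)$ is approximated by a central finite difference.

```python
import numpy as np

def G(f, f0):
    """f -> f~ (Definition 8), for a regular f with f(0) = f0."""
    return lambda x: ((x + 1) - (x - 1)**2 * f0 / f(x)) / 2

def H(g):
    """g -> g-check (Definition 9), with g''(1) taken by finite differences."""
    h = 1e-4
    g2 = (g(1 + h) - 2 * g(1) + g(1 - h)) / h**2      # approximates g''(1)
    return lambda x: g2 * (x - 1)**2 / (2 * g(x) - (x + 1))

wy = lambda x: ((1 + np.sqrt(x)) / 2)**2              # regular, f(0) = 1/4
g = G(wy, 0.25)                                       # g(x) = sqrt(x), non-regular
xs = np.linspace(0.2, 5, 100)
xs = xs[np.abs(xs - 1) > 1e-3]                        # avoid the removable singularity at x = 1
assert np.allclose(g(xs), np.sqrt(xs))                # G maps Wigner-Yanase to the geometric function
assert np.allclose(H(g)(xs), wy(xs), atol=1e-5)       # H(G(f)) = f
```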

The correspondence between the WYD information (see [10]),

$$I_\rho^\beta(A) = -\frac{1}{2}\, \mathrm{Tr}\big([\rho^\beta, A]\, [\rho^{1-\beta}, A]\big), \qquad 0 < \beta < 1/2,$$

and the Quantum Fisher Information depends on the operator monotonicity of the functions

$$f_\beta(x) = \beta(1-\beta)\, \frac{(x-1)^2}{(x^\beta - 1)(x^{1-\beta} - 1)}, \qquad 0 < \beta < 1/2.$$

See [8,10,11] for the existing proofs. Indeed, Proposition 3 provides a new approach to the above result.

The function

$$g_\beta(x) = \frac{x^\beta + x^{1-\beta}}{2}, \qquad 0 < \beta < 1/2,$$

is operator monotone; moreover, $g_\beta \in \mathcal{F}_{op}$ and $g_\beta$ is non-regular. The calculations show that $\tilde{f}_\beta = g_\beta$. Therefore, the function $f_\beta$ belongs to $\mathcal{F}_{op}^{\,r}$ for $0 < \beta < 1/2$.

Here we provide the first examples of the correspondence (Table 2).

Table 2. The $f \mapsto \tilde{f}$ correspondence.

$f$ | $\tilde{f}$
$\frac{1+x}{2}$ | $\frac{2x}{x+1}$
$\left(\frac{1+\sqrt{x}}{2}\right)^2$ | $\sqrt{x}$
$\beta(1-\beta)\,\frac{(x-1)^2}{(x^\beta - 1)(x^{1-\beta} - 1)}$ | $\frac{x^\beta + x^{1-\beta}}{2}$

where $\beta \in (0, 1/2)$.

6. Regular QFI in Terms of Covariance

Quantum covariance is usually defined as

Quantum covariance is usually defined as

$$\mathrm{Cov}_\rho(A, B) := \frac{1}{2}\,\mathrm{Tr}\big(\rho(AB + BA)\big) - \mathrm{Tr}(\rho A)\cdot \mathrm{Tr}(\rho B), \qquad (6)$$

where $A = A^*$ and $B = B^*$. The above formula can be written, using the arithmetic mean of the left and right multiplication operators, as

$$\mathrm{Cov}_\rho(A, B) = \mathrm{Tr}\left(\frac{L_\rho + R_\rho}{2}(A_0)\, B_0\right), \qquad (7)$$

where $A_0 = A - \mathrm{Tr}(\rho A)\cdot I$. This simple remark led Petz to the following definition (see [12]):

Definition 10. 

For any $f \in \mathcal{F}_{op}$, define the quantum $f$-covariance as

$$\mathrm{Cov}_\rho^f(A, B) := \mathrm{Tr}\big(m_f(L_\rho, R_\rho)(A_0)\, B_0\big). \qquad (8)$$

As usual, $\mathrm{Var}_\rho^f(A) := \mathrm{Cov}_\rho^f(A, A)$. If $f(x) = (1+x)/2$, then

$$\mathrm{Cov}_\rho^f(A, B) = \frac{1}{2}\,\mathrm{Tr}\big(\rho(AB + BA)\big) - \mathrm{Tr}(\rho A)\cdot \mathrm{Tr}(\rho B) = \mathrm{Cov}_\rho(A, B), \qquad (9)$$

which is the standard definition of quantum covariance given above.
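In the eigenbasis of $\rho$, the operator $m_f(L_\rho, R_\rho)$ acts entrywise as $m_f(\lambda_i, \lambda_j)$, which makes the $f$-covariance easy to compute; the following sketch (ours, not from the paper) recovers the ordinary quantum variance for $f(x) = (1+x)/2$.

```python
import numpy as np

def f_covariance(f, rho, A, B):
    """Petz f-covariance Cov_f(A, B) = Tr( m_f(L_rho, R_rho)(A_0) B_0 ),
    computed in the eigenbasis of rho, where m_f(L, R) acts as m_f(l_i, l_j)."""
    lam, U = np.linalg.eigh(rho)
    n = len(lam)
    A0 = A - np.trace(rho @ A) * np.eye(n)
    B0 = B - np.trace(rho @ B) * np.eye(n)
    A0, B0 = U.conj().T @ A0 @ U, U.conj().T @ B0 @ U     # move to the eigenbasis
    m = lam[:, None] * f(lam[None, :] / lam[:, None])     # m_f(l_i, l_j) = l_i f(l_j / l_i)
    return np.sum(m * A0 * B0.T).real

rng = np.random.default_rng(1)
rho = np.diag([0.5, 0.3, 0.2])
A = rng.normal(size=(3, 3)); A = A + A.T                  # a self-adjoint observable

# For f(x) = (1+x)/2 the f-covariance is the usual quantum variance:
var = np.trace(rho @ A @ A) - np.trace(rho @ A)**2
assert np.isclose(f_covariance(lambda x: (1 + x) / 2, rho, A, A), var)
```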

With this generalized notion of the Petz covariance, we show that there is an unexpected relation between the QFI and the covariance itself.

We stated previously that there exists a natural identification of $T_\rho \mathcal{D}_n^1$ with the space of self-adjoint traceless matrices; namely, for any $\rho \in \mathcal{D}_n^1$,

$$T_\rho \mathcal{D}_n^1 = \{A \in M_n \mid A = A^*,\ \mathrm{Tr} A = 0\}.$$

Moreover, the PKA theorem states that the Quantum Fisher Information(s) are given by the formula

$$\langle A, B \rangle_{\rho,f} = \mathrm{Tr}\big(A \cdot m_f(L_\rho, R_\rho)^{-1}(B)\big)$$

for $A, B \in T_\rho \mathcal{D}_n^1$, where $f \in \mathcal{F}_{op}$.

Monotone metrics are usually normalized in such a way that $[A, \rho] = 0$ implies $g_\rho(A, A) = \mathrm{Tr}(\rho^{-1} A^2)$.

Remark 2. 

Let us recall that $T_\rho := \{A = A^* \mid \mathrm{Tr}(\rho A) = 0\}$; the tangent space at $\rho$ to the state space has a natural orthogonal decomposition into "commuting" and "non-commuting" parts,

$$T_\rho = T_\rho^c \oplus T_\rho^n, \qquad (10)$$

where

$$T_\rho^c = \{A = A^* \mid [\rho, A] = 0\}, \qquad T_\rho^n = \{i[\rho, A] \mid A = A^*\}. \qquad (11)$$

Due to the Chentsov uniqueness theorem, the different QFI(s) are characterized by what they do on the non-commuting part of the tangent space, namely on $T_\rho^n$, that is, on tangent vectors of the form $i[\rho, A]$.

We are now ready to state the QFI(s) in terms of covariances.

Theorem 4. 

Gibilisco, Imparato, and Isola (Proposition 6.3, p. 11 in [13]).

If $f \in \mathcal{F}_{op}^{\,r}$, then

$$\frac{f(0)}{2}\, \big\langle i[\rho, A],\, i[\rho, B] \big\rangle_{\rho,f} = \mathrm{Cov}_\rho(A, B) - \mathrm{Cov}_\rho^{\tilde{f}}(A, B). \qquad (12)$$

The above formula has many important consequences.
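Formula (12) can be tested numerically: in the eigenbasis of $\rho$, the QFI of commutator tangent vectors reduces to $\sum_{ij} (\lambda_i - \lambda_j)^2 / m_f(\lambda_i, \lambda_j)\, A_{ij} B_{ji}$. The sketch below (ours, not from the paper) checks the identity for the Wigner–Yanase pair $f$, $\tilde{f}$.

```python
import numpy as np

def to_eigenbasis(rho, A, B):
    lam, U = np.linalg.eigh(rho)
    return lam, U.conj().T @ A @ U, U.conj().T @ B @ U

def qfi_commutators(f, rho, A, B):
    """<i[rho,A], i[rho,B]>_{rho,f} = sum_ij (l_i - l_j)^2 / m_f(l_i, l_j) * A_ij B_ji."""
    lam, A, B = to_eigenbasis(rho, A, B)
    m = lam[:, None] * f(lam[None, :] / lam[:, None])
    return np.sum((lam[:, None] - lam[None, :])**2 / m * A * B.T).real

def f_cov(f, rho, A, B):
    """Petz f-covariance in the eigenbasis of rho."""
    lam, A, B = to_eigenbasis(rho, A, B)
    n = len(lam)
    A0 = A - np.sum(lam * np.diag(A)) * np.eye(n)   # A - Tr(rho A) I
    B0 = B - np.sum(lam * np.diag(B)) * np.eye(n)
    m = lam[:, None] * f(lam[None, :] / lam[:, None])
    return np.sum(m * A0 * B0.T).real

rng = np.random.default_rng(2)
rho = np.diag([0.6, 0.3, 0.1])
A = rng.normal(size=(3, 3)); A = A + A.T
B = rng.normal(size=(3, 3)); B = B + B.T

f = lambda x: ((1 + np.sqrt(x)) / 2)**2     # Wigner-Yanase, f(0) = 1/4
f_t = np.sqrt                               # its correspondent f~
arith = lambda x: (1 + x) / 2

lhs = (0.25 / 2) * qfi_commutators(f, rho, A, B)
rhs = f_cov(arith, rho, A, B) - f_cov(f_t, rho, A, B)   # Cov - Cov_{f~}
assert np.isclose(lhs, rhs)
```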

7. A Look at the Petz–Sudar Theorem

In the PKA classification theorem (Theorem 2 in Section 3), we see that the QFI is defined only for faithful states ($\rho > 0$). It was Petz himself, in collaboration with Sudár, who understood how to define a radial extension of a QFI to pure states and proved that only regular QFIs possess such an extension (for all the details, the reader can refer to [9] or [13]). The statement is as follows:

Theorem 5 

(Petz and Sudár in [9]). A QFI admits a radial extension iff it is regular, i.e., $f(0) > 0$. In such a case,

$$2 f(0)\cdot \langle \cdot, \cdot \rangle_{\rho,f} \longrightarrow \langle \cdot, \cdot \rangle_{FS}, \qquad (13)$$

where $\langle \cdot, \cdot \rangle_{FS}$ is the Fubini–Study metric on the space of pure states.

The fact that the radial limit of $2 f(0)\cdot \langle \cdot, \cdot \rangle_{\rho,f}$ does not depend on $f$ is an immediate consequence of Theorem 4 in Section 6.

It is natural to ask whether the Petz–Sudár theorem can be generalized and proven using Formula (22). Here, generalization means using Formula (22) for states that are neither faithful nor pure.

8. Extension of Regular QFI and MASI for Non-Faithful States

A far-reaching generalization of the Wigner–Yanase Skew Information has been proposed by Hansen in [8].

Definition 11. 

Metric Adjusted Skew Information (MASI).

For $f \in \mathcal{F}_{op}^{\,r}$ and $\rho > 0$, set

$$I_\rho^f(A) := \frac{f(0)}{2}\, \big\langle i[\rho, A],\, i[\rho, A] \big\rangle_{\rho,f}. \qquad (14)$$

In the case $f(x) = \left(\frac{1+\sqrt{x}}{2}\right)^2$, we can see that the MASI coincides with the Wigner–Yanase skew information:

$$I_\rho(A) := I_\rho^f(A) = -\frac{1}{2}\, \mathrm{Tr}\big(\big[\sqrt{\rho},\, A\big]^2\big). \qquad (15)$$

Note that recently, using MASI, it has been proven that the Local Quantum Uncertainty (LQU) and the Interferometric Power (IP), which are two important measures of quantum discord, are instances of a family of quantum discords parametrized by the function fFopr. This allows a unified study of the properties of LQU and IP (see [14]). Due to Theorem 4, we have the following:

Proposition 4. 

$$I_\rho^f(A) = \mathrm{Var}_\rho(A) - \mathrm{Var}_\rho^{\tilde{f}}(A). \qquad (16)$$

It is important to note that the two sides of Equation (16) are somewhat different in nature. The MASI on the left-hand side is defined only for faithful states ($\rho > 0$), while the right-hand side always makes sense, since quantum covariance is defined for any state. Therefore, one can look at Equation (16) as a "definition" of the left-hand side, which solves the problem of extending the MASI with an approach different from the one proposed by Hansen in Theorem 3.8 of [8]. Motivated by the above consideration, it is natural to introduce the following sesquilinear form, which is the natural extension of the MASI to two observables.

Definition 12. 

$$I_\rho^f(A, B) := \mathrm{Cov}_\rho(A, B) - \mathrm{Cov}_\rho^{\tilde{f}}(A, B).$$

Another important remark is that, using the $f \mapsto \tilde{f}$ correspondence, it is possible to establish a relation between the MASI and the quasi-entropy $S^F(\cdot,\cdot)$ introduced by Petz in [15]; $S^F(\cdot,\cdot)$ can be seen as a quantum version of the Csiszár $F$-entropy of classical statistics and information theory (see [16]). Indeed, if $\mathrm{Tr}(\rho A) = 0$, Theorem 3.1 in [17] proves that

$$\frac{\partial^2}{\partial t\, \partial s}\, S^{\tilde{f}}\big(\rho + t\, i[\rho, A],\ \rho + s\, i[\rho, A]\big)\Big|_{t=s=0} = 2\, I_\rho^f(A).$$

9. Inequalities for the MASI and the Bloch Sphere Case

In this section, we discuss some basic properties of the MASI and see how the $\tilde{f}$ function appears, for example, as a computational tool. What follows is the generalization, given in [19], of the work in [18].

(a) If a quantum evolution is generated by a Hamiltonian $H$ that commutes with the observable $A$, then the MASI is a constant of motion. Namely, if we set $\rho_H(t) := e^{-itH} \rho\, e^{itH}$ and $[A, H] = 0$, then the function $t \mapsto I_{\rho_H(t)}^f(A)$ is constant. Since any Quantum Fisher Information contracts under coarse graining, the QFI is unitarily covariant, and this is the crucial ingredient of the proof.

(b) For any MASI, we have

$$I_\rho^f(A) \le I_\rho^{SLD}(A) \le \frac{1}{2 f(0)}\, I_\rho^f(A). \qquad (17)$$

(c) The constant $\frac{1}{2 f(0)}$ is optimal in inequality (17). Namely, if $1 \le k < \frac{1}{2 f(0)}$, the inequality

$$I_\rho^{SLD}(A) \le k\, I_\rho^f(A)$$

is false, and a counterexample can be found already in the elementary $2 \times 2$ case, namely on the Bloch sphere.

Let us see how this can be proven by means of the f˜ function.

Let $\{\varphi_i\}$ be a complete orthonormal basis composed of eigenvectors of $\rho$, and $\{\lambda_i\}$ the corresponding eigenvalues. Set $a_{ij} \equiv \langle A_0 \varphi_i | \varphi_j \rangle$, where $A_0 = A - \mathrm{Tr}(\rho A)$. Note that, in general, $a_{ij} \neq A_{ij} :=$ the $(i,j)$ entry of $A$.

Recall that the Pauli matrices are

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

A generic $2 \times 2$ density matrix in the Stokes parameterization is written as

$$\rho = \frac{1}{2}\begin{pmatrix} 1+z & x - iy \\ x + iy & 1-z \end{pmatrix} = \frac{1}{2}\big(I + x\sigma_1 + y\sigma_2 + z\sigma_3\big),$$

where $(x, y, z) \in \mathbb{R}^3$ and $x^2 + y^2 + z^2 \le 1$. Let $r := \sqrt{x^2 + y^2 + z^2} \in [0, 1]$. The eigenvalues of $\rho$ are $\lambda_1 = \frac{1-r}{2}$ and $\lambda_2 = \frac{1+r}{2}$.

Proposition 5. 

$$I_\rho^f(A) = \big(1 - m_{\tilde{f}}(1-r,\, 1+r)\big)\cdot |a_{12}|^2.$$

Corollary 1. 

If $r \neq 0$, then

$$I_\rho^{SLD}(A) = \frac{r^2}{1 - m_{\tilde{f}}(1-r,\, 1+r)}\cdot I_\rho^f(A).$$

Proposition 6. 

If $f$ is regular, then

$$\lim_{r \to 0}\, \frac{r^2}{1 - m_{\tilde{f}}(1-r,\, 1+r)} = -\frac{1}{2 \tilde{f}''(1)} = \frac{1}{2 f(0)}.$$

From this last result, the optimality of the constant follows.
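These Bloch-sphere formulas are easy to verify numerically; the sketch below (ours, not from the paper) uses the homogeneity $m_{\tilde{f}}(1-r, 1+r) = (1+r)\,\tilde{f}((1-r)/(1+r))$.

```python
import numpy as np

def masi_2x2(f_t, r, a12):
    """Proposition 5: I_f(A) = (1 - m_{f~}(1 - r, 1 + r)) * |a12|^2."""
    m = (1 + r) * f_t((1 - r) / (1 + r))      # m_{f~}(1 - r, 1 + r)
    return (1 - m) * abs(a12)**2

# SLD case: f~(x) = 2x/(x+1), so m_{f~}(1-r, 1+r) is the harmonic mean 1 - r^2
# and the MASI equals r^2 |a12|^2:
f_t_sld = lambda x: 2 * x / (x + 1)
r, a12 = 0.7, 1.0 + 0.5j
assert np.isclose(masi_2x2(f_t_sld, r, a12), r**2 * abs(a12)**2)

# Proposition 6: r^2 / (1 - m_{f~}(1-r, 1+r)) -> 1/(2 f(0)) as r -> 0.
f_t_wy = lambda x: np.sqrt(x)                 # Wigner-Yanase, f(0) = 1/4
for f_t, f0 in [(f_t_sld, 0.5), (f_t_wy, 0.25)]:
    r = 1e-4
    ratio = r**2 / (1 - (1 + r) * f_t((1 - r) / (1 + r)))
    assert np.isclose(ratio, 1 / (2 * f0), rtol=1e-4)
```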

10. The Dynamical Uncertainty Principle

From Equation (16), one has

$$\mathrm{Var}_\rho(A) \ge I_\rho^f(A).$$

This is the case $n = 1$ of the Dynamical Uncertainty Principle (DUP), which reads

$$\det\big\{\mathrm{Cov}_\rho(A_j, A_k)\big\} \ge \det\big\{I_\rho^f(A_j, A_k)\big\}, \qquad (18)$$

or, equivalently,

$$\det\big\{\mathrm{Cov}_\rho(A_j, A_k)\big\} \ge \det\Big\{\frac{f(0)}{2}\, \big\langle i[\rho, A_j],\, i[\rho, A_k] \big\rangle_{\rho,f}\Big\}, \qquad (19)$$

where $f \in \mathcal{F}_{op}^{\,r}$. On the left-hand side we have the generalized variance of the random vector $(A_1, \dots, A_n)$. Note that the right-hand side depends on the non-commutativity between the state and the observables, and this is strictly related to the non-trivial dynamics induced by the observables according to the Schrödinger equation.

To understand the terminology, recall that the Standard Uncertainty Principle (SUP), in the Robertson version, reads

$$\det\big\{\mathrm{Cov}_\rho(A_j, A_k)\big\} \ge \det\Big\{-\frac{i}{2}\, \mathrm{Tr}\big(\rho[A_j, A_k]\big)\Big\}, \qquad (20)$$

where $A_1, \dots, A_n$ is an arbitrary number of observables (self-adjoint matrices) and $\rho$ is a state. For $n = 2$, one obtains the Schrödinger uncertainty principle, from which the Heisenberg uncertainty principle follows. The bound on the right-hand side depends on the non-commutativity among the observables (see [20,21]).

Now let $n = 2m + 1$ be odd; in this case, the right-hand side is the determinant of an antisymmetric matrix of odd order and is therefore zero: for an odd number of observables, the SUP says nothing "quantum".

Therefore, using the QFI and the ff˜ correspondence, a new uncertainty principle has been proven, which is also not trivial for an odd number of observables. Moreover, SUP and DUP have been generalized for an arbitrary g-covariance; see [22,23,24].

If we set

$$V(f) := \det\big\{I_\rho^f(A_j, A_k)\big\},$$

one can see that (Theorem 4.4 in [22])

$$\tilde{f} \le \tilde{g} \implies V(f) \ge V(g).$$

This implies, for example, that the DUP gives the largest bound for $f(x) = (1+x)/2$. Indeed, in this case, $\tilde{f}(x) = 2x/(1+x) \le \tilde{g}$ for any regular $g$, and this provides the conclusion.

11. Simplification of Kosaki’s Work

To see how the $f \mapsto \tilde{f}$ correspondence sheds light on certain subjects, consider the paper by Kosaki [25]. There, the author’s aim is to study how the RHS of the DUP (for $n = 2$) depends on the function $f$. Remember that

$$f_\beta(x) = \beta(1-\beta)\, \frac{(x-1)^2}{(x^\beta - 1)(x^{1-\beta} - 1)}, \qquad \tilde{f}_\beta(x) = \frac{1}{2}\big(x^\beta + x^{1-\beta}\big), \qquad 0 < \beta < 1/2.$$

Let $\rho, A_1, A_2$ be fixed and set

$$F(f) := \det\big\{\mathrm{Cov}_\rho(A_i, A_j)\big\} - \det\big\{I_\rho^f(A_i, A_j)\big\}, \qquad F(\beta) := F(f_\beta).$$

The main result in [25] is its Theorem 5, which reads as follows: $F(\beta)$ is decreasing in $(0, 1/2)$ and $F(1/2) \ge 0$, so that $F(\beta) \ge 0$. The result was the final output of a rather complicated tour de force of calculations.

Look how simple the approach becomes using the $f \mapsto \tilde{f}$ correspondence. First of all, it is straightforward that

$$\tilde{f} \le \tilde{g} \implies F(f) \le F(g).$$

For fixed $x$, the function

$$\beta \mapsto \tilde{f}_\beta(x) = \frac{1}{2}\big(x^\beta + x^{1-\beta}\big)$$

is decreasing in $(0, 1/2)$, so that

$$\beta_1 \le \beta_2 \implies \tilde{f}_{\beta_1} \ge \tilde{f}_{\beta_2} \implies F(\beta_1) \ge F(\beta_2),$$

and Kosaki’s conclusion follows. One should read the complicated proof in [25] to fully appreciate the efficiency and clarity of the $f \mapsto \tilde{f}$ correspondence.
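The monotonicity claim at the heart of this argument is easy to check numerically (our sketch, not from the paper):

```python
import numpy as np

# beta -> f~_beta(x) = (x^beta + x^(1-beta)) / 2 is pointwise decreasing on (0, 1/2):
f_t = lambda b, x: (x**b + x**(1 - b)) / 2
xs = np.linspace(0.1, 10, 200)
betas = np.linspace(0.05, 0.45, 9)
for b1, b2 in zip(betas, betas[1:]):
    assert np.all(f_t(b1, xs) >= f_t(b2, xs))   # b1 < b2  =>  f~_{b1} >= f~_{b2}
```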

12. Refinements of Heisenberg Uncertainty Relations

In the literature, several quantities appear with the same aim: to measure quantum uncertainty. We discuss some examples in this paper. For instance, to quantify such uncertainty, Luo introduced the following state-observable quantity,

$$U_\rho(A) := \sqrt{V_\rho(A)^2 - \big(V_\rho(A) - I_\rho(A)\big)^2},$$

where $V_\rho(A) := \mathrm{Var}_\rho(A)$. Furthermore, he was able to prove the following inequality:

$$U_\rho(A)\cdot U_\rho(B) \ge \frac{1}{4}\, \big|\mathrm{Tr}\big(\rho[A, B]\big)\big|^2.$$

Clearly, this can be seen as a refinement of the Heisenberg uncertainty principle, because $\mathrm{Var}_\rho(A) \ge U_\rho(A)$.

After some failed attempts to generalize this result (see Kosaki [25], Remarks 3.2 and 3.3), Yanagi (see [26]) was able to prove a generalization that makes sense for the WYD information. He introduced the quantity

$$U_\rho^\beta(A) := \sqrt{V_\rho(A)^2 - \big(V_\rho(A) - I_\rho^\beta(A)\big)^2}$$

and was able to prove the inequality

$$U_\rho^\beta(A)\cdot U_\rho^\beta(B) \ge \beta(1-\beta)\, \big|\mathrm{Tr}\big(\rho[A, B]\big)\big|^2.$$

Note that $\beta(1-\beta) = f_\beta(0)$, where

$$f_\beta(x) = \beta(1-\beta)\, \frac{(x-1)^2}{(x^\beta - 1)(x^{1-\beta} - 1)}, \qquad 0 < \beta < 1/2,$$

is the function associated with the WYD information. It is straightforward to propose the $f$-dependent quantity

$$U_\rho^f(A) := \sqrt{V_\rho(A)^2 - \big(V_\rho(A) - I_\rho^f(A)\big)^2}$$

as a measure of quantum uncertainty and to try to prove the inequality

$$U_\rho^f(A)\cdot U_\rho^f(B) \ge f(0)\, \big|\mathrm{Tr}\big(\rho[A, B]\big)\big|^2, \qquad f \in \mathcal{F}_{op}^{\,r}.$$

Unfortunately, this inequality is false in general. Yanagi proved that the theorem holds true under a condition involving $\tilde{f}$; namely, we have the following:

Proposition 7. 

For $f \in \mathcal{F}_{op}^{\,r}$, if

$$\frac{x+1}{2} + \tilde{f}(x) \ge 2 f(x), \qquad x > 0,$$

then it holds that

$$U_\rho^f(A)\cdot U_\rho^f(B) \ge f(0)\, \big|\mathrm{Tr}\big(\rho[A, B]\big)\big|^2.$$

On the other hand, one can prove the following:

Proposition 8. 

For any $f \in \mathcal{F}_{op}^{\,r}$ and $x > 0$,

$$\tilde{f}(x)^2 \le \frac{1}{4}(x+1)^2 - f(0)^2\, (x-1)^2.$$

This has the following consequence:

Corollary 2. 

$$f(0)^2\, (x-y)^2 \le \frac{1}{4}(x+y)^2 - m_{\tilde{f}}(x, y)^2.$$
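Both inequalities can be probed numerically; the sketch below (ours, not from the paper) tests the SLD and Wigner–Yanase cases.

```python
import numpy as np

# Proposition 8: f~(x)^2 <= (x+1)^2 / 4 - f(0)^2 (x-1)^2, tested pointwise.
cases = [
    (lambda x: 2 * x / (x + 1), 0.5),   # f~ for the SLD function f(x) = (1+x)/2
    (np.sqrt, 0.25),                    # f~ for the Wigner-Yanase function
]
xs = np.linspace(1e-3, 50, 2000)
for f_t, f0 in cases:
    assert np.all(f_t(xs)**2 <= (xs + 1)**2 / 4 - f0**2 * (xs - 1)**2 + 1e-12)

# Corollary 2 (the homogeneous form): f(0)^2 (x-y)^2 <= (x+y)^2 / 4 - m_{f~}(x,y)^2.
x, y = 3.0, 0.2
for f_t, f0 in cases:
    m = y * f_t(x / y)                  # m_{f~}(x, y) = y * f~(x / y)
    assert f0**2 * (x - y)**2 <= (x + y)**2 / 4 - m**2
```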

From this, an unconditional inequality follows: if we switch from the constant $f(0)$ to the constant $f(0)^2$ (see [27]), we obtain the following:

Proposition 9. 

For $f \in \mathcal{F}_{op}^{\,r}$ and $A, B \in M_{n,sa}$, it holds that

$$U_\rho^f(A)\cdot U_\rho^f(B) \ge f(0)^2\cdot \big|\mathrm{Tr}\big(\rho[A, B]\big)\big|^2.$$

13. State Quantum Uncertainty Based on MASI

Luo proposed a notion of quantum uncertainty depending only on the state $\rho$. In the paper [28], starting from the Wigner–Yanase information and from an orthonormal basis $\{H_j\}$ of the space of observables, he introduced the quantity

$$Q_{WY}(\rho) := \sum_j I_\rho^{WY}(H_j)$$

as a measure of such uncertainty. Luo first proved that $Q_{WY}(\rho)$ is basis-independent, and then that

$$Q_{WY}(\rho) = \sum_{j<k} \big(\sqrt{\lambda_j} - \sqrt{\lambda_k}\big)^2,$$

where $\{\lambda_j\}$ is the spectrum of $\rho$. Applications of the quantity $Q_{WY}(\rho)$ also appear in [29].

If we remember that the WY information is the QFI associated with the functions

$$f_{WY}(x) := \left(\frac{1+\sqrt{x}}{2}\right)^2, \qquad \tilde{f}_{WY}(x) = \sqrt{x},$$

we obtain

$$Q_{WY}(\rho) = 2\sum_{j<k}\left[\frac{\lambda_j + \lambda_k}{2} - \sqrt{\lambda_j \lambda_k}\right] = 2\sum_{j<k}\big[m_a(\lambda_j, \lambda_k) - m_{\tilde{f}_{WY}}(\lambda_j, \lambda_k)\big],$$

where $m_a$ denotes the arithmetic mean.
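The sketch below (ours, not from the paper) verifies the spectral formula against a direct computation of $\sum_j I_\rho^{WY}(H_j)$ for a qubit, using the orthonormal basis $\{I, \sigma_1, \sigma_2, \sigma_3\}/\sqrt{2}$ of $M_{2,sa}$:

```python
import numpy as np
from itertools import combinations

def wy_skew(rho, H):
    """Wigner-Yanase skew information I_rho(H) = -1/2 Tr([sqrt(rho), H]^2)."""
    w, U = np.linalg.eigh(rho)
    s = (U * np.sqrt(w)) @ U.conj().T      # sqrt(rho)
    C = s @ H - H @ s
    return -0.5 * np.trace(C @ C).real

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [M / np.sqrt(2) for M in (I2, s1, s2, s3)]   # orthonormal basis of M_{2,sa}

lam = np.array([0.8, 0.2])
rho = np.diag(lam).astype(complex)
Q = sum(wy_skew(rho, H) for H in basis)

# Luo's spectral formula: Q_WY(rho) = sum_{j<k} (sqrt(l_j) - sqrt(l_k))^2
Q_spec = sum((np.sqrt(a) - np.sqrt(b))**2 for a, b in combinations(lam, 2))
assert np.isclose(Q, Q_spec)
```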

The above considerations lead naturally to the following questions:

(i) For a regular $f \in \mathcal{F}_{op}^{\,r}$, does the definition

$$Q_f(\rho) := \sum_j I_\rho^f(H_j)$$

produce a basis-independent function of the state $\rho$?

(ii) Assuming a positive answer to (i), we may also ask whether

$$Q_f(\rho) = 2\sum_{j<k}\big[m_a(\lambda_j, \lambda_k) - m_{\tilde{f}}(\lambda_j, \lambda_k)\big].$$

Both questions received a positive answer by Cai in the paper [30]. Once again, the $\tilde{f}$ function shows up when one looks at a general scheme for Quantum Fisher Information.

14. The Average Coherence of a Quantum State

In a recent paper [31], Fan, Li, and Luo attempted to study quantum coherence (an important feature of a quantum system) by eliminating the influence of a reference basis. They introduced the average quantum coherence using three procedures: (1) averaging over all orthonormal bases; (2) averaging over all elements of an operator orthonormal basis; (3) averaging over a complete family of MUBs (Mutually Unbiased Bases). The result of the paper is that these three different procedures produce the same quantity. The basic ingredient of the proof is the $f \mapsto \tilde{f}$ correspondence.

Indeed, if $\mathcal{E}$ is a quantum channel and $\{E_j\}$ are the Kraus operators of $\mathcal{E}$, the authors define a channel-dependent coherence as

$$C_f(\rho, \mathcal{E}) := \sum_j I_\rho^f(E_j).$$

In the first case, they consider the channel induced by a von Neumann measurement or, equivalently, by an orthonormal basis. Averaging over this reference basis is equivalent to integrating over the unitary orbit of a fixed basis, which amounts to using the normalized Haar measure on the unitary group $\mathcal{U}$ of the system's Hilbert space. They set

$$C_f^{U}(\rho) = \int_{\mathcal{U}} C_f\big(\rho \mid U \Pi U^\dagger\big)\, dU,$$

where $U \Pi U^\dagger = \{U |i\rangle\langle i| U^\dagger : i = 1, 2, \dots, d\}$. In the second case, they define

$$C_f^{ob}(\rho) = \frac{1}{d+1} \sum_{\alpha=1}^{d^2} I_\rho^f(X_\alpha),$$

where $\{X_\alpha : \alpha = 1, 2, \dots, d^2\}$ is a family of $d^2$ operators constituting an operator orthonormal basis of $\mathcal{L}(\mathcal{H})$, the space of all bounded linear operators on $\mathcal{H}$. This quantity can be proven to be independent of the chosen basis.

In the third case, they define the coherence $C_f^{mub}(\rho)$ by averaging over a complete family of MUBs, which surely exists if $d$ is a power of a prime number.

Finally, they proved the following result:

Theorem 6. 

For any state $\rho$ of any prime-power dimensional system and for any regular operator monotone function $f$, one has

$$C_f^{U}(\rho) = C_f^{ob}(\rho) = C_f^{mub}(\rho) = \frac{d - \mathrm{Tr}\big[m_{\tilde{f}}(L_\rho, R_\rho)\big]}{d+1}.$$

Note that if $\{\lambda_i\}$ are the eigenvalues of the state $\rho$, then $\mathrm{Tr}\big[m_{\tilde{f}}(L_\rho, R_\rho)\big] = \sum_{ij} m_{\tilde{f}}(\lambda_i, \lambda_j)$.
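The common value in Theorem 6 is cheap to evaluate from the spectrum; the sketch below (ours, not from the paper) checks the two extreme cases for the Wigner–Yanase choice $\tilde{f}(x) = \sqrt{x}$.

```python
import numpy as np

def avg_coherence(f_t, lam):
    """(d - Tr[m_{f~}(L_rho, R_rho)]) / (d + 1), with Tr[m_{f~}] = sum_ij m_{f~}(l_i, l_j)."""
    d = len(lam)
    m = lam[:, None] * f_t(lam[None, :] / lam[:, None])
    return (d - m.sum()) / (d + 1)

geo = np.sqrt                         # f~ for the Wigner-Yanase case
d = 3

# Maximally mixed state: sum_ij m(1/d, 1/d) = d, so the average coherence vanishes:
assert np.isclose(avg_coherence(geo, np.ones(d) / d), 0.0)

# Nearly pure state: the value approaches the maximum (d - 1) / (d + 1):
lam = np.array([1 - 2e-9, 1e-9, 1e-9])
assert np.isclose(avg_coherence(geo, lam), (d - 1) / (d + 1), atol=1e-4)
```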

15. Conclusions

The notion of means has its roots deeply situated in the history of Western mathematics; the Greeks themselves knew eleven different types of means. Still, the subject is currently undergoing strong development. As an example, starting from the work of Rao [32,33] and Prakasa Rao [34,35], the Jensen inequality for numerical and operator means has been proven, and more generalizations seem to be on their way [4,36,37].

The $f \mapsto \tilde{f}$ correspondence is indeed a correspondence between means. From the work of Petz, we know that two of the basic objects of Quantum Probability and Quantum Statistics, namely Quantum Covariance and Quantum Fisher Information, are necessarily built on the notion of operator mean, and this explains the variety of applications of the $f \mapsto \tilde{f}$ correspondence described in this paper.

It is rational to expect that this is not the end of the story and that many other applications will appear in the coming years.

At the moment, the most promising area is the extension of the Petz–Kubo–Ando theorem to states that are neither faithful nor pure. We expect this could be performed using the formula

$$\frac{f(0)}{2}\, \big\langle i[\rho, A],\, i[\rho, B] \big\rangle_{\rho,f} = \mathrm{Cov}_\rho(A, B) - \mathrm{Cov}_\rho^{\tilde{f}}(A, B) \qquad (21)$$

for regular $f$. The right-hand side makes sense for any state and, on the other hand, by a continuity-approximation argument, the formula is, somehow, forced to be unique; indeed, we can approximate any state by faithful states. A fully satisfying theorem would certainly deduce the scalar product

$$\mathrm{Cov}_\rho(A, B) - \mathrm{Cov}_\rho^{\tilde{f}}(A, B) \qquad (22)$$

from first principles, as in Petz's proof of the PKA theorem. At the moment, such a theorem has not yet been proven.

Conflicts of Interest

The author declares no conflicts of interest.

Funding Statement

This research received no external funding.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

References

  • 1. Čencov N.N. Statistical Decision Rules and Optimal Inference. American Mathematical Society; Providence, RI, USA: 1982.
  • 2. Morozova E.A., Chentsov N.N. Markov invariant geometry on state manifolds. Itogi Nauk. Tekhniki. 1990;36:69–102. (In Russian)
  • 3. Petz D., Temesi R. Means of positive numbers and matrices. SIAM J. Matrix Anal. Appl. 2005;27:712–720. doi: 10.1137/050621906.
  • 4. Gibilisco P., Hansen F. An inequality for expectation of means of positive random variables. Ann. Funct. Anal. 2017;8:142–151. doi: 10.1215/20088752-3750087.
  • 5. Gibilisco P., Hansen F., Isola T. On a correspondence between regular and non-regular operator monotone functions. Linear Algebra Appl. 2009;430:2225–2232. doi: 10.1016/j.laa.2008.11.022.
  • 6. Kubo F., Ando T. Means of positive linear operators. Math. Ann. 1979;246:205–224. doi: 10.1007/BF01371042.
  • 7. Petz D. Monotone metrics on matrix spaces. Linear Algebra Appl. 1996;244:81–96. doi: 10.1016/0024-3795(94)00211-8.
  • 8. Hansen F. Metric adjusted skew information. Proc. Natl. Acad. Sci. USA. 2008;105:9909–9916. doi: 10.1073/pnas.0803323105.
  • 9. Petz D., Sudár C. Geometry of quantum states. J. Math. Phys. 1996;37:2662–2673. doi: 10.1063/1.531535.
  • 10. Hasegawa H., Petz D. Non-commutative extension of information geometry II. In: Hirota O., editor. Quantum Communication, Computing and Measurement. Springer; Berlin/Heidelberg, Germany: 1997.
  • 11. Szabó V.E.S. A class of matrix monotone functions. Linear Algebra Appl. 2007;420:79–85. doi: 10.1016/j.laa.2006.06.027.
  • 12. Petz D. Covariance and Fisher information in quantum mechanics. J. Phys. A Math. Gen. 2002;35:929. doi: 10.1088/0305-4470/35/4/305.
  • 13. Gibilisco P., Imparato D., Isola T. Uncertainty principle and quantum Fisher information II. J. Math. Phys. 2007;48:072109. doi: 10.1063/1.2748210.
  • 14. Gibilisco P., Girolami D., Hansen F. A unified approach to Local Quantum Uncertainty and Interferometric Power by Metric Adjusted Skew Information. Entropy. 2021;23:263. doi: 10.3390/e23030263.
  • 15. Petz D. Quasi-entropies for states of a von Neumann algebra. Publ. Res. Inst. Math. Sci. 1985;21:781–800. doi: 10.2977/prims/1195178929.
  • 16. Csiszár I. Information type measure of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar. 1967;2:299–318.
  • 17. Petz D., Szabó V.E.S. From quasi entropy to skew information. Int. J. Math. 2009;20:1421–1430. doi: 10.1142/S0129167X09005832.
  • 18. Luo S. Wigner–Yanase skew information vs. quantum Fisher information. Proc. Am. Math. Soc. 2003;132:885–890. doi: 10.1090/S0002-9939-03-07175-2.
  • 19. Gibilisco P., Imparato D., Isola T. Inequalities for quantum Fisher information. Proc. Am. Math. Soc. 2009;137:317–327. doi: 10.1090/S0002-9939-08-09447-1.
  • 20. Robertson H.P. An indeterminacy relation for several observables and its classical interpretation. Phys. Rev. 1934;46:794–801. doi: 10.1103/PhysRev.46.794.
  • 21. Schrödinger E. About Heisenberg uncertainty relation (original annotation by Angelow A. and Batoni M.C.). Bulgar. J. Phys. 1999;26:193–203. Translation of Proc. Prussian Acad. Sci. Phys. Math. Sect. 1930;19:296–303.
  • 22. Gibilisco P., Imparato D., Isola T. A Robertson-type uncertainty principle and quantum Fisher information. Linear Algebra Appl. 2008;428:1706–1724. doi: 10.1016/j.laa.2007.10.013.
  • 23. Gibilisco P., Isola T. How to distinguish quantum covariances using uncertainty relations. J. Math. Anal. Appl. 2011;384:670–676. doi: 10.1016/j.jmaa.2011.06.016.
  • 24. Andai A. Uncertainty principle with quantum Fisher information. J. Math. Phys. 2008;49:012106. doi: 10.1063/1.2830429.
  • 25. Kosaki H. Matrix trace inequality related to uncertainty principle. Int. J. Math. 2005;16:629–645. doi: 10.1142/S0129167X0500303X.
  • 26. Yanagi K. Metric adjusted skew information and uncertainty relation. J. Math. Anal. Appl. 2011;380:888–892. doi: 10.1016/j.jmaa.2011.03.068.
  • 27. Gibilisco P., Isola T. On a refinement of Heisenberg uncertainty relation by means of quantum Fisher information. J. Math. Anal. Appl. 2011;375:270–275. doi: 10.1016/j.jmaa.2010.09.029.
  • 28. Luo S. Quantum uncertainty of mixed states based on skew information. Phys. Rev. A. 2006;73:022324. doi: 10.1103/PhysRevA.73.022324.
  • 29. Luo S., Fu S., Oh C.H. Quantifying correlations via the Wigner–Yanase skew information. Phys. Rev. A. 2012;85:032117. doi: 10.1103/PhysRevA.85.032117.
  • 30. Cai L. Quantum uncertainty based on metric adjusted skew information. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2018;21:1850006. doi: 10.1142/S0219025718500066.
  • 31. Fan Y., Li N., Luo S. Average coherence and entropy. Phys. Rev. A. 2023;108:052406. doi: 10.1103/PhysRevA.108.052406.
  • 32. Rao C.R. R.A. Fisher: The founder of modern statistics. Stat. Sci. 1992;7:34–48. doi: 10.1214/ss/1177011442.
  • 33. Rao C.R. Seven inequalities in statistical estimation theory. Student. 1996;1:149–158.
  • 34. Prakasa Rao B.L.S. On a matrix version of a covariance inequality. Gujarat Stat. Rev. 1990;17A:150–159.
  • 35. Prakasa Rao B.L.S. Inequality for random matrices with an application to statistical inference. Student. 2000;3:198–202.
  • 36. Gibilisco P. About the Jensen inequality for numerical n-means. Int. J. Mod. Phys. A. 2022;37:2243010. doi: 10.1142/S0217751X22430102.
  • 37. Prakasa Rao B.L.S. On some matrix versions of covariance, harmonic mean and other inequalities: An overview. In: Applied Linear Algebra, Probability and Statistics: A Volume in Honour of C.R. Rao and Arbind K. Lal. Springer; Berlin/Heidelberg, Germany: 2023. pp. 1–10.
