Entropy. 2021 Nov 17;23(11):1527. doi: 10.3390/e23111527

Limits to Perception by Quantum Monitoring with Finite Efficiency

Luis Pedro García-Pintos 1,2,*, Adolfo del Campo 1,3,4,5,*
Editor: Ronnie Kosloff
PMCID: PMC8624899  PMID: 34828225

Abstract

We formulate limits to perception under continuous quantum measurements by comparing the quantum states assigned by agents that have partial access to measurement outcomes. To this end, we provide bounds on the trace distance and the relative entropy between the assigned state and the actual state of the system. These bounds are expressed solely in terms of the purity and von Neumann entropy of the state assigned by the agent, and are shown to characterize how an agent’s perception of the system is altered by access to additional information. We apply our results to Gaussian states and to the dynamics of a system embedded in an environment illustrated on a quantum Ising chain.

Keywords: quantum monitoring, Quantum Darwinism, continuous quantum measurements


Quantum theory rests on the fact that the quantum state of a system encodes all predictions of possible measurements, as well as the system's subsequent evolution. In general, however, different agents may assign different states to the same system, depending on their knowledge of it. Complete information about the physical state of a system is equated with pure states, mathematically modeled by unit vectors in Hilbert space. By contrast, mixed states correspond to incomplete descriptions of the system, either due to uncertainties in the preparation or due to the system being correlated with secondary systems. In this paper, we address how the perception of a system differs among observers with different levels of knowledge. Specifically, we quantify how different the effective descriptions that two agents provide of the same system can be when they acquire information through continuous measurements.

Consider a monitored quantum system, that is, a system being continuously measured in time. An omniscient agent O is assumed to know all interactions and measurements performed on the system. In particular, she has access to all outcomes of the measurements. As such, O has a complete description of the system in terms of a pure state, ρtO = (ρtO)².

While not necessary for subsequent results, we model such a monitoring process by continuous quantum measurements [1,2,3] as a natural test-bed with experimental relevance [4,5,6]. For ideal continuous quantum measurements, the state ρtO satisfies a stochastic equation dictating its change,

d\rho_t^O = -i[H,\rho_t^O]\,dt + \Lambda[\rho_t^O]\,dt + \sum_\alpha \mathcal{I}[A_\alpha]\rho_t^O\,dW_t^\alpha. (1)

The dephasing superoperator ΛρtO is of Lindblad form,

\Lambda[\rho_t^O] = -\sum_\alpha \frac{1}{8\tau_m^\alpha}\big[A_\alpha,[A_\alpha,\rho_t^O]\big] (2)

for the set of measured physical observables {Aα}, and the “innovation terms” are given by

\mathcal{I}[A_\alpha]\rho_t^O = \frac{1}{\sqrt{4\tau_m^\alpha}}\Big(\{A_\alpha,\rho_t^O\} - 2\,\mathrm{Tr}\big(A_\alpha\rho_t^O\big)\,\rho_t^O\Big). (3)

The latter account for the information about the system acquired during the monitoring process, and model the quantum back-action on the state during a measurement. The characteristic measurement times τmα depend on the strength of the measurements, and characterize the time over which information about the observable Aα is acquired. The terms dWtα are independent Gaussian random variables (Wiener increments) with zero mean and variance dt.
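As a concrete illustration of Equations (1)–(3), the stochastic master equation can be integrated with a simple Euler–Maruyama scheme. The sketch below, in Python, monitors σz on a qubit; the Hamiltonian, time step, and measurement time are illustrative choices, not parameters from the text. For an initially pure state, ideal monitoring should preserve the trace exactly and keep the purity close to one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pauli operators; Hamiltonian and time scales are illustrative choices
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx
tau_m = 1.0          # characteristic measurement time
dt, steps = 1e-4, 2000

def comm(a, b):
    return a @ b - b @ a

def dephasing(rho):
    # Lindblad term of Eq. (2) for a single observable A = sigma_z
    return -comm(sz, comm(sz, rho)) / (8 * tau_m)

def innovation(rho):
    # innovation term of Eq. (3)
    ev = np.trace(sz @ rho).real
    return (sz @ rho + rho @ sz - 2 * ev * rho) / np.sqrt(4 * tau_m)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # pure |+> state
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment, variance dt
    rho = rho + (-1j * comm(H, rho) + dephasing(rho)) * dt + innovation(rho) * dW
    rho = 0.5 * (rho + rho.conj().T)    # enforce Hermiticity against round-off

trace = np.trace(rho).real
purity = np.trace(rho @ rho).real
print(trace, purity)
```

Both the commutator and innovation terms are traceless, so the trace stays at one up to floating-point error; the purity fluctuates around one only because of the finite time step.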

An agent A without access to the measurement outcomes possesses a different, incomplete description of the state of the system. The need to average over the unknown results implies that the state ρtA assigned by A satisfies the master equation

d\rho_t^A = -i[H,\rho_t^A]\,dt + \Lambda[\rho_t^A]\,dt, (4)

obtained from (1) by using that ⟨dWtα⟩ = 0, where ⟨·⟩ denotes averages over realizations of the measurement process [1]. Assuming that agent A knows the initial state of the system before the measurement process, ρ0O = ρ0A, the state that she assigns at later times is ρtA ≡ ⟨ρtO⟩.

As a result of the incomplete description of the state of the system, agent A suffers from a growing uncertainty in the predictions of measurement outcomes. We quantify this by means of two figures of merit: the trace distance and the relative entropy.

The trace distance between states σ1 and σ2 is defined as

D(\sigma_1,\sigma_2) = \frac{1}{2}\,\|\sigma_1 - \sigma_2\|_1, (5)

where the trace norm of an operator with spectral decomposition A = \sum_j \lambda_j |j\rangle\langle j| is \|A\|_1 = \sum_j |\lambda_j|. Its operational meaning derives from the fact that the trace distance characterizes the maximum difference in the probability of outcomes for any measurement on the states σ1 and σ2:

D(\sigma_1,\sigma_2) = \max_{0\le P\le \mathbb{1}} \big|\mathrm{Tr}(P\sigma_1) - \mathrm{Tr}(P\sigma_2)\big|, (6)

where the maximization is over all positive operators P bounded by the identity, i.e., over all possible measurement effects. The trace distance also quantifies the probability p of successfully guessing, with a single measurement instance, the correct state in a scenario with equal prior probabilities for the states σ1 and σ2. The best conceivable protocol succeeds with probability p = \frac{1}{2}\big(1 + D(\sigma_1,\sigma_2)\big). Thus, if two states are close in trace distance, they are hard to distinguish under any conceivable measurement [7,8,9].
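The definitions above are straightforward to evaluate numerically from the spectrum of the difference of the two states. A minimal sketch, with states chosen purely for illustration:

```python
import numpy as np

def trace_distance(s1, s2):
    # D = (1/2) * sum of |eigenvalues| of (s1 - s2), Eq. (5)
    eigs = np.linalg.eigvalsh(s1 - s2)
    return 0.5 * np.abs(eigs).sum()

# orthogonal pure states are perfectly distinguishable: D = 1
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
rho0, rho1 = ket0 @ ket0.T, ket1 @ ket1.T
D_orth = trace_distance(rho0, rho1)

# optimal single-shot guessing probability p = (1 + D)/2
mixed = 0.5 * np.eye(2)
D_mix = trace_distance(rho0, mixed)
p_guess = 0.5 * (1 + D_mix)
print(D_orth, D_mix, p_guess)   # 1.0, 0.5, 0.75
```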

The relative entropy also serves as a figure of merit to quantify the distance between probability distributions, in particular characterizing the extent to which one distribution can encode information contained in the other one [10]. In the quantum case, the relative entropy is defined as:

S(\sigma_1\|\sigma_2) \equiv \mathrm{Tr}(\sigma_1\log\sigma_1) - \mathrm{Tr}(\sigma_1\log\sigma_2). (7)

In a hypothesis-testing scenario between states σ1 and σ2, the probability p_N of wrongly believing that σ2 is the correct state scales as p_N \sim e^{-N S(\sigma_1\|\sigma_2)} in the limit of large N, where N is the number of copies of the state available to measure on [11,12]. That is, σ2 is easily confused with σ1 if S(\sigma_1\|\sigma_2) is small [13,14].
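For commuting states, the quantum relative entropy of Equation (7) reduces to the classical Kullback–Leibler divergence of the eigenvalue distributions, which provides a simple sanity check. A sketch, using natural logarithms (nats) and illustrative distributions:

```python
import numpy as np

def vn_log(rho):
    # matrix logarithm of a positive-definite Hermitian matrix via eigendecomposition
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * np.log(vals)) @ vecs.conj().T

def relative_entropy(s1, s2):
    # S(s1 || s2) = Tr s1 log s1 - Tr s1 log s2, Eq. (7)
    return np.trace(s1 @ (vn_log(s1) - vn_log(s2))).real

p = np.diag([0.9, 0.1])
q = np.diag([0.5, 0.5])
S = relative_entropy(p, q)

# classical KL divergence of (0.9, 0.1) versus (0.5, 0.5)
kl = 0.9 * np.log(0.9 / 0.5) + 0.1 * np.log(0.1 / 0.5)
print(S, kl)   # equal
```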

1. Quantum Limits to Perception

Lack of knowledge of the outcomes from measurements performed on the system induces A to assign an incomplete, mixed state to the system. This hinders the agent's perception of the system (see illustration in Figure 1). We quantify this by the trace distance and the relative entropy.

Figure 1.

Illustration of the varying degrees of perception by different agents. The amount of information that an agent possesses of a system can drastically alter its perception, as the expectations of outcomes for measurements performed on the system can differ. (a) The state ρtO assigned by omniscient agent O, who has full access to the measurement outcomes, corresponds to a complete pure-state description of the system. O thus has the most accurate predictive power. (b) An agent A completely ignorant of measurement outcomes possesses the most incomplete description of the system. (c) A continuous transition between the two descriptions, corresponding to the worst and most complete perceptions of the system respectively, is obtained by considering an agent B with partial access to the measurement outcomes of the monitoring process.

We are interested in comparing A's incomplete description to the pure state ρTO assigned by O, i.e., to the complete description. Under ideal monitoring of a quantum system, the state ρTO remains pure at all times. Therefore, the following holds [7]:

1 - \sqrt{\mathrm{Tr}\big(\rho_T^O\rho_T^A\big)} \;\le\; D\big(\rho_T^O,\rho_T^A\big) \;\le\; \sqrt{1 - \mathrm{Tr}\big(\rho_T^O\rho_T^A\big)}. (8)

One can then directly relate the average trace distance to the purity P(\rho_T^A) \equiv \mathrm{Tr}\big[(\rho_T^A)^2\big] of the state ρTA as

1 - \sqrt{P\big(\rho_T^A\big)} \;\le\; \big\langle D\big(\rho_T^O,\rho_T^A\big)\big\rangle \;\le\; \sqrt{1 - P\big(\rho_T^A\big)}, (9)

by using Jensen's inequality and the fact that the square root is concave. The level of mixedness of the state ρTA that A assigns to the system thus provides lower and upper bounds on the average probability of error that she incurs in guessing the actual state of the system ρTO. This provides an operational meaning to the purity of a quantum state, as a quantifier of the average trace distance between the state ρtO and the average post-measurement state ρtA.
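The two-sided bound in Equation (8), on which Equation (9) relies, can be checked numerically for a pure ρTO and a mixed ρTA. The sampling scheme below is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_distance(s1, s2):
    return 0.5 * np.abs(np.linalg.eigvalsh(s1 - s2)).sum()

def random_pure(d):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

ok = True
for _ in range(200):
    rho_O = random_pure(4)                              # omniscient (pure) description
    rho_A = 0.6 * random_pure(4) + 0.4 * np.eye(4) / 4  # a mixed description
    overlap = np.trace(rho_O @ rho_A).real
    D = trace_distance(rho_O, rho_A)
    ok &= (1 - np.sqrt(overlap) <= D + 1e-10) and (D <= np.sqrt(1 - overlap) + 1e-10)
print(ok)   # True: Eq. (8) holds in every sampled case
```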

To appreciate the timescale over which the average trace distance evolves, we note that at short times

\frac{T}{\tau_D} \;\lesssim\; \big\langle D\big(\rho_T^O,\rho_T^A\big)\big\rangle \;\lesssim\; \sqrt{\frac{2T}{\tau_D}}, (10)

where the decoherence rate is given by [15,16]

\frac{1}{\tau_D} = \sum_\alpha \frac{1}{4\tau_m^\alpha}\,\mathrm{Var}_{\rho_0^A}(A_\alpha), (11)

in terms of the variances Var_{ρ0A}(Aα) of the measured observables over the initial pure state ρ0A. Analogous bounds can be derived at arbitrary times of evolution for the difference in perceptions among various agents (see Appendix A).

For the case of the quantum relative entropy between states of complete and incomplete knowledge, the following identity holds:

\big\langle S\big(\rho_t^O\|\rho_t^A\big)\big\rangle = S\big(\rho_t^A\big), (12)

proven by using that ρtO is pure and that the von Neumann entropy of a state σ is S(σ) := -\mathrm{Tr}(\sigma\log\sigma). Thus, the entropy of the state assigned by agent A fully determines the average relative entropy with respect to the complete description ρtO (alternative interpretations of this quantity have been given in [17,18]).
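The identity in Equation (12) can be verified for any ensemble of pure states averaging to a mixed state: since S(ρtO) = 0 for pure states, the ensemble average of the relative entropy equals the von Neumann entropy of the mixture by linearity of the trace. A sketch with a randomly generated ensemble standing in for the conditioned states:

```python
import numpy as np

rng = np.random.default_rng(7)

def vn_log(rho):
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * np.log(vals)) @ vecs.conj().T

# an ensemble of pure states standing in for the conditioned states rho_t^O;
# their uniform mixture plays the role of rho_t^A
ensemble = []
for _ in range(50):
    psi = rng.normal(size=3) + 1j * rng.normal(size=3)
    psi /= np.linalg.norm(psi)
    ensemble.append(np.outer(psi, psi.conj()))
rho_A = np.mean(ensemble, axis=0)

log_A = vn_log(rho_A)
S_A = -np.trace(rho_A @ log_A).real                               # S(rho_A)
avg_rel = np.mean([-np.trace(r @ log_A).real for r in ensemble])  # <S(rho_O || rho_A)>
print(S_A, avg_rel)   # identical, as in Eq. (12)
```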

Similar calculations allow us to bound the variances of D(ρTO,ρTA) and of S(ρtO||ρtA) as well. The variance of the trace distance, \Delta D_T^2 \equiv \big\langle D^2(\rho_T^O,\rho_T^A)\big\rangle - \big\langle D(\rho_T^O,\rho_T^A)\big\rangle^2, satisfies

\Delta D_T^2 \;\le\; P\big(\rho_T^A\big) - P\big(\rho_T^A\big)^2, (13)

while for the variance of the relative entropy it holds that

\Delta S^2\big(\rho_t^O\|\rho_t^A\big) \;\le\; \mathrm{Tr}\big(\rho_t^A\log^2\rho_t^A\big) - S^2\big(\rho_t^A\big). (14)

The right-hand side of this inequality admits a classical interpretation in terms of the variance of the surprisal (−log p_j) over the eigenvalues p_j of ρtA [14]. We thus find that, at the level of a single realization, the dispersion of the relative entropy between the states assigned by the agents O and A is upper bounded by the variance of the surprisal in the description of A. The latter naturally vanishes when ρtA is pure, and increases as the state becomes more mixed.

2. Transition to Complete Descriptions

So far, we considered the extreme case of comparing the states assigned by A, who is in complete ignorance of the measurement outcomes, and by an omniscient agent O. One can in fact consider a continuous transition between these limiting cases, i.e., as the accuracy in the perception of the monitored system by an agent is enhanced, as illustrated in Figure 1. Consider a third agent B, with access to a fraction of the measurement output. This can be modeled by introducing a filter function η(α) ∈ [0,1] characterizing the efficiency of the measurement channels in Equation (1) [1]. Then, the dynamics of the state ρtB is dictated by

d\rho_t^B = -i[H,\rho_t^B]\,dt + \Lambda[\rho_t^B]\,dt + \sum_\alpha \sqrt{\eta(\alpha)}\,\mathcal{I}[A_\alpha]\rho_t^B\,dV_t^\alpha, (15)

with dVtα the Wiener noises for observer B. It holds that ρtB = ⟨ρtO⟩_B, where the average is now over the outcomes obtained by O that are unknown to B [1].

Note that the case of null measurement efficiencies, η(α) = 0, gives exactly the same dynamics as that of a system in which the monitored observables {Aα} are coupled to environmental degrees of freedom, producing dephasing [19,20]. Equations (15) and (1) then correspond to unravellings in which partial or full access to environmental degrees of freedom allows one to learn the state of the system by conditioning on the state observed in the environment. Therefore, knowing how ⟨D(ρtB,ρtO)⟩ and ⟨S(ρtO||ρtB)⟩ decrease as η increases directly informs how much the description of an open system can be improved by observing a fraction of the environment. This is reminiscent of the Quantum Darwinism approach, whereby fractions of the environment encode objective approximate descriptions of the system. While in the Darwinistic framework the focus is on environmental correlations, we focus on the state of the system itself.

The results of the previous section hold for the partial-ignorance state ρtB as well:

1 - \sqrt{P\big(\rho_T^B\big)} \;\le\; \big\langle D\big(\rho_T^O,\rho_T^B\big)\big\rangle_B \;\le\; \sqrt{1 - P\big(\rho_T^B\big)}, (16a)
\big\langle S\big(\rho_t^O\|\rho_t^B\big)\big\rangle_B = S\big(\rho_t^B\big). (16b)

Similar extensions are obtained for the variances. This allows exploring the transition from the incomplete description of A to a complete description of the state of the system as η → 1. Note that these results hold for each realization of a trajectory of B's state ρtB, and that if one averages over the measurement outcomes unknown to both agents A and B, Equation (16b) gives ⟨S(ρtO||ρtB)⟩ = ⟨S(ρtB)⟩.
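The relation ρtB = ⟨ρtO⟩_B underlying these results can be illustrated numerically: averaging Euler–Maruyama trajectories of Equation (15) over the stochastic outcomes recovers the deterministic solution of Equation (4). The qubit, parameters, and trajectory count below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx
tau_m, eta = 1.0, 0.5          # measurement time and efficiency (example values)
dt, steps, n_traj = 1e-3, 300, 400

def comm(a, b):
    return a @ b - b @ a

def dephasing(rho):
    return -comm(sz, comm(sz, rho)) / (8 * tau_m)

def innovation(rho):
    ev = np.trace(sz @ rho).real
    return (sz @ rho + rho @ sz - 2 * ev * rho) / np.sqrt(4 * tau_m)

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

# deterministic master equation, Eq. (4): agent A's state
rho_A = rho0.copy()
for _ in range(steps):
    rho_A = rho_A + (-1j * comm(H, rho_A) + dephasing(rho_A)) * dt

# average of B's stochastic trajectories, Eq. (15)
avg = np.zeros_like(rho0)
for _ in range(n_traj):
    rho = rho0.copy()
    for _ in range(steps):
        dV = rng.normal(0.0, np.sqrt(dt))
        rho = rho + (-1j * comm(H, rho) + dephasing(rho)) * dt \
                  + np.sqrt(eta) * innovation(rho) * dV
    avg += rho / n_traj

err = 0.5 * np.abs(np.linalg.eigvalsh(avg - rho_A)).sum()
print(err)   # small: averaging over B's unknown outcomes recovers A's description
```

The residual discrepancy is purely statistical and shrinks as the number of trajectories grows.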

These results allow one to compare the descriptions of different agents that jointly monitor a system [1,20,21,22,23]. We show in Appendix A that

\mathrm{Tr}\big[(\rho_T^A)^2\big] - \mathrm{Tr}\big[(\rho_T^B)^2\big] \;\le\; \big\langle D\big(\rho_T^A,\rho_T^B\big)\big\rangle_{AB} \;\le\; \sqrt{1 - \mathrm{Tr}\big[(\rho_T^A)^2\big]} + \sqrt{1 - \mathrm{Tr}\big[(\rho_T^B)^2\big]}. (17)

The joint monitoring of a system by independent observers has been realized experimentally in [24,25].

3. Illustrations

3.1. Evolution of the Limits to Perception

Consider a 1D transverse field Ising model, with the Hamiltonian

H = h\sum_{j=1}^{N}\sigma_j^x + J\sum_{j=1}^{N-1}\sigma_j^z\sigma_{j+1}^z, (18)

where σjx and σjz denote Pauli matrices in the x and z directions, and {h, J} denote coupling strengths.
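The Hamiltonian of Equation (18) can be assembled by embedding single-site Pauli operators with Kronecker products; this construction is standard and independent of the simulation details used in the paper. A sketch with the sign convention as written in Equation (18):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def op_at(op, j, N):
    # embed a single-site operator at site j of an N-spin chain
    mats = [np.eye(2, dtype=complex)] * N
    mats[j] = op
    return reduce(np.kron, mats)

def ising_hamiltonian(N, h, J):
    # transverse-field term plus nearest-neighbour zz coupling, Eq. (18)
    H = sum(h * op_at(sx, j, N) for j in range(N))
    H += sum(J * op_at(sz, j, N) @ op_at(sz, j + 1, N) for j in range(N - 1))
    return H

H = ising_hamiltonian(4, 1.0, 0.5)
print(H.shape)   # (16, 16)
```

The monitored observables {σjz} of the main text are obtained from the same `op_at` helper.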

We study the case of observer O monitoring the individual spin z components. Equation (1) thus governs the evolution of the state ρtO, with {Aα} = {σjz}. Meanwhile, the state assigned by observers with partial access to measurement outcomes follows Equation (15). The case η(j) = 0 gives dynamics equivalent to that of an Ising chain in which individual spins couple to environmental degrees of freedom via σjz, producing dephasing.

Figure 2 illustrates the evolution of the average relative entropy ⟨S(ρtO||ρtB)⟩ between the complete description and B's partial one, for different values of the monitoring efficiency η. The average ⟨·⟩ is over all measurement outcomes. Analogous results for the average trace distance can be found in Appendix A. The dynamics are simulated by implementing the monitoring process as a sequence of weak measurements, which can be modeled by Kraus operators acting on the state of the system. Specifically, the evolution of ρtO and the corresponding state ρtB with partial measurements is numerically obtained by assuming two independent measurement processes, as in [1].

Figure 2.

Evolution of the average relative entropy. Simulated evolution of the average ⟨S(ρtO||ρtB)⟩ = ⟨S(ρtB)⟩ of the relative entropy between complete and incomplete descriptions for a spin chain initially in a paramagnetic state on which individual spin components σjz are monitored. Here ⟨·⟩ denotes an average over all measurement outcomes, and ρtB = ⟨ρtO⟩_B is the state assigned by agent B after discarding the outcomes unknown to him. The simulation corresponds to N = 6 spins, with couplings Jτm = hτm = 1/2. For η = 0 (black continuous curve), agent A, without any access to the measurement outcomes, has the most incomplete description of the system. For η = 0.5 (red dashed curve), B gets closer to the complete description of the state of the system, after gaining access to partial measurement results. Finally, when η = 0.9 (blue dotted curve), access to enough information provides B with an almost complete description of the state. Importantly, in all cases the agent can estimate how far the description possessed is from the complete one solely in terms of the entropy S(ρtB).

3.2. Transition to Complete Descriptions

Consider the case of a one-dimensional harmonic oscillator with position and momentum operators X and P. We assume agent B is monitoring the position of the oscillator with an efficiency η. The dynamics is dictated by Equation (15) for the case of a single monitored observable X, and can be determined by a set of differential equations on the moments of the Gaussian state ρtB [1,21].

We prove in Appendix A that, at long times, the purity of the density matrix has a simple expression in terms of the measurement efficiency, P(ρTB) = √η. Equation (16) and properties of Gaussian states [22,23,24,25,26] then imply

1 - \eta^{1/4} \;\le\; \big\langle D\big(\rho_T^O,\rho_T^B\big)\big\rangle_B \;\le\; \sqrt{1 - \sqrt{\eta}}, (19)

and

\big\langle S\big(\rho_t^O\|\rho_t^B\big)\big\rangle_B = \left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right) - \left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right). (20)

See [27] for further results on the gains in purity that can be obtained from conditioning on measurement outcomes in Gaussian systems. Figure 3 depicts the trace distance ⟨D(ρtB,ρtO)⟩_B and the relative entropy ⟨S(ρtO||ρtB)⟩_B as a function of the efficiency of B's measurement process, illustrating the transition from least accurate perception to the most accurate perception and optimal predictive power as η → 1. Note that, since both the bounds on the trace distance and the relative entropy are independent of the parameters of the model in this example, the transition to the most accurate perception of the system is solely a function of the measurement efficiency. The figures show that most of the knowledge of the state of the system is gained close to η = 0, and that this gain decreases for larger values of η. This observation is confirmed by explicit computation using the relative entropy, which satisfies \frac{d}{d\eta}\big\langle S\big(\rho_t^O\|\rho_t^B\big)\big\rangle_B = \log\!\left(\frac{1-\sqrt{\eta}}{1+\sqrt{\eta}}\right)\Big/\big(4\eta^{3/2}\big). Thus, its rate of change, and with it the information gain, diverges for η → 0 as a power law, \frac{d}{d\eta}\big\langle S\big(\rho_t^O\|\rho_t^B\big)\big\rangle_B = -\left(\frac{1}{2\eta}+\frac{1}{6}\right)+O(\eta), while it becomes essentially constant for intermediate values of η. In the transition to the most accurate perception the effective description of the system changes from a mixed to a pure state, and the rate of information gain diverges as η → 1 as well.
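The monotonic gain described here can be checked directly from the closed forms: a numerical derivative of Equation (20) reproduces the stated expression for d⟨S⟩/dη. A sketch (the value of η is arbitrary):

```python
import numpy as np

def rel_entropy_avg(eta):
    # Eq. (20): long-time average relative entropy, using purity sqrt(eta)
    a = 0.5 / np.sqrt(eta) + 0.5
    b = 0.5 / np.sqrt(eta) - 0.5
    if b <= 0:          # eta = 1: pure state, zero entropy
        return 0.0
    return a * np.log(a) - b * np.log(b)

eta = 0.3
# central finite difference of Eq. (20)
num = (rel_entropy_avg(eta + 1e-6) - rel_entropy_avg(eta - 1e-6)) / 2e-6
# closed form: log((1 - sqrt(eta)) / (1 + sqrt(eta))) / (4 eta^{3/2})
closed = np.log((1 - np.sqrt(eta)) / (1 + np.sqrt(eta))) / (4 * eta**1.5)
print(num, closed)   # agree; negative, so the entropy decreases with eta
```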

Figure 3.

Transition between levels of perception. Bounds on the average trace distance (left) and the average relative entropy (right) as a function of the measurement efficiency, for a harmonic oscillator undergoing monitoring of its position. For such a system, the purity of the state ρtB depends solely on the measurement efficiency with which observer B monitors the system. This illustrates the transition from complete ignorance of the outcomes of measurements performed (η = 0) to the most complete description as η → 1, the situation with the most accurate perception. Information is used most efficiently when a small fraction of the measurement output is incorporated at η ≪ 1, as then both ⟨D(ρtB,ρtO)⟩ and the relative entropy ⟨S(ρtO||ρtB)⟩ decay rapidly.

4. Discussion

Different levels of information about a system amount to different effective descriptions. We studied these different descriptions for the case of a system being monitored by an observer, and compared this agent's description to that of other agents with restricted access to the measurement outcomes. With continuous measurements as an illustrative case study, we derived bounds on the average trace distance between the states that different agents assign to the system, and obtained exact results for the average quantum relative entropy. The expressions solely involve the state assigned by the less-knowledgeable agent, providing estimates of the distance to the exact state that can be calculated by the agent without knowledge of the latter.

The setting we presented here has a natural application to the case of a system interacting with an environment. For all practical purposes, one can view the effect of an environment as effectively monitoring the system with which it interacts [28,29]. Without access to the environmental degrees of freedom, the master equation that governs the state of the system takes a Lindblad form with Hermitian operators, as in Equation (4). However, access to the degrees of freedom of the environment can provide information about the state of the system, effectively leading to dynamics governed by Equation (15). Access to a high fraction of the environment leads to dynamics as in Equation (1), providing a complete description of the state of the system by conditioning on the observed state of the environmental degrees of freedom. With this in mind, our results shed light on how much one can improve the description of a given system by incorporating information encoded in an environment [29,30,31,32,33,34,35], as experimentally explored in [36,37]. Note that, since our bounds depend on the state assigned by the agent with less information, the above is independent of the unraveling chosen. It would also be interesting to extend our results, and the connections to the dynamics of open systems, to more general monitoring dynamics (e.g., non-Hermitian operators or other noise models).

As revealed by the analysis of a continuously monitored harmonic oscillator, a large gain of information about the state of the system occurs when an agent gains access to a small fraction of the measurement output, whether quantified by the trace distance or by the relative entropy. Our results thus complement the Quantum Darwinism program and related approaches [29,30,31,32,33,34,35], in which the authors compare the state of a system interacting with an environment to the state of fractions of such an environment. While those works focused on the correlation buildup between the system and the environment, we instead address the subjective description that observers assign to the state of the system, conditioned on the information encoded in a given measurement record.

Appendix A

Appendix A.1. Derivation of Bounds to Average Trace Distance

Using Equations (2) and (4) in the main text, together with the fact that ρ0O = ρ0A, we find

1 - \big\langle\mathrm{Tr}\big(\rho_T^O\rho_T^A\big)\big\rangle = \mathrm{Tr}\big(\rho_0^O\rho_0^A\big) - \big\langle\mathrm{Tr}\big(\rho_T^O\rho_T^A\big)\big\rangle = -\int_0^T d\,\big\langle\mathrm{Tr}\big(\rho_t^O\rho_t^A\big)\big\rangle
= -\int_0^T d\,\mathrm{Tr}\big[(\rho_t^A)^2\big] = -2\int_0^T \mathrm{Tr}\big(\rho_t^A\,\Lambda[\rho_t^A]\big)\,dt
= 2\sum_\alpha \frac{1}{8\tau_m^\alpha}\int_0^T \mathrm{Tr}\big([A_\alpha,[A_\alpha,\rho_t^A]]\,\rho_t^A\big)\,dt
= \sum_\alpha \frac{1}{4\tau_m^\alpha}\int_0^T \mathrm{Tr}\big([\rho_t^A,A_\alpha][A_\alpha,\rho_t^A]\big)\,dt. (A1)

This identity can be conveniently expressed in terms of the 2-norm of the commutator [ρtA, Aα] as

1 - \big\langle\mathrm{Tr}\big(\rho_T^O\rho_T^A\big)\big\rangle = \sum_\alpha \frac{1}{4\tau_m^\alpha}\int_0^T \big\|[\rho_t^A,A_\alpha]\big\|_2^2\,dt = T\sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\big\|[\rho_t^A,A_\alpha]\big\|_2^2}, (A2)

where we denote the time-average of a function f by \bar{f} \equiv \int_0^T f(t)\,dt/T. Note that the expression \sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\|[\rho_t^A,A_\alpha]\|_2^2} plays the role of a time-averaged decoherence rate [15,16], generalizing Equation (11) in the main text.

This sets alternative bounds on the average distance between the state ρtA assigned by A and the actual state of the system ρtO, in terms of the effect of the Lindblad dephasing term acting on the incomplete-knowledge state ρtA,

\frac{T}{2}\sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\big\|[\rho_t^A,A_\alpha]\big\|_2^2} \;\le\; \big\langle D\big(\rho_T^O,\rho_T^A\big)\big\rangle \;\le\; \sqrt{T\sum_\alpha \frac{1}{4\tau_m^\alpha}\overline{\big\|[\rho_t^A,A_\alpha]\big\|_2^2}}.

A short-time analysis provides a sense of the evolution of the upper and lower bounds on the trace distance, and of how they compare to its variance. To leading order in a Taylor series expansion,

P\big(\rho_\tau^A\big) \simeq 1 + 2\,\mathrm{Tr}\big(\rho_0^A\,\Lambda[\rho_0^A]\big)\,\tau = 1 - \sum_\alpha \frac{1}{4\tau_m^\alpha}\mathrm{Tr}\big([\rho_0^A,A_\alpha][A_\alpha,\rho_0^A]\big)\,\tau, (A3)

and one finds

\frac{\tau}{2}\sum_\alpha \frac{1}{4\tau_m^\alpha}\big\|[\rho_0^A,A_\alpha]\big\|_2^2 \;\le\; \big\langle D\big(\rho_\tau^O,\rho_\tau^A\big)\big\rangle \;\le\; \sqrt{\tau\sum_\alpha \frac{1}{4\tau_m^\alpha}\big\|[\rho_0^A,A_\alpha]\big\|_2^2}. (A4)

Note that the behavior of the trace distance is determined by the timescale in which decoherence occurs.

Using Equation (9) in the main text and Jensen’s inequality, one obtains

\big\langle D^2\big(\rho_T^O,\rho_T^A\big)\big\rangle \;\le\; 1 - P\big(\rho_T^A\big), (A5)

which implies that the variance \Delta D_T^2 \equiv \big\langle D^2(\rho_T^O,\rho_T^A)\big\rangle - \big\langle D(\rho_T^O,\rho_T^A)\big\rangle^2 satisfies

\Delta D_T^2 \;\le\; P\big(\rho_T^A\big) - P\big(\rho_T^A\big)^2. (A6)

In the short time limit this becomes

\Delta D_\tau^2 \;\lesssim\; -2\,\mathrm{Tr}\big(\rho_0^A\,\Lambda[\rho_0^A]\big)\,\tau. (A7)

Appendix A.2. Derivation of the Average and Variance of the Quantum Relative Entropy

Using that ρtO is pure, and that the von Neumann entropy is given by S(ρ) = -\mathrm{Tr}(\rho\log\rho), we obtain that the average over the results unknown to agent A satisfies

\big\langle S\big(\rho_t^O\|\rho_t^A\big)\big\rangle = \big\langle\mathrm{Tr}\big(\rho_t^O\log\rho_t^O\big)\big\rangle - \big\langle\mathrm{Tr}\big(\rho_t^O\log\rho_t^A\big)\big\rangle = 0 - \mathrm{Tr}\big(\rho_t^A\log\rho_t^A\big) = S\big(\rho_t^A\big). (A8)

This sets a direct connection between the average error induced by assigning the state ρtA instead of the exact state ρtO, as quantified by the relative entropy, and the von Neumann entropy of the state accessible to agent A.

In turn, the variance of the relative entropy satisfies

\Delta S^2\big(\rho_t^O\|\rho_t^A\big) = \big\langle S^2\big(\rho_t^O\|\rho_t^A\big)\big\rangle - \big\langle S\big(\rho_t^O\|\rho_t^A\big)\big\rangle^2 = \big\langle\big(\mathrm{Tr}\big(\rho_t^O\log\rho_t^A\big)\big)^2\big\rangle - S^2\big(\rho_t^A\big)
\le \big\langle\mathrm{Tr}\big(\rho_t^O\big)\,\mathrm{Tr}\big(\rho_t^O\log^2\rho_t^A\big)\big\rangle - S^2\big(\rho_t^A\big) = \mathrm{Tr}\big(\rho_t^A\log^2\rho_t^A\big) - S^2\big(\rho_t^A\big), (A9)

using the Cauchy–Schwarz inequality in the third line. Note that this expression is identical to the variance of the operator −log ρtA, which can be thought of as the quantum extension of the notion of the "information content" or "surprisal" −log p in classical information theory.
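The right-hand side of Equation (A9), the variance of the surprisal, is easily computed from the spectrum of ρtA. A sketch with illustrative diagonal states:

```python
import numpy as np

def surprisal_variance(rho):
    # Tr(rho log^2 rho) - S(rho)^2: the variance of -log p_j over eigenvalues p_j
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                     # drop null eigenvalues (0 log 0 = 0)
    S = -(p * np.log(p)).sum()           # von Neumann entropy
    second = (p * np.log(p) ** 2).sum()  # second moment of the surprisal
    return second - S**2

pure = np.diag([1.0, 0.0, 0.0])
mixed = np.diag([0.7, 0.2, 0.1])
print(surprisal_variance(pure))    # 0: the bound vanishes for pure states
print(surprisal_variance(mixed))   # positive for mixed states
```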

Appendix A.3. Bounds to the Difference between Perceptions of Multiple Agents

Consider two agents A and B who simultaneously monitor different observables on a system. Each one has access to the measurement outcomes of their devices, but not to the results obtained by the other agent. The states ρTA and ρTB that A and B assign to the system differ from the actual pure state ρTO that corresponds to the complete description of the system. For simplicity let us consider that A monitors a single observable A and B monitors a single observable B. The complete-description state of the system assigned by all-knowing agent O evolves according to

d\rho_t^O = \mathcal{L}[\rho_t^O]\,dt + \mathcal{I}[A]\rho_t^O\,dW_t^A + \mathcal{I}[B]\rho_t^O\,dW_t^B, (A10)

with the Lindbladian \mathcal{L}[\rho_t^O] \equiv -i[H,\rho_t^O] + \Lambda_A[\rho_t^O] + \Lambda_B[\rho_t^O], where ΛA and ΛB are the corresponding dephasing terms for the observables A and B. The innovation terms \mathcal{I}[A] and \mathcal{I}[B] are defined as in Equation (3) in the main text, and dWtA and dWtB are independent noise terms.

The states of both observers satisfy

d\rho_t^A = \mathcal{L}[\rho_t^A]\,dt + \mathcal{I}[A]\rho_t^A\,dV_t^A, (A11)
d\rho_t^B = \mathcal{L}[\rho_t^B]\,dt + \mathcal{I}[B]\rho_t^B\,dV_t^B. (A12)

Consistency between observers implies that their noises are related to the ones appearing in Equation (A10) by [1,3]:

dW_t^A = \big(\mathrm{Tr}\big(\rho_t^A A\big) - \mathrm{Tr}\big(\rho_t^O A\big)\big)\,\frac{dt}{\sqrt{\tau_m}} + dV_t^A,
dW_t^B = \big(\mathrm{Tr}\big(\rho_t^B B\big) - \mathrm{Tr}\big(\rho_t^O B\big)\big)\,\frac{dt}{\sqrt{\tau_m}} + dV_t^B. (A13)

As the state of each observer satisfies Equation (9), the triangle inequality provides the upper bound

\big\langle D\big(\rho_T^A,\rho_T^B\big)\big\rangle_{AB} \;\le\; \sqrt{1 - \mathrm{Tr}\big[(\rho_T^A)^2\big]} + \sqrt{1 - \mathrm{Tr}\big[(\rho_T^B)^2\big]}, (A14)

and the lower bound

\big\langle D\big(\rho_T^A,\rho_T^B\big)\big\rangle_{AB} \;\ge\; \mathrm{Tr}\big[(\rho_T^A)^2\big] - \mathrm{Tr}\big[(\rho_T^B)^2\big]. (A15)

Appendix A.4. Illustration—Evolution of Limits to Perception

We consider the case of observer O monitoring the spin components σjz of a 1D transverse-field Ising model, with the Hamiltonian defined in Equation (18) of the main text. Figure A1 shows the evolution of the average trace distance ⟨D(ρTO,ρTB)⟩ between the complete description and B's partial one, along with the bounds (16), for different values of the monitoring efficiency η. Figure A2 shows the evolution of the average relative entropy ⟨S(ρTO||ρTB)⟩. The dynamics are simulated by implementing the monitoring process as a sequence of weak measurements modeled by Kraus operators acting on the state of the system. Specifically, the evolution of ρtO and the corresponding state ρtB with partial measurements is numerically obtained by assuming two independent measurement processes, as in [1].

Figure A1.

Evolution of the average trace distance and its bounds. Simulated evolution of the average trace distance ⟨D(ρTO,ρTB)⟩ between complete and incomplete descriptions for a spin chain initially in a paramagnetic state on which individual spin components σjz are monitored. The simulation corresponds to N = 6 spins, with couplings Jτm = hτm = 1/2. The upper and lower bounds (16) on the average trace distance are depicted by dashed lines, while the shaded area represents the (one standard deviation) confidence region obtained from the upper bound (13) on the standard deviation in the main text, calculated with respect to the mean distance. For η = 0 (left), agent A, without any access to the measurement outcomes, has the most incomplete description of the system. After gaining access to partial measurement results, with η = 0.5 (center), B gets closer to the complete description of the state of the system. Finally, when η = 0.9 (right), access to enough information provides B with an almost complete description of the state. Importantly, in all cases the agent can bound how far the description possessed is from the complete one solely in terms of the purity P(ρTB).

Figure A2.

Evolution of the average relative entropy and its bounds. Simulated evolution of the average relative entropy ⟨S(ρTO||ρTB)⟩ between complete and incomplete descriptions for a spin chain on which the z components of individual spins are monitored. The shaded area represents the (one standard deviation) confidence region obtained from the upper bound on the standard deviation of the relative entropy, Equation (14) in the main text. As in the case of the trace distance, access to more information leads to a more accurate state assigned by the agent.

Appendix A.5. Illustration—Transition to Complete Descriptions

Consider the case of a one-dimensional harmonic oscillator with position and momentum operators X and P, respectively. We assume agent B monitors the position of the harmonic oscillator with an efficiency η. The dynamics of the state ρtB is dictated by Equation (15) in the main text for the case of a single monitored observable, with

\Lambda[\rho_t^B] = -\frac{1}{8\tau_m}\big[X,[X,\rho_t^B]\big]; \qquad \mathcal{I}[X]\rho_t^B = \frac{1}{\sqrt{4\tau_m}}\Big(\{X,\rho_t^B\} - 2\,\mathrm{Tr}\big(X\rho_t^B\big)\,\rho_t^B\Big). (A16)

Such a dynamics preserves the Gaussian property of states. For these, the variances

v_x \equiv \mathrm{Tr}\big(\rho_t^B X^2\big) - \mathrm{Tr}\big(\rho_t^B X\big)^2, (A17)
v_p \equiv \mathrm{Tr}\big(\rho_t^B P^2\big) - \mathrm{Tr}\big(\rho_t^B P\big)^2, (A18)

and covariance

c_{xp} \equiv \mathrm{Tr}\left(\rho_t^B\,\frac{\{X,P\}}{2}\right) - \mathrm{Tr}\big(\rho_t^B X\big)\,\mathrm{Tr}\big(\rho_t^B P\big), (A19)

satisfy the following set of differential equations (in natural units) [1,21]:

\frac{d}{dt}v_x = 2\omega c_{xp} - \frac{\eta}{\tau_m}v_x^2, (A20a)
\frac{d}{dt}v_p = -2\omega c_{xp} + \frac{1}{4\tau_m} - \frac{\eta}{\tau_m}c_{xp}^2, (A20b)
\frac{d}{dt}c_{xp} = \omega v_p - \omega v_x - \frac{\eta}{\tau_m}v_x c_{xp}. (A20c)

While the first moments do evolve stochastically, the second moments above satisfy a set of deterministic coupled differential equations. This in turn implies that the purity of the state, which can be obtained from the covariance matrix [22,23,24,25,26]

\sigma(t) = \begin{pmatrix} v_x & c_{xp} \\ c_{xp} & v_p \end{pmatrix} (A21)

as

P\big(\rho_T^B\big) = \frac{1}{2\sqrt{\det\sigma(t)}}, (A22)

evolves deterministically as well.

The solution for long times can be derived from Equations (A20), giving

c_{xp}^{ss} = \frac{-\omega\tau_m \pm \sqrt{\omega^2\tau_m^2 + \eta/4}}{\eta}, (A23a)
v_x^{ss} = \sqrt{\frac{2\omega\tau_m}{\eta}\,c_{xp}^{ss}}, (A23b)
v_p^{ss} = v_x^{ss}\left(1 + \frac{\eta}{\omega\tau_m}\,c_{xp}^{ss}\right), (A23c)

which provides the long-time asymptotic value of the purity as a function of the measurement efficiency. The latter turns out to have the following simple expression

P\big(\rho_T^B\big) = \frac{1}{2\sqrt{v_x^{ss}v_p^{ss} - (c_{xp}^{ss})^2}} = \frac{1}{2\sqrt{\frac{2\omega\tau_m}{\eta}c_{xp}^{ss}\left(1+\frac{\eta}{\omega\tau_m}c_{xp}^{ss}\right) - (c_{xp}^{ss})^2}} = \frac{1}{2\sqrt{\frac{2\omega\tau_m}{\eta}c_{xp}^{ss} + (c_{xp}^{ss})^2}} (A24)
= \frac{1}{2\sqrt{\frac{2\omega\tau_m}{\eta}c_{xp}^{ss} + \frac{1}{4\eta} - \frac{2\omega\tau_m}{\eta}c_{xp}^{ss}}} = \frac{1}{2\sqrt{\frac{1}{4\eta}}} = \sqrt{\eta}. (A25)
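The steady-state purity P(ρTB) = √η of Equation (A25) can be confirmed by integrating the deterministic equations (A20) forward in time. The parameter values and the explicit Euler step below are illustrative choices:

```python
import numpy as np

# illustrative parameters; natural units as in Eqs. (A20)
omega, tau_m, eta = 1.0, 1.0, 0.4
dt, steps = 1e-3, 50000

vx, vp, cxp = 0.5, 0.5, 0.0   # vacuum-like initial covariances (pure state)
for _ in range(steps):
    dvx = 2 * omega * cxp - (eta / tau_m) * vx**2              # Eq. (A20a)
    dvp = -2 * omega * cxp + 1 / (4 * tau_m) - (eta / tau_m) * cxp**2  # Eq. (A20b)
    dcxp = omega * (vp - vx) - (eta / tau_m) * vx * cxp        # Eq. (A20c)
    vx, vp, cxp = vx + dvx * dt, vp + dvp * dt, cxp + dcxp * dt

purity = 0.5 / np.sqrt(vx * vp - cxp**2)   # Eq. (A22)
print(purity, np.sqrt(eta))   # converges to sqrt(eta)
```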

Using that

1 - \sqrt{P\big(\rho_T^B\big)} \;\le\; \big\langle D\big(\rho_T^O,\rho_T^B\big)\big\rangle_B \;\le\; \sqrt{1 - P\big(\rho_T^B\big)}, (A26)

then implies

1 - \eta^{1/4} \;\le\; \big\langle D\big(\rho_T^O,\rho_T^B\big)\big\rangle_B \;\le\; \sqrt{1 - \sqrt{\eta}}. (A27)

The entropy of a 1-mode Gaussian state can be expressed in terms of the purity of the state as

S\big(\rho_T^B\big) = \left(\frac{1}{2P(\rho_T^B)}+\frac{1}{2}\right)\log\left(\frac{1}{2P(\rho_T^B)}+\frac{1}{2}\right) - \left(\frac{1}{2P(\rho_T^B)}-\frac{1}{2}\right)\log\left(\frac{1}{2P(\rho_T^B)}-\frac{1}{2}\right). (A28)

Then, using that ⟨S(ρtO||ρtB)⟩_B = S(ρtB) and Equation (A25), we obtain that for long times,

\big\langle S\big(\rho_t^O\|\rho_t^B\big)\big\rangle_B = S\big(\rho_T^B\big) = \left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}+\frac{1}{2}\right) - \left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right)\log\left(\frac{1}{2\sqrt{\eta}}-\frac{1}{2}\right). (A29)
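Equation (A28) can be cross-checked against the familiar entropy of a single-mode thermal state, whose purity is 1/(2n̄+1) and whose entropy is (n̄+1)log(n̄+1) − n̄ log n̄; substituting this purity into (A28) reproduces that expression exactly. A sketch:

```python
import numpy as np

def gaussian_entropy_from_purity(mu):
    # Eq. (A28): von Neumann entropy of a single-mode Gaussian state of purity mu
    a = 0.5 / mu + 0.5
    b = 0.5 / mu - 0.5
    if b <= 0:          # mu = 1: pure state
        return 0.0
    return a * np.log(a) - b * np.log(b)

# thermal state with mean occupation nbar (illustrative value)
nbar = 1.7
mu = 1.0 / (2 * nbar + 1)
S_direct = (nbar + 1) * np.log(nbar + 1) - nbar * np.log(nbar)
print(gaussian_entropy_from_purity(mu), S_direct)   # identical
```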

Author Contributions

Formal analysis, L.P.G.-P. and A.d.C.; Investigation, L.P.G.-P. and A.d.C.; Writing—original draft, L.P.G.-P. and A.d.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the John Templeton Foundation, UMass Boston (project P20150000 029279), and DOE grant DE-SC0019515.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Jacobs K., Steck D. A straightforward introduction to continuous quantum measurement. Contemp. Phys. 2006;47:279–303. doi:10.1080/00107510601101934.
2. Wiseman H.M., Milburn G.J. Quantum Measurement and Control. Cambridge University Press; Cambridge, UK: 2009.
3. Jacobs K. Quantum Measurement Theory and Its Applications. Cambridge University Press; Cambridge, UK: 2014.
4. Murch K.W., Weber S.J., Macklin C., Siddiqi I. Observing single quantum trajectories of a superconducting quantum bit. Nature. 2013;502:211–214. doi:10.1038/nature12539.
5. Devoret M.H., Schoelkopf R.J. Superconducting Circuits for Quantum Information: An Outlook. Science. 2013;339:1169–1174. doi:10.1126/science.1231930.
6. Weber S.J., Chantasri A., Dressel J., Jordan A.N., Murch K.W., Siddiqi I. Mapping the optimal route between two quantum states. Nature. 2014;511:570. doi:10.1038/nature13559.
7. Nielsen M.A., Chuang I.L. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press; Cambridge, UK: 2010.
8. Wilde M.M. Quantum Information Theory. Cambridge University Press; Cambridge, UK: 2013.
9. Watrous J. The Theory of Quantum Information. Cambridge University Press; Cambridge, UK: 2018.
10. Cover T.M., Thomas J.A. Elements of Information Theory. John Wiley & Sons; Hoboken, NJ, USA: 2012.
11. Hiai F., Petz D. The proper formula for relative entropy and its asymptotics in quantum probability. Commun. Math. Phys. 1991;143:99–114. doi:10.1007/BF02100287.
12. Ogawa T., Nagaoka H. Strong converse and Stein's lemma in quantum hypothesis testing. In: Asymptotic Theory of Quantum Statistical Inference: Selected Papers. World Scientific; Singapore: 2005; pp. 28–42.
13. Schumacher B., Westmoreland M.D. Relative entropy in quantum information theory. Contemp. Math. 2002;305:265–290. doi:10.1090/conm/305.
14. Vedral V. The role of relative entropy in quantum information theory. Rev. Mod. Phys. 2002;74:197–234. doi:10.1103/RevModPhys.74.197.
15. Chenu A., Beau M., Cao J., del Campo A. Quantum Simulation of Generic Many-Body Open System Dynamics Using Classical Noise. Phys. Rev. Lett. 2017;118:140403. doi:10.1103/PhysRevLett.118.140403.
16. Beau M., Kiukas J., Egusquiza I.L., del Campo A. Nonexponential Quantum Decay under Environmental Decoherence. Phys. Rev. Lett. 2017;119:130401. doi:10.1103/PhysRevLett.119.130401.
17. Barchielli A. Entropy and information gain in quantum continual measurements. In: Quantum Communication, Computing, and Measurement 3. Springer; Berlin/Heidelberg, Germany: 2002; pp. 49–57.
18. Barchielli A., Gregoratti M. Quantum Trajectories and Measurements in Continuous Time: The Diffusive Case. Volume 782. Springer; Berlin/Heidelberg, Germany: 2009.
19. Zurek W.H. Decoherence and the Transition from Quantum to Classical. Phys. Today. 1991;44:36–44. doi:10.1063/1.881293.
20. Schlosshauer M.A. Decoherence: And the Quantum-to-Classical Transition. Springer Science & Business Media; Berlin/Heidelberg, Germany: 2007.
21. Doherty A.C., Jacobs K. Feedback control of quantum systems using continuous state estimation. Phys. Rev. A. 1999;60:2700–2711. doi:10.1103/PhysRevA.60.2700.
22. Paris M.G.A., Illuminati F., Serafini A., De Siena S. Purity of Gaussian states: Measurement schemes and time evolution in noisy channels. Phys. Rev. A. 2003;68:012314. doi:10.1103/PhysRevA.68.012314.
23. Ferraro A., Olivares S., Paris M. Gaussian States in Quantum Information. Bibliopolis; Pittsburgh, PA, USA: 2005. (Napoli Series on Physics and Astrophysics).
24. Wang X.B., Hiroshima T., Tomita A., Hayashi M. Quantum information with Gaussian states. Phys. Rep. 2007;448:1–111. doi:10.1016/j.physrep.2007.04.005.
25. Weedbrook C., Pirandola S., García-Patrón R., Cerf N.J., Ralph T.C., Shapiro J.H., Lloyd S. Gaussian quantum information. Rev. Mod. Phys. 2012;84:621–669. doi:10.1103/RevModPhys.84.621.
26. Adesso G., Ragy S., Lee A.R. Continuous variable quantum information: Gaussian states and beyond. Open Syst. Inf. Dyn. 2014;21:1440001. doi:10.1142/S1230161214400010.
27. Laverick K.T., Chantasri A., Wiseman H.M. Quantum State Smoothing for Linear Gaussian Systems. Phys. Rev. Lett. 2019;122:190402. doi:10.1103/PhysRevLett.122.190402.
28. Schlosshauer M. Decoherence, the measurement problem, and interpretations of quantum mechanics. Rev. Mod. Phys. 2005;76:1267–1305. doi:10.1103/RevModPhys.76.1267.
  • 29.Zurek W.H. Quantum darwinism. Nat. Phys. 2009;5:181. doi: 10.1038/nphys1202. [DOI] [Google Scholar]
  • 30.Zwolak M., Quan H.T., Zurek W.H. Redundant imprinting of information in nonideal environments: Objective reality via a noisy channel. Phys. Rev. A. 2010;81:062110. doi: 10.1103/PhysRevA.81.062110. [DOI] [Google Scholar]
  • 31.Jess Riedel C., Zurek W.H., Zwolak M. The rise and fall of redundancy in decoherence and quantum Darwinism. New J. Phys. 2012;14:083010. doi: 10.1088/1367-2630/14/8/083010. [DOI] [Google Scholar]
  • 32.Zwolak M., Zurek W.H. Complementarity of quantum discord and classically accessible information. Sci. Rep. 2013;3:1729. doi: 10.1038/srep01729. [DOI] [Google Scholar]
  • 33.Brandão F.G.S.L., Piani M., Horodecki P. Generic emergence of classical features in quantum Darwinism. Nat. Commun. 2015;6:7908. doi: 10.1038/ncomms8908. [DOI] [PubMed] [Google Scholar]
  • 34.Horodecki R., Korbicz J.K., Horodecki P. Quantum origins of objectivity. Phys. Rev. A. 2015;91:032122. doi: 10.1103/PhysRevA.91.032122. [DOI] [Google Scholar]
  • 35.Le T.P., Olaya-Castro A. Strong Quantum Darwinism and Strong Independence are Equivalent to Spectrum Broadcast Structure. Phys. Rev. Lett. 2019;122:010403. doi: 10.1103/PhysRevLett.122.010403. [DOI] [PubMed] [Google Scholar]
  • 36.Ciampini M.A., Pinna G., Mataloni P., Paternostro M. Experimental signature of quantum Darwinism in photonic cluster states. Phys. Rev. A. 2018;98:020101. doi: 10.1103/PhysRevA.98.020101. [DOI] [Google Scholar]
  • 37.Chen M.C., Zhong H.S., Li Y., Wu D., Wang X.L., Li L., Liu N.L., Lu C.Y., Pan J.W. Emergence of classical objectivity of quantum Darwinism in a photonic quantum simulator. Sci. Bull. 2019;64:580–585. doi: 10.1016/j.scib.2019.03.032. [DOI] [PubMed] [Google Scholar]

Data Availability Statement

Not Applicable.
