The Journal of Chemical Physics
2019 Nov 5; 151(17): 174108. doi: 10.1063/1.5120511

Transient probability currents provide upper and lower bounds on non-equilibrium steady-state currents in the Smoluchowski picture

Jeremy Copperman 1, David Aristoff 2,a), Dmitrii E Makarov 3,b), Gideon Simpson 4,c), Daniel M Zuckerman 1,d)
PMCID: PMC7043855  PMID: 31703496

Abstract

Probability currents are fundamental in characterizing the kinetics of nonequilibrium processes. Notably, the steady-state current Jss for a source-sink system can provide the exact mean-first-passage time (MFPT) for the transition from the source to sink. Because transient nonequilibrium behavior is quantified in some modern path sampling approaches, such as the “weighted ensemble” strategy, there is strong motivation to determine bounds on Jss—and hence on the MFPT—as the system evolves in time. Here, we show that Jss is bounded from above and below by the maximum and minimum, respectively, of the current as a function of the spatial coordinate at any time t for one-dimensional systems undergoing overdamped Langevin (i.e., Smoluchowski) dynamics and for higher-dimensional Smoluchowski systems satisfying certain assumptions when projected onto a single dimension. These bounds become tighter with time, making them of potential practical utility in a scheme for estimating Jss and the long time scale kinetics of complex systems. Conceptually, the bounds result from the fact that extrema of the transient currents relax toward the steady-state current.

INTRODUCTION

Nonequilibrium statistical mechanics is of fundamental importance in many fields, and particularly in understanding molecular and cell-scale biology.1–4 Furthermore, fundamental theoretical ideas from the field (e.g., Refs. 5–7) have often been translated into very general computational strategies (e.g., Refs. 8–12).

Continuous-time Markov processes occurring in continuous configurational spaces form a central pillar of nonequilibrium studies,13,14 including chemical and biological processes.15 The behavior of such systems is described by a Fokker-Planck equation or, when momentum coordinates are integrated out, by the Smoluchowski equation.16,17 In the latter case, the probability density p(x, t) and the probability current J(x, t) are the key observables, and their behavior continues to attract theoretical attention.18–24 Continuous-time Markov processes in discrete spaces obey a master equation;15 such “Markov state models” play a prominent role in modern biomolecular computations25–28 as well as in the interpretation of experimental kinetic data.4,29

An application of great importance is the estimation of the mean first-passage time (MFPT) for transitions between “macrostates” A and B, two nonoverlapping regions of configuration space. The MFPT(AB) is the average time for trajectories initiated in A according to some specified distribution, to reach the boundary of B.17 If a large number of systems are orchestrated so that trajectories reaching B (the absorbing “sink”) are reinitialized in A (the emitting “source”), this ensemble of systems will eventually reach a steady state. Such a steady-state ensemble can be realized via the “weighted ensemble” (WE) path sampling strategy,8,30–32 for example. In a steady state, the total probability arriving to state B per unit time (the “flux”—i.e., the integral of the current J over the boundary of B) will be constant in time. The “Hill relation” then provides an exact relation between the steady-state flux and the mean first-passage time (MFPT),2,31,33

\frac{1}{\mathrm{MFPT}(A \to B)} = \mathrm{Flux}(A \to B \mid \text{steady state}), \quad (1)

where the dependence on the initializing distribution in A is implicit and does not affect the relation. When macrostates A and B are kinetically well-separated, the reciprocal of the MFPT is the effective rate constant for the transition.34 The Hill relation (1) is very general2,35–38 and is not restricted to one dimensional systems or to particular distributions of feedback within the source state A.
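As an illustrative check of the Hill relation (1), separate from the original study, one can compare, for a small hypothetical nearest-neighbor chain, the MFPT obtained from the standard first-passage linear system against the reciprocal of the steady-state flux of the corresponding source-sink feedback chain. The rates below are arbitrary illustrative values, not parameters from this paper.

```python
import numpy as np

# Hypothetical nearest-neighbor chain: states 0..N-1, source state 0,
# absorbing sink state N-1.  Rates are arbitrary illustrative values.
N = 8
kf = np.full(N - 1, 1.0)   # forward rates k_{i,i+1}
kb = np.full(N - 1, 0.7)   # backward rates k_{i+1,i}

# MFPT from the source via the standard first-passage linear system:
# (sum of outgoing rates)*tau_i - sum_j k_{ij}*tau_j = 1, with tau_sink = 0.
A = np.zeros((N - 1, N - 1))
for i in range(N - 1):
    A[i, i] = kf[i] + (kb[i - 1] if i > 0 else 0.0)
    if i > 0:
        A[i, i - 1] = -kb[i - 1]
    if i + 1 < N - 1:
        A[i, i + 1] = -kf[i]
mfpt = np.linalg.solve(A, np.ones(N - 1))[0]

# Steady state of the feedback system: flux into the sink is re-injected
# at the source, i.e., the (N-2) -> (N-1) transition is redirected to 0.
M = N - 1
K = np.zeros((M, M))                # K[j, i] = rate i -> j
for i in range(M - 1):
    K[i + 1, i] = kf[i]
    K[i, i + 1] = kb[i]
K[0, M - 1] += kf[M - 1]            # feedback of the absorbed flux
Q = K - np.diag(K.sum(axis=0))      # generator (columns sum to zero)
lin = np.vstack([Q[:-1, :], np.ones(M)])   # replace one row by normalization
rhs = np.zeros(M); rhs[-1] = 1.0
pi = np.linalg.solve(lin, rhs)      # stationary distribution
flux = pi[M - 1] * kf[M - 1]        # steady flux into the sink

print(mfpt, 1.0 / flux)             # Hill relation: the two agree
```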

In one dimension, the steady flux is equivalent to the steady-state current Jss, which will be used in much of the exposition below.

While the relaxation time to converge to the steady-state can be very short compared to the MFPT, it is unknown a priori and may be computationally expensive to sample in complex systems of interest.32 This limitation applies to the weighted ensemble strategy, as traditionally implemented, because WE is unbiased in recapitulating the time evolution of a system.8 Thus, there is significant motivation to obtain information regarding the converged steady-state current (which depends on the boundary conditions and not the initial condition) of complex systems, from the observed transient current (which does depend on the initial condition). Here, we take first steps toward this goal and show that the maximum (minimum) transient current, regardless of the initial condition, serves as an upper (lower) bound on the nonequilibrium steady-state current, in a class of one-dimensional continuous-time Markovian stochastic systems, with prescribed boundary conditions.

In cases of practical interest, the full system is likely to be high-dimensional, and we do not expect that the local transient currents at particular configurations will provide a bound on the steady-state flux. However, one-dimensional (1D) projections of the current, along a well-chosen collective coordinate of interest, may still exhibit monotonic decay as discussed below.

The paper is organized as follows. After proving and illustrating bounds in discrete and continuous 1D systems, we discuss more complex systems and provide a numerical example of folding of the NTL9 protein using WE.32 We speculate that effective upper and lower bounds for the current exist in high-dimensional systems and hope that the 1D derivations presented here will motivate future work in this regard.

DISCRETE-STATE FORMULATION: BOUNDS AND INTUITION

The essence of the physics yielding bounds on the steady current can be appreciated from a one dimensional continuous-time discrete-state Markov process, as shown in Fig. 1. The dynamics will be governed by the usual master equation

\frac{dP_i}{dt} = -\sum_{j \neq i} P_i\, k_{ij} + \sum_{j \neq i} P_j\, k_{ji}, \quad (2)

where probabilities P_i = P_i(t) vary in time, while rate constants k_{ij} ≡ k_{i,j} for i → j transitions are time-independent. (We use a subscript convention throughout where the forward direction is left-to-right, and commas are omitted when possible.) We will assume that only nearest-neighbor transitions are allowed—i.e.,

k_{ij} = 0 \;\text{ for }\; |j - i| > 1. \quad (3)

Indeed, discrete random walks of this type provide a finite-difference approximation to diffusion in continuous space.39 The net current in the positive direction between any neighboring pair of states is given by the difference in the two directional probability flows,

J_{i,i+1} = P_i\, k_{i,i+1} - P_{i+1}\, k_{i+1,i}. \quad (4)

Because the probabilities P_i vary in time, so too do the currents: J_{i,i+1} = J_{i,i+1}(t). Using (4), the master equation (2) can be rewritten as

\frac{dP_i}{dt} = J_{i-1,i} - J_{i,i+1}, \quad (5)

which is merely a statement of the familiar continuity relation: the probability of occupying a state increases by the difference between the incoming and outgoing currents.
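The equivalence of the master equation (2) and the continuity form (5) is easy to verify numerically. The sketch below uses arbitrary illustrative rates and occupation probabilities (not a system from this paper) and compares the two expressions for dP_i/dt.

```python
import numpy as np

# Nearest-neighbor chain with arbitrary illustrative rates and occupation
# probabilities; boundary currents J_{-1,0} = J_{N-1,N} = 0 (no states beyond).
rng = np.random.default_rng(0)
N = 6
kf = rng.uniform(0.5, 2.0, N - 1)       # k_{i,i+1}
kb = rng.uniform(0.5, 2.0, N - 1)       # k_{i+1,i}
P = rng.uniform(0.1, 1.0, N)
P /= P.sum()

# Eq. (2): dP_i/dt = -sum_{j!=i} P_i k_{ij} + sum_{j!=i} P_j k_{ji},
# with only nearest-neighbor rates nonzero, per Eq. (3).
dP_master = np.zeros(N)
for i in range(N):
    if i + 1 < N:
        dP_master[i] += -P[i] * kf[i] + P[i + 1] * kb[i]
    if i - 1 >= 0:
        dP_master[i] += -P[i] * kb[i - 1] + P[i - 1] * kf[i - 1]

# Eq. (4): J_{i,i+1} = P_i k_{i,i+1} - P_{i+1} k_{i+1,i}
J = P[:-1] * kf - P[1:] * kb
# Eq. (5): dP_i/dt = J_{i-1,i} - J_{i,i+1}, with zero boundary currents
Jfull = np.concatenate(([0.0], J, [0.0]))
dP_continuity = Jfull[:-1] - Jfull[1:]

print(np.max(np.abs(dP_master - dP_continuity)))  # agreement to rounding error
```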

FIG. 1.

One-dimensional discrete state system. States (black numbers) and currents (blue) are shown.

To establish a bound, assume without loss of generality that a local maximum of the current occurs between states 5 and 6. That is,

J_{56} > J_{45} \quad\text{and}\quad J_{56} > J_{67}. \quad (6)

Differentiating J_{56} with respect to time and employing Eqs. (4) and (5) yields

\frac{dJ_{56}}{dt} = k_{56}\frac{dP_5}{dt} - k_{65}\frac{dP_6}{dt} = k_{56}\,(J_{45} - J_{56}) - k_{65}\,(J_{56} - J_{67}) < 0, \quad (7)

where both terms are negative because rate constants are positive and the signs of the current differences are determined by assumption (6). The local maximum current must decrease in time.

If instead J exhibited a local minimum at the 5 → 6 transition, reversing the directions of the inequalities (6), then the corresponding time derivative would be positive, implying that a local minimum must increase.

We have therefore shown that, regardless of boundary conditions, local maxima of the current must decrease with time and local minima must increase with time in the discrete-state case with nearest-neighbor transitions. Under stationary boundary conditions, the current will decay to its steady-state value Jss, and thus the global extrema at any time are bounds on the steady current. Physically, the changes in the probability produced by local differences in current—Eq. (5)—necessarily cause relaxation of the current toward its steady-state value. We note that in a one-dimensional steady state, whether equilibrium or nonequilibrium, the current Jss is a constant independent of position.
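A minimal numerical sketch of this conclusion, using arbitrary illustrative rates (not a system from this paper), integrates the master equation by forward Euler and tracks the global extrema of the currents. With the boundary currents fixed at zero (reflecting ends included in the extremum search), the maximum decays and the minimum grows monotonically, to floating-point tolerance.

```python
import numpy as np

# Forward-Euler integration of the master equation for an arbitrary
# illustrative nearest-neighbor chain, tracking the currents of Eq. (4).
# Boundary currents are identically zero (reflecting ends).
rng = np.random.default_rng(1)
N = 10
kf = rng.uniform(0.5, 1.5, N - 1)
kb = rng.uniform(0.5, 1.5, N - 1)
P = np.zeros(N); P[0] = 1.0             # all probability starts at the left
dt = 1e-3                               # small enough that dt*(kf+kb) << 1

jmax, jmin = [], []
for _ in range(5000):
    J = P[:-1] * kf - P[1:] * kb        # Eq. (4)
    Jfull = np.concatenate(([0.0], J, [0.0]))
    jmax.append(Jfull.max()); jmin.append(Jfull.min())
    P = P + dt * (Jfull[:-1] - Jfull[1:])   # continuity relation, Eq. (5)

jmax, jmin = np.array(jmax), np.array(jmin)
# Global extrema relax monotonically (to floating-point tolerance):
print(np.all(np.diff(jmax) <= 1e-12), np.all(np.diff(jmin) >= -1e-12))
```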

Boundary behavior in a discrete-state source-sink system

The preceding conclusions were obtained for local extrema without any assumptions about boundary conditions. We now want to examine boundary conditions of particular interest, namely, a feedback system with one absorbing boundary (“sink”) state and one emitting boundary (“source”) state, where flux into the sink is reinitialized. In such a source-sink system, we will see that similar conclusions are reached regarding the relaxation of current extrema at the boundaries.

For concreteness, suppose in our linear array (Fig. 1) that state 0 is the source and state 9 is the sink: the probability current into state 9 is fed back to state 0. The source at state 0 is also presumed to be the left boundary of the system, which is implicitly reflecting.

Consider first the case where the maximum of the current occurs at the source at some time t—i.e., J01 is the maximum. To analyze this case, note that by assumption, the source state 0 in fact receives probability that arrives to the sink state 9. That is, Eq. (5) applies in the form

\frac{dP_0}{dt} = J_{89} - J_{01}. \quad (8)

This, in turn, implies that the analog of Eq. (7) applies directly, and we deduce that if J_{01} is the maximum among the J values, then it must decrease in time.

Because analogous arguments apply to all the other boundary cases (maximum at sink, minimum at either boundary), we conclude that any boundary extremum current must decay with time toward Jss in a source-sink discrete-state system.
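The source-sink case can likewise be sketched numerically, again with arbitrary hypothetical rates: flux into the sink is re-injected at the source, and the global current extrema decay monotonically while bracketing the steady-state value.

```python
import numpy as np

# Discrete source-sink feedback system (illustrative rates): flux into the
# sink is re-injected at state 0.  The chain is reduced to states 0..M-1,
# with the one-way current into the sink J_sink = P_{M-1} * kf[-1].
rng = np.random.default_rng(2)
M = 9
kf = rng.uniform(0.5, 1.5, M)       # kf[:-1] internal; kf[-1] = rate into sink
kb = rng.uniform(0.5, 1.5, M - 1)

def currents(P):
    J = np.empty(M)
    J[:-1] = P[:-1] * kf[:-1] - P[1:] * kb
    J[-1] = P[-1] * kf[-1]          # one-way current into the absorbing sink
    return J

P = np.zeros(M); P[0] = 1.0
dt = 1e-3
jmax, jmin = [], []
for _ in range(50000):
    J = currents(P)
    jmax.append(J.max()); jmin.append(J.min())
    dP = np.empty(M)
    dP[0] = J[-1] - J[0]            # the source receives the sink flux, Eq. (8)
    dP[1:] = J[:-1] - J[1:]
    P = P + dt * dP

jmax, jmin = np.array(jmax), np.array(jmin)
Jss = currents(P).mean()            # current is nearly flat at steady state
# Max decays, min grows, and together they bracket the steady current:
print(np.all(np.diff(jmax) <= 1e-12), np.all(np.diff(jmin) >= -1e-12))
print(jmin[-1] <= Jss <= jmax[-1])
```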

For completeness, we note that in principle the feedback of the flux reaching the sink state could occur at a set of source states, in contrast to the single source state assumed above. However, because of the locality property (3) which has been assumed, if we consider any current maximum not part of the set of source states, the same arguments will apply.

CURRENT BOUNDS FOR CONTINUOUS SYSTEMS IN THE SMOLUCHOWSKI FRAMEWORK

Our primary interest is continuous systems, and so we turn now to a formulation of the problem via the Smoluchowski equation, which describes overdamped Langevin dynamics.16,17 Conceptually, however, it is valuable to note that the preceding discrete-state derivation of current bounds depended on the locality embodied in (3) and the Markovianity of dynamics, two properties that are preserved in the Smoluchowski picture.

Our derivation proceeds in a straightforward way from the one-dimensional Smoluchowski equation. Defining p(x, t) and J(x, t) as the probability density and current at time t, we write the Smoluchowski equation as the continuity relation,

\frac{\partial p}{\partial t} = -\frac{\partial J}{\partial x}, \quad (9)

with current given by

J(x,t) = \frac{D(x)}{k_B T}\, f(x)\, p(x,t) - D(x)\,\frac{\partial p}{\partial x}, \quad (10)

where D > 0 is the (possibly) position-dependent diffusion “constant,” kBT is the thermal energy at absolute temperature T, and f = −dU/dx is the force resulting from potential energy U(x).

We now differentiate the current with respect to time and examine its behavior at extrema—local minima or maxima. We find

\frac{\partial J}{\partial t} = \frac{D(x)}{k_B T}\, f(x)\,\frac{\partial p}{\partial t} - \frac{\partial}{\partial t}\!\left[ D(x)\,\frac{\partial p}{\partial x} \right], \quad (11)
= \frac{D(x)}{k_B T}\, f(x)\,\frac{\partial p}{\partial t} - D(x)\,\frac{\partial^2 p}{\partial x\,\partial t}, \quad (12)
= -\frac{D(x)}{k_B T}\, f(x)\,\frac{\partial J}{\partial x} + D(x)\,\frac{\partial^2 J}{\partial x^2} \quad [\text{General}], \quad (13)
= D(x)\,\frac{\partial^2 J}{\partial x^2} \quad [\text{Extrema only}], \quad (14)

where the third line is derived by equating ∂²p/∂t∂x = ∂²p/∂x∂t and then substituting for ∂p/∂t in both terms using the continuity relation (9). The last line is obtained because ∂J/∂x = 0 at a local extremum (in x).

Equation (14) is the sought-for result: it implies decay with time of local extrema in the current J. If x is a local maximum, then ∂²J/∂x² < 0, and conversely for a minimum; recall that D(x) is strictly positive. (Strictly speaking, for a local maximum, one has ∂²J/∂x² ≤ 0 rather than a strict inequality, but the case of vanishing second derivative is pathological for most physical systems.) With open boundaries, the global maximum is also a local maximum and must decrease in time. Likewise, the global minimum must increase. The global extrema therefore provide upper and lower bounds at any given t that tighten with time.

See below for discussion of boundaries and source/sink systems.

It is interesting to note that Eq. (13) resembles a Smoluchowski equation but for the current J instead of p. Except in the case of simple diffusion [f(x) = 0 and D(x) = D = const.], this is a “resemblance” only, in which the right-hand side cannot be written as the divergence of an effective current and hence the integral of the current is not a conserved quantity. However, the similarity may suggest why the current has a “self healing” quality like the probability itself—i.e., the tendency to relax toward the steady state distribution.

Maximum principle in a spatially bounded region

The preceding results could be obtained with elementary calculus, but characterizing current extrema in systems with more challenging boundary behavior requires the use of mathematical approaches not well known in the field of chemical physics. Mathematically, it is known that Eq. (13) is a “uniformly parabolic” equation—defined below—and therefore obeys a “maximum principle” (MP).40 The MP, in turn, implies the monotonic decay of extrema noted above, away from boundaries. In addition to vanishing first derivatives at extrema, the MP only requires the nonstrict inequality ∂²J/∂x² ≤ 0, or the corresponding inequality for a minimum. For reference, we note that a uniformly parabolic partial differential equation for a function u(x, t) takes the form

\frac{\partial u}{\partial t} = a(x,t)\,\frac{\partial^2 u}{\partial x^2} + b(x,t)\,\frac{\partial u}{\partial x}, \quad (15)

where a(x, t) ≥ a0 > 0 for all x and t in the domain of interest, with a0 being a constant. Note that b(x, t) is not restricted to be positive or negative.

The maximum principle dictates that if one considers the space-time domain defined by 0 ≤ x ≤ L and t1 ≤ t ≤ t2, then any local extremum must occur on the spatial boundaries (x = 0 or x = L) or at the initial time t1. Most importantly, the extremum cannot occur at the final time t2 away from the boundaries. Because t1 and t2 are arbitrary, one can take t1 arbitrarily close to t2 to infer monotonic decay of extrema which occur away from the boundaries. We note that nonrectangular space-time domains are covered by MPs to some extent.40

It is interesting that the Smoluchowski equation itself for p(x, t) does not generally take the form (15) and hence may not obey a maximum principle: the value of the maximum of p could grow over time. One example is the relaxation of an initially uniform distribution in a harmonic potential, which would develop an increasing peak at the energy minimum as equilibrium was approached. In simple (force-free) diffusion, which does conform to (15), the density does satisfy a maximum principle40 and must spread with time. The current, like the density in simple diffusion, tends toward a constant value in the steady state—even when there is a nonzero force.

Maximum principle for a continuous source-sink system

The case of primary interest is a source-sink feedback system because, as noted above, the steady current quantitatively characterizes the system’s kinetics. This 1D current is exactly the inverse MFPT, from the Hill relation (1). We have not yet explicitly considered the addition of source and sink terms in the Smoluchowski equation Eq. (9). With explicit inclusion of source-sink feedback in one dimension, we find that a paradigmatic system which obeys a maximum principle over the entire domain (including boundaries) is a finite interval with one end acting as a perfect sink and the other end being the source where flux from the sink is reintroduced. The source boundary is taken to be reflecting, so this is not a fully periodic system.

When the global maximum or minimum occurs at a boundary of a one-dimensional source-sink system—either at the sink or the other boundary—additional consideration (beyond what is discussed above) is necessary because the condition ∂J/∂x = 0 generally will not hold at the boundaries. However, as motivation, we point out that the same continuity arguments employed above in the discrete case apply in the continuous case as well, at least for the case of feedback to a single source state at a boundary. Intuitively, then, monotonic decay of extrema is again expected.

Mathematically, we start by considering a system bounded by an interval 0 ≤ x ≤ x1 with sink at x = 0 and source location xsrc ∈ (0, x1). The source is initially located in the interior of the interval for mathematical simplicity but later will be moved (infinitely close) to the boundary. The probability current reaching the sink is reinitialized at x = xsrc, while the x1 boundary is reflecting in this formulation. The governing equation therefore includes a source term,

\frac{\partial p(x,t)}{\partial t} = -\frac{\partial J(x,t)}{\partial x} - J(x{=}0,t)\,\delta(x - x_{\mathrm{src}}), \quad (16)

with current given again by (10), with sink boundary condition p(x = 0, t) = 0, and with reflecting boundary condition J(x = x1, t) = 0 to model a finite domain with no probability loss. The negative sign preceding the source term J(x = 0, t) δ(x − xsrc) is necessary because the current arriving to the sink (at the left side of the interval) is negative by convention. Note that (16) is a special case where feedback occurs at a point; more generally, instead of a delta function, an arbitrary distribution on the domain could be used in the second term on the right-hand side; see Appendix A.

In Appendix A, we show that (16) obeys a maximum principle regardless of the location of the source xsrc or the initial condition. However, on its own, this maximum principle does not establish the sought-for monotonic decay of extrema because maxima and minima could still occur on the spatial boundaries, or at the source, with increasing time. Note that the maximum principle applies only to global extrema inside the domain.

We therefore turn to an alternative formulation that includes a boundary source implicitly via boundary conditions without the source term of (16), and a more powerful maximum principle is also seen to hold. As shown in Appendix B, by taking the limit xsrc → x1 (or, equivalently, x1 → xsrc), we obtain the standard Smoluchowski description of Eqs. (9) and (10) with sink boundary condition p(x = 0, t) = 0, along with an additional boundary condition—the “periodicity” of current, namely, J(x = xsrc, t) = J(x = 0, t). The source term of Eq. (16) is no longer present, but identical behavior for p and J is obtained, along with a maximum principle, as shown in Appendix B.

In this special case, when the single source point occurs at a boundary, the periodicity of the current does not allow a local extremum on the boundary and leads to a maximum principle (MP) implying monotonic decay of extrema; see Appendix B. The MP for a periodic function indicates that the maximum in a space-time domain between arbitrary t1 < t2 must occur at the earlier time. This implies, inductively, the monotonic decay of local extrema in J—i.e., decrease with t of maxima and increase in minima.

Although monotonic decay of extrema may seem obvious from the discrete case, the maximum principle for the continuous case covers instances that may seem surprising. In looking for a counterexample, one could construct a system with a very low diffusion constant but large and spatially varying forces. For example, one could initialize a spatially bounded probability distribution on the side of an inverted parabolic potential: intuitively, one might expect the maximum current to increase as the probability packet mean velocity increases down the steepening potential. However, so long as the diffusion rate is finite, the spreading of the probability distribution (lowering the peak density) will counteract the increase in velocity. A numerical example is shown in Fig. 4.

FIG. 4.

Numerical data for diffusion down an inverted parabola potential in a source-sink system (MFPT = t̄ = 6.2 ns). Left: The system is initialized with a delta-distribution at the source, and the current toward the target is plotted from the source at x = 12.0 nm to the sink (target) at x = 0 and is seen to relax monotonically to the steady-state. Right: The maximum (solid black) and minimum (dashed black) currents converge monotonically toward the steady-state value with increasing time, while the current at the sink (red) relaxes nonmonotonically. Inset: potential energy in the domain.
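The qualitative behavior of Fig. 4 can be reproduced with a simple finite-difference sketch; the grid, time step, and exponential hopping-rate convention below are our own assumptions, not the FiPy discretization used for the figure.

```python
import numpy as np

# Finite-difference sketch of the Fig. 4 system: diffusion down an inverted
# parabola, source at the peak (x0 = 12 nm), sink at x = 0 with feedback.
# Grid, time step, sink rate, and hopping-rate convention are our own choices.
D = 0.26                  # nm^2/ns  (2.6e-10 m^2/s)
x0, L, n = 12.0, 12.0, 120
h = L / n
x = x0 - (np.arange(n) + 0.5) * h        # cell centers, source side first
bU = -(5.0 / 18.0) * (x - x0) ** 2       # beta*U for k = 5 kBT/(3 nm)^2

k0 = D / h**2
kf = k0 * np.exp(-0.5 * (bU[1:] - bU[:-1]))  # hops toward the sink
kb = k0 * np.exp(+0.5 * (bU[1:] - bU[:-1]))  # hops back toward the source
ks = k0                                       # assumed absorption rate at sink

P = np.zeros(n); P[0] = 1.0
dt = 0.005                                    # ns; satisfies dt*(kf+kb) < 1
jmax, jmin = [], []
for _ in range(8000):                         # evolve to t = 40 ns
    J = np.empty(n)
    J[:-1] = P[:-1] * kf - P[1:] * kb
    J[-1] = P[-1] * ks                        # current into the sink
    jmax.append(J.max()); jmin.append(J.min())
    dP = np.empty(n)
    dP[0] = J[-1] - J[0]                      # sink flux fed back to the source
    dP[1:] = J[:-1] - J[1:]
    P += dt * dP

jmax, jmin = np.array(jmax), np.array(jmin)
# Despite the accelerating drift, the extrema still relax monotonically:
print(np.all(np.diff(jmax) <= 1e-12), np.all(np.diff(jmin) >= -1e-12))
```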

Numerical evidence: One dimension

We have employed numerical analysis of one-dimensional systems to illustrate the behavior of the time-evolving current. In these examples, we define positive current as directed toward the sink.

We first examined a simple diffusion process with source at x = 1 and sink boundary condition at x = 0 using units where the diffusion coefficient is D = 1 and the mean first-passage time is t̄ = L²/(2D), where L = 1 is the domain length. In all examples, probability is initialized as a delta-function distribution at the source and propagated via numerical solution of the Smoluchowski equation using the FiPy package.41 In all examples, we have applied periodic boundary conditions for the current (with a reflecting boundary at the source), appropriate to describe the evolution of Eq. (16) for a single-source point at the system boundary (see Appendix B for a complete discussion of boundary conditions). Figure 2 shows clearly that the spatial maximum and minimum values bracket the true steady-state current. In this system, the minimum current value and the “target” current (at the sink) are identical.

FIG. 2.

Numerical data for simple diffusion in a source-sink system. Left: The system is initialized with all probability at the source x/L = 1, and the current toward the sink is seen to relax to steady behavior as t becomes a substantial fraction of the first-passage time t̄. Right: The maximum and minimum currents converge monotonically toward the steady-state value with increasing time.
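A rough finite-difference analog of this simple-diffusion example (our own discretization and time step, not the FiPy setup used for Fig. 2) confirms that the driven system relaxes to the predicted steady-state current J_ss = 1/t̄ = 2D/L².

```python
import numpy as np

# Finite-difference version of the Fig. 2 system: pure diffusion on [0, L],
# sink at x = 0, delta-function source at x = L with feedback, in units
# where D = L = 1.  The MFPT is L^2/(2D) = 0.5, so J_ss should approach 2.
D, L, n = 1.0, 1.0, 100
h = L / n
k = D / h**2                       # hopping rate of the discretized chain
P = np.zeros(n); P[0] = 1.0        # cell 0 sits at the source end, x ~ L
dt = 0.4 / k                       # stable explicit step (dt * 2k = 0.8)

nsteps = int(3.0 / dt)             # evolve to t = 3, many relaxation times
for _ in range(nsteps):
    J = np.empty(n)
    J[:-1] = k * (P[:-1] - P[1:])  # currents, positive toward the sink
    J[-1] = k * P[-1]              # current into the absorbing sink at x = 0
    dP = np.empty(n)
    dP[0] = J[-1] - J[0]           # sink flux re-injected at the source
    dP[1:] = J[:-1] - J[1:]
    P += dt * dP

flux = k * P[-1]
print(flux)                        # close to 1/MFPT = 2D/L^2 = 2
```

The small residual discrepancy from 2 is the O(h) discretization error of the grid, which vanishes as n grows.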

We also examined a numerical system with a potential barrier separating the source (x = 1.6 nm) and sink (x = 0) states. See Fig. 3. Parameters for this example were roughly intended to model the diffusion of a 1 nm sphere in water at 298 K: D = 2.6 × 10−10 m2/s, and a Gaussian barrier of height 10 kBT and width of 2 nm. Probability was again initialized at the source. Qualitatively the results are similar to the simple diffusion case, with the spatial maximum and minimum current bracketing the true steady-state current. In this system, the minimum current value and the current at the sink are identical. Here, the relaxation time to the steady state is roughly four orders of magnitude faster than the MFPT of ∼0.5 ms, as shown in Fig. 3.

FIG. 3.

Numerical data for diffusion over a central barrier in a source-sink system. Left: The system is initialized with all probability at the source, x/L = 1, and the current toward the sink at x = 0 is seen to relax to steady behavior at a fraction of the MFPT. Right: The currents at the maximum and sink (identical to the minimum in this system) converge monotonically toward the steady-state value with increasing time. Inset: potential energy in the domain.

Finally, we examined a one-dimensional system described in the last paragraph of “Maximum principle for a continuous source-sink system” in which the monotonic decay of the current may not be intuitive. We initialize a delta-function distribution at the top of an inverted parabolic potential U(x) = −(1/2)k(x − x0)² with force constant k = 5 kBT/(3 nm)², with the source at the peak (x0 = 12.0 nm) and the sink at x = 0. Dynamics parameters for this example are identical to the previous case, T = 298 K and D = 2.6 × 10⁻¹⁰ m²/s. Even though the mean velocity of the “particle” initialized at the top of the inverted parabola increases rapidly, this acceleration is counteracted by the spreading of the initial distribution. In accordance with the maximum principle, Fig. 4 shows that the current maximum (minimum) monotonically decreases (increases) until a steady-state is reached (MFPT = 6.2 ns). Interestingly, in this system, the minimum current value and the current at the sink differ. Although the maximum principle implies that the minimum current will increase monotonically over time, the MP does not intrinsically characterize the target current (at the sink), which may not be a minimum.

DISCUSSION OF MORE COMPLEX SYSTEMS

Should we expect that analogous bounds exist in cases of practical interest, when the current from a high-dimensional system is integrated over isosurfaces of a one-dimensional coordinate q? This is a situation often encountered in molecular simulation, where conformational transitions of interest require correlated motion between many hundreds or thousands of atoms and are observed along a handful of collective coordinates. In fact, there is no maximum principle for the locally defined current magnitude in higher dimensional spaces, but even when the local high-dimensional current magnitude does not monotonically decay, the flux over isosurfaces of a reaction coordinate may exhibit monotonic decay; see below and Appendix C. We have not derived general results for this case, but there are interesting hints in the literature that a more general formulation may be possible.

Most notably, Berezhkovskii and Szabo showed that the probability density p(ϕ, t) of the “committor” coordinate ϕ evolves according to a standard Smoluchowski equation under the assumption that “orthogonal coordinates” (along each isocommittor surface) are equilibrium-distributed according to a Boltzmann factor;42 see also Ref. 43. Note that the committor 0 ≤ ϕ ≤ 1 is defined in the full domain to be the probability of starting at each point and reaching a chosen target state before visiting the given initial state. Because our preceding derivation of current bounds for one-dimensional systems relied entirely on the Smoluchowski equation, it follows that the current projected onto the committor, J(ϕ), would be subject to the same bounds—so long as the additional assumption about equilibrated orthogonal coordinates holds.44–46

It is intriguing to note that the orthogonal equilibration assumption is true in one type of A → B steady state. Consider a steady state constructed using “EqSurf” feedback,47 in which probability arriving to the target state B is fed back to the surface of initial state A according to the distribution which would enter A (from B) in equilibrium; this procedure preserves the equilibrium distribution within A.47 For any steady state, the current is a vector field characterized by flow lines, each of which is always tangent to the current. Then, the probability density on any surface orthogonal to the flow lines must be in equilibrium: if this were not the case, a lack of detailed balance would lead to net flow of probability, violating the assumption of orthogonality to the current lines. A visual schematic of such a steady-state is shown in Fig. 5. The same orthogonal surfaces must also be isocommittor surfaces in the EqSurf case, which can be shown by direct calculation. Using the known relationship between the steady current, committor, and potential energy for the EqSurf steady state,42 one finds that the current is indeed parallel to the gradient of the committor,

\mathbf{J}^{\mathrm{ss}} = (1/Z)\, e^{-\beta U(\mathbf{x})}\, \nabla\phi(\mathbf{x}), \quad (17)

where x is the full set of configurational coordinates, ϕ is the committor, and Z is the system partition function. This special case of “orthogonal equilibration” is quite interesting, but we remind readers that the transient (pre-steady-state) behavior orthogonal to current lines has not been characterized here.

FIG. 5.

Schematic of flows and isocommittor surfaces. Steady-state current flow lines (solid lines with arrows) and committor isosurfaces (dashed lines) are shown in a bounded domain with source (A) and sink (B) states. As discussed in the text, no component of the steady-state current can flow along isocommittor surfaces, which also must exhibit the equilibrium distribution, when suitable boundary conditions are enforced.

We also provide numerical evidence for nearly monotonic relaxation behavior of the current in a highly complex system, an atomistic model of the protein NTL9 undergoing folding. Figure 6 shows the flux (total probability per second) crossing isosurfaces of a collective variable, the RMSD, which here is the minimum root mean-squared deviation of atom pair distances between a given configuration and a fixed reference folded configuration—minimized over all translations and rotations of the two configurations. Since the collective variable isosurfaces separate the folded and unfolded states, at a steady state, the flux will become constant across isosurfaces of the collective variable. Data were harvested from a prior study using the weighted ensemble (WE) approach, which was implemented with a source at one unfolded configuration and a sink at the folded state, defined as RMSD ≤1 Å.32 Although the RMSD is a distance measure from an arbitrary configuration to the folded state, it is not claimed to be a proxy for the committor coordinate described above. Note that the WE method runs a set of unbiased trajectories and performs occasional unbiased resampling in path space;8 thus, WE provides the correct time-evolution of currents and probability distributions, which are derived directly from the path ensemble.

FIG. 6.

Numerical data for a complex system—atomistic protein folding. The data show protein folding flux for atomistic, implicitly solvated NTL932 as a function of a projected coordinate (RMSD) averaged over several time intervals during a simulation. The flux is the total probability crossing the indicated RMSD isosurface per second. Data were obtained from weighted ensemble simulation, which orchestrates multiple trajectories to obtain unbiased information in the full space of coordinates over time—i.e., Fokker-Planck-equation behavior is recapitulated.8 Only positive (folding direction) current is shown although some RMSD increments exhibit negative flux in some time intervals due to incomplete sampling/noise.

Although the RMSD coordinate used in the NTL9 simulations is not likely to be an ideal reaction coordinate, we still observe monotonic relaxation of the flux profile. For this set of 30 weighted ensemble simulations of NTL9 protein folding, during the observed transient regime (where all trajectories were initialized in the unfolded state), the steady state is monotonically approached out to 45 ns molecular time (reflecting 225 μs of aggregate simulation). Furthermore, although the current profile was still evolving in the NTL9 simulations and not fully steady, using the Hill relation (1) for the MFPT from the flux into the folded state yielded a folding time of 0.2–2.0 ms, consistent with the experimental value.32

Identifying good reaction coordinates to describe long time scale conformational transitions remains a challenging problem in complex systems48–50 and is beyond the scope of the present study. For a perfect collective variable which captures the “slow” coordinates such that orthogonal degrees of freedom are equilibrated, the system can be effectively described by a 1D Smoluchowski equation, and thus, the global flux extrema will relax monotonically. Our hope is that this work, proving monotonic decay for the current in simple 1D systems, will inspire work to show how the projected current on imperfect reaction coordinates can provide bounds for the steady-state current.

In the realm of speculation, motivated in part by our numerical data, one can ask whether a variational principle should hold. That is, if there are projected coordinates with higher and lower current maxima, is the lower maximum always a valid bound? This is a question for future investigation.

IMPLICATIONS FOR NUMERICAL COMPUTATIONS

There is significant motivation for pursuing steady-state behavior: in the nonequilibrium source-sink feedback systems studied here, the steady-state flux yields the mean first-passage time (MFPT) for transitions between two macrostates of interest via the Hill relation, Eq. (1). The relaxation time to the steady state can be many orders of magnitude shorter than the MFPT in kinetically well-separated systems without significant intermediate metastable states: see Fig. 3. Hence, large gains in estimating the MFPT could be obtained by sampling from the short-time nonequilibrium trajectory ensemble if the flux can be tightly bounded from above and below. Such transient information has been leveraged in Markov state modeling approaches,25,26,28 but a lack of separation of time scales between individual states can bias kinetic predictions.51 When sampling from the nonequilibrium trajectory ensemble, the concerns are different. We propose that the observed transient flux can bound the steady-state flux along suitable reaction coordinates42 with clear separation of time scales between "slow" reaction progress and orthogonal degrees of freedom, that is, in systems where transitions are effectively one-dimensional. These bounds become tighter as sampling time increases; the MFPT is estimated exactly via (1) once the sampling converges to a steady state.

In terms of practical estimators, having upper and lower bounds based on the spatial variation of the flux implies that any spatial average of the flux is a valid estimator of the steady-state flux, one that must converge to the true value at large t. For a high-dimensional system, the "spatial" average would correspond to an average along the collective coordinate exhibiting the bounds. Such an average could be linear or nonlinear.

The potential value of such average-based estimators can be seen from the spatial current (flux) profiles plotted in Figs. 2–6, where the MFPT is estimated from the flux into the sink, which is essentially the minimum flux during the transient period prior to the steady state. It seems clear that averaging among values between the minimum and maximum would yield, at moderate t, values much closer to the steady-state flux reached at long times. Such estimators will be explored in future work.
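As a concrete illustration of such average-based estimators, the following sketch is our own toy construction (not the paper's data): the profile `flux_profile`, its relaxation time, and all parameter values are hypothetical. It builds a transient flux profile that relaxes toward a uniform steady-state value and checks that the spatial extrema bracket the steady flux, so that any spatial average is a valid estimator whose bracket tightens with time.

```python
import numpy as np

# Hypothetical illustration (not the paper's data): a transient 1D flux
# profile J(x, t) that relaxes toward a spatially uniform steady-state
# value J_ss.  The running spatial extrema bracket J_ss, so any spatial
# average of J(x, t) is itself a valid estimator of J_ss.
J_ss = 1.0e-3                       # assumed steady-state flux (arbitrary units)
x = np.linspace(0.0, 1.0, 101)      # projected-coordinate grid

def flux_profile(t, tau=1.0):
    """Toy transient: a spatial modulation decaying on time scale tau."""
    return J_ss * (1.0 + np.exp(-t / tau) * np.cos(2.0 * np.pi * x))

widths = []
for t in (0.5, 1.0, 2.0):
    J = flux_profile(t)
    lo, hi = J.min(), J.max()       # lower and upper bounds on J_ss
    est = J.mean()                  # simple linear spatial average
    assert lo <= J_ss <= hi         # transient extrema bracket the steady flux
    assert lo <= est <= hi          # so the spatial average is a valid estimator
    widths.append(hi - lo)

# The bracket tightens monotonically as the profile relaxes.
assert widths[0] > widths[1] > widths[2]
print("bracket widths:", widths)
```

Any weighting of values between the running minimum and maximum would work equally well here; the linear average is just the simplest choice.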

ACKNOWLEDGMENTS

We are very appreciative of discussions with Sasha Berezhkovskii and David Zuckerman. This work was supported by the NSF, Grant Nos. CHE-1566001 (to D.E.M.), DMS-1522398 (D.A.), and DMS-1818726 (D.A. and G.S.), by the NIH, Grant No. GM115805 (D.M.Z.), and by the Robert A. Welch Foundation, Grant No. F-1514 (D.E.M.).

APPENDIX A: MAXIMUM PRINCIPLE FOR A SOURCE-SINK SYSTEM WITH GENERAL SOURCE TERM

In this appendix, we derive a nonstandard maximum principle for a general source-sink system. Recall that we consider a system on the spatial interval 0 ≤ x ≤ x₁ with a sink at x = 0. We introduce a source γ = γ(x) corresponding to a probability density function with derivative γ′. We suppose that γ is defined on an interval away from the source and sink; thus, γ(x) = 0 for x < a or x > b, where a and b satisfy 0 < a < b < x₁. It is not essential that there be a boundary at x₁; indeed, the arguments below apply just as well on 0 ≤ x < ∞. We retain the notation from the previous sections for consistency. Let p = p(x, t) be the probability density solving

\frac{\partial p}{\partial t} = -\frac{\partial J}{\partial x} - \gamma(x)\,J(x{=}0), \qquad p(x{=}0) = 0, \qquad J(x{=}x_1) = 0. \tag{A1}

J is the current defined in (10), and the overall negative sign of the source term is simply due to the overall negative direction of the current given the sink at x = 0.

To obtain the maximum principle for the current, we first differentiate (10) with respect to time t and use (A1) to obtain

\frac{\partial J}{\partial t} = -\frac{D(x)}{k_B T}\,f(x)\,\frac{\partial J}{\partial x} + D(x)\,\frac{\partial^2 J}{\partial x^2} - \frac{D(x)}{k_B T}\,f(x)\,\gamma(x)\,J(x{=}0) + D(x)\,\gamma'(x)\,J(x{=}0), \tag{A2}

which differs from (13) due to the source terms. Let Ω_T be the set in the space-time domain corresponding to (x, t) with 0 < t ≤ T and either 0 < x < a or b < x < x₁. We claim that a (weak) maximum principle holds for J in Ω_T.

Let Ω̄_T consist of (x, t) with 0 ≤ t ≤ T and either 0 ≤ x ≤ a or b ≤ x ≤ x₁. A maximum principle will imply that the maximum of J over Ω̄_T must be attained outside of Ω_T. This means that the maximum of J over 0 ≤ t ≤ T and 0 ≤ x ≤ x₁ must be attained either at t = 0, or at t > 0 with x = 0, x = x₁, or a ≤ x ≤ b. In other words, the maximum current occurs either at time zero, on the spatial boundary, or in the source states. We prove this by showing that a contradiction is reached otherwise, as follows:

  • Let (x₀, t₀) be a point where J achieves its maximum in Ω̄_T.

  • Suppose, for a moment, that the maximum is nondegenerate, 0 < t₀ < T, and 0 < x₀ < a or b < x₀ < x₁. By nondegenerate, we mean that ∂²J/∂x²(x₀, t₀) < 0. Since (x₀, t₀) is a maximum, ∂J/∂x(x₀, t₀) = 0. Moreover, γ(x₀) = γ′(x₀) = 0, as γ(x) = 0 for x < a or x > b. But this contradicts (A2), since we are left with 0 = ∂J/∂t(x₀, t₀) = D(x₀) ∂²J/∂x²(x₀, t₀) < 0.

  • Now suppose the maximum is nondegenerate with t₀ = T, and 0 < x₀ < a or b < x₀ < x₁. Then, ∂J/∂t(x₀, T) ≥ 0, ∂J/∂x(x₀, T) = γ(x₀) = γ′(x₀) = 0, and ∂²J/∂x²(x₀, T) < 0. Again, we have a contradiction, since 0 ≤ ∂J/∂t(x₀, T) = D(x₀) ∂²J/∂x²(x₀, T) < 0.

  • In general, the maximum may be degenerate, and we make the following perturbative argument. Let ε > 0 and define

J_\varepsilon(x,t) = J(x,t) - \varepsilon t.

Then,

\frac{\partial J_\varepsilon}{\partial t} = -\varepsilon - \frac{D(x)}{k_B T}\,f(x)\,\frac{\partial J_\varepsilon}{\partial x} + D(x)\,\frac{\partial^2 J_\varepsilon}{\partial x^2} - \frac{D(x)}{k_B T}\,f(x)\,\gamma(x)\,\bigl(J_\varepsilon(x{=}0) + \varepsilon t\bigr) + D(x)\,\gamma'(x)\,\bigl(J_\varepsilon(x{=}0) + \varepsilon t\bigr). \tag{A3}

Consider Ω̄_T \ Ω_T, defined as all space-time points in Ω̄_T that are not in Ω_T. Let M = max_{Ω̄_T \ Ω_T} J be the maximum of J over Ω̄_T \ Ω_T. We claim that the maximum of J over Ω̄_T is at most M, that is, max_{Ω̄_T} J ≤ M. To prove this, we will show instead that the maximum of J_ε over Ω̄_T is at most M, that is, max_{Ω̄_T} J_ε ≤ M. Once the latter is established, we have max_{Ω̄_T} J ≤ M + εT, and the result follows by letting ε → 0.

First, notice that max_{Ω̄_T \ Ω_T} J_ε ≤ M since, by the definition of M, we have J_ε(x, t) = J(x, t) − εt ≤ M − εt ≤ M for any (x, t) in Ω̄_T \ Ω_T. If J_ε(x, t) has a maximum at (x₀, t₀) in Ω_T, then ∂J_ε/∂t(x₀, t₀) ≥ 0, while ∂J_ε/∂x(x₀, t₀) = 0, ∂²J_ε/∂x²(x₀, t₀) ≤ 0, and γ(x₀) = γ′(x₀) = 0. But this contradicts (A3), since

0 \le \frac{\partial J_\varepsilon}{\partial t}(x_0,t_0) = -\varepsilon + D(x_0)\,\frac{\partial^2 J_\varepsilon}{\partial x^2}(x_0,t_0) < 0.

Therefore, the maximum of J_ε over Ω̄_T must occur on Ω̄_T \ Ω_T, and max_{Ω̄_T} J_ε = max_{Ω̄_T \ Ω_T} J_ε ≤ M, as desired.

  • The exact same arguments can be applied using “minimum” in place of “maximum,” where in the last step above, ε would be replaced with −ε.

In Appendix B, due to the periodic boundary condition and the source being at the boundary, we conclude that the maximum of J is attained at the initial time. This leads to the result, cited in the main text above, that the time-dependent current gives bounds for the steady-state current Jss. Note that this is not true in the setting of this appendix, since for a general source γ, the maximum may be attained after the initial time, either in the source states or on the spatial boundary.

APPENDIX B: MAXIMUM PRINCIPLE FOR A SOURCE-SINK SYSTEM WITH POINT SOURCE AT THE BOUNDARY

Here, we show that a source-sink system with a point source at the boundary x₁ and a sink at 0 satisfies a maximum principle and that the extrema of the time-dependent current give bounds for the steady-state current Jss. We begin with a special case of (A1) that corresponds to a source-sink system on 0 ≤ x ≤ x₁ with a point source at x_src, a sink at 0, and a reflecting boundary at x = x₁,

\frac{\partial p}{\partial t} = -\frac{\partial J}{\partial x} - \delta(x - x_{\mathrm{src}})\,J(x{=}0), \qquad p(x{=}0) = 0, \qquad J(x{=}x_1) = 0, \tag{B1}

where we assume 0 < xsrc < x1.

We first show that, when x₁ → x_src, the PDE in (B1) is, in a sense, equivalent to the ordinary continuity relation (no source term) but with different boundary conditions, namely,

\frac{\partial \tilde p}{\partial t} = -\frac{\partial \tilde J}{\partial x}, \qquad \tilde p(x{=}0) = 0, \qquad \tilde J(x{=}0) = \tilde J(x{=}x_{\mathrm{src}}). \tag{B2}

Above, J is given by (10), and J̃ is defined analogously, with p̃ taking the place of p. Equation (B1) is posed on 0 ≤ x ≤ x₁, while (B2) is posed on 0 ≤ x ≤ x_src. Equation (B2) thus corresponds to a source-sink system with a sink at 0 and a reflecting boundary and point source at x_src. Since (B2) is identical to (13) but with a periodic boundary condition on the current, we obtain a maximum principle for J as well as monotonic convergence of the extrema of the time-dependent current toward its steady-state value Jss, as discussed below. An intuitive argument supporting current periodicity is given at the end of this appendix.

To address the limit x₁ → x_src, we consider the "weak solutions" associated with (B1) and (B2); weak solutions are a common framework for studying partial differential equations.52 The weak solutions are obtained by multiplying by a smooth test function and then integrating by parts. For (B1), if ϕ = ϕ(x) is a smooth function on 0 ≤ x ≤ x₁ vanishing at x = 0, with derivative ϕ′,

\begin{aligned}
\int_0^{x_1} \frac{\partial p}{\partial t}\,\phi\,dx &= -\int_0^{x_1} \frac{\partial J}{\partial x}\,\phi\,dx - \int_0^{x_1} \delta(x - x_{\mathrm{src}})\,J(x{=}0)\,\phi\,dx \\
&= -\int_0^{x_1} \frac{\partial J}{\partial x}\,\phi\,dx - J(x{=}0)\,\phi(x_{\mathrm{src}}) \\
&= J(x{=}0)\,\phi(0) - J(x{=}x_1)\,\phi(x_1) - J(x{=}0)\,\phi(x_{\mathrm{src}}) + \int_0^{x_1} \phi'\,J\,dx \\
&= -J(x{=}0)\,\phi(x_{\mathrm{src}}) + \int_0^{x_1} \phi'\,J\,dx.
\end{aligned} \tag{B3}

In parallel, for (B2), we multiply by ϕ̃, a smooth function on 0 ≤ x ≤ x_src vanishing at x = 0, with derivative ϕ̃′,

\begin{aligned}
\int_0^{x_{\mathrm{src}}} \frac{\partial \tilde p}{\partial t}\,\tilde\phi\,dx &= -\int_0^{x_{\mathrm{src}}} \frac{\partial \tilde J}{\partial x}\,\tilde\phi\,dx \\
&= \tilde J(x{=}0)\,\tilde\phi(0) - \tilde J(x{=}x_{\mathrm{src}})\,\tilde\phi(x_{\mathrm{src}}) + \int_0^{x_{\mathrm{src}}} \tilde\phi'\,\tilde J\,dx \\
&= -\tilde J(x{=}0)\,\tilde\phi(x_{\mathrm{src}}) + \int_0^{x_{\mathrm{src}}} \tilde\phi'\,\tilde J\,dx.
\end{aligned} \tag{B4}

Thus, the PDEs (B1) and (B2) have the same weak solutions in the limit x₁ → x_src.

We can now obtain a maximum principle on the current. Dropping the tildes and differentiating (B2) yields (13), together with the boundary conditions

J(x{=}0) = J(x{=}x_{\mathrm{src}}), \qquad \frac{\partial J}{\partial x}(x{=}0) = 0. \tag{B5}

The last boundary condition comes from p(x = 0) = 0. Indeed, p(x = 0) = 0 implies ∂J/∂x(x=0) = −∂p/∂t(x=0) = 0.

With the periodic boundary condition on the current, a version of the maximum principle applies and shows that the max of J over the time-space domain occurs at the initial time,

\max_{t_0 \le t \le T,\; 0 \le x \le x_{\mathrm{src}}} J(x,t) = \max_{0 \le x \le x_{\mathrm{src}}} J(x,t_0).

This leads to the monotonic convergence discussed above, as follows. Suppose 0 ≤ t₁ ≤ t₂ ≤ T. Then, the maximum of J at time t₁ is greater than or equal to the maximum of J at time t₂,

\max_{0 \le x \le x_{\mathrm{src}}} J(x,t_1) = \max_{t_1 \le s \le T,\; 0 \le x \le x_{\mathrm{src}}} J(x,s) \ge \max_{t_2 \le s \le T,\; 0 \le x \le x_{\mathrm{src}}} J(x,s) = \max_{0 \le x \le x_{\mathrm{src}}} J(x,t_2).

Of course, an analogous statement holds for the minimum: min_{0 ≤ x ≤ x_src} J(x, t₁) ≤ min_{0 ≤ x ≤ x_src} J(x, t₂).

Intuition for the periodicity of the current can be gained from a "particle" or trajectory picture of the feedback process, where the current is the net number of trajectories passing a given x value per unit time. Note first that if there were no feedback to the boundary at x_src, then the net current there would vanish for any t > 0 because of reflection: every trajectory reaching the boundary from x < x_src would be reversed by construction, yielding zero net flow at the boundary. With feedback, every trajectory reaching the sink x = 0 is placed immediately at x = x_src, the source boundary. At that boundary, the current from non-feedback trajectories is zero by the reflectivity argument, and the number of injected trajectories exactly matches the number absorbed at the sink, implying J(x = x_src, t) = J(x = 0, t).
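The monotone relaxation proved in this appendix can also be checked numerically. The sketch below is our own minimal finite-difference discretization of the feedback system (not code from the paper), assuming pure diffusion (f = 0, constant D) on 0 ≤ x ≤ 1 with the sink at x = 0 and reinjection of the absorbed flux at the reflecting boundary x = 1; the grid size, time step, and uniform initial condition are arbitrary choices.

```python
import numpy as np

# Illustrative check (our sketch, not code from the paper): explicit finite
# differences for the source-sink feedback system with pure diffusion
# (f = 0, constant D) on 0 <= x <= 1.  Probability absorbed at the sink
# x = 0 is reinjected at the reflecting boundary x = 1.  The scheme's
# steady state has p proportional to x and a uniform current J ~ -2D,
# so the Hill relation gives MFPT ~ 1/(2D).
D, N = 1.0, 50
dx = 1.0 / N
dt = 0.2 * dx**2 / D                 # well inside the explicit stability limit
p = np.ones(N + 1)                   # uniform initial density; node 0 is the sink
p[0] = 0.0

maxima, minima = [], []
for step in range(1, int(2.0 / dt) + 1):
    J = -D * np.diff(p) / dx         # J = -D dp/dx at the N cell faces
    p[1:N] -= dt * np.diff(J) / dx   # continuity equation at interior nodes
    p[N] += dt * J[N - 1] / dx       # reflecting boundary at x = 1 (no outflux)
    p[N] += dt * (-J[0]) / dx        # reinject the flux absorbed at the sink
    p[0] = 0.0                       # sink: density pinned at zero
    if step % 500 == 0:              # record the spatial extrema of the current
        maxima.append(J.max())
        minima.append(J.min())

# Extrema relax monotonically toward the steady current and bracket it.
J_disc = -2.0 * D * N / (N + 1)      # steady current of the discrete scheme
assert all(a >= b - 1e-8 for a, b in zip(maxima, maxima[1:]))
assert all(a <= b + 1e-8 for a, b in zip(minima, minima[1:]))
assert abs(maxima[-1] - J_disc) < 1e-3 and abs(minima[-1] - J_disc) < 1e-3
print("MFPT estimate via Hill relation:", 1.0 / abs(J[0]))
```

For this setup, the discrete steady current is −2DN/(N + 1), so the Hill relation recovers the known MFPT of L²/(2D) for a reflecting-absorbing interval up to O(1/N) discretization error.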

APPENDIX C: ANALYSIS OF A TWO-DIMENSIONAL EXAMPLE

It is instructive to study a simple two-dimensional example in detail. Two important features emerge: (i) there is no maximum principle because the locally defined current magnitude can increase over time away from a boundary and (ii) for the example below, there is, nevertheless, a one-dimensional projection of the current which does exhibit monotonic decay.

We consider the evolution of a probability distribution p(x, t) in a two-dimensional vector space x = (x, y) with 0 ≤ x ≤ L and −∞ < y < ∞, defining an infinite rectangular strip. We take the boundaries at x = 0, L to be periodic, meaning

p(x{=}0, t) = p(x{=}L, t), \qquad \frac{\partial p}{\partial x}(x{=}0, t) = \frac{\partial p}{\partial x}(x{=}L, t). \tag{C1}

Note that this periodicity assumption is not a source-sink condition. The probability distribution p(x, t) evolves according to the continuity equation,

\frac{\partial p(\mathbf{x}, t)}{\partial t} = -\nabla\cdot\mathbf{J}(\mathbf{x}, t), \tag{C2}

where the Smoluchowski current J(x,t) has the usual drift and diffusion terms,

\mathbf{J}(\mathbf{x}, t) = \beta D\,\mathbf{f}(\mathbf{x})\,p(\mathbf{x}, t) - D\,\nabla p(\mathbf{x}, t). \tag{C3}

We consider a potential U(x) = (1/2)ky² − bx, so that the force vector is f(x) = (b, −ky). The constant force in x is qualitatively similar to a source-sink setup, but probability can cross the boundary in both directions here. In the y direction, there is simple harmonic behavior. The steady-state solution p∞ is uniform in x and varies only in y,

p_\infty(\mathbf{x}) = \frac{1}{L}\sqrt{\frac{\beta k}{2\pi}}\,\exp\!\left(-\frac{\beta k y^2}{2}\right). \tag{C4}

At a steady state, there is a persistent current in x due to periodicity but no current in the y direction,

J_x^\infty = \beta D b\,p_\infty, \qquad J_y^\infty = 0. \tag{C5}

Our interest is focused on the current extrema and particularly the maximum in this case. The maximum steady-state current magnitude, at y = 0, is found from (C3) and (C4) to be

|J^\infty|_{\max}^2 = \frac{\beta^3 D^2 b^2 k}{2\pi L^2}. \tag{C6}

To test for monotonic behavior, we employ the current magnitude resulting from an arbitrary initial condition p₀(x, t = 0), which is given by

|J_0|^2 = D^2\left(\frac{\partial p_0}{\partial y}\right)^{\!2} + (\beta D k y\,p_0)^2 + 2\beta D^2 k y\,p_0\,\frac{\partial p_0}{\partial y} - 2\beta D^2 b\,p_0\,\frac{\partial p_0}{\partial x} + (\beta D b\,p_0)^2 + D^2\left(\frac{\partial p_0}{\partial x}\right)^{\!2}. \tag{C7}

Because the diffusion coefficient scales linearly with temperature, D ∝ β⁻¹, the terms here scale as β⁻², β⁻¹, and β⁰. From (C6), however, the steady-state flux magnitude scales as |J∞|²_max ∝ β, so generically, given any initial condition, the final steady-state current can be made larger than the initial current by reducing the temperature. This two-dimensional behavior is distinctly different from that found above in the one-dimensional case, where the current obeys a maximum principle and thus the maximum must occur at the initial condition or on the system boundary.

As a specific example, consider the initial condition of a distribution uniform in x and Gaussian in y, p₀(x) = (1/L)√(C/2π) exp(−Cy²/2), which differs from the steady distribution when C ≠ βk. The initial current magnitude is then

|J_0|^2 = |J^\infty|_{\max}^2\,\frac{C}{\beta k}\left[1 + y^2\,\frac{(\beta k - C)^2}{(\beta b)^2}\right]\exp(-C y^2). \tag{C8}

Note that the symmetric current maxima are shifted away from y = 0. Setting the derivative of (C8) equal to zero, the maxima are located at y_max = ±√{[(βk − C)² − C(βb)²] / [C(βk − C)²]}. Inserting y_max into (C8) yields

|J_0|_{\max}^2 = |J^\infty|_{\max}^2\,\frac{(\beta k - C)^2}{\beta k\,(\beta b)^2\,e}\,\exp\!\left[\frac{C(\beta b)^2}{(\beta k - C)^2}\right] \tag{C9}

for the maximum current magnitude.

Monotonic decay of the maximum is not always observed for this system. Over much of parameter space, |J₀|²_max > |J∞|²_max, but not when (βb)² > βk − C. Defining the equilibrium root-mean-square fluctuation lengths σ_x = (βb)⁻¹ and σ_y = (βk)^{−1/2}, when the width of the initial distribution is very large compared to the thermal fluctuation lengths, the ratio of the initial to steady-state maximum current is |J₀|_max/|J∞|_max ≈ σ_x/σ_y = β^{−1/2}√k/b. The initial current can be tuned to be smaller than the steady-state current by lowering the temperature or by reducing the ratio of longitudinal (x) to transverse (y) fluctuations. This simple example demonstrates that there is no maximum principle for the magnitude of the current in dimensionality exceeding one.
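The closed-form extrema can be cross-checked numerically. The sketch below is our own consistency check (not code from the paper); the parameter values β = 1, k = 4, b = 1, C = 0.5 are arbitrary choices satisfying (βk − C)² > C(βb)², so that the maxima sit away from y = 0. It maximizes the ratio |J₀|²/|J∞|²_max of Eq. (C8) over y on a fine grid and compares against the analytic y_max and Eq. (C9).

```python
import numpy as np

# Consistency check (our illustration) of the closed-form results above:
# maximize the initial current magnitude of Eq. (C8) numerically over y
# and compare with the analytic maximum location and value, Eq. (C9).
beta, k, b, C = 1.0, 4.0, 1.0, 0.5   # arbitrary; |J_inf|_max^2 scales out

y = np.linspace(-5.0, 5.0, 200001)
# Ratio |J0|^2 / |J_inf|_max^2 as a function of y, Eq. (C8)
ratio = (C / (beta * k)) \
    * (1.0 + y**2 * (beta * k - C)**2 / (beta * b)**2) \
    * np.exp(-C * y**2)

# Analytic maximum location and maximum ratio, per the text and Eq. (C9)
y_max = np.sqrt(((beta * k - C)**2 - C * (beta * b)**2)
                / (C * (beta * k - C)**2))
ratio_max = ((beta * k - C)**2 / (beta * k * (beta * b)**2 * np.e)) \
    * np.exp(C * (beta * b)**2 / (beta * k - C)**2)

i = np.argmax(ratio)
assert abs(abs(y[i]) - y_max) < 1e-3     # numeric maximum sits at +/- y_max
assert abs(ratio[i] - ratio_max) < 1e-7  # and its value matches (C9)
print("y_max =", y_max, "  |J0|max^2 / |Jinf|max^2 =", ratio_max)
```

For these parameters the ratio exceeds unity, i.e., the initial current maximum is larger than the steady-state one, consistent with the discussion of when monotonic decay fails.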

Note that, in this example, the dynamics projected onto the x and y dimensions are independent because neither the potential nor the thermal noise couples x and y. In this case, Eq. (C2) is fully separable, with the projected currents in x and y each satisfying a maximum principle individually.


REFERENCES

  • 1. Hopfield J. J., "Kinetic proofreading: A new mechanism for reducing errors in biosynthetic processes requiring high specificity," Proc. Natl. Acad. Sci. U. S. A. 71, 4135–4139 (1974). 10.1073/pnas.71.10.4135
  • 2. Hill T. L., Free Energy Transduction and Biochemical Cycle Kinetics (Dover, 2004).
  • 3. Beard D. A. and Qian H., Chemical Biophysics: Quantitative Analysis of Cellular Systems (Cambridge University Press, 2008).
  • 4. Lee Y., Phelps C., Huang T., Mostofian B., Wu L., Zhang Y., Chang Y. H., Stork P. J., Gray J. W., Zuckerman D. M. et al., "High-throughput single-particle tracking reveals nested membrane nanodomain organization that dictates Ras diffusion and trafficking," e-print 10.1101/552075 (2019).
  • 5. Jarzynski C., "Nonequilibrium equality for free energy differences," Phys. Rev. Lett. 78, 2690 (1997). 10.1103/physrevlett.78.2690
  • 6. Crooks G. E., "Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences," Phys. Rev. E 60, 2721 (1999). 10.1103/physreve.60.2721
  • 7. Seifert U., "Stochastic thermodynamics, fluctuation theorems and molecular machines," Rep. Prog. Phys. 75, 126001 (2012). 10.1088/0034-4885/75/12/126001
  • 8. Zhang B. W., Jasnow D., and Zuckerman D. M., "The 'weighted ensemble' path sampling method is statistically exact for a broad class of stochastic processes and binning procedures," J. Chem. Phys. 132, 054107 (2010). 10.1063/1.3306345
  • 9. Dickson A. and Dinner A. R., "Enhanced sampling of nonequilibrium steady states," Annu. Rev. Phys. Chem. 61, 441–459 (2010). 10.1146/annurev.physchem.012809.103433
  • 10. Ytreberg F. M. and Zuckerman D. M., "Single-ensemble nonequilibrium path-sampling estimates of free energy differences," J. Chem. Phys. 120, 10876–10879 (2004). 10.1063/1.1760511
  • 11. Bello-Rivas J. M. and Elber R., "Exact milestoning," J. Chem. Phys. 142, 094102 (2015). 10.1063/1.4913399
  • 12. Nilmeier J. P., Crooks G. E., Minh D. D., and Chodera J. D., "Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation," Proc. Natl. Acad. Sci. U. S. A. 108, E1009 (2011). 10.1073/pnas.1106094108
  • 13. Chapman S., "On the Brownian displacements and thermal diffusion of grains suspended in a non-uniform fluid," Proc. R. Soc. London, Ser. A 119, 34–54 (1928). 10.1098/rspa.1928.0082
  • 14. Kolmogoroff A., "On analytical methods in the theory of probability," Math. Ann. 104, 415–458 (1931). 10.1007/bf01457949
  • 15. Van Kampen N. G., Stochastic Processes in Physics and Chemistry (Elsevier, 1992), Vol. 1.
  • 16. Risken H. and Frank T., The Fokker-Planck Equation: Methods of Solution and Applications (Springer Science & Business Media, 1996), Vol. 18.
  • 17. Gardiner C., Stochastic Methods (Springer, Berlin, 2009), Vol. 4.
  • 18. Berezhkovskii A. M., Szabo A., Greives N., and Zhou H.-X., "Multidimensional reaction rate theory with anisotropic diffusion," J. Chem. Phys. 141, 204106 (2014). 10.1063/1.4902243
  • 19. Ghysels A., Venable R. M., Pastor R. W., and Hummer G., "Position-dependent diffusion tensors in anisotropic media from simulation: Oxygen transport in and through membranes," J. Chem. Theory Comput. 13, 2962–2976 (2017). 10.1021/acs.jctc.7b00039
  • 20. Grafke T. and Vanden-Eijnden E., "Numerical computation of rare events via large deviation theory," Chaos 29, 063118 (2019). 10.1063/1.5084025
  • 21. Polotto F., Drigo Filho E., Chahine J., and de Oliveira R. J., "Supersymmetric quantum mechanics method for the Fokker–Planck equation with applications to protein folding dynamics," Physica A 493, 286–300 (2018). 10.1016/j.physa.2017.10.021
  • 22. Cossio P., Hummer G., and Szabo A., "Transition paths in single-molecule force spectroscopy," J. Chem. Phys. 148, 123309 (2018). 10.1063/1.5004767
  • 23. del Razo M. J., Qian H., and Noé F., "Grand canonical diffusion-influenced reactions: A stochastic theory with applications to multiscale reaction-diffusion simulations," J. Chem. Phys. 149, 044102 (2018). 10.1063/1.5037060
  • 24. Berezhkovskii A. M. and Bezrukov S. M., "Mapping intrachannel diffusive dynamics of interacting molecules onto a two-site model: Crossover in flux concentration dependence," J. Phys. Chem. B 122, 10996–11001 (2018). 10.1021/acs.jpcb.8b04371
  • 25. Singhal N., Snow C. D., and Pande V. S., "Using path sampling to build better Markovian state models: Predicting the folding rate and mechanism of a tryptophan zipper beta hairpin," J. Chem. Phys. 121, 415–425 (2004). 10.1063/1.1738647
  • 26. Noé F., Horenko I., Schütte C., and Smith J. C., "Hierarchical analysis of conformational dynamics in biomolecules: Transition networks of metastable states," J. Chem. Phys. 126, 155102 (2007). 10.1063/1.2714539
  • 27. Voelz V. A., Bowman G. R., Beauchamp K., and Pande V. S., "Molecular simulation of ab initio protein folding for a millisecond folder NTL9 (1–39)," J. Am. Chem. Soc. 132, 1526–1528 (2010). 10.1021/ja9090353
  • 28. Chodera J. D. and Noé F., "Markov state models of biomolecular conformational dynamics," Curr. Opin. Struct. Biol. 25, 135–144 (2014). 10.1016/j.sbi.2014.04.002
  • 29. Makarov D. E., Single Molecule Science: Physical Principles and Models (CRC Press, 2015).
  • 30. Huber G. A. and Kim S., "Weighted-ensemble Brownian dynamics simulations for protein association reactions," Biophys. J. 70, 97 (1996). 10.1016/s0006-3495(96)79552-8
  • 31. Suarez E., Lettieri S., Zwier M. C., Stringer C. A., Subramanian S. R., Chong L. T., and Zuckerman D. M., "Simultaneous computation of dynamical and equilibrium information using a weighted ensemble of trajectories," J. Chem. Theory Comput. 10, 2658–2667 (2014). 10.1021/ct401065r
  • 32. Adhikari U., Mostofian B., Petersen A., and Zuckerman D. M., "Computational estimation of microsecond to second atomistic folding times," J. Am. Chem. Soc. 141, 6519 (2019). 10.1021/jacs.8b10735
  • 33. Bhatt D., Zhang B. W., and Zuckerman D. M., "Steady-state simulations using weighted ensemble path sampling," J. Chem. Phys. 133, 014110 (2010). 10.1063/1.3456985
  • 34. Hänggi P., Talkner P., and Borkovec M., "Reaction-rate theory: Fifty years after Kramers," Rev. Mod. Phys. 62, 251 (1990). 10.1103/revmodphys.62.251
  • 35. Warmflash A., Bhimalapuram P., and Dinner A. R., "Umbrella sampling for nonequilibrium processes," J. Chem. Phys. 127, 154112 (2007). 10.1063/1.2784118
  • 36. Dickson A., Warmflash A., and Dinner A. R., "Nonequilibrium umbrella sampling in spaces of many order parameters," J. Chem. Phys. 130, 074104 (2009). 10.1063/1.3070677
  • 37. Aristoff D. and Zuckerman D. M., "Optimizing weighted ensemble sampling of steady states," preprint arXiv:1806.00860 (2018).
  • 38. Zuckerman D. M., "Statistical biophysics blog: 'Proof' of the Hill relation between probability flux and mean first-passage time," http://statisticalbiophysicsblog.org/?p=8, 2015.
  • 39. Metzler R. and Klafter J., "The random walk's guide to anomalous diffusion: A fractional dynamics approach," Phys. Rep. 339, 1–77 (2000). 10.1016/s0370-1573(00)00070-3
  • 40. Protter M. H. and Weinberger H. F., Maximum Principles in Differential Equations (Springer Science & Business Media, 2012).
  • 41. Guyer J. E., Wheeler D., and Warren J. A., "FiPy: Partial differential equations with Python," Comput. Sci. Eng. 11, 6 (2009). 10.1109/mcse.2009.52
  • 42. Berezhkovskii A. M. and Szabo A., "Diffusion along the splitting/commitment probability reaction coordinate," J. Phys. Chem. B 117, 13115–13119 (2013). 10.1021/jp403043a
  • 43. Lu J. and Vanden-Eijnden E., "Exact dynamical coarse-graining without time-scale separation," J. Chem. Phys. 141, 044109 (2014). 10.1063/1.4890367
  • 44. Vanden-Eijnden E., "Fast communications: Numerical techniques for multi-scale dynamical systems with stochastic effects," Commun. Math. Sci. 1, 385–391 (2003). 10.4310/cms.2003.v1.n2.a11
  • 45. Hartmann C., "Model reduction in classical molecular dynamics," Ph.D. thesis, Free University Berlin, 2007.
  • 46. Legoll F. and Lelievre T., "Effective dynamics using conditional expectations," Nonlinearity 23, 2131 (2010). 10.1088/0951-7715/23/9/006
  • 47. Bhatt D. and Zuckerman D. M., "Beyond microscopic reversibility: Are observable nonequilibrium processes precisely reversible?," J. Chem. Theory Comput. 7, 2520–2527 (2011). 10.1021/ct200086k
  • 48. Best R. B. and Hummer G., "Reaction coordinates and rates from transition paths," Proc. Natl. Acad. Sci. U. S. A. 102, 6732–6737 (2005). 10.1073/pnas.0408098102
  • 49. Rohrdanz M. A., Zheng W., and Clementi C., "Discovering mountain passes via torchlight: Methods for the definition of reaction coordinates and pathways in complex macromolecular reactions," Annu. Rev. Phys. Chem. 64, 295–316 (2013). 10.1146/annurev-physchem-040412-110006
  • 50. McGibbon R. T., Husic B. E., and Pande V. S., "Identification of simple reaction coordinates from complex dynamics," J. Chem. Phys. 146, 044109 (2017). 10.1063/1.4974306
  • 51. Suarez E., Adelman J. L., and Zuckerman D. M., "Accurate estimation of protein folding and unfolding times: Beyond Markov state models," J. Chem. Theory Comput. 12, 3473–3481 (2016). 10.1021/acs.jctc.6b00339
  • 52. Evans L. C., Partial Differential Equations (American Mathematical Society, Providence, RI, 2010).
