Abstract
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method.
INTRODUCTION
Computer simulations are a powerful tool to derive models that are consistent with experimentally determined properties. In familiar energy-based structural refinement algorithms, whether based on energy-minimization or molecular dynamics (MD) simulations, one typically introduces a biasing potential that depends on the configuration X of the system in order to enforce that a given property q(X) will match the experimental value Q. One of the simplest forms of this biasing potential is a quadratic potential of the type, K(q(X) − Q)²/2. This is the approach that is broadly adopted in structural refinement based on X-ray crystallography or in nuclear magnetic resonance (NMR).1 However, by modifying the system in this way, one unavoidably also modifies the statistical properties of q along with all the atomic coordinates of the system, thereby corrupting the model in a way that is arbitrary and unwanted. If the fluctuations in q are small and can be ignored, as in the case of X-ray refinement, this issue may not be crucial. But there are numerous situations where the choice of biasing potential can have a great impact on the final result. One of the earliest methods to restore the natural fluctuations of the property q while keeping its average restrained by an experimental target value Q was introduced by van Gunsteren and co-workers in the context of inter-atomic distances deduced from experimental nuclear Overhauser effect (NOE) cross-relaxation in NMR spectroscopy.2 They replaced the instantaneous value of the property q(X) in the classic quadratic restraint by a time-dependent moving average q̄(t), i.e., K(q̄(t) − Q)²/2. This treatment had the advantage of permitting fluctuations of a restrained property, while aiming to match the experimental value Q over long timescales.
In a related treatment, an ensemble of structures was generated in the denatured state of proteins using distance restraints from paramagnetic relaxation.3 With the notion of “restrained-ensemble” MD simulations, Vendruscolo and co-workers extended these ideas to incorporate information from dynamically-averaged NMR scalar couplings,4 and to explore structure and fluctuations of proteins in solution,5, 6 and implemented them into the MUMO (minimal under-restraining minimal over-restraining) scheme.7 More recently, Im and co-workers extended the usage of restrained-ensemble MD simulations to deduce the structure of membrane-bound peptides on the basis of solid state NMR observables such as dipolar coupling, deuterium quadrupolar splitting, and chemical shift anisotropy;8, 9, 10 for a review see Ref. 11.
The restrained-ensemble MD simulation scheme consists of carrying out parallel MD simulations of N replicas of the basic system in the presence of a biasing potential that approximately enforces agreement between the ensemble-average over the N replicas of a given property and its known experimental value. Working from the perspective of an ensemble is appealing, since it attempts to mimic the conditions under which bulk measurements are carried out experimentally. Intuitively, it is hoped that if the ensemble comprises a fairly large number of replicas the biasing potential enforcing the ensemble-average property constitutes only a mild perturbation acting on any individual replica without causing large unwanted distortions. The idea has been illustrated by showing that solid state NMR properties within the restrained-ensemble MD rapidly converge toward unique statistical distributions as N becomes large.8, 9, 10 This observation calls out for an identification of the large N limiting distribution of restrained-ensemble MD simulations.
From a broader perspective, it is of interest to ask what is the optimal approach for incorporating the information available from experiments into a structural model while trying to avoid corrupting the model with spurious and arbitrary biases. The problem of biasing thermodynamic ensembles can be formulated rigorously based on the maximum entropy method introduced by Jaynes.12 Briefly, it can be stated in the following way. Considering a system that is statistically represented by the unperturbed distribution P0(X), the maximum entropy method provides a “minimally invasive” approach to constructing a modified distribution P*(X) such that the average of a desired property ⟨q(X)⟩ will match the desired value Q. The sense in which the method is “minimally invasive” can be quantified and will be made precise below. It roughly means that the modified distribution P*(X) is as close as possible to P0(X) while still respecting the experimental data. Application of the method leads to a distribution of the form P*(X) ∝ P0(X)eλq(X), where λ is an unknown coefficient that must be adjusted to satisfy the condition ⟨q(X)⟩ = Q. In some cases, the method can be applied as a post-analysis treatment on the basis of the initial distribution P0(X), as recently done to match small angle X-ray scattering (SAXS) data.13 But in many applications, the maximum entropy method requires the iterative determination of several coefficients that must be adjusted to satisfy all the experimental constraints. Thus, a question of practical interest is whether the results from restrained-ensemble MD simulations can be equivalent to those from the maximum entropy method.14 It is the goal of this brief communication to examine this issue. In Sec. 2, the maximum entropy method and the restrained-ensemble MD simulations scheme are first introduced. Then, their formal equivalence is rigorously demonstrated in the limit of a large number of replicas.
This analysis helps clarify the general significance of the restrained-ensemble MD simulation scheme, and the necessary conditions for its validity.
THEORETICAL DEVELOPMENTS
Maximum entropy method
Let us consider a system that is described by the potential energy U0(X), where X ≡ {r1, r2, …, rN} represents all atomic coordinates. The equilibrium Boltzmann distribution is
P0(X) = e^{−βU0(X)} / ∫dX′ e^{−βU0(X′)},   (1)
where β = 1/kBT. Without any additional biases, this model represents all our prior knowledge about the system. Unfortunately, it is imperfect. For example, let us assume that the average over some property q(X),
⟨q⟩0 = ∫dX q(X) P0(X),   (2)
differs from the value Q known from experiment. In this regard, it is natural to attempt to incorporate the experimental information into the model.
In practice, a wide range of arbitrary modifications could be made to the distribution P0(X) in order to impose agreement of the property q with the desired value Q. For example, one could add a steep parabolic term to the potential energy and shift its minimum until the desired value is reproduced. However, arbitrary modifications are likely to introduce undesirable and uncontrolled biases, leading to a resulting distribution that will be corrupted. Alternatively, one could constrain the system exactly so that q(X) was identically equal to Q. However, such an exact constraint is far too rigid and does not reflect the fact that experimental data typically represent statistically averaged quantities. Ideally, one would like to modify the system to reflect the experimental reality in the least intrusive manner. A version of this goal can be achieved via Jaynes' maximum entropy method,12 whereby one seeks to maximize the excess cross-entropy functional η,
η[P‖P0] = −∫dX P(X) ln[P(X)/P0(X)],   (3)
under the constraint that the probability distribution P(X) is normalized,
∫dX P(X) = 1,   (4)
and that the average value of the observable q(X) is known,
∫dX q(X) P(X) = Q.   (5)
The quantity η is a functional of the probability distribution P(X) and the constrained optimization problem can be solved using the method of Lagrange multipliers.
δ/δP(X) { η[P‖P0] + α ∫dX′ P(X′) + λ ∫dX′ q(X′) P(X′) } = 0,   (6)
which, when solvable, leads to the form,
P*(X) = e^{α − 1} P0(X) e^{λq(X)},   (7)
where α serves as a normalization constant, and λ must be adjusted to match the condition Eq. 5. The normalization can be incorporated immediately in the probability distribution to yield
P*(X) = P0(X) e^{λq(X)} / ∫dX′ P0(X′) e^{λq(X′)}.   (8)
The value of λ corresponding to the maximum entropy distribution can alternatively be characterized as the value at which the maximum is achieved in
γ(λ) = λQ − ln ∫dX P0(X) e^{λq(X)}   (9)
(see, e.g., Ref. 15). The above quantity can be thought of as a free energy associated with the constraint on the average value of q(X).16
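The variational characterization above can be made concrete with a short numerical sketch. The discrete density, grid, and target value below are illustrative assumptions (not a system from this work); the script locates the maximizer of λQ − ln⟨e^{λq}⟩0 by bisection on its derivative, Q − ⟨q⟩λ, and the twisted average then matches Q.

```python
import numpy as np

# Illustrative toy model (an assumption of this sketch): a discretized
# one-dimensional unperturbed density and a target average Q.
q = np.linspace(-1.0, 1.0, 201)
p0 = np.exp(-25.0 * (q - 0.25) ** 4)      # assumed unperturbed weights
p0 /= p0.sum()
Q = -0.1                                  # assumed experimental target

def twisted_mean(lam):
    """Average of q under the exponentially twisted density p0 * exp(lam*q)."""
    w = p0 * np.exp(lam * q)
    return (w * q).sum() / w.sum()

# gamma(lam) = lam*Q - ln<exp(lam*q)>_0 is concave, with derivative
# Q - <q>_lam; since <q>_lam is monotone in lam, bisection finds the root.
lo, hi = -100.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if twisted_mean(mid) < Q:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)                     # twisted average now matches Q
```

The same bisection works for any one-dimensional observable because the derivative of the free energy γ(λ) with respect to λ is always Q minus the twisted average, a monotone function of λ.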
The system experiences an effective potential energy surface, Ueff(X) = U0(X) − kBTλq(X). This type of modification of a probability P0(X) into P*(X) ∝ e^{λq(X)}P0(X) in order to enforce a chosen value for the average of q without inserting spurious information into the distribution is sometimes called the exponential twist or Cramér transformation. It is a common tool in the treatment of statistical large deviations.15, 17 The maximum entropy method can be extended to enforce that the average values of a set of properties qi(X) match the experimental values Qi. In this case, the modified distribution takes the form
P*(X) = P0(X) e^{∑i λi qi(X)} / ∫dX′ P0(X′) e^{∑i λi qi(X′)},   (10)
where the set of λi must be adjusted to match the set Qi. The system is evolving on an effective energy surface, Ueff = U0 − kBT∑iλiqi. The values of the set of λi are the values that maximize the expression
γ(λ1, …, λn) = ∑i λi Qi − ln ∫dX P0(X) e^{∑i λi qi(X)}   (11)
(see, e.g., Ref. 15). As pointed out in Ref. 14, this variational expression can be used as the starting point in the design of iterative schemes to find the λi. Efficient methods are based around the expression,
∂γ/∂λi = Qi − ⟨qi⟩λ,   (12)
where
⟨qi⟩λ = ∫dX qi(X) P0(X) e^{∑j λj qj(X)} / ∫dX P0(X) e^{∑j λj qj(X)}.   (13)
This leads to the Newton iteration,
∑j Mij δλj = Qi − ⟨qi⟩λ,   (14)
with λnew ≡ λold + δλ. The ij entry of the matrix M is equal to ⟨qiqj⟩ − ⟨qi⟩⟨qj⟩, evaluated with the set λ. Robust modifications of this basic iterative scheme can be found, for example, in Ref. 18.
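The Newton iteration can be sketched as follows for a hypothetical discrete model with two observables, q and q² (the grid, unperturbed weights, and target values Qi are illustrative assumptions, not data from this work); M is the covariance matrix described in the text.

```python
import numpy as np

# Sketch of the Newton iteration of Eq. (14) on an assumed toy model.
x = np.linspace(-8.0, 8.0, 1601)
p0 = np.exp(-0.5 * x ** 2)                # assumed unperturbed (Gaussian) weights
p0 /= p0.sum()
obs = np.vstack([x, x ** 2])              # observables q_1 = x, q_2 = x^2
targets = np.array([0.3, 1.5])            # assumed experimental values Q_i

lam = np.zeros(2)
for _ in range(50):
    w = p0 * np.exp(lam @ obs)
    w /= w.sum()
    means = obs @ w                       # <q_i> at the current lambda
    # M_ij = <q_i q_j> - <q_i><q_j>, evaluated at the current lambda
    M = (obs * w) @ obs.T - np.outer(means, means)
    delta = np.linalg.solve(M, targets - means)
    lam = lam + delta
    if np.linalg.norm(delta) < 1e-12:
        break

w = p0 * np.exp(lam @ obs)
w /= w.sum()
recovered = obs @ w                       # twisted averages, now equal to targets
```

Because M is the covariance matrix of the observables under the current twisted distribution, it is positive definite whenever the observables are linearly independent, so the linear solve is well posed at every step.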
Restrained-ensemble scheme
An alternative strategy to the maximum entropy method for biasing an average property toward an experimental value is the restrained-ensemble MD simulation scheme.4, 5, 6, 7, 8, 9, 10, 11, 14 It consists of carrying out parallel MD simulations of N replicas of the basic system in the presence of a biasing potential that restrains the ensemble-averaged property toward the experimental value Q. In practice, the restraining potential URE could have the form,
URE(X1, …, XN) = (K/2) (q̄ − Q)²,   (15)
where K is some large force constant, Xs are the coordinates of system s, and q̄ is the ensemble-average of the property q(X),
q̄ = (1/N) ∑_{s=1}^{N} q(Xs).   (16)
If the N replicas were uncoupled, the probability distribution of the ensemble would be a product of independent normalized distributions; for each system s, P0(Xs), with ∫dXs P0(Xs) = 1. In the restrained-ensemble MD simulations, the effective probability distribution of the systems is modified. Choosing system 1 as an example, it is
P(X1) = ∫dX2⋯dXN e^{−βURE} ∏_{s=1}^{N} P0(Xs) / ∫dX1⋯dXN e^{−βURE} ∏_{s=1}^{N} P0(Xs).   (17)
By asymptotic equivalence of the restrained-ensemble P(X1) with the result of the maximum entropy method, we mean that
lim_{N→∞, K→∞} P(X1) = P0(X1) e^{λq(X1)} / ∫dX P0(X) e^{λq(X)},   (18)
where λ is chosen to satisfy Eq. 5, i.e., the distribution on the right is the one that maximizes the entropy η subject to the constraint Eq. 5. In words, the indirect influence of the restrained-ensemble recapitulates the outcome of the maximum entropy method. In the limit where the force constant K is very large, the Boltzmann factor of the restraining potential becomes a delta function,
e^{−βK(q̄ − Q)²/2} → CK δ(q̄ − Q),  K → ∞,   (19)
where CK is a normalization constant. Although restrained-ensemble MD simulations are typically generated with a steep but finite biasing potential rather than a delta function constraint,8, 9, 10 this limiting form is useful when establishing the formal equivalence of restrained-ensemble simulations with the maximum entropy method and has the advantage of yielding general results relevant to all forms of restraints. However, as will be illustrated with a simple system in Sec. 3, the order in which the two limits over N and K in Eq. 18 are taken is essential. There are several routes to illustrate the asymptotic equivalence of the restrained-ensemble simulation and the maximum entropy method. In the following, we will describe two of them. We will focus on the case in which q is a one-dimensional variable. The multi-dimensional case is similar.
Illustrative Proof 1: Most probable distribution
A first route to demonstrate the equivalence relies on the principle of the most probable distribution. This is a standard approach used to derive the thermodynamic ensembles in statistical mechanics textbooks. Because all the N replicas of the system are identical, there is a large number of degenerate states and it is sufficient to determine the reduced density of the q variable implied by P(X1) of Eq. 17,
ρ(q1) = ∫dX1 δ(q(X1) − q1) P(X1),   (20)
in terms of the unperturbed density implied by P0(X) of Eq. 1,
ρ0(q) = ∫dX δ(q(X) − q) P0(X).   (21)
In order to account for the distribution of states, we divide the possible values of q into a set of M discrete bins {q(k)} with k = 1, …, M, each of width Δq, and count the number of systems in each bin as {nk}. If the ensemble of systems were unconstrained, the probability of a given realization Ω({nk}) would simply be
Ω({nk}) = [N! / (n1! n2! ⋯ nM!)] ∏_{k=1}^{M} pk^{nk},   (22)
where pk = ρ0(q(k))Δq. The probability Ω({nk}) is modified because the distribution of systems is constrained by the two conditions
∑_{k=1}^{M} nk = N   (23)
and
(1/N) ∑_{k=1}^{M} nk q(k) = Q.   (24)
Strictly imposing this constraint implicitly assumes that the limit K → ∞ has been taken. If N is very large, it is sufficient to determine the most probable distribution by searching for the maximum value of Ω({nk}) under the two constraints,
∂/∂nj [ ln Ω({nk}) + α ∑_k nk + λ ∑_k nk q(k) ] = 0,   (25)
where α and λ are Lagrange multipliers. Using Stirling's approximation ln(n!) ≈ n ln(n) − n, the optimization yields
nj/N = pj e^{λq(j)} / ∑_k pk e^{λq(k)}.   (26)
In the limit of Δq → 0, (nj/NΔq) becomes the continuous distribution ρ(q), which leads to
ρ(q) = ρ0(q) e^{λq} / ∫dq′ ρ0(q′) e^{λq′}   (27)
for the reduced distribution of system 1. The full distribution of system 1 in configurational space X1 is reconstructed from the reduced distribution ρ(q1) using the identity,
P(X1) = ∫dq1 P(X1|q1) ρ(q1),   (28)
where the conditional probability P(X1|q1) is
P(X1|q1) = P0(X1) δ(q(X1) − q1) / ρ0(q1).   (29)
The full distribution P(X1) obtained from the maximum entropy method is recovered,
P(X1) = P0(X1) e^{λq(X1)} / ∫dX P0(X) e^{λq(X)}   (30)
(the identity ∫dq1 δ(q(X1) − q1) f(q1) = f(q(X1)) has been used).
Illustrative Proof 2: Central limit theorem
First, let us note that, by virtue of the presence of the delta function δ(q̄ − Q), the exponential factor e^{λ∑s q(Xs) − λNQ}, which is equal to 1 for any value of λ whenever the constraint is satisfied, can be inserted in the effective biased distribution,
P(X1) = ∫dX2⋯dXN δ(q̄ − Q) e^{λ∑s q(Xs)} e^{−λNQ} ∏_s P0(Xs) / ∫dX1⋯dXN δ(q̄ − Q) e^{λ∑s q(Xs)} e^{−λNQ} ∏_s P0(Xs).   (31)
The extra factors e^{−λNQ} in the numerator and denominator cancel out, leaving
P(X1) = ∫dX2⋯dXN δ(q̄ − Q) ∏_s P0(Xs) e^{λq(Xs)} / ∫dX1⋯dXN δ(q̄ − Q) ∏_s P0(Xs) e^{λq(Xs)}.   (32)
So far, this is valid for any value of λ. Let us now assume that λ is chosen to match the maximum entropy result
∫dXs q(Xs) P0(Xs) e^{λq(Xs)} / ∫dXs P0(Xs) e^{λq(Xs)} = Q   (33)
for every system s = 1, …, N. Our intuition is that in the large N limit, the constraint disappears from the numerator and denominator because the sum (1/N)∑s q(Xs) concentrates at Q, by the law of large numbers. To check this more carefully, let
YN = (1/√N) ∑_{s=1}^{N} (q(Xs) − Q),   (34)
and let φN denote the density of the variable YN when the q(Xs) are independent and drawn from the twisted distribution
ρ*(q) = ρ0(q) e^{λq} / ∫dq′ ρ0(q′) e^{λq′}.   (35)
One can check that Eq. 32 can be rewritten as
P(X1) = [P0(X1) e^{λq(X1)} / ∫dX P0(X) e^{λq(X)}] × (N/(N − 1))^{1/2} φ_{N−1}( (Q − q(X1))/√(N − 1) ) / φN(0).   (36)
Because, under the twisted distribution, the mean of q is Q, the central limit theorem tells us that
φN(y) → (2πσ²)^{−1/2} e^{−y²/2σ²},  N → ∞,
where σ² is the variance of q under the twisted distribution. Formally taking the limit in Eq. 36, the ratio of densities tends to 1 and only the first factor survives, i.e.,
lim_{N→∞} P(X1) = P0(X1) e^{λq(X1)} / ∫dX P0(X) e^{λq(X)}.
A rigorous argument along these lines can be found in Ref. 19.
Classic conditional limit theorems
The arguments above explain why the distribution of any one copy in the restrained ensemble converges to the maximum entropy distribution. These arguments, however, are not the strongest possible. The results that we are interested in belong to the family of so-called conditional limit theorems, which have a rich history in mathematics. For example, closely related results are the basis of rigorous arguments establishing continuum descriptions of interacting particle systems.20 We will briefly describe two classic results of Csiszár.21, 22
First, let us generalize our notion of an experimental observation to include bounds on the observed variable rather than an exact measurement, i.e., we replace the constraint
∫dX q(X) P(X) = Q   (37)
by
Q− ≤ ∫dX q(X) P(X) ≤ Q+.   (38)
Since experimental observations represent statistical averages themselves, this is reasonable. To recover the equality constraint we can take Q− = Q+. The maximum entropy distribution is the one that maximizes η[P(X)‖P0(X)], subject to Eq. 38. It is not always the case that this constrained optimization problem has a solution. For example, U0 could represent a harmonic potential and the experimental measurement might correspond to a third moment of the system. Whenever the measured quantity q grows faster at infinity than U0, the twisted measures P0(X)e^{λq(X)} are not even normalizable for λ ≠ 0. However, we can slightly generalize our notion of maximum entropy to avoid this problem. If we let
γQ = sup { η[P‖P0] : P normalized, Q− ≤ ∫dX q(X) P(X) ≤ Q+ },   (39)
then there is a unique probability measure P*(X) such that
η[P‖P0] ≤ η[P‖P*] + γQ  for every P(X) satisfying Eq. 38.   (40)
As we have already mentioned, in Eq. 39 the maximum may not be attained by any one distribution P and γQ is in fact a supremum. When the maximum in Eq. 39 is attained by some distribution P*, then P* is also the solution to Eq. 40. It is important to notice that P* solving Eq. 40 need not satisfy constraint Eq. 38. For example, in the case of a harmonic potential with a cubic observable mentioned above, P* solving Eq. 40 is just the original Gaussian distribution.
In Ref. 21, Csiszár established the following two results about the unique solution P* of Eq. 40,
lim_{N→∞} η[P(X1) ‖ P*(X1)] = 0   (41)
and
lim_{N→∞} (1/N) η[P(X1, …, XN) ‖ P*(X1) ⋯ P*(XN)] = 0.   (42)
Both of these expressions deserve comment. First, we note that the convergence of the cross entropy η of two probability measures to zero is a strong form of convergence. In particular, Eq. 41 implies the convergence results of our two earlier illustrative arguments. Second, while it is clear that the constrained ensemble members are not completely independent, it seems that the effect of the constraint on any one member of the ensemble should somehow diminish as N becomes large. In other words, we might hope that in some appropriate sense the ensemble members become “more” independent as N → ∞. Eq. 42 suggests that they are “almost” asymptotically independent. Note that the distribution
P*(X1) P*(X2) ⋯ P*(XN)   (43)
is the distribution of N independent copies drawn from P*. While it is not true that cross entropy η[P(X1, …, XN)‖P*(X1) ⋯ P*(XN)] vanishes as N → ∞, Eq. 42 does establish that the cross entropy per copy vanishes, a weaker notion of asymptotic independence.
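The flavor of the single-copy convergence in Eq. 41 can be illustrated numerically on an assumed toy model (not part of Csiszár's analysis): N harmonic copies with βU0 = k(q − a)²/2 coupled by the restraint (K/2)(q̄ − Q)². Everything is Gaussian, so both the single-copy marginal and the maximum entropy target (the same Gaussian shifted to mean Q) are known exactly, and their Kullback-Leibler divergence, which equals −η, has a closed form.

```python
import numpy as np

# Assumed toy parameters: unit force constant, unperturbed mean 0, target 1.
k, a, Q, kT = 1.0, 0.0, 1.0, 1.0

def single_copy_kl(N, K):
    # Quadratic form of the coupled ensemble: C = k*I + (K/N^2) * 1 1^t
    C = k * np.eye(N) + (K / N ** 2) * np.ones((N, N))
    mean = np.linalg.solve(C, (k * a + K * Q / N) * np.ones(N))
    cov = kT * np.linalg.inv(C)
    m1, s1 = mean[0], cov[0, 0]           # single-copy marginal
    m2, s2 = Q, kT / k                    # maxent target distribution
    # KL divergence (= -eta) between two one-dimensional Gaussians
    return 0.5 * (np.log(s2 / s1) + (s1 + (m1 - m2) ** 2) / s2 - 1.0)

# K growing like N^2 (so K/N -> infinity): the divergence vanishes.
# K growing only like N: the single copy never reaches the maxent target.
kl_fast = [single_copy_kl(N, K=float(N ** 2)) for N in (4, 16, 64)]
kl_slow = [single_copy_kl(N, K=float(N)) for N in (4, 16, 64)]
```

The contrast between the two schedules for K anticipates the limit-order discussion of Sec. 3: the cross entropy per copy vanishes only when K grows faster than N.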
DISCUSSION
We have given illustrative arguments as well as provided the statements of existing rigorous results to establish the fact that the distribution of any single copy in the exactly constrained ensemble converges precisely to the maximum entropy distribution. In these two illustrative arguments, the ensemble averaged quantity was strictly constrained to equal some experimentally measured quantity (though we constrained the average to lie within an interval in Sec. 2 C). In practice, one does not typically use such a strict constraint in ensemble-type MD simulations, but instead employs a strong harmonic restraining potential defined in Eq. 15 to impose the ensemble average. The present results can thus be interpreted to say that if the spring constant K in the harmonic restraint is taken large enough relative to N then the distribution of a single copy will again converge to the maximum entropy distribution. For any fixed value of N, K should be chosen as large as possible provided that it remains feasible to numerically integrate the equations of motion with a reasonable time-step. We will discuss an example below that makes the required relationship between N and K more explicit. We note that obtaining the maximum entropy distribution from the restrained-ensemble MD method does not require any foreknowledge of the maximum entropy twist λ. This is a key feature of the restrained-ensemble MD method. Our results are slightly at odds with the result in Ref. 14, which suggested that only for a carefully chosen value of the spring constant K does one obtain the maximum entropy distribution from the restrained-ensemble MD method. The primary reason for this discrepancy is that in Ref. 14 the authors postulated a K that grows too slowly with N. This is now discussed in more detail.
To clarify issues of convergence with respect to the number of copies N and the magnitude of the restraining force constant K, it is useful to consider the case of a simple 1D harmonic system with potential energy U0 = k(q − a)²/2 for the property q. In the restrained ensemble, the natural average, ⟨q⟩0 = a, is shifted toward the desired experimental value Q. The total energy of the restrained ensemble is U = URE + U0(q1) + … + U0(qN), where URE = (K/2)[(q1 + q2 + ⋯ + qN)/N − Q]². By construction, the restrained-ensemble is also a multivariate Gaussian system; the statistical behavior of all the copies is Gaussian, with effective mean ⟨q1⟩ and quadratic fluctuations ⟨(q1 − ⟨q1⟩)²⟩. From this point of view, the restrained-ensemble trivially preserves the functional form expected from the maximum entropy, whereby the distribution is twisted by a factor exp [λq]. The real issue of interest here is how the restrained ensemble converges toward the maximum entropy result as a function of the number of copies N and the restraining force constant K. All the copies are statistically equivalent. Considering system 1 as an example, it can be shown (see the Appendix) that its average will be
⟨q1⟩ = [a + (K/(Nk)) Q] / [1 + K/(Nk)],   (44)
and that its variance is equal to
⟨(q1 − ⟨q1⟩)²⟩ = (kBT/k) [1 − K/(N(Nk + K))].   (45)
Therefore, the average ⟨q1⟩ will indeed converge properly toward the imposed value Q, as long as K/N ≫ k, which can be realized if K → ∞ faster than N → ∞. It is particularly noteworthy that, if K ∝ N, or grows more slowly than N, then one does not get the correct average Q in the large N limit. Since K and N appear only as the ratio K/N in Eq. 44, it is tempting to conclude that fixing this ratio to obtain an acceptable degree of accuracy on the moment ⟨q1⟩ would yield similar errors for any value of N. However, we caution that the dependence of the restrained mean on K and N is likely to be more complicated for general non-linear systems than for the simple harmonic system presented here. Furthermore, it is important to realize that, even with fixed K/N, the magnitude of the fluctuations is still affected by the particular value of N according to Eq. 45. Assuming that K → ∞ (i.e., the ensemble is constrained by a delta function), the variance ⟨(q1 − ⟨q1⟩)2⟩ will converge (from below) to kBT/k, the variance of both the original distribution and of the maximum entropy distribution.
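The closed forms of Eqs. 44 and 45 can be sanity-checked against direct linear algebra on the multivariate Gaussian, without invoking the Sherman-Morrison formula. The parameter values in this sketch are arbitrary assumptions.

```python
import numpy as np

# Build the quadratic form of U = U_RE + sum_s k(q_s - a)^2 / 2 explicitly
# and compare single-copy mean and variance with the closed forms.
k, a, Q, kT = 2.0, 0.5, -1.0, 1.0          # assumed parameters
N, K = 10, 500.0

C = k * np.eye(N) + (K / N ** 2) * np.ones((N, N))   # coupling matrix from the Appendix
b = (k * a + K * Q / N) * np.ones(N)                 # linear term of the energy
mean = np.linalg.solve(C, b)                         # <q> = C^{-1} b
cov = kT * np.linalg.inv(C)                          # fluctuations k_B T C^{-1}

mean_44 = (a + (K / (N * k)) * Q) / (1.0 + K / (N * k))   # Eq. (44)
var_45 = (kT / k) * (1.0 - K / (N * (N * k + K)))         # Eq. (45)
```

Varying N and K in this script also makes the limit-order issue tangible: with K held proportional to N the computed mean stalls between a and Q, while K proportional to N² drives it to Q.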
The convergence of the restrained ensemble toward the maximum entropy method as the number of replicas increases may be further illustrated by considering the simple 1D anharmonic system shown in Figure 1. The unperturbed average ⟨q⟩ of the model is 0.258. Restrained-ensemble simulations imposing an average of Q = −0.127 were carried out using Metropolis Monte Carlo with 5 (green), 10 (red), and 100 (blue) replicas. Rather than a smooth restraining potential, the condition, (q1 + q2 + … + qN)/N = Q, was strictly enforced (i.e., K = ∞). The resulting distribution ρ(q) is shown for 5, 10, and 100 replicas (top), the total effective potential U0(q) + W(q) is shown for 5, 10, and 100 replicas (middle), and the effects of the ensemble on one single replica, W(q), are shown for 5, 10, and 100 replicas (bottom). For comparison, the unperturbed distribution ρ0(q) (top) and energy U0(q) (middle) are shown as a gray line. It is observed that W(q) from the ensemble (bottom) reproduces the linear shift λq, equivalent to the functional form arising from the maximum entropy method (shown as a black line). Interestingly, W(q) displays a slightly complex oscillatory structure with N = 5 (green line), which converts to a form with a slight positive curvature with N = 10 (red line), and a simple straight line with N = 100 (blue line) that is equivalent to the maximum entropy result λq (black line).
Figure 1.
Illustration of the equivalence of the constrained ensemble and the maximum entropy method using the simple 1D model for property q with βU0(q) = 25(q − 0.25)⁴ − 5.0 − q cos(q) + (1.0/(q² + 0.5)) sin(20q). The unperturbed distribution ρ0(q) (top) and the energy U0(q) (middle) are shown as a gray line. The unperturbed average ⟨q⟩ from the model is 0.258. According to the maximum entropy method, a Lagrange multiplier λ equal to 10.0 is required to shift the average to a value of −0.127. Constrained ensembles with N = 5 (green), 10 (red), and 100 (blue) were simulated using Metropolis Monte Carlo to generate the distribution ρ(q) (top), the effective total potential of mean force [U0(q) + W(q)] (middle), and the mean effect of the ensemble on one single replica W(q) (bottom). The net effect of the constrained ensemble is to produce a linear shift λq, with λ = 10, which is equivalent to that from the maximum entropy method (black line). The ensemble average (q1 + … + qN)/N = Q was strictly imposed via a delta function in the Monte Carlo simulations. All the curves were offset very slightly for better visualization.
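A Figure 1-type experiment can be sketched as follows. The potential is transcribed from the caption as printed (the additive constant is irrelevant); the replica count, proposal width, and chain length are assumptions of this sketch, and the exact sum constraint is maintained by compensating pair moves. The single-replica statistics are then compared with the exponentially twisted density computed by quadrature.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_u0(q):
    # 1D model potential, transcribed from the figure caption (assumed reading)
    return (25.0 * (q - 0.25) ** 4 - 5.0 - q * np.cos(q)
            + np.sin(20.0 * q) / (q ** 2 + 0.5))

Q, N = -0.127, 50                          # target average; assumed replica count

# Maxent reference by quadrature: find the twist lambda such that <q>_lam = Q.
grid = np.linspace(-1.5, 1.5, 4001)
w0 = np.exp(-beta_u0(grid))

def twisted_mean(lam):
    w = w0 * np.exp(lam * grid)
    return (w * grid).sum() / w.sum()

lo, hi = -100.0, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if twisted_mean(mid) < Q else (lo, mid)
lam = 0.5 * (lo + hi)
w = w0 * np.exp(lam * grid)
w /= w.sum()
tw_var = (w * grid ** 2).sum() - Q ** 2    # variance of the maxent density

# Constrained-ensemble Metropolis Monte Carlo with exact sum constraint.
q = np.full(N, Q)                          # start on the constraint surface
samples = []
for step in range(200000):
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        continue
    d = 0.15 * rng.normal()                # pair move preserves sum(q) = N*Q
    du = (beta_u0(q[i] + d) + beta_u0(q[j] - d)
          - beta_u0(q[i]) - beta_u0(q[j]))
    if np.log(rng.random()) < -du:
        q[i] += d
        q[j] -= d
    if step > 50000 and step % 500 == 0:
        samples.append(q.copy())

pooled = np.concatenate(samples)           # single-replica statistics, pooled
```

The pooled mean equals Q by construction, and for N on the order of tens the pooled variance approaches that of the twisted density, in line with the convergence shown in Figure 1.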
The illustrative examples considered here indicate that 10–100 replicas are needed to approach meaningful results. This is consistent with previous applications of restrained-ensemble simulations.4, 5, 6, 7, 8, 9, 10, 11, 14 The implication, particularly if the basic molecular system considered is large, is that the approach can be computationally onerous. On the other hand, high dimensional stochastic optimization problems such as in Eq. 11 are notoriously difficult to solve. While a high quality method to solve Eq. 11 may find the maximum entropy distribution at less computational cost than restrained-ensemble MD simulation, the maximum entropy distribution must still be sampled. Moreover, the investment required to develop and apply such a method is significant compared to what is required to develop and apply restrained-ensemble MD. Thus, in practice, restrained-ensemble simulations are likely to be advantageous compared to a straight application of the maximum entropy method when a very large number of experimental properties qi must be satisfied simultaneously. This is of particular importance when one is trying to incorporate a massive amount of information into the simulations. A good example concerns the large number of distance histograms between pairs of spin labels that are measured by electron spin resonance (ESR) double electron-electron resonance (DEER) spectroscopy.23 Incorporating such a massive amount of data in structural models is essentially infeasible, unless one adopts a restrained-ensemble simulation strategy. Future efforts will be devoted to expanding and developing the computational methodology for treating ESR/DEER distance histograms.
SUMMARY
Restrained-ensemble molecular dynamics simulations, in which N replicas of a system are coupled via a global biasing potential, offer a practical strategy to incorporate information from experimental data. Here, it was demonstrated that the statistical distribution produced by such restrained-ensemble simulations is equivalent to that of the maximum entropy method of Jaynes. This equivalence is realized only if K, the force constant of the restraining potential imposing the ensemble-average properties, is sufficiently large compared to the number N of replicas; in practice, the condition that K → ∞ faster than N → ∞ is a prerequisite for a valid restrained-ensemble simulation that is consistent with the maximum entropy method. Illustrative examples indicate that N ≈ 10–100 replicas can yield meaningful results. This analysis clarifies the underlying conditions under which restrained-ensemble simulations are consistent with the maximum entropy method.
ACKNOWLEDGMENTS
Helpful discussions with John Chodera and Jed Pitera are gratefully acknowledged. We thank John Chodera and the reviewers for their careful review of the manuscript and useful comments. This work was carried out in the context of the Membrane Protein Structural Dynamics Consortium, which is funded by grant U54-GM087519 from the National Institutes of Health (NIH). The efforts of J.W. were supported by National Science Foundation (NSF) through award DMS-1109731.
APPENDIX: RESTRAINED-ENSEMBLE FOR HARMONIC SYSTEM OF N REPLICAS
The total energy of the restrained-ensemble system can be re-written as
U = (k/2)(q − a1)ᵗ(q − a1) + (K/2)[(1ᵗq)/N − Q]²,   (A1)
where the following definitions:
q ≡ (q1, q2, …, qN)ᵗ   (A2)
and
1 ≡ (1, 1, …, 1)ᵗ   (A3)
have been used. The quadratic form can be expanded as
U = (1/2) qᵗ [kI + (K/N²) 1 1ᵗ] q − (ka + KQ/N) 1ᵗq + const,   (A4)
where I is the identity matrix, to match the generic form
U = (1/2)(q − m)ᵗ C (q − m) + const,   (A5)
where
C = kI + (K/N²) 1 1ᵗ   (A6)
and
m = C⁻¹ (ka + KQ/N) 1.   (A7)
The equilibrium probability is a multi-variate Gaussian proportional to e^{−β(q − m)ᵗC(q − m)/2}. The thermal fluctuation matrix ⟨(q − m)(q − m)ᵗ⟩ is equal to kBT times C⁻¹, the inverse of the matrix C, which is obtained from the Sherman-Morrison formula,24
C⁻¹ = (1/k) [ I − (K/(N(Nk + K))) 1 1ᵗ ].   (A8)
The average ⟨q⟩ = m is
m = [(Nka + KQ)/(Nk + K)] 1   (A9)
(The identity 1ᵗ1 = N has been used).
References
- Brunger A., Adams P., Clore G., DeLano W., Gros P., Grosse-Kunstleve R., Jiang J., Kuszewski J., Nilges M., Pannu N., Read R., Rice L., Simonson T., and Warren G., Acta Crystallogr., Sect. D: Biol. Crystallogr. 54, 905 (1998). 10.1107/S0907444998003254 [DOI] [PubMed] [Google Scholar]
- Fennen J., Torda A. E., and van Gunsteren W. F., J. Biomol. NMR 6, 163 (1995). 10.1007/BF00211780 [DOI] [PubMed] [Google Scholar]
- Gillespie J. R. and Shortle D., J. Mol. Biol. 268, 170 (1997). 10.1006/jmbi.1997.0953 [DOI] [PubMed] [Google Scholar]
- Lindorff-Larsen K., Best R. B., and Vendruscolo M., J. Biomol. NMR 32, 273 (2005). 10.1007/s10858-005-8873-0 [DOI] [PubMed] [Google Scholar]
- Lindorff-Larsen K., Best R. B., Depristo M. A., Dobson C. M., and Vendruscolo M., Nature (London) 433, 128 (2005). 10.1038/nature03199 [DOI] [PubMed] [Google Scholar]
- Best R. B., Lindorff-Larsen K., DePristo M. A., and Vendruscolo M., Proc. Natl. Acad. Sci. U.S.A. 103, 10901 (2006). 10.1073/pnas.0511156103 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Richter B., Gsponer J., Varnai P., Salvatella X., and Vendruscolo M., J. Biomol. NMR 37, 117 (2007). 10.1007/s10858-006-9117-7 [DOI] [PubMed] [Google Scholar]
- Lee J., Chen J., Brooks C. L., and Im W., J. Magn. Reson. 193, 68 (2008). 10.1016/j.jmr.2008.04.023 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jo S. and Im W., Biophys. J. 100, 2913 (2011). 10.1016/j.bpj.2011.05.009 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kim T., Jo S., and Im W., Biophys. J. 100, 2922 (2011). 10.1016/j.bpj.2011.02.063 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Im W., Jo S., and Kim T., Biochim. Biophys. Acta 1818, 252 (2012). 10.1016/j.bbamem.2011.07.048 [DOI] [PubMed] [Google Scholar]
- Jaynes E. T., Phys. Rev. 106, 620 (1957). 10.1103/PhysRev.106.620 [DOI] [Google Scholar]
- Różycki B., Kim Y. C., and Hummer G., Structure (London) 19, 109 (2011). 10.1016/j.str.2010.10.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pitera J. and Chodera J., J. Chem. Theory Comput. 8, 3445 (2012). 10.1021/ct300112v [DOI] [PubMed] [Google Scholar]
- Varadhan S. R. S., Large Deviations and Applications, CBMS-NSF Regional Conference Series in Applied Mathematics Vol. 46 (SIAM, Philadelphia, 1984). [Google Scholar]
- Callen H. B., Thermodynamics (Wiley, New York, 1960). [Google Scholar]
- Sadowsky J. S., IEEE Trans. Inf. Theory 39, 119 (1993). 10.1109/18.179349 [DOI] [Google Scholar]
- Stinis P., J. Comput. Phys. 208, 691 (2005). 10.1016/j.jcp.2005.03.001 [DOI] [Google Scholar]
- Van Campenhout J. and Cover T., IEEE Trans. Inf. Theory 27, 483 (1981). 10.1109/TIT.1981.1056374 [DOI] [Google Scholar]
- Guo M. Z., Papanicolaou G. C., and Varadhan S. R. S., Commun. Math. Phys. 118, 31 (1988). 10.1007/BF01218476 [DOI] [Google Scholar]
- Csiszár I., Ann. Probab. 12, 768 (1984). 10.1214/aop/1176993227 [DOI] [Google Scholar]
- Csiszár I., “An extended maximum entropy principle and a Bayesian justification,” in Bayesian Statistics (Elsevier, North-Holland, 1985), Vol. 2, pp. 83–98. [Google Scholar]
- Borbat P. P., McHaourab H. S., and Freed J. H., J. Am. Chem. Soc. 124, 5304 (2002). 10.1021/ja020040y [DOI] [PubMed] [Google Scholar]
- Sherman J. and Morrison W. J., Ann. Math. Stat. 21, 124 (1950). 10.1214/aoms/1177729893 [DOI] [Google Scholar]