Abstract
The particle mesh Ewald (PME) method has become ubiquitous in the molecular simulation community due to its ability to deliver long range electrostatics accurately with $\mathcal{O}(N \log N)$ complexity. Despite this widespread use, spanning more than two decades, second derivatives (Hessians) have not been available. In this work, we describe the theory and implementation of PME Hessians, which have applications in normal mode analysis, characterization of stationary points, phonon dispersion curve calculation, crystal structure prediction, and efficient geometry optimization. We outline an exact strategy that requires $\mathcal{O}(1)$ effort for each Hessian element; after discussing the excessive memory requirements of such an approach, we develop an accurate, efficient approximation that is far more tractable on commodity hardware.
I. INTRODUCTION
The dominant computational cost in classical molecular simulations is the accurate modeling of “non-bonded” electrostatic interactions.1 Being pairwise additive, Coulombic interactions modeled by fixed point charges at the atomic centers require $\mathcal{O}(N^2)$ effort to evaluate exactly without truncation for a system comprising N atoms. To overcome this steep computational cost, and to permit the treatment of periodic systems, cutoffs are commonly applied,2 beyond which pairwise energies are neglected, bringing the computational effort required down to $\mathcal{O}(N)$. While truncated electrostatics are more computationally tractable, their neglect of long range electrostatic interactions can introduce simulation artifacts, particularly in extended structures.3
A number of methods have been proposed to account for long range electrostatic interactions, including shifted potentials,4 the isotropic periodic sum,5,6 and the Ewald summation method upon which we focus our discussion. Originally developed in 1921, Ewald summation7–10 divides the slowly and conditionally convergent lattice summation into two separate summations that are both rapidly and absolutely convergent. The crux of this approach is to introduce a screening Gaussian, whose width is related to a parameter β, multiplied by the sign-flipped charge at each charge bearing site, thus partially canceling that charge and making it shorter ranged in nature. Another term adding the Gaussian back in with the same sign as the charge must be added to compensate, and this compensating term is evaluated using reciprocal space methods, instead of conventional pairwise summation, as will be discussed later. Because the potential at a distance R from each Gaussian is $\operatorname{erf}(\beta R)/R$, the Ewald partitioning effectively invokes the following identity for the Coulomb operator:
$\dfrac{1}{r} = \dfrac{\operatorname{erfc}(\beta r)}{r} + \dfrac{\operatorname{erf}(\beta r)}{r}$  (1)
The first “direct” term involves an attenuated version of the Coulomb operator that, due to its rapid decay with respect to distance, can be evaluated exactly using pairwise loops enforcing a short cutoff. The second term is slowly decaying with respect to distance in direct space but is nonsingular and converges rapidly in “reciprocal” space; this term is the most problematic and will occupy much of our discussion. Straightforward evaluation of the reciprocal space terms via a Fourier transform requires $\mathcal{O}(N^2)$ effort, limiting its applicability; this can be reduced to $\mathcal{O}(N^{3/2})$ with judicious choice of simulation parameters.10
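As a quick numerical illustration of this partitioning (our own sketch, not part of the original work), the identity in Eq. (1) and the rapid decay of the direct space term can be checked in a few lines of Python; the value of `beta` here is arbitrary:

```python
# Check the Ewald splitting of the Coulomb operator:
#   1/r = erfc(beta*r)/r + erf(beta*r)/r
# The "direct" erfc term decays rapidly, so it can be truncated at a short cutoff.
import math

beta = 0.3  # illustrative attenuation parameter (1/Angstrom)

for r in (1.0, 5.0, 10.0, 15.0):
    direct = math.erfc(beta * r) / r  # handled by short-ranged pair loops
    recip = math.erf(beta * r) / r    # smooth part, handled in reciprocal space
    assert abs(direct + recip - 1.0 / r) < 1e-14  # the partitioning is exact
    print(f"r = {r:5.1f}  direct = {direct:.3e}  reciprocal = {recip:.3e}")
```

Because the direct term falls off like a Gaussian through erfc, for typical β values a cutoff of roughly 10 Å already captures it to high precision.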
Darden, York, and Pedersen recognized that the Fourier transforms could be replaced by the fast Fourier transform (FFT) if a regular grid is used to represent the charge density.11 To allow arbitrarily placed atoms to be treated this way, they devised an interpolation scheme that discretizes the atomic charges onto a regular grid, as well as allowing the potential resulting from the FFT to be mapped from the grid back to the atomic centers; this discretized Fourier space treatment is denoted as particle mesh Ewald (PME). Subsequent modifications to the interpolation scheme improved the accuracy and the treatment of forces.12
Although PME is almost a de facto standard in the molecular simulation community, to our knowledge, the second derivatives of the PME energy have not been implemented. The Hessian (second derivative) matrix has important applications including normal mode analysis,13 characterization of stationary points, establishment of structure–function relationships,13–16 phonon dispersion curves,17,18 entropy estimations in crystal structure prediction,19,20 and accelerated geometry optimizations.21,22 While phonon dispersion curves with Ewald summation have been reported,18 the force constants were obtained by computing forces at displaced geometries, which could be made much more efficient by the approach we will develop. After briefly reviewing the Ewald and PME methods, we detail an exact formulation of their second energy derivatives, with respect to geometric displacements, before suggesting an efficient and accurate approximate approach.
II. THEORY
A. Energy expression and notation
We start by noting that, for brevity, we omit the Coulomb constant from all energy expressions and derivatives thereof. In periodic systems, the energy is invariant to translations but not rotations. A consequence of this translational invariance,23–25 and of the commutativity of the derivative operators, is that there are some symmetries among the elements of the Hessian matrix H,
$\dfrac{\partial^2 U}{\partial A_\xi\, \partial B_\psi} = \dfrac{\partial^2 U}{\partial B_\psi\, \partial A_\xi}$  (2a)
$\sum_B \dfrac{\partial^2 U}{\partial A_\xi\, \partial B_\psi} = 0$  (2b)
$\mathbf{H}_{AA} = -\sum_{B \neq A} \mathbf{H}_{AB}$  (2c)
where {ξ, ψ} represent an arbitrary Cartesian component {x, y, z}. Concomitantly, only roughly half of the Hessian elements are unique in the large system limit. The bold notation with subscripts in Eq. (2c) is to emphasize that the expression pertains to the 3 × 3 block for each atom pair indexed by those subscripts; similar expressions hold for individual elements. In light of these symmetries, we will focus our attention on the expressions for the 3 × 3 block of the Hessian for two distinct atoms A and B, remaining cognizant that the negative of this block will be accumulated into the A − A and B − B diagonal blocks, and its transpose assigned to the B − A block.
Differentiating the conventional electrostatic energy expression,
$U = \dfrac{1}{2}\sum_{A}\sum_{B \neq A} \dfrac{q_A q_B}{R_{AB}}$  (3)
where $R_{AB} = |\mathbf{r}_B - \mathbf{r}_A|$, with respect to ξ and ψ Cartesian displacements on atomic centers A and B leads to the Hessian
$\dfrac{\partial^2 U}{\partial A_\xi\, \partial B_\psi} = q_A q_B \left(\dfrac{\delta_{\xi\psi}}{R_{AB}^3} - \dfrac{3\, R_{AB,\xi}\, R_{AB,\psi}}{R_{AB}^5}\right)$  (4)
Equation (4) shows that evaluating each element of the standard electrostatic Hessian can be performed in $\mathcal{O}(1)$ effort; this motivates our efforts to achieve similar scaling for the PME Hessian.
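Expressions of this kind are easy to validate numerically. The following sketch (our own illustration, with arbitrary charges and positions, not code from the reference implementation) compares the analytic 3 × 3 pair block against central finite differences of the Coulomb energy $q_A q_B / R_{AB}$:

```python
# Validate the analytic Coulomb pair Hessian block against finite differences.
import math

qA, qB = 1.1, -0.7                       # arbitrary charges
rA = [0.0, 0.0, 0.0]
rB = [1.0, 2.0, 2.5]                     # arbitrary positions

def energy(rA, rB):
    return qA * qB / math.dist(rA, rB)

def analytic_block(rA, rB):
    """3x3 block d^2U/dA_xi dB_psi: qq*(delta/R^3 - 3 R_xi R_psi / R^5)."""
    Rv = [b - a for a, b in zip(rA, rB)]
    R = math.dist(rA, rB)
    return [[qA * qB * ((1.0 if x == y else 0.0) / R**3
                        - 3.0 * Rv[x] * Rv[y] / R**5)
             for y in range(3)] for x in range(3)]

h = 1e-4
for x in range(3):
    for y in range(3):
        def e(sa, sb):                   # displace A along x and B along y
            a = rA[:]; b = rB[:]
            a[x] += sa; b[y] += sb
            return energy(a, b)
        num = (e(h, h) - e(h, -h) - e(-h, h) + e(-h, -h)) / (4 * h * h)
        assert abs(num - analytic_block(rA, rB)[x][y]) < 1e-6
```

The same mixed central-difference stencil is reused below for the attenuated Ewald terms.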
The periodic system’s energy is a function of its unit cell, described by the 3 × 3 matrix a whose columns are the lattice vectors,
$U = \dfrac{1}{2}\sum_{\mathbf{L}}{}'\sum_{A}\sum_{B} \dfrac{q_A q_B}{|\mathbf{r}_B - \mathbf{r}_A + \mathbf{L}|}$  (5)
where the summation over the lattice L = l1a1 + l2a2 + l3a3 is defined by the integer indices {l1, l2, l3} up to some prescribed limit and the diagonal A = B term is neglected when L = 0, as signified by the primed summation symbol; any pairs on a list, $\mathcal{M}$, that should be excluded based on topology, to prevent redundancies in non-bonded and bonded terms, are also neglected. Similarly, summation over the reciprocal lattice m is indexed by three integers multiplied by the rows of a*, which are the inverse lattice vectors, $\mathbf{a}^{*} = \mathbf{a}^{-1}$. The limits of reciprocal space summation are defined by user-specified integers {K1, K2, K3}. We use the labels {A, B, C, …} to represent atomic centers, while {α, β, γ} represent summation indices over the three unit cell dimensions. This notation is chosen to be consistent with the original PME literature with {A, B, C} replacing {i, j, k} used for atomic centers to avoid confusion arising from the presence of imaginary numbers in expressions involving atomic center i. We label the attenuation parameter β, even though it could clash with unit cell summation indices; the two do not appear together in any expressions, so the intent should be clear. The normalization convention used in FFTs also requires the introduction of scaled fractional coordinates for PME, but we will limit the use of these quantities in our working equations to transparently demonstrate the similarities between the conventional Ewald and PME expressions.
The Ewald summation method partitions Eq. (5) into five terms,
$U = U^{\mathrm{dir}} + U^{\mathrm{adj}} + U^{\mathrm{self}} + U^{\mathrm{surf}} + U^{\mathrm{rec}}$  (6)
We refer the reader to some of the excellent reviews describing these terms9,12,26 and will only offer brief descriptions in Secs. II B and II C.
B. Direct space energy
For brevity, we update our notation to account for the summation over the lattice, defining $R_{AB} \equiv |\mathbf{r}_B - \mathbf{r}_A + \mathbf{L}|$ and $R_{AB,\xi} \equiv (\mathbf{r}_B - \mathbf{r}_A + \mathbf{L})_\xi$. The real space attenuated energy is then analogous to Eq. (5),
$U^{\mathrm{dir}} = \dfrac{1}{2}\sum_{\mathbf{L}}{}'\sum_{A}\sum_{B} q_A q_B\, \dfrac{\operatorname{erfc}(\beta R_{AB})}{R_{AB}}$  (7)
but the use of an attenuated operator permits short cutoffs to be used, and generally, only the L = 0 term need be included for most reasonable choices of the partitioning parameter, β. The summation in Eq. (7) excludes all pairs in that are to be neglected for topological reasons. However, these terms are inextricably present in the reciprocal space treatment described below and must be backed out in real space, giving rise to the adjusted term,
$U^{\mathrm{adj}} = -\sum_{(A,B)\in\mathcal{M}} q_A q_B\, \dfrac{\operatorname{erf}(\beta R_{AB})}{R_{AB}}$  (8)
Each charge also interacts with its own screening Gaussian, giving rise to a self-energy that must be accounted for,
$U^{\mathrm{self}} = -\dfrac{\beta}{\sqrt{\pi}}\sum_{A} q_A^2$  (9)
Finally, a surface term is sometimes added that results from the conditionally convergent terms and thus depends on the macroscopic crystal shape.27–31 For example, for a spherical crystal comprising cubic unit cells with volume V, the correction is related to the total cell dipole D = ∑AqArA by
$U^{\mathrm{surf}} = \dfrac{2\pi}{3V}\, \mathbf{D}\cdot\mathbf{D}$  (10)
Other corrections can also be derived as a function of D2, corresponding to different macroscopic crystal shapes.28 Neglect of Usurf, which is common in simulations, is consistent with the assumption that the unit cell and its images are surrounded by an infinite dielectric; this is usually referred to as “tin foil” or “conducting” boundary conditions.
C. Reciprocal space energy
The reciprocal space Ewald energy for a system with a unit cell of volume V is given by
$U^{\mathrm{rec}} = \dfrac{1}{2\pi V}\sum_{\mathbf{m}\neq 0} \dfrac{e^{-\pi^2 m^2/\beta^2}}{m^2}\, S(\mathbf{m})\, S(-\mathbf{m})$  (11)
where the structure factor and its complex conjugate are
$S(\mathbf{m}) = \sum_A q_A\, e^{2\pi i\,\mathbf{m}\cdot\mathbf{r}_A}, \qquad S(-\mathbf{m}) = \sum_A q_A\, e^{-2\pi i\,\mathbf{m}\cdot\mathbf{r}_A}$  (12)
While Ewald summation evaluates Eq. (12) directly at $\mathcal{O}(N^{3/2})$ cost,10 the PME method proceeds by approximating the structure factor via a discretization of the charge density onto a regular grid; the regular nature of this grid enables the use of rapid FFT solvers to complete the Fourier transform. The original PME formulation11 used Lagrangian interpolation to approximate the continuous spatial distribution of charges, while the smooth PME (SPME)12 variant uses cardinal B-spline interpolation instead. We will also assume the use of B-splines throughout this work, despite using the PME moniker for the method. In the interests of compactness, we use the notation $M^{A}_{n_1}$ to refer to the B-spline coefficient mapping the Ath atom to the grid point with index $n_1$; which of the three crystallographic axes, {1, 2, 3}, the spline applies to is implied by the subscript of the grid index. A spline of order Os can be analytically differentiated Os − 2 times, allowing derivative operators to be introduced, which permits the handling of multipoles and energy gradients;12,26,32 we use the parenthesized superscript in $M^{A(d)}_{n_1}$ to convey the derivative level, d, of the spline.
The real space charge grid is defined within this shorthand as
$Q^R(n_1, n_2, n_3) = \sum_A q_A\, M^{A}_{n_1} M^{A}_{n_2} M^{A}_{n_3}$  (13)
and its 3D FFT affords an accurate approximation of the structure factor [Eq. (12)]. The spline coefficient is nonzero only at the Os grid points in the vicinity of each atom in each of the three spatial dimensions, allowing QR to be formed in $\mathcal{O}(N)$ effort. The corresponding Fourier space quantity QF is obtained by transforming QR using existing fast FFT solvers. Following Darden, we use F and R superscripts to denote Fourier and real space quantities, respectively. The Fourier space representation of the reciprocal space Gaussian screening charges is
$\theta^F(\mathbf{m}) = \dfrac{e^{-\pi^2 m^2/\beta^2}}{2\pi V\, m^2}$  (14)
A normalization term due to the B-splines used in the approximation of the structure factors has been omitted from this expression for brevity; we refer the interested reader to Ref. 12, particularly Eq. (2.46), for details. The real space potential on the grid is obtained as the convolution of the approximated structure factor (12) with (14), with a subsequent inverse FFT
$\phi^{\mathrm{rec}} = \mathrm{FFT}^{-1}\!\left[\theta^F \circ Q^F\right]$  (15)
where the operator ○ represents the element-by-element (Hadamard) product. Finally, the reciprocal space PME potential is readily evaluated at center A by using splines to probe the potential grid,
$\phi^{\mathrm{rec}}(\mathbf{r}_A) = \sum_{n_1 n_2 n_3} M^{A}_{n_1} M^{A}_{n_2} M^{A}_{n_3}\, \phi^{\mathrm{rec}}(n_1, n_2, n_3)$  (16)
As for Eq. (13), we use simplified spline notation, and the potential for each center has contributions only from the subset of grid points in the immediate vicinity of rA, totaling $\mathcal{O}(O_s^3)$ terms. Summing the potential at each center, multiplied by half of the center’s charge, yields the PME reciprocal space energy. The differentiability of the B-splines allows facile computation of the derivatives of the electrostatic potential, making the PME method particularly well suited to multipole expansions and, as we will demonstrate below, derivatives of the energy. The derivative of the potential is trivially obtained by replacing the spline with its derivative along the appropriate unit cell dimension and transforming the result from scaled fractional coordinates to Cartesian coordinates.26
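To make the discretization step concrete, the short sketch below (our own illustration; the function name `M`, the spline order, and the particle position are arbitrary) evaluates cardinal B-spline coefficients via the standard recursion and confirms that spreading a charge onto the Os nearest grid points conserves it exactly, since the spline coefficients form a partition of unity:

```python
def M(n, u):
    """Cardinal B-spline of order n; nonzero only for 0 < u < n."""
    if n == 2:
        return 1.0 - abs(u - 1.0) if 0.0 <= u <= 2.0 else 0.0
    return (u / (n - 1)) * M(n - 1, u) + ((n - u) / (n - 1)) * M(n - 1, u - 1)

Os = 6          # spline (interpolation) order
u = 13.37       # particle position in scaled fractional grid units, arbitrary

# the Os grid points "behind" the particle carry all of its charge
coeffs = [M(Os, u - k) for k in range(int(u) - Os + 1, int(u) + 1)]
assert all(c >= 0.0 for c in coeffs)
assert abs(sum(coeffs) - 1.0) < 1e-12   # spreading conserves total charge
```

Scaling each coefficient by qA and accumulating the three-dimensional products builds exactly the charge grid QR of Eq. (13).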
To forge a connection between existing PME implementations and our notation, we will now briefly outline the gradient terms, which have been widely implemented already.
D. Direct space gradient
The first derivative of Eq. (7) with respect to Cartesian displacement of center A has a compact form,
$\dfrac{\partial U^{\mathrm{dir}}}{\partial A_\xi} = \sum_{\mathbf{L}}{}'\sum_{B} q_A q_B \left[\dfrac{\operatorname{erfc}(\beta R_{AB})}{R_{AB}^2} + \dfrac{2\beta}{\sqrt{\pi}}\, \dfrac{e^{-\beta^2 R_{AB}^2}}{R_{AB}}\right] \dfrac{R_{AB,\xi}}{R_{AB}}$  (17)
The summation over B may be restricted to those centers within the cutoff, resulting in $\mathcal{O}(N)$ complexity overall. Likewise, the adjusted term’s derivative is
$\dfrac{\partial U^{\mathrm{adj}}}{\partial A_\xi} = -\sum_{B:\,(A,B)\in\mathcal{M}} q_A q_B \left[\dfrac{\operatorname{erf}(\beta R_{AB})}{R_{AB}^2} - \dfrac{2\beta}{\sqrt{\pi}}\, \dfrac{e^{-\beta^2 R_{AB}^2}}{R_{AB}}\right] \dfrac{R_{AB,\xi}}{R_{AB}}$  (18)
The self-energy [Eq. (9)] is independent of position and does not contribute to the forces. The force contribution from Eq. (10) is proportional to the unit cell’s dipole D,
$\dfrac{\partial U^{\mathrm{surf}}}{\partial A_\xi} = \dfrac{4\pi}{3V}\, q_A\, D_\xi$  (19)
While this final term is easy to implement, discontinuities in D resulting from charged particles wrapping around periodic boundaries are potentially problematic.
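The direct space force expression is straightforward to verify by finite differences. The snippet below (an illustrative sketch with arbitrary parameters, not production code) checks the analytic gradient of a single attenuated pair term, $q_A q_B \operatorname{erfc}(\beta R)/R$, against a central difference of the energy:

```python
# Validate the direct space pair gradient against central finite differences.
import math

beta, qA, qB = 0.3, 0.8, -0.5
rA, rB = [0.0, 0.0, 0.0], [1.5, -0.5, 2.0]     # arbitrary positions

def u_dir(rA, rB):
    R = math.dist(rA, rB)
    return qA * qB * math.erfc(beta * R) / R

def grad_A(rA, rB):
    """Analytic dU/dA_xi for the attenuated pair term."""
    Rv = [b - a for a, b in zip(rA, rB)]
    R = math.dist(rA, rB)
    g = (math.erfc(beta * R) / R**2
         + (2 * beta / math.sqrt(math.pi)) * math.exp(-(beta * R)**2) / R)
    return [qA * qB * g * Rv[x] / R for x in range(3)]

h = 1e-5
for x in range(3):
    a1 = rA[:]; a1[x] += h
    a2 = rA[:]; a2[x] -= h
    num = (u_dir(a1, rB) - u_dir(a2, rB)) / (2 * h)
    assert abs(num - grad_A(rA, rB)[x]) < 1e-8
```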
E. Reciprocal space gradient
The gradient of the Ewald reciprocal space energy comes from differentiation of (11),
$\dfrac{\partial U^{\mathrm{rec}}}{\partial A_\xi} = \dfrac{q_A}{2\pi V}\sum_{\mathbf{m}\neq 0} \dfrac{e^{-\pi^2 m^2/\beta^2}}{m^2}\, 2\pi i\, m_\xi \left[e^{2\pi i\,\mathbf{m}\cdot\mathbf{r}_A}\, S(-\mathbf{m}) - e^{-2\pi i\,\mathbf{m}\cdot\mathbf{r}_A}\, S(\mathbf{m})\right]$  (20)
which uses a newly introduced notation for the Cartesian components of the m vectors,
$m_\xi = \mathbf{m}\cdot\hat{\mathbf{e}}_\xi, \qquad \xi \in \{x, y, z\}$  (21)
The appearance of the structure factor in Eq. (20) leads to $\mathcal{O}(N^{3/2})$ scaling, as for the energy expression (11). To reduce this cost, PME again uses B-splines to efficiently approximate the structure factor via discretized intermediates.
Because the derivative of an exponential is an exponential,
$\dfrac{\partial}{\partial r_\xi}\, e^{\pm 2\pi i\,\mathbf{m}\cdot\mathbf{r}} = \pm 2\pi i\, m_\xi\, e^{\pm 2\pi i\,\mathbf{m}\cdot\mathbf{r}}$  (22)
the quantity ±2πimξ is the Fourier space representation of the derivative operator, and the first derivative B-splines should be used to construct the corresponding exponentials when approximating Ewald gradients in the PME framework. The steps leading to ϕrec(n) are common to the PME energy and force calculations. From there, instead of using B-splines to interpolate the potential at a given site, their derivatives are used to generate the potential's derivative (the electric field), from which the gradient is readily obtained,
$\dfrac{\partial U^{\mathrm{rec}}}{\partial A_\xi} = q_A \sum_{\alpha} K_\alpha\, a^*_{\alpha\xi} \sum_{n_1 n_2 n_3} M^{A(\delta_{\alpha,1})}_{n_1} M^{A(\delta_{\alpha,2})}_{n_2} M^{A(\delta_{\alpha,3})}_{n_3}\, \phi^{\mathrm{rec}}(n_1, n_2, n_3)$  (23)
The Kronecker delta in the superscripts, e.g., (δα,1), is a shorthand to signify that a derivative B-spline should be used if the α summation index corresponds to unit cell dimension 1. The factor $K_\alpha a^*_{\alpha\xi}$ in Eq. (23) simply transforms from scaled fractional coordinates, required for FFT compatibility, to Cartesian coordinates.26 The overall evaluation of the forces has $\mathcal{O}(N)$ complexity, as for the energy, because the probing of the grid for each center involves only the $\mathcal{O}(O_s^3)$ surrounding grid points, as previously discussed.
F. Direct space Hessian
The direct space term is straightforwardly obtained by double differentiation of Eq. (7),
$\dfrac{\partial^2 U^{\mathrm{dir}}}{\partial A_\xi\, \partial B_\psi} = \sum_{\mathbf{L}}{}'\, q_A q_B \left\{ \delta_{\xi\psi}\left[\dfrac{\operatorname{erfc}(\beta R_{AB})}{R_{AB}^3} + \dfrac{2\beta}{\sqrt{\pi}}\, \dfrac{e^{-\beta^2 R_{AB}^2}}{R_{AB}^2}\right] - \dfrac{R_{AB,\xi}\, R_{AB,\psi}}{R_{AB}^2}\left[\dfrac{3\operatorname{erfc}(\beta R_{AB})}{R_{AB}^3} + \dfrac{2\beta}{\sqrt{\pi}}\, e^{-\beta^2 R_{AB}^2}\left(\dfrac{3}{R_{AB}^2} + 2\beta^2\right)\right] \right\}$  (24)
Despite the more complicated form relative to Eq. (4), evaluation of this expression poses few problems as it is a simple $\mathcal{O}(1)$ cost term, and table lookup methods may be used to efficiently implement it. The adjusted Hessian, like the energy, takes a similar form to its direct counterpart,
$\dfrac{\partial^2 U^{\mathrm{adj}}}{\partial A_\xi\, \partial B_\psi} = -\, q_A q_B \left\{ \delta_{\xi\psi}\left[\dfrac{\operatorname{erf}(\beta R_{AB})}{R_{AB}^3} - \dfrac{2\beta}{\sqrt{\pi}}\, \dfrac{e^{-\beta^2 R_{AB}^2}}{R_{AB}^2}\right] - \dfrac{R_{AB,\xi}\, R_{AB,\psi}}{R_{AB}^2}\left[\dfrac{3\operatorname{erf}(\beta R_{AB})}{R_{AB}^3} - \dfrac{2\beta}{\sqrt{\pi}}\, e^{-\beta^2 R_{AB}^2}\left(\dfrac{3}{R_{AB}^2} + 2\beta^2\right)\right] \right\}, \quad (A,B)\in\mathcal{M}$  (25)
Because the self-energy equation (9) is independent of position, it contributes to neither the forces nor the Hessian. The Hessian contribution from Eq. (10) is a position-independent term,
$\dfrac{\partial^2 U^{\mathrm{surf}}}{\partial A_\xi\, \partial B_\psi} = \dfrac{4\pi}{3V}\, q_A q_B\, \delta_{\xi\psi}$  (26)
The Hessian terms detailed in this section are common to conventional Ewald summation as well as PME, and their implementation is straightforward. We now turn our attention to the more problematic reciprocal space terms.
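As with the gradient, the direct space pair Hessian is readily checked against finite differences. The block below is our own illustrative version of a single erfc pair contribution (arbitrary charges and positions; the lattice summation and exclusion bookkeeping of a real implementation are omitted):

```python
# Validate the attenuated (erfc) pair Hessian block against finite differences.
import math

beta, qA, qB = 0.3, 0.8, -0.5
rA, rB = [0.0, 0.0, 0.0], [1.5, -0.5, 2.0]
c = 2.0 * beta / math.sqrt(math.pi)

def u_dir(rA, rB):
    R = math.dist(rA, rB)
    return qA * qB * math.erfc(beta * R) / R

def hessian_block(rA, rB):
    """3x3 block d^2 U_dir / dA_xi dB_psi for one pair."""
    Rv = [b - a for a, b in zip(rA, rB)]
    R = math.dist(rA, rB)
    g = math.exp(-(beta * R) ** 2)
    iso = math.erfc(beta * R) / R**3 + c * g / R**2
    aniso = (3.0 * math.erfc(beta * R) / R**3
             + c * g * (3.0 / R**2 + 2.0 * beta**2))
    return [[qA * qB * ((iso if x == y else 0.0)
                        - Rv[x] * Rv[y] / R**2 * aniso)
             for y in range(3)] for x in range(3)]

h = 1e-4
for x in range(3):
    for y in range(3):
        def e2(sa, sb):
            a = rA[:]; b = rB[:]
            a[x] += sa; b[y] += sb
            return u_dir(a, b)
        num = (e2(h, h) - e2(h, -h) - e2(-h, h) + e2(-h, -h)) / (4 * h * h)
        assert abs(num - hessian_block(rA, rB)[x][y]) < 1e-6
```

In the β → 0 limit the erfc factors reduce to 1 and the Gaussian terms vanish, recovering the bare Coulomb block of Eq. (4).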
G. Reciprocal space Hessian
Double differentiation of (11), with respect to the positions of different atomic centers, yields the standard Ewald summation reciprocal space Hessian,
$\dfrac{\partial^2 U^{\mathrm{rec}}}{\partial A_\xi\, \partial B_\psi} = \dfrac{2\pi\, q_A q_B}{V}\sum_{\mathbf{m}\neq 0} \dfrac{e^{-\pi^2 m^2/\beta^2}}{m^2}\, m_\xi\, m_\psi \left[e^{2\pi i\,\mathbf{m}\cdot\mathbf{R}} + e^{-2\pi i\,\mathbf{m}\cdot\mathbf{R}}\right] = \dfrac{4\pi\, q_A q_B}{V}\sum_{\mathbf{m}\neq 0} \dfrac{e^{-\pi^2 m^2/\beta^2}}{m^2}\, m_\xi\, m_\psi \cos(2\pi\,\mathbf{m}\cdot\mathbf{R})$  (27)
Because the summation is performed equally over positive and negative wavevectors, and the summand has positive parity with respect to m, only the real (cosine) term in the final line of Eq. (27) survives, which is consistent with Eq. (28) of Ref. 9. Using the fact that R ≡ rB − rA in Eq. (27), we can expand the complex exponentials that appear,
$e^{\pm 2\pi i\,\mathbf{m}\cdot\mathbf{R}} = \left(e^{\pm 2\pi i\, m_1 \mathbf{a}^*_1\cdot\mathbf{r}_B}\, e^{\pm 2\pi i\, m_2 \mathbf{a}^*_2\cdot\mathbf{r}_B}\, e^{\pm 2\pi i\, m_3 \mathbf{a}^*_3\cdot\mathbf{r}_B}\right) \left(e^{\pm 2\pi i\, m_1 \mathbf{a}^*_1\cdot\mathbf{r}_A}\, e^{\pm 2\pi i\, m_2 \mathbf{a}^*_2\cdot\mathbf{r}_A}\, e^{\pm 2\pi i\, m_3 \mathbf{a}^*_3\cdot\mathbf{r}_A}\right)^{*}$  (28)
Therefore, all Kx × Ky × Kz terms can be evaluated by computing and storing N(Kx + Ky + Kz) exponentials, aided by the fact that those with a negative exponent are readily obtained as the complex conjugate of their positive counterparts. The ensuing setup cost is acceptable, given that there are generally $\mathcal{O}(N^2)$ elements to be computed in the Hessian. However, our goal is to be able to compute each Hessian element in $\mathcal{O}(1)$ effort, consistent with Eq. (4), and the summation over m in Eq. (28) results in $\mathcal{O}(N)$ asymptotic scaling per element.
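The cosine form of the conventional Ewald reciprocal Hessian is simple to validate on a toy system. The following self-contained sketch (our own illustration: a cubic box, two charges, and a modest reciprocal space cutoff, all arbitrary) compares an analytic reciprocal space Hessian element against finite differences of the reciprocal space energy:

```python
# Compare an analytic Ewald reciprocal Hessian element (cosine form) with
# finite differences of the reciprocal space energy for a toy cubic system.
import math

L = 10.0                       # cubic box edge (arbitrary)
V = L ** 3
beta = 0.5                     # attenuation parameter
q = [1.0, -1.0]
r = [[0.0, 0.0, 0.0], [1.2, 0.7, -0.9]]
kmax = 8                       # reciprocal space cutoff (integer indices)

def mvecs():
    for k1 in range(-kmax, kmax + 1):
        for k2 in range(-kmax, kmax + 1):
            for k3 in range(-kmax, kmax + 1):
                if k1 or k2 or k3:
                    yield (k1 / L, k2 / L, k3 / L)

def u_rec(r):
    """(1/2piV) sum_m exp(-pi^2 m^2/beta^2)/m^2 |S(m)|^2."""
    U = 0.0
    for m in mvecs():
        m2 = m[0] ** 2 + m[1] ** 2 + m[2] ** 2
        re = sum(qi * math.cos(2 * math.pi * sum(mi * xi for mi, xi in zip(m, ri)))
                 for qi, ri in zip(q, r))
        im = sum(qi * math.sin(2 * math.pi * sum(mi * xi for mi, xi in zip(m, ri)))
                 for qi, ri in zip(q, r))
        U += math.exp(-math.pi ** 2 * m2 / beta ** 2) / m2 * (re * re + im * im)
    return U / (2 * math.pi * V)

def hess_rec(xi, psi):
    """Analytic d^2 U_rec / dA_xi dB_psi for atoms A=0, B=1 (cosine form)."""
    R = [r[1][i] - r[0][i] for i in range(3)]
    H = 0.0
    for m in mvecs():
        m2 = m[0] ** 2 + m[1] ** 2 + m[2] ** 2
        H += (math.exp(-math.pi ** 2 * m2 / beta ** 2) / m2 * m[xi] * m[psi]
              * math.cos(2 * math.pi * sum(mi * Ri for mi, Ri in zip(m, R))))
    return 4 * math.pi * q[0] * q[1] / V * H

h = 1e-4
def disp(dA, dB):              # displace A along x and B along y
    c = [row[:] for row in r]
    c[0][0] += dA
    c[1][1] += dB
    return u_rec(c)

num = (disp(h, h) - disp(h, -h) - disp(-h, h) + disp(-h, -h)) / (4 * h * h)
assert abs(num - hess_rec(0, 1)) < 1e-6
```

This brute force reference is the kind of check used below to validate the far cheaper approximate schemes.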
For PME, a few alternative strategies exist. The simplest is to use Eq. (27) with approximated exponentials, which would incur the same cost per element as the standard Ewald algorithm outlined above. These approximated exponentials are readily constructed along the α dimension by populating the first Os elements of a vector of zeros with the corresponding B-spline coefficients and applying a forward FFT. For a more efficient algorithm, we can instead draw inspiration from Eqs. (15) and (16) to develop an interpolation scheme to probe a precomputed grid at $\mathcal{O}(1)$ cost. One such scheme uses the fast Fourier transform (FFT) framework via a 6D diagonally matricized generalization of Eq. (14),
$\Theta^F(\mathbf{m}, \mathbf{m}') = \delta_{\mathbf{m}\mathbf{m}'}\, \dfrac{e^{-\pi^2 m^2/\beta^2}}{2\pi V\, m^2}$  (29)
where δ is the Kronecker delta. This tensor is then transformed with an inverse FFT applied to the primed indices and a forward FFT to unprimed indices, yielding a real grid E,
$E(\mathbf{n}, \mathbf{n}') = \mathrm{FFT}_{\mathbf{n}}\!\left[\mathrm{FFT}^{-1}_{\mathbf{n}'}\!\left[\Theta^F(\mathbf{m}, \mathbf{m}')\right]\right]$  (30)
Probing this grid with the appropriate splines for the primed and unprimed indices, for centers A and B, respectively, would generate the approximate exponentials required in Eq. (28). At first glance, this seems like a reasonable strategy, but it generates a six dimensional grid requiring $\mathcal{O}(K_1^2 K_2^2 K_3^2)$ storage. The (inverse) FFT in each dimension requires $\mathcal{O}(K^6 \log K)$ operations for a grid with K points per dimension, totaling $\mathcal{O}(K^6 \log K)$ effort; this setup cost is commensurate with the fact that the number of Hessian elements grows as $\mathcal{O}(N^2)$.
For each Hessian element, the six dimensional grid E is probed using derivative B-splines for one of the crystallographic dimensions for each center, and regular splines for the rest, consistent with Eq. (22), considering all combinations of dimensions’ derivatives. The resulting derivatives are converted from scaled fractional coordinates to Cartesian coordinates, yielding the desired result,
| (31) |
In Eq. (31), the summation over n covers the grid points surrounding atom A, while n′ covers those around atom B, resulting in $\mathcal{O}(O_s^6)$ terms. The spline order is fixed for any system size, and thus evaluating Eq. (31) is $\mathcal{O}(1)$ for each Hessian element, albeit with a large prefactor.
While the asymptotic scalings presented above may look reasonable, consideration of the prefactors involved quickly nullifies this approach for large systems. While the FFTs of expression (30) can be executed rapidly using tools such as the excellent FFTW package,33 the memory required quickly becomes unmanageable. For example, a system with a cubic grid comprising 64 points in each dimension requires 256 GiB of memory to store in single precision. Although distributed memory systems make these memory demands feasible, such large requirements are beyond the limits of most commodity hardware, and we will consider an alternative approach.
Recognizing that the coupling of the x, y, and z components in Eq. (27) is entirely due to the $1/m^2$ denominator, we follow the method pioneered by Almlöf34 to overcome a related problem in Møller–Plesset perturbation theory. The definition of the Laplace transform leads to the identity
$\displaystyle\int_0^\infty e^{-m^2 t}\, \mathrm{d}t = \dfrac{1}{m^2}$  (32)
Various numerical strategies exist for evaluating the integral on the left-hand side; we will focus on a k point quadrature, leading to the approximation
$\dfrac{1}{m^2} \approx \sum_{k} W_k\, e^{-A_k m^2}$  (33)
which we detail in Sec. III. This approximation turns the denominator into an exponential, thus making Eq. (27) separable in the three spatial dimensions. We combine Eqs. (33) and (14) to define the vector intermediates,
| (34) |
whose outer product can be used to construct a factorized version of Eq. (14),
| (35) |
Analogous to Eq. (29), we matricize the three intermediates in Eq. (34),
| (36) |
upon which we perform a forward FFT in the ξ indices and an inverse FFT in the ξ′ indices, similar to the process used in Eq. (30),
| (37) |
The 3k 2D FFTs required can be computed in $\mathcal{O}(k K^2 \log K)$ effort, with just $\mathcal{O}(k K^2)$ storage required. These demands are clearly much better than Eq. (30), as long as the number of terms k required does not get too large; this will be discussed later. The resulting matrices can then be probed to yield the approximated reciprocal space Hessian,
| (38) |
As a reminder, the summations over the various n and n′ indices have only Os terms each so, as for Eq. (31), each approximate Hessian element can be computed in $\mathcal{O}(1)$ effort. This assumes that the number of quadrature points k is independent of the system size, which we will justify in Sec. III. In a domain-decomposed parallel implementation, only the subsets n, n′ of the grids corresponding to locally processed particles need be generated and stored; however, all k terms for each intermediate must be present.
H. Diagonal Hessian elements
In some applications, e.g., preconditioning systems of equations, only diagonal blocks of the Hessian are required. Differentiation of the energy expressions yields the result that each diagonal block is the negative of the sum of its corresponding row (or column), with the diagonal block itself excluded, consistent with Eq. (2c). For the direct space terms, Eqs. (24) and (25), the use of nonbonded cutoffs gives this summation fixed $\mathcal{O}(1)$ cost per block. The sum of the full row of the reciprocal space Hessian can be extracted from the real space potential grid [Eq. (15)] using derivative B-splines analogous to the procedure used to extract forces [Eq. (23)],
| (39) |
This equation represents the reciprocal space electric field gradient at center A, multiplied by the electric charge at A; the summation over centers B results from the summation in Eq. (13). The “self” block (the A = B contribution) is readily removed via Eq. (38) for PME or via Eq. (27) with R = 0 for conventional Ewald summation. The potential grid is computed only once at $\mathcal{O}(N \log N)$ cost and is already needed for forces; probing the grid with B-splines, and derivatives thereof, has fixed $\mathcal{O}(O_s^3)$ cost.
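The row-sum relationship is easy to demonstrate on a toy cluster. In the sketch below (our own illustration with arbitrary charges, using bare Coulomb interactions and finite differences in place of an analytic Hessian), the diagonal 3 × 3 block of the first atom is recovered as the negative sum of its off-diagonal blocks:

```python
# Translational invariance: H_AA = -sum_{B != A} H_AB for an isolated cluster.
import math

charges = [1.0, -0.6, 0.4]
coords = [[0.0, 0.0, 0.0], [2.0, 0.5, -1.0], [-1.0, 1.5, 0.8]]

def energy(c):
    U = 0.0
    for A in range(3):
        for B in range(A + 1, 3):
            U += charges[A] * charges[B] / math.dist(c[A], c[B])
    return U

def hess(A, x, B, y, h=1e-4):
    """Mixed central difference d^2U/dA_x dB_y (also valid when A == B)."""
    def e(sa, sb):
        c = [row[:] for row in coords]
        c[A][x] += sa
        c[B][y] += sb
        return energy(c)
    return (e(h, h) - e(h, -h) - e(-h, h) + e(-h, -h)) / (4 * h * h)

# each diagonal block equals minus the sum of the off-diagonal blocks in its row
for x in range(3):
    for y in range(3):
        diag = hess(0, x, 0, y)
        row = sum(hess(0, x, B, y) for B in range(1, 3))
        assert abs(diag + row) < 1e-5
```

The same identity, applied in reverse, is what allows the diagonal PME blocks to be harvested from a single potential grid.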
I. Quadrature
Various methods exist for evaluating the integral of Eq. (32) in the form of Eq. (33), but we will use the widely familiar Gauss–Legendre method as an illustrative example. This variant of Gaussian integration requires the integration range to be [−1, 1]; to accomplish this, we introduce a change of variables $t = \frac{1+x}{1-x}$, which gives $\mathrm{d}t = \frac{2}{(1-x)^2}\,\mathrm{d}x$. The integral in Eq. (32) can then be rewritten as
$\displaystyle\int_0^\infty e^{-m^2 t}\, \mathrm{d}t = \displaystyle\int_{-1}^{1} \dfrac{2}{(1-x)^2}\, e^{-m^2 (1+x)/(1-x)}\, \mathrm{d}x$  (40)
As a concrete example, the 5 point Gauss–Legendre weights w and abscissae a,
$a = \{0,\ \pm 0.538469,\ \pm 0.906180\}, \qquad w = \{0.568889,\ 0.478629,\ 0.236927\}$  (41)
are combined element-wise to form the weights and abscissae in Eq. (33),
$A_k = \dfrac{1 + a_k}{1 - a_k}, \qquad W_k = \dfrac{2\, w_k}{(1 - a_k)^2}$  (42)
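Putting the change of variables and the tabulated 5 point rule together (an illustrative sketch of the Gauss–Legendre route; the production scheme described next uses minimax parameters instead) yields a small exponential-sum approximation to 1/y that can be checked directly:

```python
# Approximate 1/y (y playing the role of m^2) as a short sum of exponentials
# using the transformed 5 point Gauss-Legendre rule.
import math

# standard 5 point Gauss-Legendre nodes and weights on [-1, 1]
nodes = [-0.9061798459386640, -0.5384693101056831, 0.0,
          0.5384693101056831,  0.9061798459386640]
weights = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
           0.4786286704993665, 0.2369268850561891]

# map t in [0, inf) to x in [-1, 1] via t = (1 + x)/(1 - x), dt = 2 dx/(1 - x)^2
A = [(1 + x) / (1 - x) for x in nodes]                      # abscissae A_k
W = [2 * w / (1 - x) ** 2 for w, x in zip(weights, nodes)]  # weights W_k

def inv_approx(y):
    """sum_k W_k exp(-A_k y), an approximation to 1/y."""
    return sum(w * math.exp(-a * y) for w, a in zip(W, A))

for y in (1.0, 2.0, 5.0):
    assert abs(inv_approx(y) - 1.0 / y) < 1e-2
```

With only 5 points the error is at the few-per-mille level near y = 1; adding points (or switching to the minimax parameters) drives it down rapidly.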
While the parameters for this method have been tabulated and are readily available, a more efficient method—requiring fewer points, k, for a given precision—is the minimax method developed by Braess and Hackbusch.35 The general solution is obtained by investigating the range $[1, R_{\max}/R_{\min}]$, where Rmax and Rmin are the bounds on the R values to be approximated. The error in the optimal solution using k quadrature points will alternate within this range 2k + 1 times, with the same absolute error (but alternating sign) at each successive extremum; the value of the error at these extrema is thus the maximum error of the approximation. The Remez algorithm may be used to optimize the weights and abscissae in this range.36 However, for a given k, there exists some critical value R* above which the error exponentially decays to zero, as demonstrated in Fig. 1. The parameters for up to k = 63 have been tabulated and made available online37 by the developers of the method; at this large number of points, the maximum error is just 1.4 × 10−14, which is close to the machine epsilon in double precision computations. In the interests of reproducibility, we have made a tabulation available as the supplementary material that provides the quadrature parameters up to k = 53. Because the tabulated values provide the optimized parameters up to R* for each k value, and the error for R > R* decays exponentially, these parameters actually provide approximations in the semi-infinite range [1, ∞). In this work, we choose to forgo the optimization of the parameters for the range $[1, R_{\max}/R_{\min}]$ and simply use those pretabulated for [1, ∞); while this will increase the number of k points required for a given precision, it greatly simplifies the implementation.
FIG. 1.
The error in the minimax denominator decomposition for different numbers of decomposition terms, k.
We can adapt the values optimized for [1, ∞) to $[R_{\min}, \infty)$ by replacing each $W_k$ and $A_k$ with their scaled counterparts,36
$W_k \rightarrow \dfrac{W_k}{R_{\min}}, \qquad A_k \rightarrow \dfrac{A_k}{R_{\min}}$  (43)
where $R_{\min}$ is the lowest nonzero value of any of the arrays being approximated.
During preparation of this manuscript, Predescu et al. of the Shaw group published a decomposition of the Coulomb operator as an alternative to Ewald summation that they term the u-series.38 The u-series, like our approximated Fourier space Coulomb operator, is separable in the three spatial dimensions, which could similarly aid implementation of second derivatives for that method. Moreover, the asymptotically inferior but highly scalable algorithm that they develop by leveraging the separability of the u-series could be adapted to conventional PME by means of the quadrature described herein.
III. RESULTS
To investigate the accuracy of the approximation sketched out above, we implemented the method in the CHARMM simulation package.39 As a simple test case, we investigated the accuracy of the vibrational frequencies of the 46 residue protein crambin (PDB ID: 1CRN). First, we optimized the unsolvated structure in a 38 Å cubic box, using simple periodic boundary conditions for images. The exact PME vibrational frequencies were computed by evaluating Eq. (27) using interpolated exponentials as described above, at $\mathcal{O}(K_1 K_2 K_3)$ cost per element. Because we want to directly test the quadrature approximation, tight PME settings were used: the grid dimension was 64 with an eighth order B-spline and a PME separation parameter β = 0.3 Å−1. The approximate vibrational frequencies were then obtained using Eq. (38) for a range of k values and the same structure. The mean absolute error and maximum absolute error in the harmonic vibrational frequencies were computed, and the results are shown in Fig. 2. Using the unoptimized parameters for the quadrature, the k = 20 approximation delivers frequencies that are well within the inherent errors of the force field, with an RMS error around 10−4 cm−1 and a maximum error of 10−2 cm−1 over all vibrational modes. The center panel of Fig. 1 shows that the maximum error in the decomposition at any point is around 4 × 10−8, eventually decaying to zero at large R; this justifies our above assumption that k is fixed for all system sizes.
FIG. 2.
The error in the harmonic vibrational frequencies for crambin, as a function of the number of terms used in the denominator decomposition, k.
Single-precision arithmetic is increasingly popular, particularly for accelerators such as GPUs. To test the robustness of the denominator decomposition in this reduced precision regime, we plot the decomposition error for k = 10 in Fig. 3. We chose k = 10 because the error in the k = 20 decomposition is already at the limit of what can be represented using 32 bit floating point storage. While the denominator decomposition itself is robust, taking the exact double precision computed Hessian and diagonalizing in single precision yields vibrational frequencies with a maximum error of 0.05 cm−1, and mean absolute deviation of 1.0 × 10−4 cm−1, for the crambin test case. These errors are commensurate with those obtained by denominator decomposition using k ∼ 18 and should be acceptable for most applications. However, diagonalization can be highly sensitive to noise in the input, and careful benchmarking should be performed before embarking on a production level single-precision implementation.
FIG. 3.
The error in the minimax denominator decomposition using double- and single-precision arithmetic, for k = 10.
It is interesting to examine how the long range contributions to the potential impact the nature of the Hessian. To test this, we constructed three variants of the crambin system: (1) the isolated molecule with a 12 Å cutoff employed, (2) the periodic system without PME and using long (16 Å) cutoffs, and (3) the same with PME. After energy minimization for each, the base-10 logarithms of the absolute values of the full Hessian elements were computed and plotted as a heatmap, shown in Fig. 4. The resulting visualizations show quite a stark difference in the sparsities of the Hessians. While all are diagonally dominant, as expected, large areas of zeros exist for the isolated crambin test as nothing beyond 12 Å can interact. The long cutoffs used in the non-PME crystal, coupled with the presence of image molecules in neighboring unit cells, lead to fewer zero elements; however, there is some sparsity observed, even using the very long 16 Å cutoffs. The PME Hessian shows much less sparsity; while the direct space terms will exhibit large scale sparsity due to the short associated cutoffs, the long range terms result in a background potential that essentially eliminates the sparsity of the matrix. Therefore, in contrast to the $\mathcal{O}(N)$ significant elements typically expected in a standard non-PME calculation with cutoffs, there are $\mathcal{O}(N^2)$ elements to be evaluated for the long range PME terms. This makes their efficient evaluation even more important. A consequence of the increase in density of the matrix elements is that sparse matrix storage techniques13 are no longer appropriate; however, Krylov space diagonalization methods that store only Hessian-vector products are unaffected.
FIG. 4.

The sparsity structure of the Hessian elements for crambin, demonstrated by plotting $\log_{10}|H_{ij}|$ for a range of systems: (a) in the gas phase, (b) with periodic boundary conditions and a 16 Å cutoff, and (c) with PME.
A few methods of validation exist to verify the implementation. First is to compute the Hessian numerically from energies or forces at displaced geometries, which is automated in CHARMM. With suitably fine grids and long real space cutoffs, the energy and all derivatives should be invariant with respect to β, which serves as another useful test of correctness; this is necessary but not sufficient, although the highly disparate functional forms of the direct and reciprocal space terms make it a very powerful check. Finally, implementation of the diagonal elements shown in Sec. II H is straightforward; from there, a check of the translational invariance condition $H_{ii} = -\sum_{j \neq i} H_{ij}$ is trivial.
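A minimal version of the finite difference check can be written generically (our own illustration; `coulomb_grad` and the two-charge system are stand-ins for a real force routine): differencing an analytic gradient yields one Hessian column, which is then compared against the analytic pair block of Eq. (4):

```python
# Build one numerical Hessian column by central differencing a gradient routine.
import math

def numerical_hessian_column(grad, coords, atom, comp, h=1e-5):
    """Central difference of the gradient w.r.t. one coordinate; returns a
    list of per-atom 3-vectors, i.e., one 3N-long Hessian column."""
    plus = [row[:] for row in coords]
    minus = [row[:] for row in coords]
    plus[atom][comp] += h
    minus[atom][comp] -= h
    gp, gm = grad(plus), grad(minus)
    return [[(gp[a][x] - gm[a][x]) / (2 * h) for x in range(3)]
            for a in range(len(coords))]

# toy system: two point charges with a bare Coulomb interaction
q = [1.0, -0.8]
coords = [[0.0, 0.0, 0.0], [1.0, 2.0, 0.5]]

def coulomb_grad(c):
    Rv = [c[1][i] - c[0][i] for i in range(3)]
    R = math.dist(c[0], c[1])
    gA = [q[0] * q[1] * Rv[i] / R**3 for i in range(3)]
    return [gA, [-g for g in gA]]

H = numerical_hessian_column(coulomb_grad, coords, 0, 0)
R = math.dist(coords[0], coords[1])
Rv = [coords[1][i] - coords[0][i] for i in range(3)]
for psi in range(3):
    exact = q[0] * q[1] * ((1.0 if psi == 0 else 0.0) / R**3
                           - 3.0 * Rv[0] * Rv[psi] / R**5)
    assert abs(H[1][psi] - exact) < 1e-7
```

Repeating this column by column against an analytic Hessian implementation exercises every term discussed above.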
IV. CONCLUSIONS
We have detailed the derivation and implementation of PME energy second derivatives. While differentiating the energy expression is straightforward, we have shown that care must be taken with the implementation to avoid inefficient code. We traced the main source of inefficiency to couplings between the three spatial degrees of freedom in a key intermediate and detailed a numerical quadrature scheme that eliminates this coupling. The resulting approximation may be made arbitrarily precise, and we have demonstrated that negligible errors are obtained with highly tractable expressions using a simple 20 point quadrature without any optimization. Crucially, all Hessian terms within this scheme may be evaluated in $\mathcal{O}(1)$ effort, as is the case for non-PME calculations using a real space cutoff. One consequence of including PME contributions in the Hessian is a loss of sparsity that both increases the number of terms that must be considered in harmonic analysis and includes physical effects not accessible with cutoff schemes that use the ubiquitous minimum image convention.
SUPPLEMENTARY MATERIAL
See the supplementary material for a pdf file containing a tabulation of the minimax quadrature parameters up to k = 53.
ACKNOWLEDGMENTS
This work was supported by the intramural research program of the National Heart, Lung, and Blood Institute.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
REFERENCES
- 1. Cisneros G. A., Karttunen M., Ren P., and Sagui C., Chem. Rev. 114, 779 (2014). doi:10.1021/cr300461d
- 2. Steinbach P. J. and Brooks B. R., J. Comput. Chem. 15, 667 (1994). doi:10.1002/jcc.540150702
- 3. Piana S., Lindorff-Larsen K., Dirks R. M., Salmon J. K., Dror R. O., and Shaw D. E., PLoS One 7, e39918 (2012). doi:10.1371/journal.pone.0039918
- 4. Fennell C. J. and Gezelter J. D., J. Chem. Phys. 124, 234104 (2006). doi:10.1063/1.2206581
- 5. Wu X. and Brooks B. R., J. Chem. Phys. 122, 044107 (2005). doi:10.1063/1.1836733
- 6. Takahashi K. Z., Narumi T., Suh D., and Yasuoka K., J. Chem. Theory Comput. 8, 4503 (2012). doi:10.1021/ct3003805
- 7. Ewald P. P., Ann. Phys. 369, 253 (1921). doi:10.1002/andp.19213690304
- 8. Toukmaji A. Y. and Board J. A., Comput. Phys. Commun. 95, 73 (1996). doi:10.1016/0010-4655(96)00016-1
- 9. Wells B. A. and Chaffee A. L., J. Chem. Theory Comput. 11, 3684 (2015). doi:10.1021/acs.jctc.5b00093
- 10. Perram J. W., Petersen H. G., and De Leeuw S. W., Mol. Phys. 65, 875 (1988). doi:10.1080/00268978800101471
- 11. Darden T., York D., and Pedersen L., J. Chem. Phys. 98, 10089 (1993). doi:10.1063/1.464397
- 12. Essmann U., Perera L., Berkowitz M. L., Darden T., Lee H., and Pedersen L. G., J. Chem. Phys. 103, 8577 (1995). doi:10.1063/1.470117
- 13. Brooks B. and Karplus M., Proc. Natl. Acad. Sci. U. S. A. 82, 4995 (1985). doi:10.1073/pnas.82.15.4995
- 14. Brooks B. R., Janežič D., and Karplus M., J. Comput. Chem. 16, 1522 (1995). doi:10.1002/jcc.540161209
- 15. Bahar I., Lezon T. R., Bakan A., and Shrivastava I. H., Chem. Rev. 110, 1463 (2010). doi:10.1021/cr900095e
- 16. Acbas G., Niessen K. A., Snell E. H., and Markelz A., Nat. Commun. 5, 3076 (2014). doi:10.1038/ncomms4076
- 17. Durand D., Field M. J., Quilichini M., and Smith J. C., Biopolymers 33, 725 (1993). doi:10.1002/bip.360330502
- 18. Parlinski K. and Chapuis G., J. Chem. Phys. 110, 6406 (1999). doi:10.1063/1.478543
- 19. Hoja J., Reilly A. M., and Tkatchenko A., Wiley Interdiscip. Rev.: Comput. Mol. Sci. 7, e1294 (2017). doi:10.1002/wcms.1294
- 20. Dybeck E. C., Abraham N. S., Schieber N. P., and Shirts M. R., Cryst. Growth Des. 17, 1775 (2017). doi:10.1021/acs.cgd.6b01762
- 21. Bakken V. and Helgaker T., J. Chem. Phys. 117, 9160 (2002). doi:10.1063/1.1515483
- 22. Wang L.-P. and Song C., J. Chem. Phys. 144, 214108 (2016). doi:10.1063/1.4952956
- 23. Banerjee A., Jensen J. O., and Simons J., J. Chem. Phys. 82, 4566 (1985). doi:10.1063/1.448713
- 24. Jensen J. O., Banerjee A., and Simons J., Proc. - Indian Acad. Sci., Chem. Sci. 96, 127 (1986).
- 25. Jensen J. O., Banerjee A., and Simons J., Chem. Phys. 102, 45 (1986). doi:10.1016/0301-0104(86)85116-3
- 26. Sagui C., Pedersen L. G., and Darden T. A., J. Chem. Phys. 120, 73 (2004). doi:10.1063/1.1630791
- 27. de Leeuw S. W., Perram J. W., and Smith E. R., Proc. R. Soc. London, Ser. A 373, 27 (1980). doi:10.1098/rspa.1980.0135
- 28. Smith E. R., Proc. R. Soc. London, Ser. A 375, 475 (1981). doi:10.1098/rspa.1981.0064
- 29. Roberts J. E. and Schnitker J., J. Chem. Phys. 101, 5024 (1994). doi:10.1063/1.467425
- 30. Roberts J. E. and Schnitker J., J. Phys. Chem. 99, 1322 (1995). doi:10.1021/j100004a037
- 31. Ballenegger V., J. Chem. Phys. 140, 161102 (2014). doi:10.1063/1.4872019
- 32. Toukmaji A., Sagui C., Board J., and Darden T., J. Chem. Phys. 113, 10913 (2000). doi:10.1063/1.1324708
- 33. Frigo M. and Johnson S. G., Proc. IEEE 93, 216 (2005). doi:10.1109/jproc.2004.840301
- 34. Almlöf J., Chem. Phys. Lett. 181, 319 (1991). doi:10.1016/0009-2614(91)80078-c
- 35. Braess D. and Hackbusch W., IMA J. Numer. Anal. 25, 685 (2005). doi:10.1093/imanum/dri015
- 36. Takatsuka A., Ten-no S., and Hackbusch W., J. Chem. Phys. 129, 044112 (2008). doi:10.1063/1.2958921
- 37. See http://www.mis.mpg.de/scicomp/EXP_SUM/1_x/tabelle for a listing of minimax quadrature parameters; accessed 7 January 2021.
- 38. Predescu C., Lerer A. K., Lippert R. A., Towles B., Grossman J. P., Dirks R. M., and Shaw D. E., J. Chem. Phys. 152, 084113 (2020). doi:10.1063/1.5129393
- 39. Brooks B. R., Brooks C. L., Mackerell A. D., Nilsson L., Petrella R. J., Roux B., Won Y., Archontis G., Bartels C., Boresch S., Caflisch A., Caves L., Cui Q., Dinner A. R., Feig M., Fischer S., Gao J., Hodoscek M., Im W., Kuczera K., Lazaridis T., Ma J., Ovchinnikov V., Paci E., Pastor R. W., Post C. B., Pu J. Z., Schaefer M., Tidor B., Venable R. M., Woodcock H. L., Wu X., Yang W., York D. M., and Karplus M., J. Comput. Chem. 30, 1545 (2009). doi:10.1002/jcc.21287