Abstract

Developing a force field is a difficult task because its design is typically pulled in opposite directions by speed and accuracy. FFLUX breaks this trend by utilizing Gaussian process regression (GPR) to predict, at ab initio accuracy, atomic energies and multipole moments as obtained from the quantum theory of atoms in molecules (QTAIM). This work demonstrates that the in-house FFLUX training pipeline can generate successful GPR models for six representative molecules: peptide-capped glycine and alanine, glucose, paracetamol, aspirin, and ibuprofen. The molecules were sufficiently distorted to represent configurations from an AMBER-GAFF2 molecular dynamics run. All internal degrees of freedom were covered, corresponding to 93 dimensions in the case of the largest molecule, ibuprofen (33 atoms). Benefiting from active learning, the GPR models contain only about 2000 training points and return largely sub-kcal mol–1 prediction errors for the validation sets. A proof of concept has been reached for transferring a training set produced through active learning on one atomic property to the remaining atomic properties. The prediction of electrostatic interaction can be assessed at the intermolecular level, and the vast majority of interactions have a root-mean-square error of less than 0.1 kJ mol–1 with a maximum value of ∼1 kJ mol–1 for a glycine and paracetamol dimer.
1. Introduction
Traditional force fields have long suffered from the limitations inherent to their design. One issue is their fixation on point charges, even though the superior accuracy of multipolar electrostatics has been amply documented1 for a long while. Another issue is the imprint of strong covalent bonds onto the force field’s image of a molecule and how the latter is modeled. In other words, the Lewis diagram calls the shots in the design of the potential energy expressions of traditional force fields. These expressions cater primarily for the local energy changes associated with strong covalent bonds and create an artificial divide between bonded and nonbonded interactions in terms of their treatment. An alternative design for a force field is to regard atoms as interacting entities and capture those interactions by the same machine learning (ML) expressions, whether the atoms are bonded or not.
In traditional force fields, more subtle2 but still important effects such as an intramolecular hydrogen bond or the interaction between a lone pair and an antibonding π orbital remain absent. However, these interactions can and will be automatically taken into account with ML. This inclusion benefits molecular simulations where such quantum interactions are often essential contributions toward the dynamics of a simulation. As such, ML enables more accurate molecular dynamics (MD) simulations, thereby reducing the need to resort to the alternative ab initio molecular dynamics (AIMD), which covers the aforementioned quantum effects but with the drawback of computational cost. While accurate, AIMD simulations cannot be carried out on systems larger than a few dozen atoms at long timescales within a reasonable timeframe. Force fields based on ML take away this limitation.
Biomolecular simulations are typically plagued by the inaccuracies characteristic of classical force fields.3−7 Such simulations rely on the accurate modeling of intermolecular forces for accurate dynamics.8,9 Consequently, inaccurate modeling of these intermolecular forces produces results that are unreliable and often differ depending on the classical force field used.9,10 Many such inaccuracies stem from the inability of nonpolarizable force fields to effectively model electrostatic interactions. Hence, extensive research efforts have been made toward producing accurate polarizable force fields.11−14
More recently, ML has been successfully employed within force fields15−23 including the atomic Spectrum of London and Axilrod-Teller-Muto (aSLATM) potential24 and the Faber–Christensen–Huang–Lilienfeld (FCHL) representation.25 ML models are trained on quantum data to predict quantum properties at a fraction of the cost of performing an ab initio calculation. Methods such as Gaussian approximation potentials,26,27 deep kernel learning,28 and neural networks29,30 have all shown great promise in accelerating the production of chemically accurate, physically sound properties within a force field framework. The current study focuses on an ML technique known as Gaussian process regression31,32 (GPR) (also known as kriging). We believe that the ML method should operate on atomic properties obtained by some (external) partitioning method rather than carry out the atomic partitioning itself. For various reasons,33,34 we chose the quantum theory of atoms in molecules35,36 (QTAIM) to produce the atomic energies and multipole moments15 that the GPR then operates on. The use of QTAIM allows models to be produced on a per-atom basis because properties are also computed on a per-atom basis. This in turn enables the prediction of properties during a simulation at atomic resolution.
The implementation of the GPR models within a force field is undertaken by the in-house software package DL_FFLUX.37 This package is based on DL_POLY38 and implements all aspects of the force field FFLUX22,39 except its training, which is carried out by the in-house FORTRAN program FEREBUS40 and the in-house Python script ICHOR [see the Supporting Information (SI) for details]. The trained GPR models are called within DL_FFLUX such that atomic properties are predicted during the simulation. A GPR model’s input is the nuclear geometry of the given atom’s environment, and its output is a property of that atom. The GPR models thus provide instantaneous, interpolated values for the properties of each atom during a simulation, based on the local environment that the atom experiences. That GPRs can reach high accuracy has also been demonstrated by others.41,42 The availability of analytical derivatives yields forces43 for the multipolar electrostatics, for the exchange(-correlation) interactions, and for the intra-atomic energies. The first of these covers polarization effects, associated not only with charge transfer (monopolar) but also with the omnipresent dipolar polarization and even higher ranks, beyond the quadrupole moment. The energy decomposition scheme Interacting Quantum Atoms44 (IQA) generates all atomic energies because, as a topological method, it is consistent with the QTAIM atomic multipole moments. In other words, all atomic properties come from the same partitioning idea (which is a multidimensional integration in real space rather than in Hilbert space). Thanks to our implementation45 in DL_FFLUX of multipolar smooth particle mesh Ewald46−48 (SPME) summation, we are able to carry out MD simulations, starting with that of liquid water.49 These simulations, which now also extend to molecular clusters and crystalline material, all use a nonbonded potential because so far the GPR models are confined to individual molecules only. It is known50 that hydrogen-bonded complexes can also be successfully modeled by GPR models, but this accomplishment still needs to be rolled out in the context of simulations. Currently, GPR models of intermolecular repulsion are not available. As a result, the current monomeric modeling invokes an external non-ML potential, such as the Lennard-Jones potential, to take care of nonelectrostatic intermolecular interactions.
The current study demonstrates the use of active learning to produce accurate GPR models for systems of interest to the pharmaceutical industry and ultimately biomolecular simulations. Active learning is a term used to describe a series of algorithms designed to improve the training set of an ML model through iteratively adding unlabeled data points that are expected to increase the predictive accuracy of the model. Active learning has been shown to be a powerful tool in the production of GPR models51 for atomistic properties. Its use extends to allow the production of an optimized training set for a single atomic property,52 followed by using this given training set to train several GPR models for a variety of atomic properties.
2. Methods
2.1. (Data) Point Generation
Modeling larger systems naturally requires more (data) points (i.e., molecular geometries) than smaller systems due to the larger sample space. Previous work51 has utilized AIMD simulations for the initial point generation. The reason for the use of AIMD is to generate chemically viable geometries at a fixed temperature, free of any potential bias caused by a traditional force field. For small systems, AIMD simulations are a good option as they allow for accurate dynamics and control over the extent of the molecular distortion using temperature. Unfortunately, moving toward larger systems, the AIMD method of point generation does not scale very well and becomes extremely expensive computationally. Coupling the nonlinear scaling of point generation with system size and the need for many more points to describe larger systems necessitates a new approach to point generation.
The most obvious choice is to replace AIMD with classical MD simulations. However, moving to classical MD simulations trades accuracy for speed. In this study, points are added to the training set based on the geometry alone, which corresponds to unsupervised learning. For this reason, accurate dynamics in the original simulation becomes less of a concern: the simulation is only used to generate geometries, and thus the forces (or any other molecular properties) are not required. Furthermore, the energies and forces from the AIMD point generation method are discarded anyway because the “true” wavefunction and atomic properties for each point in the training set are calculated using ab initio methods prior to training. In other words, typically the level of theory used by AIMD is lower than that used by the ab initio method at the training stage.
The MD software package used in this study is AMBER1853 with the general AMBER force field (GAFF254). All systems were run in vacuo, without periodic boundary conditions, at a fixed temperature of 300 K, using the Langevin thermostat (collision frequency γln = 0.7 ps–1) and a timestep of 1 fs. Because accurate dynamics were less of a concern than the volume of data points generated, no minimization or equilibration step was carried out, and the simulations were run for 10,000,000 timesteps (10 ns).
The initial MD trajectory was then split into three sets: training set, validation set, and sample set. This study utilized the so-called per-atom active learning approach where each atom has a unique training set. Alongside this unique training set, each atom also had a unique sample set, but all atoms shared the same validation set. To initialize the training set, all input features for all points in the trajectory were calculated. The points corresponding to the minimum feature, the maximum feature, and those closest to the mean were added to the training set. This initialization method is known as the min–max–mean initialization and adds up to 3D points to the initial training set, where D is the dimensionality of the system (the number of features). The actual number of initial training points may be smaller than 3D because duplicate points are removed if, for example, a single point shares the minimum feature in multiple dimensions. The rationale behind the min–max–mean method is to add points at the boundaries of the search space, and some points in the center, while maintaining linear scaling with respect to the number of features; a minimal sketch is given below. Alternative approaches to training set initialization, such as the Latin hypercube or minimax distance sampling, may provide better space-filling properties, but they scale poorly with the number of features and become infeasible for high-dimensional systems. To further assist in filling the search space, 1000 random geometries were added to the initial min–max–mean training set. The sample set and validation set were then selected at random from the remaining points of the initial MD trajectory. In this study, 100,000 sample points and 500 validation points were selected for each system.
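The following numpy sketch illustrates the min–max–mean initialization described above (function and variable names are ours, not ICHOR’s):

```python
import numpy as np

def min_max_mean_init(features: np.ndarray) -> np.ndarray:
    """Indices of the min-max-mean initial training points.

    features: (n_points, D) array of ALF features for one atom.
    Returns at most 3D unique indices: per feature, the point holding
    the minimum, the point holding the maximum, and the point closest
    to the mean of that feature.
    """
    mins = features.argmin(axis=0)
    maxs = features.argmax(axis=0)
    means = np.abs(features - features.mean(axis=0)).argmin(axis=0)
    return np.unique(np.concatenate([mins, maxs, means]))

# Illustrative use: 100,000 geometries of a 93-dimensional (ibuprofen-sized) system.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 93))
initial_idx = min_max_mean_init(X)  # at most 3 x 93 = 279 indices
```

The duplicate removal via np.unique is what makes the actual count fall below 3D when one point is extremal in several dimensions.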
2.2. Feature Calculation
The features used in this study are based upon the atomic local frame43 (ALF). Prior to feature calculation, the ALF for each atom of a system must be determined. The ALF defines the axis system for a given atom and consists of the origin atom, an atom defining the x-axis, and an atom defining the xy-plane. The z-axis of the local frame is then defined using the right-hand rule. To define the x-axis and the xy-plane atoms, the Cahn-Ingold-Prelog rules are used: starting from the origin atom, the atom with highest priority is assigned to the x-axis, and the atom with second highest priority is set to the xy-plane.
Once the ALF for an atom is defined, the ALF features can be calculated. The first three features are based upon the atoms defining the ALF itself: the first feature is the distance from the origin atom to the atom defining the x-axis, the second feature is the distance from the origin atom to the atom defining the xy-plane, and the third feature is the angle between the vector from the origin atom to the x-axis atom and the vector from the origin atom to the xy-plane atom. After the first three features there remain 3N − 9 features, which are the spherical polar coordinates of each non-ALF atom in the axis system defined by the ALF. The spherical polar coordinates follow the physics convention of (r, θ, φ), where r is the distance between a given atom, An, and the origin atom, Ao, while θ is the polar angle of atom An and φ is the azimuthal angle of atom An. Full details of the ALF features are in the SI. It is important to note that the polar angle has a range of [0, π] as it describes the inclination from the z-axis, whereas the azimuthal angle ranges over [−π, π]. The importance of this distinction will become apparent in a later section. A minimal sketch of the feature construction follows.
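As an illustration, a numpy sketch of the ALF feature vector for a single geometry (a simplified reading of the scheme above; all names are ours):

```python
import numpy as np

def alf_features(coords: np.ndarray, origin: int, x_ax: int, xy_pl: int) -> np.ndarray:
    """ALF features of one atom for one geometry.

    coords: (N, 3) Cartesian coordinates; origin, x_ax, xy_pl are the
    atom indices chosen by the Cahn-Ingold-Prelog priorities.
    Returns [R(x-axis), R(xy-plane), valence angle,
             (r, theta, phi) for every non-ALF atom] -> 3N - 6 features.
    """
    o = coords[origin]
    v1 = coords[x_ax] - o          # defines the local x-axis
    v2 = coords[xy_pl] - o         # lies in the local xy-plane
    cos_ang = np.clip(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)
    feats = [np.linalg.norm(v1), np.linalg.norm(v2), np.arccos(cos_ang)]
    ex = v1 / np.linalg.norm(v1)
    ez = np.cross(v1, v2)          # right-hand rule for the local z-axis
    ez /= np.linalg.norm(ez)
    ey = np.cross(ez, ex)
    for i, atom in enumerate(coords):
        if i in (origin, x_ax, xy_pl):
            continue
        d = atom - o
        x, y, z = d @ ex, d @ ey, d @ ez
        r = np.sqrt(x * x + y * y + z * z)
        feats += [r, np.arccos(z / r), np.arctan2(y, x)]  # theta in [0, pi], phi in [-pi, pi]
    return np.array(feats)
```

The arctan2 call is the source of the cyclic azimuthal dimensions discussed in Section 2.3.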
2.3. Gaussian Process Regression
GPR is a nonlinear regression machine learning technique. In this study, a single GPR model predicts a single property of a single atom in a system. This means that the number of GPR models associated with a given system is natom × nproperties. A model consists of a training set and a set of hyperparameters, denoted θ. The training set is a series of points with a set of inputs, X, and their corresponding outputs, y, where X is a matrix of size npoints × nfeatures and y is a vector of length npoints.
A GPR model wholly consists of the training data and the hyperparameters. The latter describe the kernel function (or simply the kernel) of the GPR model. A GPR interpolates an arbitrary function by fitting the kernel to describe the deviation from the mean. This deviation from the mean is then added to a mean function to provide a prediction,
$$\hat{y}(\mathbf{x}^{*}) = \bar{y} + \mathbf{r}^{T}\mathbf{R}^{-1}(\mathbf{y} - \boldsymbol{\mu}) \tag{1}$$

$$\boldsymbol{\mu} = \bar{y}\,\mathbf{1} \tag{2}$$
where y̅ is the mean of the training outputs, y, and the right term of the sum is responsible for the calculation of the deviation from the mean function for arbitrary point x*. In eq 2, 1 is a column vector of ones. The calculation of the deviation from the mean requires the covariance matrix R, and the covariance vector r (which is transposed when appearing in eq 1) of the arbitrary point x*. To calculate the covariance matrix and the covariance vector, a kernel is required. The kernel, k,

$$k = k(\mathbf{x}_{i}, \mathbf{x}_{j}) \tag{3}$$

which computes the covariance between the training set (X) and itself, is used to calculate the covariance matrix and vector as follows:

$$\mathbf{R} = \mathbf{K}(\mathbf{X}, \mathbf{X}) + \sigma^{2}\mathbf{I} \quad \text{with} \quad K_{ij} = k(\mathbf{x}_{i}, \mathbf{x}_{j}) \tag{4}$$

$$\mathbf{r} = \mathbf{k}(\mathbf{X}, \mathbf{x}^{*}) \quad \text{with} \quad r_{i} = k(\mathbf{x}_{i}, \mathbf{x}^{*}) \tag{5}$$
where the kernel is chosen based on the domain-specific problem, σ2 is a noise parameter known as a nugget, and I is the identity matrix. The nugget is added to the diagonal of the covariance matrix to improve the numerical stability of inverting the matrix R and to account for noise in the input data. The training data used for model creation are calculated using ab initio methods and cleaned prior to training. The input data are cleaned through a process known as scrubbing, whereby points with atomic integration errors (L(Ω)) greater than 0.001 a.u. are removed from the training set. Therefore, the clean input data will be smooth. As a result, the nugget value can be set as small as 10–8. Such a small nugget value is often known as jitter. Previous work has made use of the radial basis function (RBF) kernel, which models the distance between two points in space as a Gaussian scaled by the lengthscale parameter l,
$$k_{\mathrm{RBF}}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\left(-\sum_{d=1}^{D}\frac{(x_{i,d} - x_{j,d})^{2}}{2l_{d}^{2}}\right) \tag{6}$$
where D denotes the number of dimensions of the system. As stated in Section 2.2, after the first three features, every subsequent group of three features refers to the spherical polar coordinate (with respect to the ALF) of a non-ALF atom. Consequently, every third non-ALF feature is an azimuthal angle φ with a range of [−π,π]. Because the difference between two azimuthal angles can be greater than π radians, the distance between two features may display cyclic properties. In other words, two features may appear further away from each other (larger than π radians) when regarded in a linear rather than cyclic manner (which is the correct view). Thus, modeling this distance with a Gaussian using a simple linear distance measure is inaccurate. The method to fix this issue in previous work has been to modify the RBF kernel to add a cyclic correction to the distance for every azimuthal angle resulting in the RBF-cyclic kernel shown below
$$k_{\text{RBF-cyclic}}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\left(-\sum_{d=1}^{D}\frac{d_{\mathrm{cyc}}(x_{i,d}, x_{j,d})^{2}}{2l_{d}^{2}}\right) \tag{7}$$

with the cyclically corrected distance

$$d_{\mathrm{cyc}}(x_{i,d}, x_{j,d}) = \begin{cases} 2\pi - |x_{i,d} - x_{j,d}| & \text{if } d \text{ is azimuthal and } |x_{i,d} - x_{j,d}| > \pi \\ |x_{i,d} - x_{j,d}| & \text{otherwise} \end{cases} \tag{8}$$
The current work moves away from the use of the modified RBF kernel used in previous work. The modified RBF kernel is instead replaced with a periodic kernel called the exponential sine-squared kernel,
$$k_{\mathrm{per}}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\left(-\sum_{d}\frac{2\sin^{2}\left(\pi\frac{|x_{i,d} - x_{j,d}|}{p}\right)}{l_{d}^{2}}\right) \tag{9}$$
The exponential sine-squared kernel is similar to the RBF in that a lengthscale parameter, l, is used to scale the difference. However, the kernels differ in the calculation of the difference between the two points: whereas the RBF kernel computes a linear difference, the periodic kernel replaces this with a sine of the difference scaled by a period parameter p. Fortunately, we know that the period of the azimuthal angle is 2π. Fixing p = 2π means one parameter fewer to optimize, as well as a simpler kernel computation. Hence, using the well-known trigonometric identity 2 sin2(a/2) = 1 − cos(a), the kernel reduces to
$$k_{\mathrm{per}}(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\left(-\sum_{d}\frac{1 - \cos(x_{i,d} - x_{j,d})}{l_{d}^{2}}\right) \tag{10}$$
In summary, the standard RBF kernel (eq 6) is used for all noncyclic dimensions (i.e., the first three features followed by every feature except the azimuthal angle features), while the periodic kernel (eq 10) is used for all cyclic dimensions (i.e., the azimuthal angles). Because the noncyclic and cyclic dimensions are independent contributions to the kernel, the two kernels are multiplied to provide the total covariance,
$$k(\mathbf{x}_{i}, \mathbf{x}_{j}) = k_{\mathrm{RBF}}(\mathbf{x}_{i}, \mathbf{x}_{j})\,k_{\mathrm{per}}(\mathbf{x}_{i}, \mathbf{x}_{j}) \tag{11}$$
Because the training set does not change after training, the GPR prediction equation (eq 1) can be simplified by precomputing the product of the inverse covariance matrix and the (mean-centered) training outputs vector as follows
$$\mathbf{w} = \mathbf{R}^{-1}(\mathbf{y} - \bar{y}\mathbf{1}) \tag{12}$$

$$\hat{y}(\mathbf{x}^{*}) = \bar{y} + \mathbf{r}^{T}\mathbf{w} = \bar{y} + \sum_{i=1}^{n} w_{i}\,k(\mathbf{x}^{*}, \mathbf{x}_{i}) \tag{13}$$
Simplifying the prediction equation to the form shown in eq 13 improves the scaling of the GPR predictions from O(n3) to O(n). In other words, the expensive inversion of the R matrix, which scales as O(n3) is avoided at “run time” when predictions are made. It is a feature of GPR models that the more training points there are in the model, the longer it takes to make a prediction. Therefore, producing the most accurate model with the fewest number of points is highly desirable. This study makes use of active learning to achieve this goal, which will be discussed later in Section 2.4.
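A compact numpy sketch of eqs 6 and 10−13 (a constant-mean GPR with the product kernel; all names are ours, and np.linalg.solve stands in for the explicit inverse of R):

```python
import numpy as np

def composite_kernel(a, b, l, cyclic):
    """Product kernel of eq 11: RBF (eq 6) over noncyclic dimensions,
    periodic (eq 10) over cyclic (azimuthal) dimensions."""
    d = a - b
    k_rbf = np.exp(-np.sum(d[~cyclic] ** 2 / (2.0 * l[~cyclic] ** 2)))
    k_per = np.exp(-np.sum((1.0 - np.cos(d[cyclic])) / l[cyclic] ** 2))
    return k_rbf * k_per

def train_weights(X, y, l, cyclic, nugget=1e-8):
    """Training stage: build R once (eq 4) and solve for the weights (eq 12)."""
    n = len(X)
    R = np.array([[composite_kernel(X[i], X[j], l, cyclic) for j in range(n)]
                  for i in range(n)]) + nugget * np.eye(n)
    y_bar = y.mean()                       # constant mean function
    w = np.linalg.solve(R, y - y_bar)      # R^-1 (y - y_bar 1), without forming R^-1
    return y_bar, w

def predict(x_star, X, y_bar, w, l, cyclic):
    """Prediction stage (eq 13): O(n) kernel evaluations per query point."""
    r = np.array([composite_kernel(x_star, xi, l, cyclic) for xi in X])
    return y_bar + r @ w
```

Here, cyclic is a boolean mask marking the azimuthal features (every third non-ALF feature).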
Optimization of the kernel hyperparameters is performed by maximizing the concentrated log-likelihood, log L̂, which is shown in eq 17. The concentrated log-likelihood is derived from the marginal log-likelihood, log L, by analytically optimizing the mean. For the sake of simplicity, the value of the constant mean function will be denoted by μ and the analytically optimized concentrated mean value denoted by μ̂,

$$\log L(\boldsymbol{\theta}, \mu) = -\frac{1}{2}(\mathbf{y} - \mu\mathbf{1})^{T}\mathbf{R}^{-1}(\mathbf{y} - \mu\mathbf{1}) - \frac{1}{2}\log|\mathbf{R}| - \frac{n}{2}\log(2\pi) \tag{14}$$

$$\frac{\partial \log L}{\partial \mu} = \mathbf{1}^{T}\mathbf{R}^{-1}(\mathbf{y} - \mu\mathbf{1}) = 0 \tag{15}$$

$$\hat{\mu} = \frac{\mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{y}}{\mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{1}} \tag{16}$$

$$\log \hat{L}(\boldsymbol{\theta}) = -\frac{1}{2}(\mathbf{y} - \hat{\mu}\mathbf{1})^{T}\mathbf{R}^{-1}(\mathbf{y} - \hat{\mu}\mathbf{1}) - \frac{1}{2}\log|\mathbf{R}| - \frac{n}{2}\log(2\pi) \tag{17}$$
where n is the number of training points and 1 is a vector of length n. Maximizing the concentrated log-likelihood to find the optimal hyperparameter vector requires numerical methods. The likelihood function is a highly nonconvex surface with many local optima. To find the global optimum, an evolutionary optimization algorithm called particle swarm optimization55 (PSO) is employed. The advantage of a global optimization technique (such as PSO) over local optimizers (such as gradient descent algorithms) is the ability to traverse highly nonconvex surfaces without becoming stuck in local minima.
Inspired by the swarming behavior of birds, PSO consists of a swarm of particles that can communicate with one another to swarm toward the global optimum value. Each particle (i) of the swarm has a parameter vector known as the particle’s position, p, and a corresponding velocity, v, which updates the particle’s position every timestep, t. To calculate the velocity of particle pi for the next timestep requires three factors: (i) the particle’s current velocity, vi, (ii) the position of the previous personal best (pb) value, pipb, and (iii) the position of the swarm’s previous best global value, pgb. These three factors may be summed using three corresponding weights: (i) the inertia weight, ω, (ii) the cognitive learning rate, c1, and (iii) the social learning rate, c2. The inertia weight determines the influence of the particle’s current velocity, the cognitive learning rate determines the influence of the pull to the particle’s previously best-known value, and the social learning rate determines the influence of the pull to the swarm’s best-known value. Both the cognitive and social learning rates are combined with a random factor (r1 and r2, respectively), where rk ∈ [0,1], to prevent stagnation,
$$\mathbf{v}_{i}(t+1) = \omega\,\mathbf{v}_{i}(t) + c_{1}r_{1}\left[\mathbf{p}_{i}^{\mathrm{pb}} - \mathbf{p}_{i}(t)\right] + c_{2}r_{2}\left[\mathbf{p}^{\mathrm{gb}} - \mathbf{p}_{i}(t)\right] \tag{18}$$
The GPR and PSO algorithms are implemented by the in-house GPR engine software package FEREBUS. Full details of the settings of all PSO parameters are in the SI.
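A minimal numpy sketch of one PSO iteration (eq 18 plus the position update); the parameter values below are illustrative, not the FEREBUS defaults (see the SI for those):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(pos, vel, pbest, gbest, omega=0.7, c1=1.5, c2=1.5):
    """One velocity (eq 18) and position update for the whole swarm.

    pos, vel, pbest: (n_particles, n_hyperparameters); gbest: (n_hyperparameters,).
    """
    r1 = rng.random(pos.shape)   # random factors r1, r2 prevent stagnation
    r2 = rng.random(pos.shape)
    vel = omega * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

In a full optimizer, each particle’s position would be scored by the concentrated log-likelihood (eq 17), with pbest and gbest updated whenever a better score is found.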
2.4. Active Learning
Active learning56−59 is the process of selecting unlabeled data points (i.e., without output value) from a sample set that will improve the training set in the subsequent iteration. Active learning reduces the number of ab initio calculations required, as well as producing minimal training sets. A minimal training set is a training set that achieves a certain accuracy in the smallest number of points. This is especially important in the context of GPR models because the prediction time increases with the size of the training set as well as the model’s physical size and thus hardware memory requirements.
Many active learning methods have been developed to solve domain-specific problems. Some active learning methods can become expensive to run when moving toward larger training and sample sets. Previous work focused on the maximum expected prediction error60 (MEPE) active learning method, which requires the computation of both the expected cross-validation error and the predictive variance of each sample point. The sample set in this study is orders of magnitude larger than that used in our previous work. Therefore, the active learning method has been simplified to the highest variance method. The highest variance61 active learning method calculates the predictive variance of each point in the sample pool, selecting the points with the highest variance to add to the training set,
$$s^{2}(\mathbf{x}^{*}) = \sigma^{2}\left[1 - \mathbf{r}^{T}\mathbf{R}^{-1}\mathbf{r} + \frac{\left(1 - \mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{r}\right)^{2}}{\mathbf{1}^{T}\mathbf{R}^{-1}\mathbf{1}}\right] \tag{19}$$
where σ2 is the model variance, 1 is a column vector of ones with length ntrain, and the T again marks the transpose. The further away a sample point is from the training set, the higher the predictive variance will be and therefore the more likely it will be chosen by the active learning method, that is, included in the training set. The highest variance method favors exploration over exploitation, which could be seen as a disadvantage. Continually favoring exploration would result in a training set where the training points are far apart from one another. Therefore, parts of the potential energy surface where fine detail may be required (such as minima and transition states) may not be sampled as highly as if an exploitative method was used. However, in our case, exploring geometries that are currently not close to the training set, and adding them to the training set, guarantees successful MD runs. Indeed, an MD run can demand a prediction of the GPR model for a spurious geometry, that is, one far away from the original training set.
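A sketch of the selection step, scoring each sample point with eq 19 (kernel is any trained covariance function, R_inv is the nugget-regularized training covariance inverse; names are ours):

```python
import numpy as np

def highest_variance_points(X_train, X_sample, sigma2, R_inv, kernel, n_add=1):
    """Return indices of the n_add sample points with the largest
    predictive variance (eq 19)."""
    ones = np.ones(len(X_train))
    scores = []
    for x in X_sample:
        r = np.array([kernel(x, xt) for xt in X_train])
        var = sigma2 * (1.0 - r @ R_inv @ r
                        + (1.0 - ones @ R_inv @ r) ** 2 / (ones @ R_inv @ ones))
        scores.append(var)
    return np.argsort(scores)[-n_add:]   # most uncertain points -> exploration
```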
The mechanism of a failing MD run can be explained as follows. A characteristic of using a constant value as the mean function of a GPR model is that predictions for points far away from the training set revert to this constant value. The gradient of a constant is zero; if the predicted value is an intramolecular energy, then the corresponding force applied to a nucleus by a change in intramolecular energy vanishes. Such a vanishing force is undesirable during an MD simulation because, without these forces, atoms are no longer held together as a molecule. Consequently, a force caused by an intermolecular (e.g., electrostatic) interaction may pull an affected atom in a given molecule toward another molecule. This effect is devastating for the MD simulation. Adding points further out from the geometries typically seen within normal MD simulations prevents such situations from occurring.
The atomic property prediction for each atom in a system is independent. Therefore, the training set for each atom in a system and, in fact, each atomic property can be unique. Furthermore, as each training set is unique, each atom’s active learning run is also unique. This allows for the optimization of the training set on a per-atom basis, which we coin per-atom active learning.
All active learning and ab initio interfaces are implemented in the in-house pipeline software ICHOR, which allows for highly parallel execution on HPC clusters. A flowchart of ICHOR is shown in the SI.
2.5. Atomic Properties
The output properties for the GPR models are atomic properties calculated using ab initio methods. In our previous work using active learning, the atomic property being modeled was the IQA energy. The IQA energy is used in atomistic simulations for the intramolecular interaction, that is, taking care of the internal energy balance of a flexible molecule. The IQA energy of a given atom A captures its intra-atomic energy along with (half of) its interaction energy with all other atoms in the molecule,
$$E_{\mathrm{IQA}}^{A} = E_{\mathrm{intra}}^{A} + \frac{1}{2}\sum_{B \neq A} E_{\mathrm{inter}}^{AB} \tag{20}$$

$$E_{\mathrm{inter}}^{AB} = V_{\mathrm{cl}}^{AB} + V_{\mathrm{xc}}^{AB} \tag{21}$$
where EintraA is the intra-atomic energy of atom A, VclAB is the classical (electrostatic) potential energy between atoms A and B, and VxcAB is the exchange-correlation potential energy between atoms A and B. Explicit formulae defining these energies can be found in the original IQA publication44 and in the work62 that made IQA compatible with DFT for the first time. In an MD simulation, the forces to apply to each atom can be calculated from the derivative of the predicted energy, allowing the modeling of a single molecule in vacuo.
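As a sketch, assembling eqs 20 and 21 from per-atom and pairwise arrays (the array layout is an assumption for illustration):

```python
import numpy as np

def iqa_energy(E_intra, V_cl, V_xc, A):
    """E_IQA of atom A (eq 20) from the eq 21 pair energies.

    E_intra: (N,) intra-atomic energies; V_cl, V_xc: (N, N) symmetric
    pairwise matrices with zero diagonals. The factor 1/2 shares each
    pair energy between the two atoms involved.
    """
    E_inter_A = V_cl[A] + V_xc[A]          # eq 21, row of atom A
    return E_intra[A] + 0.5 * E_inter_A.sum()
```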
FFLUX is a polarizable force field, which requires electrostatic information to cover intermolecular interactions. Short-range electrostatics are captured by the IQA energy, specifically in the interaction energy (EinterAB) of the IQA energy. For long-range electrostatic interactions, FFLUX uses multipole moment predictions63−65 from GPR models as an input45 to the SPME method.48 This allows FFLUX to capture monopole–monopole, monopole–dipole, dipole–dipole, etc. interactions using multipole moments up to hexadecapole moments (L′ = 4).
Training multipole moment models is similar to training IQA energies. In fact, the training process is identical, other than the replacement of the training outputs by the specific multipole moment being trained. It is important to note that the multipole moments must be expressed in the ALF because the features are expressed with respect to the ALF. The multipole moments are typically computed in the global frame, so a rotation to the ALF is needed prior to training. Conversely, when using the GPR multipole moment models in a simulation, the predicted local frame moments must be rotated back to the global frame.
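For the lowest nontrivial rank, this rotation is an ordinary vector rotation; a sketch (C is assumed to hold the ALF unit vectors as rows, expressed in global coordinates):

```python
import numpy as np

def dipole_to_global(C: np.ndarray, mu_local: np.ndarray) -> np.ndarray:
    """Rotate a predicted ALF dipole moment back to the global frame.

    If local = C @ global for an orthogonal C, then global = C.T @ local.
    Higher-rank moments (quadrupole and up) transform with the
    corresponding higher-order rotation matrices, not shown here.
    """
    return C.T @ mu_local
```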
Quantum chemical calculations were carried out using the program66 GAUSSIAN09 at the B3LYP/6-31+G(d,p) level of theory, while the QTAIM calculations were carried out using the external program67 AIMAll (version 19).
Numerical errors in the computation of the atomic properties introduce an integration error into the training data. Certain geometries may result in an especially large integration error and may not be suitable for use in the GPR model. Such geometries must thus be removed from the training set prior to training, in a process known as scrubbing. Points are scrubbed from the training set if the point’s integration error is greater than the threshold of 0.001 a.u.
3. Results and Discussion
3.1. Predictive Accuracy
The first step in validating a model is to test the predictive accuracy of each model against a validation set. Full details of the distortions used to generate each model and the number of training points are in the SI. The validation set is selected at random from the initial AMBER trajectory, after removing the points assigned to the training and sample sets. For this study, a validation set size of 500 points was chosen, and the atomic properties for all 500 points were calculated. We first define the individual prediction error (PE) for a single model, that is, for a given atom and given property. This prediction error is then calculated, for each point in the validation set, as the difference between the true value of the validation point, f(x), and the predicted value of the validation point, f̂(x),
$$\mathrm{PE}(\mathbf{x}^{*}) = \left|f(\mathbf{x}^{*}) - \hat{f}(\mathbf{x}^{*})\right| \tag{22}$$
The total prediction error of a given property, for each system, then involves a sum over atoms. In particular, we take the difference between the sum of the true values and the sum over the predicted values,
$$\mathrm{PE}_{\mathrm{total}}(\mathbf{X}^{*}) = \left|\sum_{i} f(\mathbf{x}_{i}^{*}) - \sum_{i} \hat{f}(\mathbf{x}_{i}^{*})\right| \tag{23}$$
where X* is the collection of feature vectors for each atom within a given molecule and xi* is the feature vector of atom i within this molecule. The total prediction error (PEtotal, eq 23) is a useful metric for evaluating the overall accuracy of a model while individual prediction errors (PE, eq 22) allow for the identification of particularly poorly performing models. Plotting the prediction errors against the percentile of the validation point results in an S-curve. The S-curve is a useful visualization of both the average error (although not explicitly shown) and the distribution of errors across a validation set. Figures 1 and 2 show the S-curves for the IQA prediction errors of the glycine and paracetamol models, respectively. The S-curves for the remaining systems (aspirin, alanine, glucose, and ibuprofen) are in the SI.
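Before turning to the figures, a short matplotlib sketch of how such an S-curve is constructed from eq 22 errors (the data here are synthetic, purely to show the plot’s mechanics):

```python
import numpy as np
import matplotlib.pyplot as plt

def s_curve(true_vals, pred_vals, label):
    """Plot sorted absolute prediction errors (eq 22) against percentile."""
    errors = np.sort(np.abs(true_vals - pred_vals))
    percentiles = 100.0 * np.arange(1, errors.size + 1) / errors.size
    plt.semilogx(errors, percentiles, label=label)  # log error axis gives the "S"

rng = np.random.default_rng(2)
truth = rng.normal(size=500)                         # 500-point validation set
s_curve(truth, truth + rng.normal(scale=0.1, size=500), "synthetic atom")
plt.xlabel("prediction error / kJ mol$^{-1}$")
plt.ylabel("percentile of validation set / %")
plt.legend()
plt.show()
```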
Figure 1.
S-curves showing the individual atom IQA prediction error for a glycine AMBER 300 K model. Hydrogens are not shown here for clarity but can be found in Figure S4.1 in the SI.
Figure 2.
S-curves showing the individual atom IQA prediction error for a paracetamol AMBER 300 K model. Hydrogens are not shown here for clarity but can be found in Figure S4.2 in the SI.
All six systems demonstrate excellent predictive accuracy, with the overwhelming majority of atoms displaying prediction errors below 1 kJ mol–1 for 50% of the validation set. Almost all models are below 1 kcal mol–1 for 90% of the validation set except for 6 atoms (C3, C8, C12, H23, H27, and C28; atom numberings can be found in the SI) in the ibuprofen model, the worst of which reaches the 1 kcal mol–1 threshold at 48% of the validation set.
It is not immediately clear from IQA S-curves alone how well a training set, produced by active learning on IQA models, will transfer to multipole moment models. The predictive accuracy of both models must be compared to determine the validity of utilizing a model, created through active learning of the IQA energy, for the multipole moments. Figure 3 shows such a comparison between the mean absolute errors (MAE) of the IQA models (used to create the S-curves above) and the charge models (using the IQA training inputs) for the glycine (Figure S7a) and paracetamol (Figure S7b) models.
Figure 3.
Comparison between IQA and charge predictions for (A) glycine and (B) paracetamol. Both the IQA (kJ mol–1) and charge (milli-electron, me) training sets share the same geometries because training set input geometries taken from an active learning run, exclusively using IQA models, are then transferred to produce GPR models for atomic charge.
For both glycine and paracetamol, Figure 3 demonstrates that there is a broad positive correlation (Pearson correlation coefficients r2 > 0.75) between the predictive accuracy of the IQA models and the charge models. Because the IQA prediction errors provide a good insight into the corresponding charge prediction, it is unnecessary to produce multipolar models for each iteration of an active learning run. Instead, it is acceptable to infer the multipolar prediction errors from the IQA prediction errors. This insight removes the need to perform a unique active learning run for each atomic property. Rather, active learning should be performed on IQA energy models only; once a satisfactory IQA prediction error is reached, the final training set can be used to produce models for the multipole moments.
Once produced, the multipole moments may undergo the same prediction error analysis as the IQA energies. Unlike IQA energies, multipole moments (excluding charge) are not scalar quantities but are predicted as directional quantities. Therefore, for reasons that are discussed in detail in the next section, S-curve analysis provides less insight into their predictive accuracy. However, the charge multipole moment is a scalar quantity with an insightful S-curve analysis (Figure 4).
Figure 4.
S-curves for the (A) glycine and (B) paracetamol atomic charge prediction errors.
For completeness, the remaining multipole moment S-curves for the glycine and paracetamol models shown in Figure S8 are in the SI.
3.2. Electrostatic Interactions
For scalar quantities such as energy and charge, computing the raw prediction errors is generally sufficient because the lower the prediction error, the better the model predicts the atomic property. However, for higher-rank multipole moments, this is not necessarily the case because the absolute value of each multipole moment is only half of the picture, the other half being the orientation of the multipole moment. Both magnitude and orientation contribute to the predictive performance of the multipole moment contributions toward the electrostatic interaction. To test the accuracy of these multipole moments, a dimeric validation set is constructed and the multipole moment for each atom in the monomeric state is computed. The dimer configurations were created manually through reflection, rotation, and translation, which enabled the selection of configurations that avoid short-range convergence issues. We quantify the performance of each multipole moment model by comparing the interaction energies produced when predicting multipole moments of dimers in DL_FFLUX. In particular, we compare the interaction energy calculated from the true moments of each atom in one monomer and the true moments of each atom in the other monomer with the same interaction energy but then calculated from the predicted moments. It should be understood that we are comparing the electrostatic energy at the level of the multipolar energy only rather than the full electrostatic energy because we ignore the penetration energy that typically arises from the superposition of monomeric wavefunctions.
The interactions of the multipole moments between each molecule may be controlled using the L′ value. An L′ = 0 value signifies that only point charges are interacting; here, we investigate interactions up to hexadecapole–hexadecapole (L′ = 4). Note that L′ = 4 actually includes all multipole–multipole interactions up to (and including) the hexadecapole moment. A good measure for the accuracy of a model in predicting the interaction energy is the root-mean-square error (RMSE),
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(E_{\mathrm{int}}^{\mathrm{pred},i} - E_{\mathrm{int}}^{\mathrm{true},i}\right)^{2}} \tag{24}$$
where Eintpred is the predicted interaction energy for a given atom pair, Einttrue is the true interaction energy between these same atoms, and n is the number of validation points used for the predictions.
By comparing the interaction energies between each atom of a dimer, we can produce a heatmap showing which interactions are best predicted, based on the RMSE values for each atom–atom interaction; a sketch of this calculation follows. Figure 5 shows the (logarithmic) interaction heatmaps of glycine, and Figure 6 shows those of paracetamol; each heatmap reports the RMSE value for every interaction between each atom of the dimer configuration.
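A numpy sketch of eq 24 evaluated for every atom pair at once (the (n_points, n_atoms_A, n_atoms_B) array layout is an assumption for illustration):

```python
import numpy as np

def rmse_heatmap(E_pred: np.ndarray, E_true: np.ndarray) -> np.ndarray:
    """Pairwise RMSE matrix (eq 24) from interaction-energy arrays of
    shape (n_points, n_atoms_A, n_atoms_B); log10 is taken for plotting,
    as in Figures 5 and 6."""
    per_pair = np.sqrt(np.mean((E_pred - E_true) ** 2, axis=0))
    return np.log10(per_pair + 1e-12)   # epsilon guards against log10(0)
```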
Figure 5.
Heatmap showing the RMSE of the interaction energy between each pair of atoms occurring in the glycine dimer for increasing values of L′ alongside the atom labeling used for the dimer configuration. Blue indicates a lower RMSE value, while yellow indicates a higher RMSE value.
Figure 6.
Heatmap showing the RMSE of the interaction energy between each pair of atoms occurring in the paracetamol dimer for increasing values of L′ alongside the atom labeling used for the dimer configuration. Blue indicates a lower RMSE value, and yellow indicates a higher RMSE value.
As can be seen from Figures 5 and 6, the vast majority of interactions have an RMSE of less than 0.1 kJ mol–1, with a maximum RMSE of roughly 1.1 kJ mol–1. It is worth noting that interactions such as C4–C23 in glycine and C12–C33 in paracetamol (both having an RMSE greater than 1 kJ mol–1) have absolute interaction energies of around 500 kJ mol–1. Therefore, an error of 1 kJ mol–1 corresponds to a ∼0.2% error. Second, we note that the heatmaps change very little between the L′ values for a given molecule, with the exception of the two interacting methyl groups in the paracetamol dimer.
In summary, the two main drawbacks of our methodology both have to do with computational demand. First, the calculation of the atomic properties is expensive. Second, the CPU time it takes to perform the active learning is amplified by the previous demand because we need to calculate new IQA energies and then retrain at each learning iteration. However, the latter drawback is mitigated somewhat by taking advantage of multiple cores and adding multiple points per iteration.
4. Conclusions
The force field FFLUX uses Gaussian process regression (GPR) to learn the energies and multipole moments of quantum topological atoms. These atoms are space filling and together make up the six molecules (peptide-capped glycine and alanine, glucose, paracetamol, aspirin, and ibuprofen) for which we have successfully trained models using the in-house program FEREBUS and its associated pipeline ICHOR. We distorted these molecules to a fair degree, including full methyl rotations, with an eye on covering many configurations from an AMBER-GAFF2 molecular dynamics run. As such, FFLUX has now been developed beyond its original action radius of normal-mode-distorted minimum energy geometries. The topological partitioning method offers the advantage of consistent atomic energies and multipole moments, both obtained from the same integration over atomic volumes. Moreover, GPR needs fewer training points than neural networks and, when combined with active learning, manages with no more than about two thousand training geometries. This methodology can handle systems of a dimensionality of about 50 (glycine), returning almost 90% of the validation set geometries with an energy prediction error of less than 1 kcal mol–1. As the dimensionality of the systems increases, this performance deteriorates, with the exception of glucose (66 dimensions), to just under 20% for the 93-dimensional system ibuprofen. The prediction of electrostatic interaction, at the level of multipolar electrostatics only, can be assessed at the intermolecular level, and the vast majority of interactions have a root-mean-square error of less than 0.1 kJ mol–1 with a maximum value of ∼1.1 kJ mol–1 for a glycine and paracetamol dimer. Finally, a proof of concept has been reached for utilizing a model produced through active learning of one atomic property to produce models for all atomic properties, by showing the existence of a broad positive correlation between the predictive accuracy of the atomic energy models and the charge models. As a result, active learning should be performed on energy models only, and the resulting training set should be used to produce multipolar models once the energy prediction errors are satisfactory.
Acknowledgments
M.J.B. acknowledges the MRC DTP for the award of a PhD studentship. Matthew Brown is thanked for his help in preparing Figures 1 and 2.
Supporting Information Available
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jctc.2c00731.
Labeled geometries, model sizes, mist plots, S-curves, atomic local frame features, particle swarm optimization, and ICHOR (PDF)
The authors declare no competing financial interest.
Notes
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
- Cardamone S.; Hughes T. J.; Popelier P. L. A. Multipolar Electrostatics. Phys. Chem. Chem. Phys. 2014, 16, 10367–10387. 10.1039/c3cp54829e. [DOI] [PubMed] [Google Scholar]
- Gorenstein D. G. Stereoelectronic Effects in Biomolecules. Chem. Rev. 1987, 87, 1047–1077. 10.1021/cr00081a009. [DOI] [Google Scholar]
- Best R. B.; Buchete N.-V.; Hummer G. Are Current Molecular Dynamics Force Fields too Helical?. Biophys. J. 2008, 95, L07–L09. 10.1529/biophysj.108.132696. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Freddolino P. L.; Park S.; Roux B.; Schulten K. Force Field Bias in Protein Folding Simulations. Biophys. J. 2009, 96, 3772–3780. 10.1016/j.bpj.2009.02.033. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fadda E.; Woods R. J. Molecular simulations of carbohydrates and protein–carbohydrate interactions: motivation, issues and prospects. Drug Discovery Today 2010, 15, 596–609. 10.1016/j.drudis.2010.06.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- MacKerell A. D. Empirical Force Fields for Biological Macromolecules: Overview and Issues. J. Comput. Chem. 2004, 25, 1584–1604. 10.1002/jcc.20082. [DOI] [PubMed] [Google Scholar]
- Piana S.; Lindorff-Larsen K.; Shaw D. E. How Robust Are Protein Folding Simulations with Respect to Force Field Parameterization?. Biophys. J. 2011, 100, L47–L49. 10.1016/j.bpj.2011.03.051. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bruist M. F. A simple demonstration of How Intermolecular Forces Make DNA helical. J. Chem. Educ. 1998, 75, 53. 10.1021/ed075p53. [DOI] [Google Scholar]
- Patapati K. P.; Glykos N. M. Three Force Fields’ Views of the 310 Helix. Biophys. J. 2011, 101, 1766–1771. 10.1016/j.bpj.2011.08.044. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Debiec K. T.; Gronenborn A. M.; Chong L. T. Evaluating the Strength of Salt Bridges: A Comparison of Current Biomolecular Force Fields. J. Phys. Chem. B 2014, 118, 6561–6569. 10.1021/jp500958r. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Friesner R. A.; Baker D.. Modeling Polarization in Proteins and Protein-ligand Complexes: Methods and Preliminary Results. In Advances in Protein Chemistry; Academic Press: USA, 2005; Vol. 72, pp 79–104. [DOI] [PubMed] [Google Scholar]
- Halgren T. A.; Damm W. Polarizable force fields. Curr. Opin. Struct. Biol. 2001, 11, 236–242. 10.1016/S0959-440X(00)00196-2. [DOI] [PubMed] [Google Scholar]
- Jing Z.; Liu C.; Cheng S. Y.; Qi R.; Walker B. D.; Piquemal J.-P.; Ren P. Polarizable Force Fields for Biomolecular Simulations: Recent Advances and Applications. Annu. Rev. Biophys. 2019, 48, 371–394. 10.1146/annurev-biophys-070317-033349. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baker C. M. Polarizable force fields for molecular dynamics simulations of biomolecules. Wiley Interdiscip. Rev.: Comput. Mol. Sci. 2015, 5, 241–254. 10.1002/wcms.1215. [DOI] [Google Scholar]
- Bereau T.; Andrienko D.; von Lilienfeld O. A. Transferable atomic multipole machine learning models for small organic molecules. J. Chem. Theory Comput. 2015, 11, 3225–3233. 10.1021/acs.jctc.5b00301. [DOI] [PubMed] [Google Scholar]
- Unke O. T.; Chmiela S.; Sauceda H. E.; Gastegger M.; Poltavsky I.; Schütt K. T.; Tkatchenko A.; Müller K.-R. Machine Learning Force Fields. Chem. Rev. 2021, 121, 10142–10186. 10.1021/acs.chemrev.0c01111. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deringer V. L.; Bartók A. P.; Bernstein N.; Wilkins D. M.; Ceriotti M.; Csányi G. Gaussian Process Regression for Materials and Molecules. Chem. Rev. 2021, 121, 10073–10141. 10.1021/acs.chemrev.1c00022. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Behler J. Four Generations of High-Dimensional Neural Network Potentials. Chem. Rev. 2021, 121, 10037–10072. 10.1021/acs.chemrev.0c00868. [DOI] [PubMed] [Google Scholar]
- Behler J. Perspective: Machine learning potentials for atomistic simulations. J. Chem. Phys. 2016, 145, 170901 10.1063/1.4966192. [DOI] [PubMed] [Google Scholar]
- Ko T.-W.; Finkler J. A.; Goedecker S.; Behler J. General-Purpose Machine Learning Potentials Capturing Nonlocal Charge Transfer. Acc. Chem. Res. 2021, 54, 808–817. 10.1021/acs.accounts.0c00689. [DOI] [PubMed] [Google Scholar]
- Musil F.; Grisafi A.; Bartók A. P.; Ortner C.; Csányi G.; Ceriotti M. Physics-Inspired Structural Representations for Molecules and Materials. Chem. Rev. 2021, 121, 9759–9815. 10.1021/acs.chemrev.1c00021. [DOI] [PubMed] [Google Scholar]
- Popelier P. L. A. QCTFF: On the Construction of a Novel Protein Force Field. Int. J. Quant. Chem. 2015, 115, 1005–1011. 10.1002/qua.24900. [DOI] [Google Scholar]
- Mittal S.; Shukla D. Recruiting machine learning methods for molecular simulations of proteins. Mol. Simul. 2018, 44, 891–904. 10.1080/08927022.2018.1448976. [DOI] [Google Scholar]
- Huang B.; von Lilienfeld O. A. Quantum machine learning using atom-in-molecule-based fragments selected on the fly. Nat. Chem. 2020, 12, 945–950. 10.1038/s41557-020-0527-z. [DOI] [PubMed] [Google Scholar]
- Faber F. A.; Christensen A. S.; Huang B.; von Lilienfeld O. A. Alchemical and structural distribution based representation for universal quantum machine learning. J. Chem. Phys. 2018, 148, 241717 10.1063/1.5020710. [DOI] [PubMed] [Google Scholar]
- Bartók A. P.; Payne M. C.; Kondor R.; Csanyi G. Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons. Phys. Rev. Lett. 2010, 104, 136403 10.1103/PhysRevLett.104.136403. [DOI] [PubMed] [Google Scholar]
- Nguyen T. T.; Szekely E.; Imbalzano G.; Behler J.; Csanyi G.; Ceriotti M.; Gotz A. W.; Paesani F. Comparison of permutationally invariant polynomials, neural networks, and Gaussian approximation potentials in representing water interactions through many-body expansions. J. Chem. Phys. 2018, 148, 241725 10.1063/1.5024577. [DOI] [PubMed] [Google Scholar]
- Sivaraman G.; Jackson N. E. Coarse-Grained Density Functional Theory Predictions via Deep Kernel Learning. J. Chem. Theory Comput. 2022, 18, 1129–1141. 10.1021/acs.jctc.1c01001. [DOI] [PubMed] [Google Scholar]
- Behler J.; Lorenz S.; Reuter K. Representing molecule-surface interactions with symmetry-adapted neural networks. J. Chem. Phys. 2007, 127, 014705 10.1063/1.2746232. [DOI] [PubMed] [Google Scholar]
- Handley C. M.; Popelier P. L. A. Potential Energy Surfaces Fitted by Artificial Neural Networks. J. Phys. Chem. A 2010, 114, 3371–3383. 10.1021/jp9105585. [DOI] [PubMed] [Google Scholar]
- Rasmussen C. E.; Williams C. K. I.. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, USA, 2006. [Google Scholar]
- Yuan Y.; Mills M.; Popelier P. L. A. Multipolar electrostatics based on the Kriging machine learning method: an application to serine. J. Mol. Model. 2014, 20, 2172 10.1007/s00894-014-2172-1. [DOI] [PubMed] [Google Scholar]
- Popelier P. L. A., On Topological Atoms and Bonds. In Intermolecular Interactions in Molecular Crystals, Novoa J., Ed.; RSC: Cambridge, Great Britain, 2018; pp 147–177. [Google Scholar]
- Popelier P. L. A.On Quantum Chemical Topology. In Challenges and Advances in Computational Chemistry and Physics dedicated to ″Applications of Topological Methods in Molecular Chemistry″, Chauvin R.; Lepetit C.; Alikhani E.; Silvi B., Eds. Springer: Switzerland, 2016; pp 23–52. [Google Scholar]
- Bader R. F. W.Atoms in Molecules: A Quantum Theory; Oxford Univ. Press: Oxford, Great Britain, 1990. [Google Scholar]
- Popelier P. L. A.Atoms in Molecules: An Introduction; Pearson Education: London, Great Britain, 2000. [Google Scholar]
- Thacker J. C. R.; Wilson A. L.; Hughes Z. E.; Burn M. J.; Maxwell P. I.; Popelier P. L. A. Towards the simulation of biomolecules: optimisation of peptide-capped glycine using FFLUX. Mol. Simul. 2018, 44, 881–890. 10.1080/08927022.2018.1431837. [DOI] [Google Scholar]
- Todorov I. T.; Smith W.. The DL_POLY_4 User Manual; CCLRC Daresbury Laboratory: Warrington, Great Britain, 2018. [Google Scholar]
- Popelier P. L. A. Molecular Simulation by Knowledgeable Quantum Atoms. Phys. Scr. 2016, 91, 033007 10.1088/0031-8949/91/3/033007. [DOI] [Google Scholar]
- Di Pasquale N.; Bane M.; Davie S. J.; Popelier P. L. A. FEREBUS: Highly Parallelized Engine for Kriging Training. J. Comput. Chem. 2016, 37, 2606–2616. 10.1002/jcc.24486. [DOI] [PubMed] [Google Scholar]
- Chmiela S.; Sauceda H. E.; Müller K.-R.; Tkatchenko A. Towards exact molecular dynamics simulations with machine-learned force fields. Nat. Commun. 2018, 9, 3887 10.1038/s41467-018-06169-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sauceda H. E.; Gálvez-González L. E.; Chmiela S.; Paz-Borbón L. O.; Müller K.-R.; Tkatchenko A. BIGDML—Towards accurate quantum machine learning force fields for materials. Nat. Commun. 2022, 13, 3733 10.1038/s41467-022-31093-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mills M. J. L.; Popelier P. L. A. Electrostatic Forces: formulae for the first derivatives of a polarisable, anisotropic electrostatic potential energy function based on machine learning. J. Chem. Theory Comput. 2014, 10, 3840–3856. 10.1021/ct500565g. [DOI] [PubMed] [Google Scholar]
- Blanco M. A.; Martín Pendás A.; Francisco E. Interacting Quantum Atoms: A Correlated Energy Decomposition Scheme Based on the Quantum Theory of Atoms in Molecules. J. Chem. Theory Comput. 2005, 1, 1096–1109. 10.1021/ct0501093. [DOI] [PubMed] [Google Scholar]
- Symons B. C. B.; Popelier P. L. A. Flexible multipole moments in smooth particle mesh Ewald. J. Chem. Phys. 2022, 156, 244107 10.1063/5.0095581. [DOI] [PubMed] [Google Scholar]
- Belhadj M.; Alper H. E.; Levy R. M. Molecular dynamics simulations of water with Ewald summation for the long-range electrostatic interactions. Chem. Phys. Lett. 1991, 179, 13–20. 10.1016/0009-2614(91)90284-G. [DOI] [Google Scholar]
- Duan Z.-H.; Krasny R. An Ewald summation based multipole method. J. Chem. Phys. 2000, 113, 3492–3495. 10.1063/1.1289918. [DOI] [Google Scholar]
- Essmann U.; Perera L.; Berkowitz M. L.; Darden T.; Lee H.; Pedersen L. G. A smooth particle mesh Ewald method. J. Chem. Phys. 1995, 103, 8577–8593. 10.1063/1.470117. [DOI] [Google Scholar]
- Symons B. C. B.; Popelier P. L. A. Application of quantum chemical topology force field FFLUX to condensed matter simulations: liquid water. J. Chem. Theory Comput. 2022, 18, 5577–5588. 10.1021/acs.jctc.2c00311. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hughes T. J.; Kandathil S. M.; Popelier P. L. A. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging. Spectrochim. Acta, Part A 2015, 136, 32–41. 10.1016/j.saa.2013.10.059. [DOI] [PubMed] [Google Scholar]
- Burn M. J.; Popelier P. L. A. Creating Gaussian Process Regression Models for Molecular Simulations Using Adaptive Sampling. J. Chem. Phys. 2020, 153, 054111 10.1063/5.0017887. [DOI] [PubMed] [Google Scholar]
- Ramakrishnan R.; von Lilienfeld O. A. Many Molecular Properties from One Kernel in Chemical Space. Chimia 2015, 69, 182–186. 10.2533/chimia.2015.182. [DOI] [PubMed] [Google Scholar]
- Case D. A.; Ben-Shalom I. Y.; Cerutti D. S.; Cheatham T. E. III; Cruzeiro V. W. D.; Darden T. A.; Duke R. E.; Gilson M. K.; Gohlke H.; Goetz A. W.; Greene D.; Harris R.; Homeyer N.; Huang Y.; Izadi S.; Kovalenko A.; Kurtzman T.; Lee T. S.; LeGrand S.; Li P.; Lin C.; Liu J.; Luchko T.; Luo R.; Mermelstein D. J.; Merz K. M.; Miao Y.; Monard G.; Nguyen C.; Nguyen H.; Omelyan I.; Onufriev A.; Pan F.; Qi R.; Roe D. R.; Roitberg A.; Sagui C.; Schott-Verdugo S.; Shen J.; Simmerling C. L.; Smith J.; Salomon-Ferrer R.; Swails J.; Walker R. C.; Wang J.; Wei H.; Wolf R. M.; Wu X.; Xiao L.; York D. M.; Kollman P. A.. AMBER 2018; University of California: San Francisco, USA, 2018.
- Wang J.; Wang W.; Kollman P. A.; Case D. A. Automatic atom type and bond type perception in molecular mechanical calculations. J. Mol. Graphics Modell. 2006, 25, 247–260. 10.1016/j.jmgm.2005.12.005. [DOI] [PubMed] [Google Scholar]
- Kennedy J.; Eberhart R. C. In Particle Swarm Optimization, Proceedings of the IEEE Int. Conf. on Neural Networks, IEEE, 1995; pp 1942–1948.
- Bachman P.; Sordoni A.; Trischler A. In Proceedings of the 34th International Conference on Machine Learning, Proceedings of Machine Learning Research, Doina P.; Yee Whye T., Eds.; PMLR, 2017; pp 301–310.
- Cohn D. A.; Ghahramani Z.; Jordan M. I. Active Learning with Statistical Models. J. Art. Intell. Res. 1996, 4, 129–145. 10.1613/jair.295. [DOI] [Google Scholar]
- Gubaev K.; Podryabinkin E. V.; Shapeev A. V. Machine learning of molecular properties: Locality and active learning. J. Chem. Phys. 2018, 148, 241727 10.1063/1.5005095. [DOI] [PubMed] [Google Scholar]
- Settles B.From Theories to Queries: Active Learning in Practice In Active Learning and Experimental Design Workshop in Conjunction with AISTATS 2010, Proceedings of Machine Learning Research, Isabelle G.; Gavin C.; Gideon D.; Vincent L.; Alexander S., Eds.; PMLR, 2011; pp 1–18.
- Liu H.; Cai J.; Ong Y.-S. An adaptive sampling approach for Kriging metamodeling by maximizing expected prediction error. Comput. Chem. Eng. 2017, 106, 171–182. 10.1016/j.compchemeng.2017.05.025. [DOI] [Google Scholar]
- Jones D. R.; Schonlau M.; Welch W. J. Efficient Global Optimization of Expensive Black-Box Functions. J. Global Optim. 1998, 13, 455–492. 10.1023/A:1008306431147. [DOI] [Google Scholar]
- Maxwell P.; Martín Pendás A.; Popelier P. L. A. Extension of the interacting quantum atoms (IQA) approach to B3LYP level density functional theory. Phys. Chem. Chem. Phys. 2016, 18, 20986–21000. 10.1039/C5CP07021J. [DOI] [PubMed] [Google Scholar]
- Fletcher T. L.; Popelier P. L. A. Multipolar Electrostatic Energy Prediction for all 20 Natural Amino Acids Using Kriging Machine Learning. J. Chem. Theory Comput. 2016, 12, 2742–2751. 10.1021/acs.jctc.6b00457. [DOI] [PubMed] [Google Scholar]
- Fletcher T. L.; Popelier P. L. A. Polarizable multipolar electrostatics for cholesterol. Chem. Phys. Lett. 2016, 659, 10–15. 10.1016/j.cplett.2016.06.033. [DOI] [Google Scholar]
- Cardamone S.; Popelier P. L. A. Prediction of Conformationally Dependent Atomic Multipole Moments in Carbohydrates. J. Comput. Chem. 2015, 36, 2361–2373. 10.1002/jcc.24215. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Frisch M. J.; Trucks G. W.; Schlegel H. B.; Scuseria G. E.; Robb M. A.; Cheeseman J. R.; Scalmani G.; Barone V.; Mennucci B.; Petersson G. A.; Nakatsuji H.; Caricato M.; Li X.; Hratchian H. P.; Izmaylov A. F.; Bloino J.; Zheng G.; Sonnenberg J. L.; Hada M.; Ehara M.; Toyota K.; Fukuda R.; Hasegawa J.; Ishida M.; Nakajima T.; Honda Y.; Kitao O.; Nakai H.; Vreven T.; Montgomery J. A. Jr.; Peralta J. E.; Ogliaro F.; Bearpark M.; Heyd J. J.; Brothers E.; Kudin K. N.; Staroverov V. N.; Kobayashi R.; Normand J.; Raghavachari K.; Rendell A.; Burant J. C.; Iyengar S. S.; Tomasi J.; Cossi M.; Rega N.; Millam J. M.; Klene M.; Knox J. E.; Cross J. B.; Bakken V.; Adamo C.; Jaramillo J.; Gomperts R.; Stratmann R. E.; Yazyev O.; Austin A. J.; Cammi R.; Pomelli C.; Ochterski J. W.; Martin R. L.; Morokuma K.; Zakrzewski V. G.; Voth G. A.; Salvador P.; Dannenberg J. J.; Dapprich S.; Daniels A. D.; Farkas Ö.; Foresman J. B.; Ortiz J. V.; Cioslowski J.; Fox D. J.. Gaussian 09, revision B.01; Gaussian, Inc.: Wallingford, CT, USA, 2009.
- Keith T. A.AIMAll, version 19; TK Gristmill Software: Overland Park, Kansas, USA, 2019.