J. Phys. Chem. A. 2024 Jan 31;128(6):1142–1153. doi: 10.1021/acs.jpca.3c06159

Interpretable Performance Models for Energetic Materials using Parsimonious Neural Networks

Robert J. Appleton, Peter Salek, Alex D. Casey§, Brian C. Barnes, Steven F. Son§, Alejandro Strachan†,*
PMCID: PMC10875666  PMID: 38296225

Abstract


Predictive models for the performance of explosives and propellants are important for their design, optimization, and safety. Thermochemical codes can predict some of these properties from fundamental quantities such as density and formation energies that can be obtained from first principles. Models that are simpler to evaluate are desirable for efficient, rapid screening of materials. In addition, interpretable models can provide insight into the physics and chemistry of these materials that could be useful to direct new synthesis. Current state-of-the-art performance models are based on either the parametrization of physics-based expressions or data-driven approaches with minimal interpretability. We use parsimonious neural networks (PNNs) to discover interpretable models for the specific impulse of propellants and the detonation velocity and pressure of explosives using data collected from the open literature. A combination of evolutionary optimization with custom neural networks explores and trains models with objective functions that balance accuracy and complexity. For all three properties of interest, we find interpretable models that are Pareto optimal in the accuracy and simplicity space.

1. Introduction

High energy density materials, including nitramines such as 1,3,5-trinitro-1,3,5-triazinane (RDX) and 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), as well as 2-methyl-1,3,5-trinitrobenzene (TNT) and 2,4,6-trinitrobenzene-1,3,5-triamine (TATB), are important components in explosive and propellant formulations. Models capable of predicting the performance of these materials can accelerate the design of new energetic materials with improved performance and safety.1 This is particularly important in the field of energetic materials since experiments are not only costly and time-consuming but inherently dangerous. Molecular dynamics simulations, either driven by electronic structure calculations or reactive interatomic potentials, can provide important information regarding thermo-mechanical properties,2−7 decomposition paths,8−10 and performance of energetic materials.11 Recent progress has been made even in the prediction of kinetics and the role of defects in initiation.11−15 These simulations are computationally intensive and are not applicable to high throughput screening. Thermochemical models can be used to compute performance properties from fundamental thermodynamic data and equations of state.16,17 These latter models, parametrized from a combination of electronic structure calculations and experiments,18,19 can predict detonation velocity and Chapman–Jouguet (C–J) pressure in detonations20 and the ideal specific impulse of propellants. Examples of these thermochemical codes are Cheetah16 from Lawrence Livermore National Laboratory and NASA's CEA.21 Given a potential new energetic molecule, when experimental measurements are not available, input parameters for thermochemical codes can be obtained from electronic structure calculations.22−29 For exploration and discovery purposes, less computationally intensive models with predictive power are highly desirable.

Physics-based and machine-learning expressions for the performance of these materials have been of interest for decades. Early work by Kamlet, Jacobs, and collaborators resulted in analytical expressions based on data collected from the Ruby thermochemical code and simple assumptions about the decomposition of CHNO materials.30−33 A similar expression exists to predict the ideal specific impulse of propellants34 using the same decomposition assumptions as the Kamlet–Jacobs work. The interest in data-driven models has motivated recent efforts in the use of machine learning in conjunction with experimental or computational data for detonation performance models35−41 and specific impulses (Isp).34 For example, Elton et al. assessed the importance of various material descriptors and compared several machine learning models to predict energetic properties, such as detonation velocities and pressures. Using a data set of 109 molecules, they found the best descriptor to be sum over bonds and the best model to be kernel ridge regression.36 More recent work by Barnes showed that generating a much larger data set (∼17,000 molecules) using quantum mechanical calculations and the Cheetah thermochemical code allows for more advanced deep learning techniques, such as directed message-passing neural networks. These results showed that detonation properties could be predicted using only the skeletal formula of the molecule.38 Though structure-based descriptors and graph-based featurization schemes have proven successful for energetic materials modeling, they lack the inherent interpretability that we want to achieve in this work. We also would like to leverage available data on composite materials that contain multiple ingredients and phases; therefore, molecular connectivity-based descriptors, such as ECFP4 or Morgan descriptors, are not well suited for these mixtures. Lastly, the focus on primarily experimental data in this work results in a very limited data set, whereas these descriptors are largely successful in deep learning applications where the data are orders of magnitude more abundant. For these reasons, and for the ability to reconstruct state-of-the-art equations, we elect to use descriptors related to the composition and crystalline properties, such as the heat of formation and loading density.

A persistent challenge in the application of machine learning to the field of energetics is data scarcity, due to the complexity and cost of experiments; relatively few studies use experimental data.42−44 Another challenge is that most machine learning techniques lack interpretability. To address these challenges, we apply a recently proposed tool, parsimonious neural networks (PNNs),45 to discover interpretable models that balance accuracy with simplicity to predict detonation properties and specific impulse (Isp) in CHNO materials. Using evolutionary optimization, PNNs produce a family of models with varying accuracy and parsimony, from which a Pareto-optimal front can be found. PNNs not only rediscover the previous state-of-the-art expressions for detonation performance and specific impulse but also find models that are simpler and more accurate.

2. Methods

2.1. Parsimonious Neural Networks

Neural networks use a network of artificial neurons, each of which takes several inputs and produces a single output via the application of an activation function, to model complex and nonlinear relationships.46 A typical artificial neuron, i, generates a single output (Yi) from its various inputs and a bias,

Y_i = f_i(∑_{j=1}^{N} w_{ij} Y_j + b_i)  (1)

where fi is the activation function, N is the number of nodes in the previous layer, Yj are the inputs to neuron i (the outputs of the previous layer), wij are the weights, or strengths of the connections between neurons, and bi is a bias term. PNNs use custom NNs with evolutionary optimization to find models that balance accuracy and simplicity.45 PNNs bear similarities with symbolic regression, which also seeks expressions that describe data. Symbolic regression, as implemented in gplearn,47 uses trees to create expressions and a genetic algorithm to evolve the length and elements of the trees (which include operations and constants). The use of NNs in PNNs provides additional flexibility in terms of the expressions that can be constructed and in parameter optimization. For example, weights in PNNs can be fixed or adjustable; in addition, PNNs can combine interpretable expressions with black-box NNs. Another difference is in the definition of the complexity term. The complexity of an expression in symbolic regression is typically defined by the number of terms present in the expression. PNNs go beyond this definition to distinguish between fixed and trainable weights, as well as the level of nonlinearity of the functions used in each term. A more in-depth discussion of how complexity is defined in our objective function is presented in the following sections.
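For concreteness, eq 1 can be sketched as a single neuron with a swappable activation function; the functions below are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def neuron_output(inputs, weights, bias, activation):
    """Single artificial neuron (eq 1): Y_i = f_i(sum_j w_ij * Y_j + b_i)."""
    z = float(np.dot(weights, inputs) + bias)
    return activation(z)

# A conventional NN activation next to a physics-motivated one
relu = lambda z: max(z, 0.0)
inverse = lambda z: 1.0 / z  # one of the custom activations discussed below

y = neuron_output([1.0, 2.0], [0.5, 1.0], 0.5, relu)  # 0.5*1 + 1*2 + 0.5 = 3.0
```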

2.1.1. Custom Networks and Activation Functions

Activation functions typically used in NNs, such as the rectified linear unit (ReLU), sigmoid, and hyperbolic tangent, are motivated by data processing but are rarely found in chemistry or physics laws. This motivates the use of custom activation functions drawn from the physical sciences and engineering. Each neuron can take one of the following five activation functions: linear, inverse, multiplication, square root (√), and a fifth custom function (rendered as an inline graphic in the original). In addition, since physical laws often involve simple numbers, each weight or bias can be set to (i) a fixed nontrainable value (0, 1, 2) or (ii) a randomly initialized value that is trained through backpropagation. Note that weights of zero can remove neurons from the network, reducing the size of the model.

Some common expressions, such as products of inputs, cannot be described with eq 1. Within a PNN, many weights are zero, so naively multiplying all inputs to a neuron would yield a nonzero output only for neurons fully connected to the previous layer. To allow partially connected neurons to use a multiplication activation, PNNs use eq 2, in which only inputs with nonzero weights and/or biases enter the product (see eq S1 in the Supporting Information),

Y_i = ∏_{j : w_{ij} ≠ 0} (w_{ij} Y_j + b_i)  (2)

Even-root functions are common in physics expressions but are a potential problem within neural networks. During initialization and training of the weights and biases, negative inputs to even-root functions are a common occurrence, even when that activation function could be part of an optimal solution. To enable training through these nonphysical regions, PNNs use eq 3 to handle even-root activation functions,

f(x) = √|x|  (3)

Equation 3 allows negative inputs to even-root functions but can make the resulting PNN equation nonphysical. To handle this, we use a multioutput network and train the inputs of the even-root functions to be positive values. Including the bias term within the even-root activation function gives the optimization process more tools for training the inputs of the even-root function to be positive. This process is explained in detail in the supplementary documentation (see eq S2).
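The two workarounds above can be sketched as follows; the exact forms of eqs S1 and S2 are only in the Supporting Information, so the functions below are plausible minimal versions under stated assumptions, not the authors' code:

```python
import numpy as np

def multiply_activation(inputs, weights, biases):
    """Product restricted to connections with nonzero weight (spirit of eq 2).
    Pruned (zero-weight) connections are skipped so partial connectivity
    does not force the product to zero."""
    mask = weights != 0
    if not mask.any():
        return 0.0
    return float(np.prod(weights[mask] * inputs[mask] + biases[mask]))

def safe_even_root(x, n=2):
    """Even root defined through |x| (spirit of eq 3), so training can pass
    through negative inputs without producing complex numbers."""
    return abs(x) ** (1.0 / n)
```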

2.1.2. Evolutionary Optimization

Depending on the task, our architectures have between 4 and 26 neurons (each with 5 possible activation functions), 4 to 26 bias terms (each with 3 possible settings), and between 12 and 126 weights (each with 3 possible settings). Even for our smallest architectures, the total number of possible networks exceeds 26 billion, making brute-force training computationally prohibitive. We therefore use a genetic algorithm to explore the space of potential networks, balancing accuracy and simplicity through an objective function to be minimized,

Objective = E_Test + p(∑_{i=1}^{N_N} w_i + ∑_{k=1}^{N_w} s_k)  (4)

where ETest represents the aggregated test error of the PNN across a 5-fold cross-validation scheme, in which the model is trained and tested under 5 independent train/test splits and each data point appears in the test set once and only once. The terms inside the parentheses represent the complexity of the model. The first term sums over the NN neurons in the network and is designed to favor simple activation functions: the activation functions linear, inverse, multiplication, square root, and the fifth custom function are assigned complexity scores wi = 0, 1, 2, 3, and 4, respectively. The second complexity term sums over all weights and biases (Nw), with scores sk that favor a weight of zero, then simple nontrainable numbers, followed by trainable parameters: a weight of 0 (no connection) is scored 0, nonzero fixed parameters are scored 1, and trainable parameters are scored 2. Together, these terms capture complexity through the number of trainable parameters and the level of nonlinearity in the functional form of the model. The parsimony parameter p balances the complexity and accuracy terms; the larger p is, the more important simplicity in the model becomes. Using a range of parsimony values, we can discover a family of expressions with various levels of complexity and accuracy. The cost of genetic optimization increases with the number of input variables, and techniques to reduce the number of input descriptors, including CUR decomposition48 or feature importance metrics from random forests,49,50 could be used as a preprocessing step before PNNs. This was not necessary for the examples discussed below because the number of features is small (3–6).
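A minimal sketch of the complexity scoring in eq 4; the score tables mirror the values stated in the text, with the fifth activation labeled generically since its symbol is not reproduced here:

```python
# Complexity term of eq 4: per-neuron activation scores plus per-parameter
# scores, weighted by the parsimony parameter p and added to the test error.
ACT_SCORE = {"linear": 0, "inverse": 1, "multiply": 2, "sqrt": 3, "custom": 4}
PARAM_SCORE = {"zero": 0, "fixed": 1, "trainable": 2}

def objective(test_rmse, activations, params, parsimony):
    """activations: activation name per neuron; params: setting per weight/bias."""
    complexity = sum(ACT_SCORE[a] for a in activations) \
               + sum(PARAM_SCORE[p] for p in params)
    return test_rmse + parsimony * complexity

# e.g. a two-neuron model with one pruned, one fixed, one trainable parameter
score = objective(1.0, ["linear", "sqrt"], ["zero", "fixed", "trainable"], 0.1)
```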

The DEAP51 package was used for the genetic optimization, and Keras52 was used to train each network. Population sizes varied from 400 to 800 individuals and were initialized by random permutation of the possible activation functions and parameter types. Tournament-based selection was used to pick the fittest individuals in each generation for reproduction, which used random mutations and a two-point crossover scheme. The mutation rate was initially set to a high value, such as 0.7, and decreased progressively every 10 generations, reaching values as low as 0.05. The crossover probability was held fixed at 0.3. Populations evolved over hundreds of generations, until the population became saturated and no new optimal model had been discovered for several (∼10) generations. For each generation, the models were trained and tested with a 5-fold cross-validation scheme in which only the trainable parameters were fit. The values of the weights and biases were averaged across the splits to generate a single expression, as shown in later sections. To verify that the models maintain the same performance after averaging the parameters, we plotted the predictions from these expressions against the experimental and reference results (see the Supporting Information). For benchmarking purposes, we also compare the resulting PNNs to random forest models fit for each case.
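The evolutionary loop can be sketched in plain Python (the paper uses DEAP; the toy objective, population size, and decay schedule below are illustrative assumptions, not the paper's settings):

```python
import random

def evolve(evaluate, gene_choices, genome_len, pop_size=40, gens=30, seed=0):
    """Schematic GA in the spirit of the paper's setup: tournament selection,
    two-point crossover with probability 0.3, and a mutation rate that decays
    to a floor of 0.05. `evaluate` maps a genome to the eq-4 objective."""
    rng = random.Random(seed)
    pop = [[rng.choice(gene_choices) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for gen in range(gens):
        mut_rate = max(0.05, 0.7 * (0.9 ** gen))  # decaying mutation rate
        def tourney(k=3):
            return min(rng.sample(pop, k), key=evaluate)
        nxt = []
        while len(nxt) < pop_size:
            a, b = tourney()[:], tourney()[:]
            if rng.random() < 0.3:                 # two-point crossover
                i, j = sorted(rng.sample(range(genome_len), 2))
                a[i:j], b[i:j] = b[i:j], a[i:j]
            for child in (a, b):
                for g in range(genome_len):
                    if rng.random() < mut_rate:    # random mutation
                        child[g] = rng.choice(gene_choices)
                nxt.append(child)
        pop = nxt[:pop_size]
    return min(pop, key=evaluate)

# toy objective: genomes of all zeros are fittest
best = evolve(lambda g: sum(g), gene_choices=[0, 1, 2], genome_len=8)
```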

2.2. Detonation: State-of-the-Art and Data

Kamlet and Jacobs30−33 used the so-called H2O–CO2 arbitrary decomposition, which assumes the formation of N2, H2O, and CO2 (not CO) as the most important detonation products (H2O is favored over CO2). Under this assumption, a given CaHbNcOd explosive that contains at least enough oxygen to convert all hydrogen to water, but no more than is needed to convert all carbon to CO2, decomposes following

C_aH_bN_cO_d → (b/2) H2O + (c/2) N2 + ((2d − b)/4) CO2 + (a − (2d − b)/4) C  (5)

From this decomposition, one can calculate the moles of gas products (N) and the average molecular weight of gas products (M) from the following expressions,

N = (b + 2c + 2d)/(4 MW)  (6a)

M = (56c + 88d − 8b)/(b + 2c + 2d)  (6b)

where MW is the molecular weight of the explosive.

Similarly, if the crystal heat of formation (ΔH0f) is known, the amount of heat release upon detonation, also known as the heat of reaction (Q), can be calculated from

Q = [28.9b + 94.05(d/2 − b/4) + ΔH0f]/MW  (7)

(with ΔH0f in kcal/mol and MW in g/mol, Q is obtained in kcal/g; eqs 8 and 9 use Q in cal/g).
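As a worked example of eqs 5–7, the K–J features can be computed for RDX (C3H6N6O6); the heat of formation of +14.7 kcal/mol is an assumed literature value, not taken from this paper's data set:

```python
# K-J features (eqs 5-7) for RDX, C3H6N6O6
a, b, c, d = 3, 6, 6, 6
dHf = 14.7                                    # kcal/mol (assumed literature value)
MW = 12.011 * a + 1.008 * b + 14.007 * c + 15.999 * d  # g/mol

N = (b + 2 * c + 2 * d) / (4 * MW)            # mol gas / g explosive (eq 6a)
M = (56 * c + 88 * d - 8 * b) / (b + 2 * c + 2 * d)    # g/mol of gas (eq 6b)
Q = (28.9 * b + 94.05 * (d / 2 - b / 4) + dHf) / MW    # kcal/g (eq 7)
Q_cal = 1000 * Q                              # cal/g, the units used in eqs 8-9
```

This yields N ≈ 0.0338 mol/g, M = 27.2 g/mol, and Q ≈ 1480 cal/g, consistent with the values Kamlet and Jacobs report for RDX.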

From these variables, the Kamlet–Jacobs30−33 (K–J) expressions for detonation velocity (D) and C–J pressure (PCJ) for an explosive at a given loading density (ρ0) are

D = 1.01 (N M^{1/2} Q^{1/2})^{1/2} (1 + 1.30 ρ0)  (8)

(with D in mm/μs, ρ0 in g/cm3, and Q in cal/g)

and

PCJ = 15.58 ρ0^2 N M^{1/2} Q^{1/2}  (9)

(with PCJ in kbar)
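Evaluating eqs 8 and 9 for RDX at a loading density of 1.80 g/cm3, using K–J feature values obtained from eqs 5–7 (N ≈ 0.0338 mol/g, M = 27.2 g/mol, Q ≈ 1480 cal/g; RDX's heat of formation is an assumed literature value), recovers values close to the accepted detonation velocity (≈8.75 km/s) and C–J pressure (≈347 kbar):

```python
# K-J detonation velocity (eq 8) and C-J pressure (eq 9) for RDX at 1.80 g/cm^3
N, M, Q, rho0 = 0.0338, 27.2, 1480.0, 1.80

phi = N * M ** 0.5 * Q ** 0.5                # K-J "phi" factor, N*M^(1/2)*Q^(1/2)
D = 1.01 * phi ** 0.5 * (1 + 1.30 * rho0)    # mm/us (km/s), eq 8
P_cj = 15.58 * rho0 ** 2 * phi               # kbar, eq 9
```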

Given this state of the art, we will explore two types of network architectures. The first uses the Kamlet–Jacobs reaction assumption and takes N, M, and Q and the loading density (ρ0) as input features, which we will call K–J features. We apply these models to predict detonation velocity (D) and C–J pressure (PCJ) separately as well as a dual-objective network to predict both properties simultaneously. The architectures, shown in Figure 1a, are designed such that the number of nodes is sufficient for generating the equation from the K–J model.

Figure 1.


Network architecture for models predicting detonation velocity (D), C–J pressure (PCJ), and specific impulse (Isp) using K–J features (a) and composition features (b).

The second, shown in Figure 1b, takes the composition (a, b, c, and d), the crystal heat of formation (ΔH0f), and loading density (ρ0) as input features, which we will call composition features. Here, we seek to lift the decomposition assumption in search of more general models. As before, we apply these methods for both single- and dual-objective predictions of detonation velocity (D) and C–J pressure (PCJ). The architectures are designed such that the number of nodes is sufficient for regenerating the equation from the K–J model.

To train the models, we collected performance properties from the open literature53−55 for CHNO explosives. We found 144 experimental detonation velocities and 64 experimental C–J pressures. To complete the data set, we used the thermochemical code Explo5,17 to calculate the detonation properties for 132 of these explosives for which reactant data were available in the code's database. In this way, we supplemented the 64 experimental C–J pressures with 68 values calculated with Explo5. This data set includes aromatic pure explosives (TNT), aliphatic pure explosives (RDX, HMX, and PETN), as well as plastic-bonded explosives (PBXs) and CHNO composites (Pentolite, Cyclotol, and Octol). The data collected are available in electronic form as Supporting Information.

2.3. Specific Impulse: State-of-the-Art and Data

As mentioned above, the ideal specific impulse has been predicted using the same H2O–CO2 arbitrary decomposition assumption as in the Kamlet–Jacobs work. Using multilinear regression analysis (MLRA), Frem fitted the square of the ideal Isp to N (moles of gas products) and Q (heat of reaction)34

Isp^2 = c0 + c1 N + c2 Q  (10)

(the fitted coefficients c0–c2 are given in ref 34).

Note that this work found that the K–J feature M (average molecular weight of the gas products) was not important for predicting the ideal specific impulse. We created a PNN architecture to learn models using the features computed from the H2O–CO2 arbitrary decomposition assumption, as done in the work by Frem34 (see Figure 1a). This structure is large enough to reconstruct the MLRA equation and to discover a potentially wide range of other expressions. As with the detonation models, we also developed models that do not assume the K–J decomposition (see Figure 1b). This structure can rediscover the same decomposition assumption used by Kamlet and Jacobs30−33 as well as a wide range of other potential decompositions.

The models above were trained using CHNO explosives and propellants from the work by Frem.34 The propellant database contains monopropellants, single-, double-, and triple-base propellants, pseudopropellants, composite propellants, liquid monopropellants, and liquid bipropellants. The ideal specific impulse was computed with the ISPBKW thermochemical code56 at a constant chamber pressure of 1000 psi and a pressure ratio expanding to sea-level pressure. The MLRA equation weights were originally trained only on monopropellants, so for the work described here this model was retrained on the same data set available to the PNNs. The data used are included in electronic form as Supporting Information.
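The MLRA fit of eq 10 amounts to ordinary least squares on Isp squared; the sketch below uses synthetic placeholder rows, not the propellant data set from ref 34:

```python
import numpy as np

# Synthetic placeholder data (NOT the paper's propellant database)
N = np.array([0.030, 0.034, 0.038, 0.041])     # mol gas / g
Q = np.array([900.0, 1200.0, 1400.0, 1550.0])  # cal/g
Isp = np.array([210.0, 235.0, 252.0, 264.0])   # s

# Least-squares fit of Isp^2 = c0 + c1*N + c2*Q (the form of eq 10)
X = np.column_stack([np.ones_like(N), N, Q])
coef, *_ = np.linalg.lstsq(X, Isp ** 2, rcond=None)
predicted = np.sqrt(X @ coef)
```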

3. Detonation Models

3.1. Kamlet–Jacobs Features

We developed a family of PNN detonation models with K–J features by varying the parsimony parameter across orders of magnitude to observe the trade-off between model complexity and accuracy. Figure 2 shows the models obtained in terms of accuracy (average test RMSE across 5 independent test sets) vs complexity, defined as the term multiplying the parameter p in the objective function, eq 4. We show models of detonation velocity, C–J pressure, and combined predictions. In all cases, the K–J model is shown as a black star. The family of PNN models defines a Pareto front of models that cannot be made simpler without sacrificing accuracy, or more accurate without becoming more complex. Interestingly, PNNs find interpretable models that are simpler and more accurate than the K–J equations.
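Model selection from the (complexity, accuracy) pairs can be sketched with a simple non-dominated filter (the values below are illustrative, not the paper's models):

```python
def pareto_front(models):
    """models: list of (complexity, rmse) pairs; returns the non-dominated
    set. A model is Pareto optimal if no other model is both at least as
    simple and at least as accurate (and differs in at least one)."""
    front = []
    for c, e in models:
        dominated = any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                        for c2, e2 in models)
        if not dominated:
            front.append((c, e))
    return sorted(front)

models = [(1, 0.9), (2, 0.5), (3, 0.6), (4, 0.3), (2, 0.7)]
front = pareto_front(models)   # (3, 0.6) and (2, 0.7) are dominated by (2, 0.5)
```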

Figure 2.


Pareto fronts for PNNs discovered using normalized (min-max norm) K–J features for predicting (a) detonation velocity, (b) C–J pressure, and (c) dual objective networks for predicting detonation velocity and C–J pressure. The networks for the K–J equations are identified by the black stars in each front.

A key advantage of PNNs over other ML techniques is their interpretability, and Figure 2 highlights selected models we discuss in some detail. For all the expressions shared below, the values of the adjustable weights/biases are retrained over the entire data set (see the Supporting Information to verify that the models maintain the accuracy claimed in Figure 2). The simplest model for detonation velocity with an accuracy close to that of K–J (PNN1) is a linear function of N, Q, and ρ0,

(11) [equation image not reproduced]

Note that the linear dependence on density also appears in the K–J model and the theory of Rankine–Hugoniot.57 As expected, the model predicts increasing velocity with N and Q. It is encouraging that the model can make good predictions with basic information about the loading density, product formation, and heat release, and it is interesting that it does not use M (average molecular weight of products) as an input. The next model in terms of complexity (PNN2) is also linear and improves the accuracy by including a dependence on M that provides more information related to product formation,

(12) [equation image not reproduced]

PNN3 is among the most accurate models for detonation velocity. It includes a fourth-root dependence on M, which is consistent with the functional form of the K–J model and emphasizes that the dependence on M is weak compared to the other variables. Note that M cannot be less than 19.43, or this model will return a complex number. This model also includes a term that multiplies Q and ρ0, favoring explosives that are both high in density and heat release,

(13) [equation image not reproduced]

Finally, the detonation velocity branch of the dual-objective PNNs yields two expressions (PNNs 7 and 8) that are identical in form to PNN1 (eq 11), containing a linear function of N, Q, and ρ0 and differing only in the values of the coefficients. That this function also appears in the dual-objective networks suggests the term can be fit to predict both detonation velocity and pressure. The performance decreases only slightly between these models and PNN1:

(14) [equation image not reproduced]
(15) [equation image not reproduced]

Turning our attention to models for C–J pressure, we find cubic and biquadratic dependence on ρ0, instead of the quadratic dependence of the K–J model, as seen in PNN4–PNN6 and the C–J pressure portions of PNN7 and PNN8.

The simplest model found for C–J pressure (PNN4), with an accuracy slightly worse than that of the K–J model, again contains a linear combination of N, Q, and ρ0, consistent with the findings for detonation velocity,

(16) [equation image not reproduced]

However, in this case, the functional form of the equation is extremely nonlinear, showing that the resulting C–J pressure is sensitive to variations of this term. The next model (PNN5) is simpler and more accurate than the K–J model,

(17) [equation image not reproduced]

This model contains a quadratic term on a linear combination of N and ρ0, multiplied by a linear combination of Qρ0, Q, and ρ0. The second term is like the term in eq 13, which is our most accurate model for detonation velocity. So far, both models can achieve relatively good accuracy without the use of M. However, the next model (PNN6) contains M and makes a significant improvement to accuracy at a cost in complexity,

(18) [equation image not reproduced]

Increasing complexity past PNN6 shows no significant increase in the accuracy, as shown in Figure 2b. The expressions for C–J pressure extracted from the dual objective networks (PNN7 and PNN8) are vastly different, despite having identical expressions for detonation velocity. The simpler model (PNN7) is like the expression in eq 16 (PNN 4), with an extra term relating to ρ0. The model for PNN8 contains the linear function of N, Q and ρ0 that has been consistent throughout several of the expressions discovered. However, in this case, the term is added to a large product of different linear terms.

(19) [equation image not reproduced]
(20) [equation image not reproduced]

Across the equations, we find recurring trends, such as the repeated use of a linear combination of N, Q, and ρ0; its appearance in models for both detonation velocity and pressure is evidence that this term is closely related to detonation performance. We also note that accurate expressions can be found without M; however, the models that achieve the highest accuracy do include this variable, meaning that giving the model more information about product formation helps. We highlight that, for equations derived using K–J features, detonation velocity consistently shows a linear dependence on loading density, in agreement with both the original K–J equation and the theory of Rankine–Hugoniot. However, the equations derived for C–J pressure consistently show cubic or even biquadratic dependence on loading density, in contrast with the quadratic dependence of the K–J equation. In the context of machine learning, these expressions are extremely simple, with only 2–8 trainable parameters, whereas traditional neural networks can have hundreds or thousands (or more) of trainable parameters.

3.2. Composition Features

Going beyond the ideal H2O–CO2 product hierarchy assumption, we explored models that use the composition, heat of formation, and loading density to predict the detonation properties. Figure 3 shows the models and Pareto fronts for the PNNs discovered for predicting detonation velocity (D), C–J pressure (PCJ), and dual-objective networks predicting both properties. Again, PNNs find models that are simpler and more accurate than K–J. The latter, written as explicit functions of a, b, c, and d, is represented by a black star:

(21) [equation image not reproduced]
(22) [equation image not reproduced]

Figure 3.


Pareto fronts for PNNs discovered using composition features (ΔH0f and ρ0 min-max norm) for predicting detonation velocity (a), detonation pressure (b), and dual objective networks for predicting detonation velocity and pressure (c). The networks for the K–J equations are identified by the black star in each front.

Again, we consistently find a linear dependence on ρ0 for detonation velocity, in agreement with the K–J model and the theory of Rankine–Hugoniot, as seen in PNN1, PNN2, and the detonation velocity portion of PNN5. The expression for PNN1 is particularly interesting because the model learned to generate a term similar in form to the expression for the heat of detonation (Q) in the K–J model, where the numerator is a linear function relating the heat of formation of the explosive (ΔH0f) and the amounts of hydrogen and oxygen in the molecule:

(23) [equation image not reproduced]

There are some clear differences between PNN1 and K–J, one being that the denominator only includes the sum of the amounts of carbon, hydrogen, and oxygen, and the model chooses to treat the nitrogen separately. Also, the model does not apply any root function to this term, whereas the K–J model applies a fourth root to Q.

Remarkably, the expression for PNN2 does not include ΔH0f and yet has superior accuracy to the K–J model,

(24) [equation image not reproduced]

This model is interesting because it can be applied to data sets with only composition and density for energetic materials, such as the Cambridge Structural Database (CSD), without the need for ΔH0f which requires additional experiments or simulations. We also note that this model is independent of the number of hydrogen atoms in the molecule.

To simplify the expression for PNN5, we represent a repeated term with the symbol β. The form of this term is expressed,

(25) [equation image not reproduced]

The term β is expressed,

(26) [equation image not reproduced]

Remarkably, we are again able to find a model that does not use ΔH0f; however, this model is not nearly as accurate as the one in eq 24 and is even less accurate than the K–J equation.

In the expressions for PNN3, PNN4, and the C–J pressure portion of PNN5, shown in eqs 28–31, there is a quadratic dependence on ρ0, in agreement with the K–J model and contrary to the results from the K–J features. The simplest model for C–J pressure (PNN3), with an accuracy slightly worse than that of the K–J model, is shown here:

(27) [equation image not reproduced]

The term ψ is expressed,

(28) [equation image not reproduced]

The expression for this model does not include a dependence on the amount of nitrogen in the molecule, which is surprising given the intuition that explosives tend to perform better as the fraction of nitrogen increases; this dependence could instead be captured by the term including ΔH0f. The expression includes a term that combines ΔH0f with the amounts of carbon and oxygen, which resembles a heat of reaction term.

To simplify the expressions for PNN4 and PNN5, we have represented repeated terms with symbols Φ and β. The forms of these terms are expressed below. PNN4 was found to be simpler and more accurate than the K–J model,

(29) [equation image not reproduced]
(30) [equation image not reproduced]

Term β comes from eq 26 and Φ is expressed,

(31) [equation image not reproduced]

In summary, the equations derived using the composition features again show a consistent linear dependence of detonation velocity and loading density, like the results for the K–J features. Also, the C–J pressure equations now reflect the quadratic dependence on loading density, in agreement with the K–J equation and different from the higher order dependence observed in the results for models using the K–J features. We also note that we discovered terms that reflect the functional form of a heat of detonation term (see eqs 23 and 28) but are different than what is described by the H2O–CO2 product hierarchy assumption (see eq 7).

4. Specific Impulse Models

4.1. Kamlet–Jacobs Features

Applying the same process as for the detonation models, using the K–J features, Figure 4 shows the models and Pareto fronts for the PNNs predicting the ideal specific impulse. The black star represents the MLRA equation,

(32) [equation image not reproduced]

Interestingly, this linear model of Isp squared lies on the Pareto front and is found by the PNNs.

Figure 4.


Pareto front for PNNs discovered using K–J features for predicting the Ideal Specific Impulse using no normalization. The networks for the MLRA equations are identified by the black star in the front.

The equations for PNN1–PNN3 have been extracted and are listed in eqs 33–35. PNN1 and PNN2 exhibit linear functions of N and Q as the simplest expressions with reasonable accuracy. In PNN3, the form of the MLRA equation is rediscovered. We note that the back-propagation method used to train the neural network fails to converge to the same accuracy obtained via linear fitting. Only models that are more complex than the MLRA equation are more accurate, which indicates that the MLRA equation is on the Pareto front for this PNN structure:

(33) [equation image not reproduced]
(34) [equation image not reproduced]
(35) [equation image not reproduced]

The square root persists in the more complex PNNs as well. We attribute this to the known relationship that the specific impulse squared is proportional to the flame temperature over the average molecular weight of the exhaust gas.34,58 The average molecular weight of the gases appears in higher-complexity PNNs with a small increase in accuracy. It makes sense that providing more information about product formation would improve the model; however, we note that Frem stated it was not an important parameter for predicting the ideal specific impulse.34

4.2. Composition Features

As with the detonation models, removing the H2O–CO2 arbitrary decomposition assumption used in the K–J model allows the PNNs to discover expressions that predict the ideal Isp from the atomic composition and heat of formation. The square of the specific impulse is proportional to the flame temperature over the average product molecular weight. Therefore, to simplify the combinatorial problem addressed by the GA optimization, we fixed the output node of the PNN structures to create models following this trend and only consider models with a square root as the final activation function. Figure 5 shows the Pareto fronts for the PNNs predicting Isp. The black star represents the multilinear regression analysis equation. With the non-normalized composition features, we consistently find families of PNNs that have better predictive capabilities than the MLRA equation,

[eq 36: equation image not reproduced]

Figure 5. Pareto front for PNNs discovered using composition features for predicting the ideal specific impulse. The network for the MLRA equation is identified by the black star.

The PNN4 model has a slightly lower RMSE than the MLRA equation while being significantly simpler,

[eq 37: equation image not reproduced]

In PNN4, the expression inside the square root has a structure very similar to the heat of reaction (Q) from the H2O–CO2 decomposition assumption, as with the results for the detonation models using composition features. The expression not only contains a linear function relating the heat of formation of the explosive (ΔH0f) to the amounts of hydrogen and oxygen in the molecule but also includes the amount of nitrogen. This function may reflect a different decomposition assumption, though one that is not easily determined.

5. Discussion

5.1. Trends in Discovered Equations

For detonation properties, across all expressions discovered using the K–J features, a linear function of N, Q, and ρ0 was the most common term. Using the composition features, we discovered a model for detonation velocity (eq 25) that is more accurate than the K–J model and does not depend on the heat of formation of the explosive. We also discovered a model for detonation velocity (eq 24) with a term that resembles an expression for the heat of detonation, with some similarities to and differences from the expression derived from the arbitrary H2O–CO2 decomposition assumption.

For detonation velocity, models that include neither M nor the heat of formation can outperform the K–J equation and capture most materials in our data set, apart from a few outliers that fall far from the parity line (see the parity plots in the Supporting Information). The only model discussed that captures these outliers, and ultimately yields the lowest RMSE, is eq 13. This equation contains both M and the heat of formation (by way of Q) as well as several nonlinear terms, in contrast to the other linear models based on the K–J features. This shows that a nonlinear description of product formation (N and M) and heat release (Q, which depends on the heat of formation) is needed to capture the detonation performance of materials that are otherwise hard to predict with the simpler models. Models on the high-complexity tail of the Pareto front for the composition features (see Figure 3a) can match the accuracy of eq 13, but at such an extreme cost in complexity that they cannot feasibly be written as simple equations. All the models presented for C–J pressure contain Q or the heat of formation and are always highly nonlinear. The most accurate model for C–J pressure, eq 18, stands out by its inclusion of M compared with the other models discussed.
In practice, we suggest that when ΔH0f is known, the model described by eq 13 should perform best in predicting detonation velocity, and eq 18 in predicting C–J pressure. When ΔH0f is not known, the model described by eq 24 should provide a reasonable estimate of the detonation velocity and can thus be extremely useful for high-throughput screening of large, unexplored chemical spaces. A demonstration is shown in Section 5.3 below.

For specific impulse using the K–J features, our results confirm that the MLRA model34 is an effective expression (eq 32); at this level of parsimony, PNNs should not be expected to beat a linear regression. More accurate expressions than the MLRA model have been discovered, but they come at the cost of increasing complexity. The best model discovered for Isp, eq 37, uses the composition features and lifts the restriction to the H2O–CO2 decomposition assumption. This model is more accurate and less complex than the MLRA equation and takes a functional form similar to the heat of reaction term (Q) described by the H2O–CO2 decomposition assumption, but with clear differences. The function does not use the molecular weight in the denominator; instead it uses a linear combination of the numbers of carbon, nitrogen, and oxygen atoms, neglecting hydrogen. The numerator also adds a term related to nitrogen, which differs from the H2O–CO2 decomposition assumption that considers only the formation of water. Although the gain in accuracy from the models developed with the K–J features to those developed with the composition features is minimal, not restricting the model to the H2O–CO2 decomposition assumption helps it generalize beyond typical CHNO materials.
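An MLRA-style baseline of the kind discussed above is simply an ordinary least-squares fit of Isp against the K–J features. A minimal sketch follows; the feature ranges and coefficients are synthetic stand-ins for illustration, not the fitted values from ref 34.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-ins for the K-J features (not real data):
N = rng.uniform(0.02, 0.05, size=40)   # mol of gaseous products per gram
Q = rng.uniform(0.8, 1.8, size=40)     # heat of reaction, kcal/g
isp = 1.0 + 10.0 * N + 0.5 * Q         # noiseless linear response (made up)

# design matrix with an intercept column; solve by least squares
A = np.column_stack([np.ones_like(N), N, Q])
coef, *_ = np.linalg.lstsq(A, isp, rcond=None)
```

Because the fit is a direct linear solve, it converges exactly to the least-squares optimum, which is why back-propagation on an equivalent linear PNN can only match, never beat, this baseline at the same parsimony.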

5.2. Benchmarking PNNs against Other Machine Learning Methods

To benchmark the PNNs, we compare their performance against random forests (RF), a popular machine learning method for scarce data sets. For each RF model, the hyperparameters (number of trees and maximum depth) were optimized using the GridSearchCV function from scikit-learn.58 The RF models were also evaluated using a 5-fold cross-validation scheme, in the same way as the PNN models. Table 1 compares the PNNs to the random forest models fit for detonation velocity, C–J pressure, and specific impulse.

Table 1. Comparison of Mean Cross-Validated Test Metrics (RMSE and Q2) for Literature Models (K–J (refs 30–33) for D and PCJ (eqs 8, 9), and MLRA (ref 34) for Isp (eq 32)), Random Forest Models, and the Equations Derived by the PNNs for Predicting Detonation Velocity (eqs 11–15, 23–25), C–J Pressure (eqs 16–20, 27, 29, and 30), and Specific Impulse (eqs 33–35, 37).a

Detonation velocity (km/s)
model | DKJ (30–33) | RFKJ | RFc | DKJ1 | DKJ2 | DKJ3 | DKJ7 | DKJ8 | Dc1 | Dc2 | Dc5
RMSE  | 0.40 | 0.30 | 0.31 | 0.34 | 0.30 | 0.28 | 0.42 | 0.40 | 0.37 | 0.33 | 0.57
STDEV | 0.08 | 0.07 | 0.05 | 0.06 | 0.05 | 0.03 | 0.06 | 0.06 | 0.10 | 0.08 | 0.09
Q2    | 0.87 | 0.93 | 0.92 | 0.87 | 0.92 | 0.93 | 0.85 | 0.87 | 0.87 | 0.88 | 0.80
STDEV | 0.02 | 0.03 | 0.03 | 0.05 | 0.04 | 0.03 | 0.06 | 0.05 | 0.07 | 0.03 | 0.12

C–J pressure (kbar)
model | PKJ (30–33) | RFKJ | RFc | PKJ4 | PKJ5 | PKJ6 | PKJ7 | PKJ8 | Pa3 | Pa4 | Pa5
RMSE  | 17.3 | 16.5 | 19.4 | 17.2 | 14.7 | 12.8 | 14.7 | 13.9 | 17.1 | 14.3 | 16.1
STDEV | 2.9 | 1.5 | 3.9 | 3.3 | 2.7 | 2.6 | 2.9 | 3.1 | 2.8 | 3.4 | 2.8
Q2    | 0.95 | 0.96 | 0.94 | 0.94 | 0.97 | 0.98 | 0.97 | 0.97 | 0.95 | 0.97 | 0.96
STDEV | 0.02 | 0.01 | 0.03 | 0.02 | 0.01 | 0.01 | 0.02 | 0.02 | 0.02 | 0.01 | 0.02

Specific impulse (N·s/g)
model | Isp, MLRA (34) | RFKJ | RFc | IKJsp1 | IKJsp2 | IKJsp3 | Icsp4
RMSE  | 0.06 | 0.09 | 0.14 | 0.16 | 0.08 | 0.07 | 0.05
STDEV | 0.01 | 0.03 | 0.02 | 0.04 | 0.01 | 0.01 | 0.01
Q2    | 0.95 | 0.91 | 0.81 | 0.81 | 0.89 | 0.95 | 0.96
STDEV | 0.02 | 0.05 | 0.05 | 0.05 | 0.05 | 0.03 | 0.02

a The standard deviation across the 5 splits is also shown. The RF and PNN models carry a superscript KJ when trained with K–J features and a superscript c when trained with composition features.

The RF models do not achieve the same predictive power as the equations derived using PNNs. This emphasizes that the simplicity enforced during the design of the PNNs improves performance on data sets that are otherwise difficult to model with traditional machine learning methods. We also provide further validation for eqs 13, 23, and 24 in the Supporting Information by comparing their predictive performance to a recent ANN42 and to the K–J equation. The PNNs performed at the same level of accuracy as the ANN on both test sets. The ANN is a fully connected neural network acting as a "black box" model, whereas the PNN is just as predictive while providing a high level of interpretability through a simple functional form. We note that our models do not incorporate experimental measurement noise or other uncertainties. Published measurements of detonation velocity for the same material and density exhibit standard deviations in the range of 0.1–0.2 km/s.59–61 As expected, this is smaller than the RMSE of our best-performing model. Additional details about experimental uncertainties are included in the Supporting Information.

5.3. Large-Scale Material Screening

To demonstrate the capabilities of PNNs for large-scale screening of materials, we queried CHNO materials from the Cambridge Structural Database (CSD)62 that had reported experimental densities at standard conditions. We then filtered out all cocrystals, materials with an oxygen balance below −100, and materials with N and M values outside the range of the training set. On the remaining 2700 materials, we used the composition PNN2 (eq 24) to predict the detonation velocity. We found 80 materials with a predicted detonation velocity above 9 km/s. After searching the literature, all 80 of these materials had a paper or patent investigating them as potential energetic materials. Of the 80 materials, 63 had calculated or experimentally measured detonation velocities, and all but one of these 63 were reported above 8 km/s. It is important to note that the density reported in the CSD is almost always higher than the density reported in the literature, leading to slightly higher predicted detonation velocities when screening with the PNN. This is evidence that the equation can be used to discover useful energetic materials. The distribution of the predictions is shown in the Supporting Information.
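The oxygen-balance filter in the screening workflow above can be sketched as follows. The oxygen-balance formula for CHNO molecules (relative to complete combustion to CO2 and H2O) is standard; the `passes_screen` helper and its cutoff are illustrative of the filtering step, and a real screen would follow it with the eq 24 predictor, whose fitted coefficients are given in the paper rather than here.

```python
# Sketch of the composition-based screening filter (illustrative helper
# names; the detonation-velocity predictor of eq 24 is not reproduced).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def oxygen_balance(c, h, n, o):
    """Oxygen balance (%) of a CHNO molecule, relative to CO2/H2O products."""
    mw = (c * ATOMIC_MASS["C"] + h * ATOMIC_MASS["H"]
          + n * ATOMIC_MASS["N"] + o * ATOMIC_MASS["O"])
    return 1600.0 * (o - 2 * c - h / 2) / mw

def passes_screen(c, h, n, o, ob_cutoff=-100.0):
    """Keep only molecules at or above the oxygen-balance cutoff."""
    return oxygen_balance(c, h, n, o) >= ob_cutoff

# RDX (C3H6N6O6) passes the -100 cutoff; very fuel-rich molecules would not
rdx_ok = passes_screen(3, 6, 6, 6)
```

Because the filter and predictor are closed-form expressions of atom counts and density, the full CSD query of thousands of entries runs in seconds, which is the point of using a simple discovered equation for high-throughput screening.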

6. Conclusions

In this work, we applied PNNs to discover interpretable models for the specific impulse of propellants and the detonation velocity and pressure of explosives using data collected from the open literature. For all three properties, we generated families of expressions that mark the Pareto front between accuracy and complexity. Using these methods, we not only rediscover current state-of-the-art expressions but also find models that are simpler and more accurate. The methods described in this work are useful for designing models that are both interpretable and simple, giving them an advantage over other machine learning techniques when applied to small data sets.

Acknowledgments

This research study was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement No. W911NF-20-2-0189. The authors would also like to acknowledge Dr. Saaketh Desai and Dr. Betsy Rice for helpful discussions on this work.

Data Availability Statement

A demo for the design and training of parsimonious neural networks is freely available on nanoHUB at the following link: https://nanohub.org/tools/pnndemo.

Supporting Information Available

The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jpca.3c06159.

  • Additional details describing our methods, and figures supporting our results; the data used to train and evaluate our models for both the detonation properties of high explosives and the specific impulse of propellants (PDF)

  • Data used for model training (ZIP)

Author Contributions

R.J.A. and P.S. contributed equally.

The authors declare no competing financial interest.

Special Issue

Published as part of The Journal of Physical Chemistry A virtual special issue “Machine Learning in Physical Chemistry Volume 2”.

Supplementary Material

jp3c06159_si_001.pdf (962KB, pdf)
jp3c06159_si_002.zip (80.9KB, zip)

References

  1. Advanced Energetic Materials . In Advanced Energetic Materials; National Academies Press, 2004. [Google Scholar]
  2. Sakano M.; Hamilton B.; Islam M. M.; Strachan A. Role of Molecular Disorder on the Reactivity of RDX. J. Phys. Chem. C 2018, 122, 27032–27043. 10.1021/acs.jpcc.8b06509. [DOI] [Google Scholar]
  3. Hamilton B. W.; Strachan A.. Many-Body Mechanochemistry: Intra-molecular Strain in Condensed Matter Chemistry ChemRxiv 2022, 10.26434/chemrxiv-2022-bvdmp. [DOI] [Google Scholar]
  4. Hamilton B. W.; Kroonblawd M. P.; Strachan A. The Potential Energy Hotspot: Effects from Impact Velocity, Defect Geometry, and Crystallographic Orientation. J. Phys. Chem. C 2022, 126, 3743–3755. 10.1021/acs.jpcc.1c10226. [DOI] [Google Scholar]
  5. Kroonblawd M. P.; Hamilton B. W.; Strachan A. Fourier-like Thermal Relaxation of Nanoscale Explosive Hot Spots. J. Phys. Chem. C 2021, 125, 20570–20582. 10.1021/acs.jpcc.1c05599. [DOI] [Google Scholar]
  6. Zybin S. V.; Goddard W. A.; Xu P.; Van Duin A. C. T.; Thompson A. P.. Physical mechanism of anisotropic sensitivity in pentaerythritol tetranitrate from compressive-shear reaction dynamics simulations Appl. Phys. Lett. 2010, 96, 081918, 10.1063/1.3323103. [DOI] [Google Scholar]
  7. Budzien J.; Thompson A. P.; Zybin S. V. Reactive molecular dynamics simulations of shock through a single crystal of pentaerythritol tetranitrate. J. Phys. Chem. B 2009, 113, 13142–13151. 10.1021/jp9016695. [DOI] [PubMed] [Google Scholar]
  8. Ford J.; et al. Nitromethane decomposition via automated reaction discovery and an ab initio corrected kinetic model. J. Phys. Chem. A 2021, 125, 1447–1460. 10.1021/acs.jpca.0c09168. [DOI] [PubMed] [Google Scholar]
  9. Sakano M. N.; et al. Unsupervised Learning-Based Multiscale Model of Thermochemistry in 1,3,5-Trinitro-1,3,5-triazinane (RDX). J. Phys. Chem. A 2020, 124, 9141–9155. 10.1021/acs.jpca.0c07320. [DOI] [PubMed] [Google Scholar]
  10. Wood M. A.; Van Duin A. C. T.; Strachan A. Coupled thermal and electromagnetic induced decomposition in the molecular explosive α-HMX; A reactive molecular dynamics study. J. Phys. Chem. A 2014, 118, 885–895. 10.1021/jp406248m. [DOI] [PubMed] [Google Scholar]
  11. Hamilton B. W.; et al. Predicted Reaction Mechanisms, Product Speciation, Kinetics, and Detonation Properties of the Insensitive Explosive 2,6-Diamino-3,5-dinitropyrazine-1-oxide (LLM-105). J. Phys. Chem. A 2021, 125, 1766–1777. 10.1021/acs.jpca.0c10946. [DOI] [PubMed] [Google Scholar]
  12. Hamilton B. W.; Kroonblawd M. P.; Islam M. M.; Strachan A. Sensitivity of the Shock Initiation Threshold of 1,3,5-Triamino-2,4,6-trinitrobenzene (TATB) to Nuclear Quantum Effects. J. Phys. Chem. C 2019, 123, 21969–21981. 10.1021/acs.jpcc.9b05409. [DOI] [Google Scholar]
  13. Wood M. A.; Cherukara M. J.; Kober E. M.; Strachan A. Ultrafast Chemistry under Nonequilibrium Conditions and the Shock to Deflagration Transition at the Nanoscale. J. Phys. Chem. C 2015, 119, 22008–22015. 10.1021/acs.jpcc.5b05362. [DOI] [Google Scholar]
  14. Kroonblawd M. P.; Fried L. E.. High Explosive Ignition through Chemically Activated Nanoscale Shear Bands Phys. Rev. Lett. 2020, 124, 206002, 10.1103/PhysRevLett.124.206002. [DOI] [PubMed] [Google Scholar]
  15. Wood M. A.; Kittell D. E.; Yarrington C. D.; Thompson A. P.. Multiscale modeling of shock wave localization in porous energetic material Phys. Rev. B 2018, 97, 014109, 10.1103/PhysRevB.97.014109. [DOI] [Google Scholar]
  16. Fried L.; Souers P.. CHEETAH: A Next Generation Thermochemical Code; Lawrence Livermore National Laboratory, 1994. 10.2172/95184. [DOI]
  17. Sućeska M.. EXPLO5 — Computer Program for Calculation of Detonation Parameters; 2001; pp 110–111.
  18. Yoo P.Neural network reactive force field for C, H, N, and O systems npj Comput. Mater., 2021, 7, 9, 10.1038/s41524-020-00484-3. [DOI] [Google Scholar]
  19. Senftle T. P.The ReaxFF reactive force-field: Development, applications and future directions npj Comput. Mater. 2016, 2, 15011, 10.1038/npjcompumats.2015.11. [DOI] [Google Scholar]
  20. Weiser V.; Webb R.. Ernst-Christian Koch. Review on Thermochemical Codes. In 36th International Pyrotechnics Seminar; MSIAC, 2009. [Google Scholar]
  21. McBride B. J.; Gordon S.. Computer Program for Calculation of Complex Chemical Equilibrium Compositions and Applications II. Users Manual and Program Description; NASA, 1996.
  22. Rice B. M.; Pai S. V.; Hare J. Predicting Heats of Formation of Energetic Materials Using Quantum Mechanical Calculations. Combust. Flame 1999, 118, 445–458. 10.1016/S0010-2180(99)00008-5. [DOI] [Google Scholar]
  23. Fried L. E.; Manaa R.; Pagoria P. F.; Simpson R. L. Design and Synthesis of Energetic Materials. Annu. Rev. Mater. Sci. 2001, 31, 291–321. 10.1146/annurev.matsci.31.1.291. [DOI] [Google Scholar]
  24. Rice B. M.; Hare J. J.; Byrd E. F. C. Accurate predictions of crystal densities using quantum mechanical molecular volumes. J. Phys. Chem. A 2007, 111, 10874–10879. 10.1021/jp073117j. [DOI] [PubMed] [Google Scholar]
  25. Rice B. M.; Byrd E. F. C. Evaluation of electrostatic descriptors for predicting crystalline density. J. Comput. Chem. 2013, 34, 2146–2151. 10.1002/jcc.23369. [DOI] [PubMed] [Google Scholar]
  26. Byrd E. F. C.; Rice B. M. A comparison of methods to predict solid phase heats of formation of molecular energetic salts. J. Phys. Chem. A 2009, 113, 345–352. 10.1021/jp807822e. [DOI] [PubMed] [Google Scholar]
  27. Byrd E. F. C.; Rice B. M. Improved prediction of heats of formation of energetic materials using quantum mechanical calculations. J. Phys. Chem. A 2006, 110, 1005–1013. 10.1021/jp0536192. [DOI] [PubMed] [Google Scholar]
  28. Manaa M. R.; Fried L. E.; Kuo I. F. W. Determination of enthalpies of formation of energetic molecules with composite quantum chemical methods. Chem. Phys. Lett. 2016, 648, 31–35. 10.1016/j.cplett.2016.01.071. [DOI] [Google Scholar]
  29. Bastea S.; Fried L. E.. Chemical Equilibrium Detonation, in Shock Wave Science and Technology Reference Library; Springer: Berlin, Heidelberg, 2012; Vol. 6. [Google Scholar]
  30. Kamlet M. J.; Jacobs S. J. Chemistry of detonations. I. A simple method for calculating detonation properties of C-H-N-O explosives. J. Chem. Phys. 1968, 48, 23–35. 10.1063/1.1667908. [DOI] [Google Scholar]
  31. Kamlet M. J.; Ablard J. E. Chemistry of detonations. II. Buffered equilibria. J. Chem. Phys. 1968, 48, 36–42. 10.1063/1.1667930. [DOI] [Google Scholar]
  32. Kamlet M. J.; Dickinson C. Chemistry of detonations. III. Evaluation of the simplified calculational method for chapman-jouguet detonation pressures on the basis of available experimental information. J. Chem. Phys. 1968, 48, 43–50. 10.1063/1.1667939. [DOI] [Google Scholar]
  33. Kamlet M. J.; Hurwitz H. Chemistry of detonations. IV. Evaluation of a simple predictional method for detonation velocities of C-H-N-O explosives. J. Chem. Phys. 1968, 48, 3685–3692. 10.1063/1.1669671. [DOI] [Google Scholar]
  34. Frem D.A reliable method for predicting the specific impulse of chemical propellants J. Aerosp. Technol. Manage. 2018, 10, 945, 10.5028/jatm.v10.945. [DOI] [Google Scholar]
  35. Elton D. C.; Boukouvalas Z.; Fuge M. D.; Chung P. W. Deep learning for molecular design - A review of the state of the art. Molecular Systems Design and Engineering 2019, 4, 828–849. 10.1039/C9ME00039A. [DOI] [Google Scholar]
  36. Elton D. C.; Boukouvalas Z.; Butrico M. S.; Fuge M. D.; Chung P. W.. Applying machine learning techniques to predict the properties of energetic materials Sci. Rep. 2018, 8, 9059, 10.1038/s41598-018-27344-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Barnes B. C.Machine Learning of Energetic Material Properties. In Proceedings of the 16th International Detonation Symposium, 2018; arXiv:1807.06156 [cond-mat.mtrl-sci].
  38. Barnes B. C.Deep learning for energetic material detonation performance.In AIP Conference Proceedings; American Institute of Physics Inc., 2020; Vol. 2272. [Google Scholar]
  39. Zang X.; et al. Prediction and Construction of Energetic Materials Based on Machine Learning Methods. Molecules 2023, 28, 322. 10.3390/molecules28010322. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Casey A. D.; Son S. F.; Bilionis I.; Barnes B. C. Prediction of energetic material properties from electronic structure using 3D convolutional neural networks. J. Chem. Inf Model 2020, 60, 4457–4473. 10.1021/acs.jcim.0c00259. [DOI] [PubMed] [Google Scholar]
  41. Balakrishnan S.Locally Optimizable Joint Embedding Framework to Design Nitrogen-rich Molecules that are Similar but Improved Mol. Inf. 2021, 40, 2100011, 10.1002/minf.202100011. [DOI] [PubMed] [Google Scholar]
  42. Chandrasekaran N.; et al. Prediction of Detonation Velocity and N–O Composition of High Energy C–H–N–O Explosives by Means of Artificial Neural Networks. Propellants, Explosives, Pyrotechnics 2019, 44, 579–587. 10.1002/prep.201800325. [DOI] [Google Scholar]
  43. Keshavarz M. H. Simple determination of performance of explosives without using any experimental data. J. Hazard Mater. 2005, 119, 25–29. 10.1016/j.jhazmat.2004.11.013. [DOI] [PubMed] [Google Scholar]
  44. Keshavarz M. H.; Zamani A.; Shafiee M. Predicting detonation performance of CHNOFCl and aluminized explosives. Propellants, Explosives, Pyrotechnics 2014, 39, 749–754. 10.1002/prep.201300169. [DOI] [Google Scholar]
  45. Desai S.; Strachan A.. Parsimonious neural networks learn interpretable physical laws Sci. Rep. 2021, 11, 12761, 10.1038/s41598-021-92278-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Murphy K. P.Machine Learning: A Probabilistic Perspective; MIT Press, 2012. [Google Scholar]
  47. Stephens T.gplearn Documentation. https://gplearn.readthedocs.io/en/stable/, 2023.
  48. Mahoney M. W.; Drineas P. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Science 2009, 106, 697. 10.1073/pnas.0803205106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Ho T. K.Random Decision Forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, 1995; pp 278–282.
  50. Svetnik V.; et al. Random Forest: A Classification and Regression Tool for Compound Classification and QSAR Modeling. J. Chem. Inf. Comput. Sci. 2003, 43, 1947–1958. 10.1021/ci034160g. [DOI] [PubMed] [Google Scholar]
  51. Fortin F.-A.; Marc-André Gardner U.; Parizeau M.; Gagné C. DEAP: Evolutionary Algorithms Made Easy François-Michel De Rainville. J. Mach. Learn. Res. 2012, 13, 2171–2175. [Google Scholar]
  52. Chollet F.; et al. Keras. Preprint at https://github.com/fchollet/keras, 2015.
  53. Meyer R.; Köhler J.; Homburg A.Explosives; Wiley-VCH, 2007. [Google Scholar]
  54. Dobratz B. M.; Rence L.. Properties of Chemical Explosives and Explosive Simulants, 1972.
  55. Hobbs M. L.; Baer M. R.. Calibrating the BKW-EOS with a Large Product Species Data Base and Measured C-J Properties, 1992.
  56. Mader C. L.Numerical Modeling of Explosives and Propellants; CRC Press, 2008. [Google Scholar]
  57. Cooper P. W.Explosives Engineering; Wiley-VCH, 1996. [Google Scholar]
  58. Pedregosa F.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. http://scikit-learn.sourceforge.net. [Google Scholar]
  59. Klapötke T. M.Energetic Materials Encyclopedia; DeGruyter, 2021; Vol. 1 O-Z. [Google Scholar]
  60. Klapötke T. M.Energetic Materials Encyclopedia; DeGruyter, 2021; Vol. 1 A-D. [Google Scholar]
  61. Klapötke T. M.Energetic Materials Encyclopedia; DeGruyter, 2021; Vol. 1 E-N. [Google Scholar]
  62. Groom C. R.; Bruno I. J.; Lightfoot M. P.; Ward S. C. The Cambridge structural database. Acta Crystallogr. B Struct Sci. Cryst. Eng. Mater. 2016, 72, 171–179. 10.1107/S2052520616003954. [DOI] [PMC free article] [PubMed] [Google Scholar]



Articles from The Journal of Physical Chemistry A are provided here courtesy of American Chemical Society
