Abstract
Accurate and rapid evaluation of whether substrates can undergo the desired transformation is crucial and challenging for both human knowledge and computer predictions. Despite the potential of machine learning in predicting chemical reactivity such as selectivity, popular feature engineering and feature learning methods are either time-consuming or data-hungry. We introduce a new method that combines a machine-learned reaction representation with selected quantum mechanical descriptors to predict regio-selectivity in general substitution reactions. We construct a reactivity descriptor database based on ab initio calculations of 130k organic molecules, and train a multi-task constrained model to calculate the required descriptors on-the-fly. The proposed platform enhances both interpolated and extrapolated performance for regio-selectivity predictions and enables learning from small datasets with just hundreds of examples. Furthermore, the proposed protocol is demonstrated to be generally applicable to a diverse range of chemical spaces. For three general types of substitution reactions (aromatic C–H functionalization, aromatic C–X substitution, and other substitution reactions) curated from a commercial database, the fusion model achieves 89.7%, 96.7%, and 97.2% top-1 accuracy in predicting the major outcome, respectively, each using 5000 training reactions. Using predicted descriptors, the fusion model is end-to-end and requires only approximately 70 ms per reaction to predict the selectivity from reaction SMILES strings.
Integrating feature learning and on-the-fly feature engineering enables fast and accurate reactivity predictions using large or small datasets.
1. Introduction
The ability to correctly anticipate chemical reactivity enables chemists to assess whether given substrates might undergo a desired transformation and thus realize the synthesis of a target product more quickly. In this respect, chemical reactivity screening or optimization through automated platforms opens the door to accelerated reaction discovery.1–6 Despite many successes in experimental reactivity exploration, fast and accurate in silico modeling of chemical reactivity (e.g. selectivity and yield) remains challenging due to the complex relationship between chemical structures and reactivity.
Quantum mechanical (QM) methods, especially density functional theory (DFT), provide powerful tools to infer reactivity trends of organic reactions, for example via the local reactivity descriptors of an individual molecule within conceptual density functional theory (CDFT).7–10 These reactivity descriptors, such as condensed Fukui functions,11 indicate how the electron density of a given molecule responds upon the approach of a second reactant, and have been successfully applied to identify the site most prone to either electrophilic or nucleophilic attack.12–15 A set of such chemically meaningful descriptors for individual reactants can thus carry key information about chemical reactivity.
Machine learning (ML) algorithms, especially feature engineering methods, aim to learn the correlation between a sequence of descriptors and chemical reactivity (Fig. 1A). In the late 1990s, Norrby and co-workers16,17 predicted the regio- and stereo-selectivity of palladium-catalyzed allylation using QSAR approaches and steric descriptors computed through molecular mechanics. Later works by Lipkowitz and Pradhan18 and Melville et al.19 developed QSSR (quantitative structure–selectivity relationships) methods for predicting enantioselectivity by using comparative molecular field analysis (CoMFA). Recent advances in high-throughput experimentation and data-mining techniques, and the resulting availability of high-quality data, have significantly popularized ML methods in chemical reactivity prediction.20–28 Recently, Sigman and co-workers21,22 advanced multivariate linear regression to predict the selectivity of a reaction (formally, the difference of free energy barriers) by relying on sophisticated electronic and steric descriptors of substrates and catalysts. An alternate statistical approach built on support vector machines (SVM) and feed-forward neural networks (FFNN) was demonstrated by Denmark and co-workers,23 in which the authors proposed a new 3D shape descriptor for catalysts, the average steric occupancy (ASO). Using more than 4000 data points obtained via high-throughput experimentation, Doyle and co-workers24 demonstrated the prediction of reaction yields of C–N cross-coupling reactions via a random forest (RF) model (among other architectures) by selecting reaction-specific descriptors. Although descriptors tailored to a specific reaction class, as seen in feature engineering ML, can be effective representations for predicting chemical reactivity, they might not be generally applicable across reaction and substrate classes.2 In other words, such methods are not universal and still require human insight and expertise to design or select corresponding descriptors for each individual task. Moreover, expensive computations associated with QM descriptors often cause bottlenecks in the feature engineering workflow. For example, featurizing a molecule through QM calculations usually requires 3D conformer generation and structure optimization, which leads to tedious and time-consuming processes when featurizing all molecules in a given dataset.
In addition to expert-guided descriptors, chemical reactivity can be predicted through non-expert descriptors.29–32 Typically, reactants and/or reagents are encoded into a 1D vector based on the presence or absence of substructures. Although those structural representations do not carry explicit physicochemical information about molecules and do not benefit from expert insight, the simplicity of this inexpensive fingerprint generation allows fast high-throughput prediction with minimal demands on the user. For example, very recently, Glorius and co-workers32 reported the success of fingerprint-based ML models across multiple chemical reactivity prediction tasks.
In the feature engineering ML methods described above, reaction representations are built through human intelligence. In contrast, feature learning methods learn representations that capture properties relevant to the prediction task through end-to-end learning (e.g. from SMILES strings and 2D/3D structures to properties directly). Compared with other ML models, such feature learning methods, including graph neural networks (GNN) and language models, have achieved state-of-the-art accuracy in property predictions33–37 and reaction predictions (Fig. 1).38–44 With respect to chemical reactivity, GNN models have been demonstrated to be able to predict reaction outcomes given a set of reactants and reagents,39 or to predict likely electrophilic aromatic substitution sites given an aromatic compound.40 We note that such feature learning methods usually require considerable training data to offset the lack of functional information in plain molecular graphs or strings and achieve successful end-to-end learning. Furthermore, molecular representations learned from the training set usually show poor out-of-domain performance. Since data deficiency is ubiquitous in the field of chemical reactivity prediction (i.e., the reactions we most want to predict are often those with the least data), methods that learn well from sparse datasets and exhibit outstanding extrapolation performance are highly desirable.
In the present work, we bridge the gap between the feature engineering and feature learning methods discussed above and propose a strategy that unifies the machine learned reaction representation and QM descriptors to predict properties of chemical reactivity, i.e., regio-selectivity (Fig. 1C). We hypothesize that the proposed fusion method can inherit advantages from both feature engineering and feature learning in terms of accuracy, generality, and demand for training data. To overcome the bottleneck of relatively slow QM computations, we construct an ab initio database for selected reactivity descriptors and train a multitask neural network to predict QM descriptors for a given molecule on-the-fly. Predicted descriptors are then combined with the machine learned reaction representation to predict regio-selectivity. We note that a number of ML models have been developed to predict some QM descriptors in real time.37,45–49 However, to our knowledge, there has been no such database or model focusing on reactivity descriptors.
We select reactions involving a pair of reacting heavy atoms, such as substitution reactions, to demonstrate the proposed platform. The studied reactions are extracted from a commercial database, Pistachio,50 which are more heterogeneous and challenging to predict than the more homogeneous reaction sets obtained through high-throughput experimentation. First, we demonstrate and discuss the fusion model, using QM calculated descriptors, on the task of site-selectivity prediction in electrophilic aromatic substitution (EAS) reactions. A thorough benchmarking shows that machine learned representation and chemically meaningful descriptors complement each other in the fusion model, enhancing performance, and allow learning from a tiny experimental dataset. Second, we implement a multi-task neural network that is trained on DFT calculations of 136k organic molecules to enable on-the-fly calculations for six key atomic/bond descriptors. Finally, we demonstrate the fusion model using on-the-fly descriptors on three general types of substitution reactions including aromatic C–H functionalization, aromatic C–X substitution, and other selective substitution reactions.
2. Results
2.1. Predicting regioselectivity with machine learned representations and QM descriptors
We start our discussion by implementing the fusion model using machine learned reaction representation and descriptors through QM calculations. First, a dataset containing selective aromatic nitration and halogenation reactions was curated from the Pistachio database via reaction templates. Reaction templates were first extracted from selected reactions using RDChiral,51 which were then reapplied to enumerate possible products and identify reactions that are site- or regio-selective. The dataset was further filtered to exclude reactions with <50% yield, due to our inability to know with certainty that the reported product in the Pistachio database is the major one and not merely the desired one. In total, 3003 aromatic nitration and halogenation reactions were selected to demonstrate our protocol. Details about the dataset curation and statistics are provided in the ESI S1.2.1 and S1.2.2.†
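For illustration, a minimal sketch of this template-based curation step is shown below. It assumes the RDChiral interface (extract_from_reaction and rdchiralRunText), atom-mapped reaction SMILES, and a simple string reversal of the extracted retrosynthetic template; the exact filtering logic follows the ESI.

```python
# Sketch (assumed workflow, not the authors' exact code): extract a reaction template
# with RDChiral and re-apply it to the reactants to enumerate possible regio-isomeric
# products. A reaction is flagged as selective only if more than one outcome is possible.
from rdchiral.template_extractor import extract_from_reaction
from rdchiral.main import rdchiralRunText

def enumerate_outcomes(reactant_smiles, product_smiles):
    # RDChiral returns a retrosynthetic SMARTS (products >> reactants) from
    # atom-mapped SMILES, so reverse it before applying it in the forward direction.
    template = extract_from_reaction(
        {'_id': 0, 'reactants': reactant_smiles, 'products': product_smiles}
    )['reaction_smarts']
    prod_side, _, react_side = template.partition('>>')
    forward_template = react_side + '>>' + prod_side
    # Every product reachable from the same reactants under this template
    return rdchiralRunText(forward_template, reactant_smiles)
```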
A graph neural network (GNN), modeled after the Weisfeiler–Lehman network (WLN) architecture for reaction outcome prediction of Jin and Coley,38,39 was implemented to predict regio-selectivity. Because a GNN is a data-driven method that depends strongly on the molecular structures seen during training, it often struggles to make predictions on structures outside the scope of the training set or when trained on sparse data. To supplement the experimental information in the available reaction database with heuristic information derived from quantum mechanics, we feed the model quantum mechanical (QM) information about the reactants.
QM methods enable definition of many molecular and local quantities characterizing physicochemical properties of a given molecule. In principle, each such quantity can be employed as a descriptor.52 However, many descriptors may carry redundant information. Due to the computational complexities of various descriptors obtained under different levels of theory, covering all accessible descriptors is beyond the scope of this study. In the present work, we focus on a series of the most frequently used local reactivity quantities generated by QM methods, including (1) atomic charges, condensed Fukui functions or Fukui indices,53 and shielding constants as atomic descriptors; (2) bond lengths and bond orders as bond descriptors. These descriptors can provide precise quantitative descriptions of electrostatic properties and local environments for each atom. An automated workflow was developed to calculate these descriptors for all reactants starting from a SMILES string. After automated conformer searching via Merck Molecular Force Field (MMFF94s),54 chemically meaningful descriptors were calculated at the B3LYP/def2svp level of theory.55–57 Detailed computational methods are provided in the method section and ESI S1.2.3.†
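As an illustration of how the condensed Fukui indices relate to the underlying charge calculations, the sketch below assembles them from per-atom partial charges of the N, N+1, and N−1 electron systems; the specific charge scheme and level of theory follow the ESI, and the generic arrays here are assumptions for illustration only.

```python
import numpy as np

def condensed_fukui(q_neutral, q_anion, q_cation):
    """Per-atom partial charges of the same geometry with N, N+1, and N-1 electrons."""
    q_neutral, q_anion, q_cation = map(np.asarray, (q_neutral, q_anion, q_cation))
    f_plus = q_neutral - q_anion    # susceptibility to nucleophilic attack
    f_minus = q_cation - q_neutral  # susceptibility to electrophilic attack
    return f_plus, f_minus

# For a neutral molecule each index sums to ~1 over all atoms, which is the constraint
# later enforced when these descriptors are predicted by a neural network.
```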
Calculated descriptors were then incorporated into the GNN model to predict site-selectivity. The architecture of the QM enhanced graph neural network (QM-GNN) is shown in Fig. 2. Atomic descriptors and bond descriptors are taken as inputs in different parts of QM-GNN. In a conventional GNN, bonds are often featurized via their bond type and ring status. Those discrete bond features are replaced by the continuous bond order and bond length in QM-GNN (in analogy to the 3D SchNet model of Schütt et al.36), which carry more information than a plain 2D graph. The continuous bond order and bond length are converted into a continuous vector through a radial basis function (RBF) expansion before being passed to the WLN encoder. In principle, the atomic features could also be converted from discrete choices of atomic number to continuous QM descriptors. For example, MoleculeNet34 and ChemProp35 both provide options for using rapidly calculated empirical descriptors as atomic features. However, our recent studies suggested that such a strategy using heuristically calculated descriptors usually fails to improve model performance for reactivity predictions.40 We suspect that the critical information carried by QM descriptors can be degraded during message passing due to mixing with information from other atoms. To best leverage the benefit of QM descriptors, we incorporate atomic QM descriptors only after the WLN encoder and global attention layer have generated the learned atomic embedding, while discrete atom features (including atomic number, degree of connectivity, valence, and aromaticity) are still used as input to the GNN model. The QM atomic features are first expanded via RBF expansion and then concatenated with the machine learned atomic representation. The RBF expansion is chosen to ensure that the size of the QM descriptor vector matches that of the learned atomic representation, thereby preventing the model from biasing towards the graph representation. The RBF expansion also serves as a good normalization method for QM descriptors.
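A minimal sketch of the RBF expansion step is given below; the centers and width are illustrative assumptions rather than the published hyperparameters.

```python
import numpy as np

def rbf_expand(value, centers, gamma=10.0):
    """Expand a scalar descriptor into len(centers) Gaussian basis activations."""
    centers = np.asarray(centers, dtype=float)
    return np.exp(-gamma * (value - centers) ** 2)

# e.g. expand a C-C bond length (in Angstrom) onto 20 centers between 0.8 and 2.5 Angstrom
bond_length_vec = rbf_expand(1.39, np.linspace(0.8, 2.5, 20))
```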
Fusion atomic representations combining the graph embedding and QM information are then sum-pooled over reacting atom pairs, e.g. the highlighted sp2 C and Br atoms in Fig. 2, to represent the reactivity between the atom pairs leading to the corresponding major/minor product. We note that in order for the model to automatically determine reacting centers, atom mapping numbers need to be included, which can be obtained through several automatic mapping toolkits.58–60 The reacting-pair hidden state is then passed through a feed-forward neural network (FFNN) to generate a selectivity score, which is finally scaled to values between zero and one by a softmax function across the available products. The softmax function is chosen so that the model is trained to rank major/minor reactions in a relative way. A full description of the model architecture is provided in the ESI S1.3.†
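The pooling and scoring step can be sketched as follows (NumPy, with assumed shapes and a placeholder feed-forward scorer); the trained model performs the same operations with learned weights.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def score_candidates(atom_reprs, candidate_pairs, ffnn):
    """atom_reprs: (n_atoms, d) fused atomic representations;
    candidate_pairs: one (i, j) reacting-atom index pair per possible product;
    ffnn: callable mapping a (d,) vector to a scalar selectivity score."""
    scores = np.array([ffnn(atom_reprs[i] + atom_reprs[j])   # sum-pool over the pair
                       for i, j in candidate_pairs])
    return softmax(scores)   # relative probability assigned to each candidate product
```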
We train and evaluate the developed QM-GNN model on the curated EAS reaction dataset for the task of regio-selectivity prediction. The parent models, GNN and QM, were selected as baselines. The GNN model does not use human-specified chemically meaningful descriptors and predicts selectivity based only on machine learned representations. The QM model is a FFNN using QM calculated descriptors for the reacting atoms as input (a full description of the baseline models is provided in the ESI†). We chose the top-1 success rate of predicting the major reaction to evaluate the models. We first compare model performance on random splits of the data through 10-fold cross-validation. Average values and the standard error of the mean of the top-1 prediction accuracy for each fold are depicted in Fig. 3A (additional statistical analyses are provided in ESI Fig. S14†). Using all training data, the feature learning and semi-feature-learning methods, including GNN and QM-GNN, outperform the QM feature engineering method (e.g. 90.8% vs. 87.4% average prediction accuracy for QM-GNN and QM). To examine the model sensitivity to the size of the training set, we trained models with gradually decreasing training set sizes, but retained the size of the validation and test sets (303 for each fold) so that performance comparisons across different training sizes are based on the same testing examples. As the size of the training set decreases, the accuracy of GNN rapidly declines to 77.8% with 200 training points (an increase of 124% in the error). However, the QM and QM-GNN models remain high-performing even with a tiny training set (an increase of 11.1% and 29.3% in error for QM and QM-GNN, respectively, with 200 training points). Essentially, QM descriptors carrying more physicochemical information enable a comparatively simpler function to map from descriptors to complex properties.61,62 For example, given a set of optimized and expert-designed descriptors, even a linear function can predict enantio-selectivity well.22 The trend observed above demonstrates that the correlation between QM descriptors and reactivity can be learned even with a training set of just 200 examples. However, since one does not know a priori what the optimal descriptors are for a task, the QM model, which uses selected descriptors and a relatively simple mapping function, is eventually outpaced by the GNN model as the model is exposed to more training examples. Consequently, the fusion QM-GNN model, which inherits advantages from both QM and GNN, overcomes the limitations of its parent models and achieves superior performance using both the full and reduced training set sizes.
Next, we examine the extrapolation performance of the model via scaffold-based splitting. We split the whole dataset based on the scaffold of the aromatic rings in a ratio of 80 : 10 : 10 so that the training, validation, and testing sets do not share common backbones (Fig. 3B). Instead of using cross-validation, we test a single split decided through greedy bin-packing. The scaffold split represents a more challenging evaluation, as molecules in the testing set require a greater degree of extrapolation than in the random split.61 Again, the QM-GNN model outperforms GNN (88.5% vs. 84.4%). The supplemental QM descriptors implemented in the QM-GNN model facilitate prediction of out-of-domain, unseen examples, which are common in many practical chemical and biochemical problems. To further understand the role of QM descriptors in the QM-GNN model, an iodination reaction of compound 1 was selected to study the latent space of the GNN and QM-GNN models. The output from the second-to-last layer of the NN in Fig. 2 was extracted from both models as continuous high-dimensional vectors representing the two potential reactions (“major” and “minor”). We calculate the Euclidean distance between those two latent vectors and compare it with the distances between the major reaction and its neighboring reactions in the training set. Intuitively, for an unseen selective reaction, if the minor reaction is closer to the major reaction than any of its neighbors in the training set, it will be hard to distinguish the two possible outcomes. Here, the distance between the major/minor reactions is similar for the two models (10.7 vs. 7.1). However, the neighborhood of the major reaction in the QM-GNN model is far denser than that in the GNN model (ESI Fig. S16†). The top-2 nearest neighbors of the major/minor reactions in the latent space of the two models are shown in Fig. 4. The GNN model tries to distinguish the major/minor outcomes based solely on recognizing structural patterns it learned from the training set (i.e., reactions of compound 3), and thus results in a high overlap between the neighborhoods of the major/minor reactions (identical top-2 nearest neighbors for both). On the other hand, after incorporating the chemically meaningful QM descriptors, the QM-GNN model is able to look beyond the molecular structure, leading to drastically different neighborhoods for the major/minor reactions, which suggests that the QM-GNN model is capable of capturing fundamental physicochemical rules. A statistical analysis of the above trends for the whole testing set is provided in ESI Fig. S16.†
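As a concrete illustration of the scaffold-based 80 : 10 : 10 split described at the beginning of this paragraph, the sketch below groups substrates by Bemis–Murcko scaffold and assigns whole groups greedily; the exact bin-packing rule is described in the ESI, so the fill heuristic used here is an assumption.

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(substrate_smiles, fractions=(0.8, 0.1, 0.1)):
    # Group reaction indices by the Bemis-Murcko scaffold of the aromatic substrate
    groups = defaultdict(list)
    for idx, smi in enumerate(substrate_smiles):
        groups[MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)].append(idx)
    targets = [f * len(substrate_smiles) for f in fractions]
    splits = [[], [], []]
    # Greedy bin-packing: largest scaffold groups first, each assigned to the split
    # currently furthest below its target size, so no scaffold spans two splits
    for members in sorted(groups.values(), key=len, reverse=True):
        k = max(range(3), key=lambda s: targets[s] - len(splits[s]))
        splits[k].extend(members)
    return splits   # index lists for training, validation, and testing
```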
Overall, the QM-GNN model demonstrates outstanding prediction accuracy compared to the conventional GNN and QM models. However, a prominent disadvantage of using QM descriptors is the extra computing time. As shown in Fig. 3C, the computational time for QM-GNN is more than six orders of magnitude larger than that of the GNN model, even with a relatively fast semi-empirical structure optimization method. Calculating the descriptors for a single molecule in QM-GNN took an average of 6200 CPU-seconds. This large computational cost impedes application to large-scale or real-time predictions. In the next section, we describe the development of a multitask, constrained deep learning model that rapidly and accurately predicts QM descriptors and thus avoids this CPU-time issue.
2.2. Multitask constrained neural network for the fast calculation of QM descriptors
180k organic molecules containing C, H, O, N, P, S, F, Cl, Br, I, Si, and B were selected from the ChEMBL63 and Pistachio50 databases. The automated workflow described in the section above was employed to perform the high-throughput calculations. About 30% of the initially selected molecules were discarded throughout the workflow, primarily due to imaginary frequencies or timing out. The successful QM calculations on 136k molecules provided a set of more than 26 million data points: 4 atomic descriptors (charge, two Fukui indices, and the NMR shielding constant) for each of 4 363 861 atoms (2 004 079 H atoms and 2 359 782 heavy atoms), plus the bond length and bond order for each of 4 487 376 bonds. More details about data curation are provided in the ESI S1.2.3.†
A multitask GNN model was developed to predict multiple QM descriptors from a 2D molecular structure (Fig. 5). The approach was modeled after the directed message passing neural network (D-MPNN).35 The D-MPNN encodes a molecular graph into node features and edge features. In principle, we could train a separate D-MPNN for each type of chemically meaningful descriptor described above. However, this approach would result in multiple independent models, leading to inefficiency and inconvenience in both training and inference. Instead, we constructed a multitask predictor by connecting multiple FFNNs to a single D-MPNN encoder to read out different atomic/bond properties, as shown in Fig. 5. This multitask encoder–readout model uses shared atomic/bond feature vectors as input for the different FFNNs, inspired by the quantum chemical intuition that our target properties, for example partial charges, chemical shifts, and bond orders, are all derived from the same electronic structure of a given chemical system. This synergy is therefore expected to improve the model's ability to learn meaningful functional representations of molecules. Another benefit of the proposed multitask model is its ability to systematically handle constraints applied to different atomic descriptors. Taking the atomic charge as an example, an NN is first used to translate the learned atomic representation into initial atomic charges (q_i in Fig. 5). However, throughout the NN, each atomic charge is predicted independently, so the sum of the q_i does not necessarily equal the net charge of the molecule. This discrepancy between the summed predicted charges and the true net charge Q can be corrected by spreading the excess charge over the molecule.64 Inspired by the word attention mechanism used in natural language processing (NLP),65 we developed an attention-based constraining method that determines a weight for each atom to tune how much it needs to be corrected. That is, we measure the contribution of each atom to the net correction as the similarity of the atomic hidden representation a_i with a learnable atomic-level vector u, which can be seen as a high-level representation of a fixed query “which atom needs more correction?”, and obtain a normalized weight w_i through a softmax function. The final predicted atomic charge q_i^final can then be generated from the initial predicted charge q_i and the weight w_i as:
q_i^final = q_i + w_i (Q − Σ_j q_j)  (1)
Due to the multitask architecture of our model, the end-to-end constrained learning can be implemented independently for each desired property. All molecules curated are neutral so that the summation constraint is 0 for atomic charges and 1 for nucleophilic/electrophilic Fukui indices, while no constraints are required for NMR shielding constants, bond orders, and bond lengths. Performance of the model was tested on the held-out testing set, consisting of 431 858 atoms and 444 100 bonds. Good correlations between predicted and DFT computed values shown in Fig. 6 suggest that the developed model is reliable in predicting atomic and bond descriptors. More benchmarking studies are provided in ESI S2.3.†
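A minimal NumPy sketch of the attention-based constraint in eqn (1) is shown below; in the actual model this operates as a differentiable layer with learned parameters, and the variable names here are illustrative.

```python
import numpy as np

def apply_constraint(q_raw, atom_hidden, u, total):
    """q_raw: (n,) raw per-atom predictions; atom_hidden: (n, d) hidden states a_i;
    u: (d,) learnable query vector; total: required sum (net charge Q, or 1 for Fukui indices)."""
    logits = atom_hidden @ u
    w = np.exp(logits - logits.max())
    w = w / w.sum()                    # attention weights w_i, summing to 1
    excess = total - q_raw.sum()       # amount by which the raw predictions miss the constraint
    return q_raw + w * excess          # q_i^final = q_i + w_i (Q - sum_j q_j)
```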
We then use the QM descriptors rapidly generated by the multi-task constrained model to predict site-selectivity for the 3003 EAS reactions discussed above. We note that compounds involved in those 3003 EAS reactions were excluded from the 136k training molecules so that the descriptor-predicting model does not simply memorize the training data. The prediction accuracy for descriptors of those EAS reactants is similar to the testing set accuracy shown above (ESI S2.3†). The QM calculated descriptors in the QM-GNN model are then replaced by ML predicted ones, referred to as ml-QM-GNN. Using the same training and testing methods described above, we evaluate the accuracy and speed of the ml-QM-GNN model on the 3003 EAS reactions against the QM-GNN model (Fig. 3). Considering interpolation and extrapolation behavior, the ml-QM-GNN model maintains high performance close to that of the QM-GNN model and significantly outperforms the GNN and QM models. Considering computation time, the ml-QM-GNN model requires only 70 milliseconds to predict the selectivity of a reaction from SMILES strings, which is almost six orders of magnitude faster than the QM-GNN model.
2.3. Predicting regioselectivity for general substitution reactions
With the fast and accurate ml-QM-GNN model, we are now able to explore other reaction spaces more efficiently. In this section, we further demonstrate the protocol on more general selective reactions. The ml-QM-GNN model predicts chemical reactivity for a pair of reacting heavy atoms. Therefore, we extend the present model to all selective substitution and addition reactions involving a pair of approaching heavy atoms; other types of reactions will be studied in extensions of this work.
Using the same filtering method discussed above, we extracted 20 438 selective reactions from Pistachio, which are further grouped into three classes according to their approximate mechanism: (1) 7378 aromatic C–H functionalization reactions (Fig. 7B); (2) 7045 aromatic C–X substitution reactions (Fig. 7C); and (3) 6715 other substitution and addition reactions (Fig. 7D). In contrast to the high-throughput datasets used in pioneering works on descriptor-based chemical reactivity prediction,22,24,27,32 the reactions curated here are much more heterogeneous in terms of both reaction types and substrate scope. For example, the 7378-member aromatic C–H functionalization class is composed of 10 types of reactions, involving 5963 unique aromatic substrates and 147 reagents. The pairwise Tanimoto similarity distribution for the aromatic substrates shows a single peak at 0.2, indicating the high diversity of the molecules studied here (detailed statistics for each reaction class are provided in the ESI S1.2.4†). The ml-QM-GNN model is trained and evaluated on the three curated datasets. 10-fold cross-validation with a gradually downsampled training set and consistent testing set, as discussed above, was again applied to evaluate the model. GNN and a fingerprint-based (FP) model were selected as baseline models here. In the FP-baseline model, the Morgan reaction fingerprint with 2048 bits and a radius of 2, as implemented in RDKit,66 was used to encode the major/minor reaction, followed by a FFNN to score the selectivity (a sketch of this encoding is shown below). This strategy of encoding reactions has been demonstrated to be successful in predicting the plausibility of a given reaction on a heterogeneous dataset.67 As seen from Fig. 7A, the FP-baseline model generally showed poor to moderate performance. The ml-QM-GNN model maintains the highest accuracy across all three classes of reactions. When using 5000 training reactions, the model correctly predicts the major product for 89.7% of reactions in class (1); the top-1 accuracy is 96.7% and 97.2% for classes (2) and (3), respectively. The ml-QM-GNN model retains high accuracy with reduced training sets (e.g. 84.3%, 92.1%, and 94.3% for classes (1)–(3) when using 300 training points, which is about half the size of the testing set). The GNN model also achieves remarkable performance that is comparable to ml-QM-GNN when trained on a large training set. However, the performance of GNN quickly declines as the training set is downsampled, especially for the more challenging class 1 dataset.
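A sketch of the FP-baseline reaction encoding referenced above is given below; the 2048-bit, radius-2 Morgan fingerprints follow the description in the text, while the way reactant and product fingerprints are combined (a simple difference vector) is an assumption for illustration, with the published baseline specified in the ESI.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fp(smiles, n_bits=2048, radius=2):
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(fp), dtype=float)

def reaction_fp(reactant_smiles, product_smiles):
    # Encode a candidate (major or minor) reaction as product bits minus summed
    # reactant bits; the resulting vector is fed to a feed-forward network that
    # scores the selectivity of each candidate.
    r = sum(morgan_fp(s) for s in reactant_smiles.split('.'))
    return morgan_fp(product_smiles) - r
```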
Selected examples from the three reaction classes are also provided in Fig. 7B–D. Selectivity in the class (1) example is mainly driven by the nucleophilicity of the competing sites, while the selectivity in the class (2) example is dominated by electrophilicity. The class (3) example is an alkylimino-de-oxo-bisubstitution reaction, which follows a nucleophilic addition mechanism. In terms of proneness toward nucleophilic attack, the minor reacting site (site 2) is at least as likely to react as the major site (site 1), as indicated by the Fukui indices in Fig. 7D. The preference for the major site is instead dominated by the steric hindrance imposed at the minor site by the two t-Bu groups. The ml-QM-GNN model correctly predicts the major reacting site with high confidence, suggesting that in addition to the electrostatic effects captured by the chemically meaningful descriptors, the machine learned molecular representation is also able to capture steric effects by recognizing similar structural patterns.
3. Conclusion and outlook
This work introduced a novel platform for predicting the selectivity of chemical reactions that combines a machine-learned reaction representation and quantum mechanical descriptors, including local reactivity descriptors and bond descriptors. The platform leverages the benefits of QM descriptors while minimizing the additional computational cost through the use of an auxiliary multi-task prediction network based on molecular structures alone. A thorough benchmarking on regio-selectivity predictions demonstrates that the fusion model achieves better performance than the conventional graph neural network and the descriptor-based feature engineering model in both interpolated and extrapolated predictions. The fusion model overcomes limitations of feature learning methods and enables learning from a tiny dataset (e.g. 200 training points for 300 testing examples). Further latent space analysis reveals that by using chemically meaningful descriptors, the model can learn richer functional representations of reactions in addition to substructural patterns. The combination of the learned reaction representation and on-the-fly QM descriptors therefore leads to a fast, end-to-end, generally applicable, and accurate model for chemical reactivity prediction, which is further demonstrated on the prediction of regio-selectivity for three general types of organic reactions. The model achieves 89.7%/96.7%/97.2% top-1 accuracy for aromatic C–H functionalization, aromatic C–X substitution, and other substitution reactions curated from the Pistachio database, within seconds.
In the present work, we have demonstrated the efficacy of combining graph representations and ML predicted QM descriptors in predicting the regio-selectivity of substitution reactions. In addition to ranking relative reactivity (i.e., selectivity), we note that the fusion ml-QM-GNN model can also be adapted to predict quantitative reactivity measures (e.g. reaction yields) for a given reaction with minimal modification. We further evaluated the ml-QM-GNN model on yield predictions, including regression and binary classification tasks, using both the datasets discussed above and high-throughput experimentation data from Doyle and co-workers.24 The unbiased ml-QM-GNN model showed comparable performance to expert-guided feature engineering methods on the regression task and provided a measurable improvement over GNN-based and reaction fingerprint-based models on the binary classification task (ESI S2.5†).
The framework presented here still leaves room for improvement. For example, the fusion model performance could be further improved by using a higher level of theory to construct the QM descriptor database and by including more explicit steric descriptors such as the solvent accessible surface area (SASA). At this stage, the on-the-fly QM descriptor calculation in the proposed platform only supports neutral molecules containing the elements C, H, O, N, P, S, F, Cl, Br, I, and B. For more general applications, however, explicit coverage of charged molecules and transition metals in the QM descriptor calculation will significantly improve the performance of the proposed platform. We mention that in contemporary work, Isayev and co-workers68 extended the AIMNet model towards open-shell molecules to predict atomic QM descriptors. Compared with the AIMNet model, our model includes fewer atomic descriptors, but covers essential bond descriptors and is more automated and straightforward to use, requiring only the SMILES string of the reactions of interest as input. Neither our ml-QM-GNN model nor the AIMNet model is able to predict reactivity descriptors for charged molecules, since those require electronic structure information for doubly charged species (e.g. to compute nucleophilicity Fukui indices for a cation). Including a comprehensive set of reactivity descriptors for charged states of a molecule is therefore the next challenging step in real-time QM descriptor computation.
More broadly, this study demonstrates the power of connecting feature engineering and feature learning in addition to providing a useful and convenient tool for chemical reactivity prediction. Future work will look to expand these approaches to reactions involving more complex intermolecular interactions and mechanisms.
4. Computational methods
To generate a computational database covering the proposed atomic/bond descriptors, an automated computing workflow was developed. The workflow started by sampling conformers from SMILES strings using the RDKit library69 and the Merck Molecular Force Field (MMFF94s).54 The lowest-lying conformer was then optimized at the GFN2-xTB level of theory.70 GFN2-xTB is parametrized for all elements up to radon with an emphasis on yielding reasonable structures. Since the descriptors of interest in this study are more sensitive to molecular structures than to energetic values, the selected semi-empirical method can provide reliable structures at low computational cost for more than 136k molecules. A variety of convergence checks were performed to ensure that the optimization converged to a correct structure, including checks for imaginary frequencies and verification that the molecule did not converge to a different species. The final chemically meaningful descriptors were calculated with the B3LYP functional55,56 and the def2svp basis set.57 All DFT computations were performed using Gaussian 16.71 Bond orders were calculated through NBO 6.0.72 More details about the descriptor calculation are provided in the ESI.†
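A minimal sketch of the first (force-field) stage of this workflow is shown below using standard RDKit calls; the subsequent GFN2-xTB optimization and DFT single points are run with external programs as described above and in the ESI.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

def lowest_mmff_conformer(smiles, n_confs=20, seed=42):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=n_confs, randomSeed=seed)
    # Optimize every embedded conformer with the MMFF94s variant; results are
    # (converged_flag, energy) tuples in the same order as the conformer IDs
    results = AllChem.MMFFOptimizeMoleculeConfs(mol, mmffVariant='MMFF94s')
    energies = [energy for _flag, energy in results]
    best = min(range(len(conf_ids)), key=lambda i: energies[i])
    return mol, list(conf_ids)[best]   # molecule plus ID of the lowest-lying conformer
```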
The reaction database used in this work is the Pistachio patent database from NextMove (v3.0, released in June 2019). See the ESI† for detailed model structures and training procedures. All code used in this work can be found on GitHub (ESI S1.1†).
Conflicts of interest
There are no conflicts to declare.
Supplementary Material
Electronic supplementary information (ESI) available: Data, additional methods, and results. See DOI: 10.1039/d0sc04823b
Acknowledgments
The authors thank the Machine Learning for Pharmaceutical Discovery and Synthesis (MLPDS) Consortium for support. E. H. acknowledges support from the Austrian Science Fund (FWF), project J-4415. We thank Pritha Verma and Jessica Xu for helpful comments and discussions on this manuscript.
References
- Schneider G. Nat. Rev. Drug Discovery. 2018;17:97. doi: 10.1038/nrd.2017.232.
- Coley C. W. Eyke N. S. Jensen K. F. Angew. Chem., Int. Ed. 2020;59:22858–22893. doi: 10.1002/anie.201909987.
- Coley C. W. Eyke N. S. Jensen K. F. Angew. Chem., Int. Ed. 2020;59:23414–23436. doi: 10.1002/anie.201909989.
- Isbrandt E. S. Sullivan R. J. Newman S. G. Angew. Chem., Int. Ed. 2019;58:7180–7191. doi: 10.1002/anie.201812534.
- Santanilla A. B. Regalado E. L. Pereira T. Shevlin M. Bateman K. Campeau L.-C. Schneeweis J. Berritt S. Shi Z.-C. Nantermet P. et al. Science. 2015;347:49–53. doi: 10.1126/science.1259203.
- Lin S. Dikler S. Blincoe W. D. Ferguson R. D. Sheridan R. P. Peng Z. Conway D. V. Zawatzky K. Wang H. Cernak T. et al. Science. 2018;361:eaar6236. doi: 10.1126/science.aar6236.
- Parr R. G., Horizons of Quantum Chemistry, Springer, 1980, pp. 5–15
- Geerlings P. De Proft F. Langenaeker W. Chem. Rev. 2003;103:1793–1874. doi: 10.1021/cr990029p.
- Geerlings P. Chamorro E. Chattaraj P. K. De Proft F. Gázquez J. L. Liu S. Morell C. Toro-Labbé A. Vela A. Ayers P. Theor. Chem. Acc. 2020;139:36.
- Stuyver T. De Proft F. Geerlings P. Shaik S. J. Am. Chem. Soc. 2020;142:10102–10113. doi: 10.1021/jacs.0c02390.
- Yang W. Parr R. G. Proc. Natl. Acad. Sci. U. S. A. 1985;82:6723–6726. doi: 10.1073/pnas.82.20.6723.
- Damoun S. Van de Woude G. Mendez F. Geerlings P. J. Phys. Chem. A. 1997;101:886–893. doi: 10.1021/jp9611840.
- Melin J. Aparicio F. Subramanian V. Galvan M. Chattaraj P. K. J. Phys. Chem. A. 2004;108:2487–2491. doi: 10.1021/jp037674r.
- Aurell M. J. Domingo L. R. Pérez P. Contreras R. Tetrahedron. 2004;60:11503–11509. doi: 10.1016/j.tet.2004.09.057.
- Saha S. Roy R. K. J. Phys. Chem. B. 2007;111:9664–9674. doi: 10.1021/jp070417s.
- Oslob J. D. Åkermark B. Helquist P. Norrby P.-O. Organometallics. 1997;16:3015–3021. doi: 10.1021/om9700371.
- Norrby P.-O. ACS Symp. Ser. 1999;721:163–172. doi: 10.1021/bk-1999-0721.ch013.
- Lipkowitz K. B. Pradhan M. J. Org. Chem. 2003;68:4648–4656. doi: 10.1021/jo0267697.
- Melville J. L. Lovelock K. R. Wilson C. Allbutt B. Burke E. K. Lygo B. Hirst J. D. J. Chem. Inf. Model. 2005;45:971–981. doi: 10.1021/ci050051l.
- Milo A. Neel A. J. Toste F. D. Sigman M. S. Science. 2015;347:737–743. doi: 10.1126/science.1261043.
- Sigman M. S. Harper K. C. Bess E. N. Milo A. Acc. Chem. Res. 2016;49:1292–1301. doi: 10.1021/acs.accounts.6b00194.
- Reid J. P. Sigman M. S. Nature. 2019;571:343–348. doi: 10.1038/s41586-019-1384-z.
- Zahrt A. F. Henle J. J. Rose B. T. Wang Y. Darrow W. T. Denmark S. E. Science. 2019;363:eaau5631. doi: 10.1126/science.aau5631.
- Ahneman D. T. Estrada J. G. Lin S. Dreher S. D. Doyle A. G. Science. 2018;360:186–190. doi: 10.1126/science.aar5169.
- Singh S. Pareek M. Changotra A. Banerjee S. Bhaskararao B. Balamurugan P. Sunoj R. B. Proc. Natl. Acad. Sci. U. S. A. 2020;117:1339–1345. doi: 10.1073/pnas.1916392117.
- Tomberg A. Johansson M. J. Norrby P.-O. J. Org. Chem. 2018;84:4695–4703. doi: 10.1021/acs.joc.8b02270.
- Li X. Zhang S.-Q. Xu L.-C. Hong X. Angew. Chem. 2020;59:13253–13259. doi: 10.1002/anie.202000959.
- Beker W. Gajewska E. P. Badowski T. Grzybowski B. A. Angew. Chem., Int. Ed. 2019;58:4515–4519. doi: 10.1002/anie.201806920.
- Varnek A. Fourches D. Hoonakker F. Solov’ev V. P. J. Comput.-Aided Mol. Des. 2005;19:693–703. doi: 10.1007/s10822-005-9008-0.
- Polishchuk P. Madzhidov T. Gimadiev T. Bodrov A. Nugmanov R. Varnek A. J. Comput.-Aided Mol. Des. 2017;31:829–839. doi: 10.1007/s10822-017-0044-3.
- Granda J. M. Donina L. Dragone V. Long D.-L. Cronin L. Nature. 2018;559:377–381. doi: 10.1038/s41586-018-0307-8.
- Sandfort F. Strieth-Kalthoff F. Kühnemund M. Beecks C. Glorius F. Chem. 2020;6:1379–1390.
- Duvenaud D. K., Maclaurin D., Iparraguirre J., Bombarell R., Hirzel T., Aspuru-Guzik A. and Adams R. P., Advances in Neural Information Processing Systems, 2015, pp. 2224–2232
- Wu Z. Ramsundar B. Feinberg E. N. Gomes J. Geniesse C. Pappu A. S. Leswing K. Pande V. Chem. Sci. 2018;9:513–530. doi: 10.1039/C7SC02664A.
- Yang K. Swanson K. Jin W. Coley C. Eiden P. Gao H. Guzman-Perez A. Hopper T. Kelley B. Mathea M. et al. J. Chem. Inf. Model. 2019;59:3370–3388. doi: 10.1021/acs.jcim.9b00237.
- Schütt K. T. Sauceda H. E. Kindermans P.-J. Tkatchenko A. Müller K.-R. J. Chem. Phys. 2018;148:241722. doi: 10.1063/1.5019779.
- John P. C. S. Guan Y. Kim Y. Kim S. Paton R. S. Nat. Commun. 2020;11:1–12. doi: 10.1038/s41467-020-16201-z.
- Jin W., Coley C., Barzilay R. and Jaakkola T., Advances in Neural Information Processing Systems, 2017, pp. 2607–2616
- Coley C. W. Jin W. Rogers L. Jamison T. F. Jaakkola T. S. Green W. H. Barzilay R. Jensen K. F. Chem. Sci. 2019;10:370–377. doi: 10.1039/C8SC04228D.
- Struble T. J. Coley C. W. Jensen K. F. React. Chem. Eng. 2020;5:896–902. doi: 10.1039/D0RE00071J.
- Yan C., Ding Q., Zhao P., Zheng S., Yang J., Yu Y. and Huang J., ChemRxiv preprint, 2020, doi: 10.26434/chemrxiv.11869692.v1
- Schwaller P. Laino T. Gaudin T. Bolgar P. Hunter C. A. Bekas C. Lee A. A. ACS Cent. Sci. 2019;5:1572–1583. doi: 10.1021/acscentsci.9b00576.
- Pesciullesi G. Schwaller P. Laino T. Reymond J. Nat. Commun. 2020;11:4874. doi: 10.1038/s41467-020-18671-7.
- Wang L. Zhang C. Bai R. Li J. Duan H. Chem. Commun. 2020;56:9368–9371. doi: 10.1039/D0CC02657C.
- Smith J. S. Isayev O. Roitberg A. E. Chem. Sci. 2017;8:3192–3203. doi: 10.1039/C6SC05720A.
- Wang Y., Fass J., Stern C. D., Luo K. and Chodera J., 2019, arXiv preprint arXiv:1909.07903
- Jonas E. Kuhn S. J. Cheminf. 2019;11:1–7. doi: 10.1186/s13321-019-0374-3.
- Unke O. T. Meuwly M. J. Chem. Theory Comput. 2019;15:3678–3693. doi: 10.1021/acs.jctc.9b00181.
- Heid E. Fleck M. Chatterjee P. Schröder C. MacKerell Jr A. D. J. Chem. Theory Comput. 2019;15:2460–2469. doi: 10.1021/acs.jctc.8b01289.
- Pistachio (NextMove Software), https://www.nextmovesoftware.com/pistachio.html
- Coley C. W. Green W. H. Jensen K. F. J. Chem. Inf. Model. 2019;59:2529–2537. doi: 10.1021/acs.jcim.9b00286.
- Karelson M. Lobanov V. S. Katritzky A. R. Chem. Rev. 1996;96:1027–1044. doi: 10.1021/cr950202r.
- Fuentealba P. Pérez P. Contreras R. J. Chem. Phys. 2000;113:2544–2551. doi: 10.1063/1.1305879.
- Halgren T. A. J. Comput. Chem. 1996;17:490–519. doi: 10.1002/(SICI)1096-987X(199604)17:5/6<490::AID-JCC1>3.0.CO;2-P.
- Becke A. D. J. Chem. Phys. 1993;98:5648–5652. doi: 10.1063/1.464913.
- Lee C. Yang W. Parr R. G. Phys. Rev. B: Condens. Matter Mater. Phys. 1988;37:785. doi: 10.1103/PhysRevB.37.785.
- Weigend F. Ahlrichs R. Phys. Chem. Chem. Phys. 2005;7:3297–3305. doi: 10.1039/B508541A.
- NameRxn (NextMove Software), https://www.nextmovesoftware.com/namerxn.html
- Jaworski W. Szymkuć S. Mikulak-Klucznik B. Piecuch K. Klucznik T. Kaźmierowski M. Rydzewski J. Gambin A. Grzybowski B. A. Nat. Commun. 2019;10:1–11. doi: 10.1038/s41467-018-07882-8.
- Schwaller P., Hoover B., Reymond J.-L., Strobelt H. and Laino T., ChemRxiv preprint, 2020, doi: 10.26434/chemrxiv.12298559.v1
- Tropsha A. Mol. Inf. 2010;29:476–488. doi: 10.1002/minf.201000061.
- Cherkasov A. Muratov E. N. Fourches D. Varnek A. Baskin I. I. Cronin M. Dearden J. Gramatica P. Martin Y. C. Todeschini R. et al. J. Med. Chem. 2014;57:4977–5010. doi: 10.1021/jm4004285.
- Gaulton A. Bellis L. J. Bento A. P. Chambers J. Davies M. Hersey A. Light Y. McGlinchey S. Michalovich D. Al-Lazikani B. et al. Nucleic Acids Res. 2012;40:D1100–D1107. doi: 10.1093/nar/gkr777.
- Rai B. K. Bakken G. A. J. Comput. Chem. 2013;34:1661–1671. doi: 10.1002/jcc.23308.
- Yang Z., Yang D., Dyer C., He X., Smola A. and Hovy E., Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, pp. 1480–1489
- Rogers D. Hahn M. J. Chem. Inf. Model. 2010;50:742–754. doi: 10.1021/ci100050t.
- Segler M. H. Preuss M. Waller M. P. Nature. 2018;555:604–610. doi: 10.1038/nature25978.
- Zubatyuk R., Smith J., Nebgen B. T., Tretiak S. and Isayev O., ChemRxiv preprint, 2020, doi: 10.26434/chemrxiv.12725276.v1
- Riniker S. Landrum G. A. J. Chem. Inf. Model. 2015;55:2562–2574. doi: 10.1021/acs.jcim.5b00654.
- Bannwarth C. Ehlert S. Grimme S. J. Chem. Theory Comput. 2019;15:1652–1671. doi: 10.1021/acs.jctc.8b01176.
- Frisch M., Trucks G., Schlegel H., Scuseria G., Robb M., Cheeseman J., Scalmani G., Barone V., Petersson G., Nakatsuji H., et al., Gaussian 16, 2016
- Glendening E. D. Landis C. R. Weinhold F. J. Comput. Chem. 2013;34:1429–1437. doi: 10.1002/jcc.23266.