Abstract
Obtaining accurate ground and low-lying excited states of electronic systems is crucial in a multitude of important applications. One ab initio method for solving the Schrödinger equation that scales favorably for large systems is variational quantum Monte Carlo (QMC). The recently introduced deep QMC approach uses ansatzes represented by deep neural networks and generates nearly exact ground-state solutions for molecules containing up to a few dozen electrons, with the potential to scale to much larger systems where other highly accurate methods are not feasible. In this paper, we extend one such ansatz (PauliNet) to compute electronic excited states. We demonstrate our method on various small atoms and molecules and consistently achieve high accuracy for low-lying states. To highlight the method’s potential, we compute the first excited state of the much larger benzene molecule, as well as the conical intersection of ethylene, with PauliNet matching results of more expensive high-level methods.
Subject terms: Excited states, Chemical physics, Quantum chemistry, Computational chemistry, Method development
Deep neural networks can learn and represent nearly exact electronic ground states. Here, the authors advance this approach to excited states, achieving high accuracy across a range of atoms and molecules, opening up the possibility to model many excited-state processes.
Introduction
The fundamental challenge of quantum chemistry, solid-state physics, and many areas of computational materials science is to obtain solutions to the electronic Schrödinger equation for a given system, which in principle provides complete access to its chemical properties. The ground and low-lying excited states typically determine the behavior of a system and are therefore of the most interest in many applications. Understanding and being able to describe excited-state processes1, including a wide variety of important spectroscopy methods such as fluorescence, photoionization, and optical absorption of molecules and solids, is key to the successful design of new materials.
Unfortunately, the Schrödinger equation cannot be solved exactly except in the simplest cases, such as one-dimensional toy systems or a single hydrogen atom. Accordingly, many approximate numerical methods have been developed which provide solutions at varying degrees of accuracy. Time-dependent density functional theory2,3 (TDDFT) is the most popular method due to its computational efficiency but has well-known limitations4–9. Higher-accuracy methods have a computational cost that scales rapidly with system size—the well-established full configuration interaction10 (FCI) and coupled cluster11 (CC) techniques scale exponentially and as a high-order polynomial in the number of electrons N, respectively (truncated CI scales polynomially; the scaling of CC depends on the particular variant, roughly N^5 for CC2, N^6 for CCSD, N^7 for CCSD(T) and CC3, N^8 for CCSDT, N^9 for CCSDT(Q), and N^10 for CCSDTQ), thereby severely limiting their practical use. There is thus a huge need for ab initio methods that scale more favorably with system size, allowing the modeling of practically relevant molecules and materials.
Quantum Monte Carlo (QMC) techniques offer a route forward with their favorable scaling (roughly N^3–N^4 with the number of electrons) and therefore dominate high-accuracy calculations where other methods are too expensive12,13. A state-of-the-art QMC calculation typically involves the construction of a multi-determinant baseline wavefunction through standard electronic-structure methods, which is augmented with a Jastrow factor to efficiently incorporate electron correlation, and then optimized through variational QMC (VMC) to obtain a trial wavefunction. This is then used within fixed-node diffusion QMC (DMC) to obtain a final electronic energy. The fixed-node approximation is used to avoid exponential scaling, with the drawback that the nodal surface of the trial wavefunction cannot be modified, which limits the accuracy of the DMC result14. A more expressive baseline wavefunction can improve upon this, but traditional DMC often needs thousands to hundreds of thousands of determinants to reach convergence15. Additionally, DMC only provides the final energy, restricting the calculation of other electronic properties16. Both of these limitations can, in principle, be resolved at the VMC level, with the accuracy of VMC constrained only by the flexibility of the trainable wavefunction ansatz. So far, these techniques have mostly been developed for ground-state calculations, with different extensions proposed to address excited states12,17–26.
Recently, the new ab initio approach of deep VMC methods has been introduced27–30 and subsequently further extended and improved31–33. In particular, PauliNet27 and FermiNet28 were the first methods to demonstrate that highly accurate ground-state results for molecules could be obtained using deep VMC with lower computational complexity and using orders of magnitude fewer Slater determinants than typically employed in other methods that achieve similar accuracy.
In the same spirit as Carleo and Troyer proposed for optimizing quantum states in lattice models34, VMC is used in order to train a neural network model that represents the many-body wavefunction in an unsupervised fashion, i.e. in contrast to other quantum machine learning approaches the only input to the method is the Hamiltonian, and training data are generated on the fly by sampling from the current wavefunction model and minimizing the variational energy. In both PauliNet and FermiNet deep antisymmetric neural networks are used to represent the fermionic wavefunction in the real space of electron coordinates.
Recently, there has been much interest in developing deep learning methods for excited states35. In this paper, we extend PauliNet towards the ab initio computation of electronic excited states (see the “Methods” section for details). The input is again only the Hamiltonian of the quantum system. By employing a simple energy minimization and numerical orthogonalization procedure, we are able to obtain the lowest excited-state wavefunctions of a given system. The excited-state optimization makes use of a penalty method that minimizes the overlap between the nth excited state and the lower-lying states in the spectrum. Optimization methods that introduce additional constraints have been used in the context of VMC before26 and provide a simple way to obtain orthogonal states without explicit enforcement in the wavefunction ansatzes. Combining these techniques with the expressiveness of neural network ansatzes yields highly accurate approximations to excited states with direct access to the wavefunctions for the evaluation of electronic observables. Neural network-based methods have targeted low-lying excited states of one-dimensional lattice models25, but have not been applied to first-principles systems.
We demonstrate our method on a variety of small- and medium-sized molecules, where we consistently achieve highly accurate total energies, outperforming traditional quantum chemistry methods. We also compute excitation energies, transition dipole moments, and oscillator strengths, the main ground-to-excited transition properties, with the latter two known to be more sensitive to errors in the underlying wavefunctions than energies. In all test systems, we find PauliNet closely matches high-order CC and experimental results. Next, we show that our method can be applied in a straightforward manner to much larger molecules, using the example of benzene where we match significantly more expensive high-level electronic-structure methods. Finally, we demonstrate that PauliNet can be used to compute excited-state potential energy surfaces by modeling an avoided crossing and conical intersection of ethylene, a highly multi-referential problem.
Results
Nearly exact solutions for small atoms and molecules
To demonstrate our method we start by applying it to a range of small atoms and molecules. We optimize the lowest-lying excited states and compute their vertical excitation energies for the ground-state equilibrium geometry (see Supplementary Table I), with each PauliNet wavefunction containing a maximum of 10 determinants. In all systems, we obtain highly accurate total energies and estimates of the first few excitation energies competitive with high-accuracy quantum chemistry methods.
In Fig. 1 the excitation energies of the lowest states are shown for several atoms. For all the atoms the excitation energies are obtained within 4 mHa of the theoretical best estimates (TBE)36. Due to their high degree of symmetry, the atoms exhibit degeneracies, that is, multiple orthogonal states can be found with the same energy. Being subject to the orthogonalization constraint, PauliNet approximates all orthogonal states of an energy level individually, which manifests as multiple results at the same energy level. The multiplicity of the exact solution can be derived from the electronic configurations of the atoms and is reproduced in our experiments.
We then compute a larger number of excited states for LiH, BeH and Be. In each experiment, we optimize eight ansatzes in parallel. In Fig. 2 we illustrate the training process by plotting the convergence of the total energies and excitation energies. Additionally, we plot the training estimates of the pairwise overlaps of the wavefunctions, which remain small throughout the optimization process. We confirm that the final overlaps are near-zero by exhaustively sampling the trained wavefunctions, thereby obtaining well-converged Monte Carlo estimates (see Supplementary Table VI). Based on the degeneracies we find a total of five (LiH), four (BeH), and three (Be) distinct excitation energies. The excitation energies match the reference values, and in particular, we find that for all systems studied here we reliably obtain the first excited state, and apart from one case also the second excited state. However, especially for clusters of higher-lying excited states with similar energies, we typically do not find all members of the cluster. In these cases, which states are found depends on the initialization of our ansatzes, as well as the total number of states that are being sought. To give a transparent picture of the capabilities of our method, in this work we have refrained from optimizing the CASSCF baseline in order to find all possible excitations.
Highly accurate wavefunctions: transition dipole moments and oscillator strengths
Total energies and vertical excitation energies are the primary focus when benchmarking excited-state methods as they are readily available from many theoretical models and provide a good initial guess of a particular method’s accuracy. However, they provide only a partial characterization of the electronic states, and while a method in question may give accurate energies, other quantities of key importance may be inaccurate37–39.
Transition dipole moments (TDM) and oscillator strengths are two principal ground-to-excited transition properties and are of great interest. TDMs determine how polarized electromagnetic radiation will interact with a system due to its distribution of charge, and therefore determine transition rates and probabilities of induced state changes. In the electric dipole approximation, the TDM between two states i and j is given by
$$\mathbf{d}_{ij} = \langle \psi_i \,|\, \hat{\mathbf{d}} \,|\, \psi_j \rangle \tag{1}$$
where $\hat{\mathbf{d}} = \sum_k q_k \hat{\mathbf{r}}_k$ is the sum over the position operator of each particle weighted by its charge, with q = −e for electronic systems. We obtain the expectation value by Monte Carlo sampling according to Eq. (15). While the TDM is important for understanding a number of processes, including optical spectra, it is generally a complex-valued vector quantity and not an experimental observable by itself. The closely related oscillator strength is what is inferred from experiment and is given by
$$f_{ij} = \tfrac{2}{3}\,\Delta E\,\lvert \mathbf{d}_{ij} \rvert^2 \tag{2}$$
where ΔE is the excitation energy between states i and j, and $\lvert \mathbf{d}_{ij} \rvert^2$ is the dipole strength. It is known that, in addition to being more basis-set sensitive, $\mathbf{d}_{ij}$ and $f_{ij}$ are both highly dependent on the quality of the trial wavefunctions40 and represent a more rigorous test for ab initio methods than energies alone.
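To make the evaluation concrete, the following minimal sketch (Python/NumPy) estimates a transition dipole moment from samples of the two states using the norm-independent geometric-mean estimator of the Methods (Eq. (15)), and converts it to an oscillator strength via Eq. (2). The function names, array shapes, and the assumption that samples and wavefunction callables are already available are illustrative, not part of the published implementation.

```python
import numpy as np

def transition_dipole(r_i, r_j, psi_i, psi_j, charge=-1.0):
    """Monte Carlo estimate of the transition dipole moment between two
    unnormalized states via the geometric-mean estimator (Eq. (15)).

    r_i, r_j : samples drawn from |psi_i|^2 and |psi_j|^2, shape (M, N, 3)
    psi_i/j  : callables returning wavefunction values for a batch of samples
    """
    def dipole(r):                       # sum_k q * r_k (all electrons, q = -e)
        return charge * r.sum(axis=1)    # shape (M, 3)

    # local estimators O*psi_j/psi_i over |psi_i|^2 and vice versa, per component
    a = dipole(r_i) * (psi_j(r_i) / psi_i(r_i))[:, None]
    b = dipole(r_j) * (psi_i(r_j) / psi_j(r_j))[:, None]
    mean_a, mean_b = a.mean(axis=0), b.mean(axis=0)
    # geometric mean of the two estimates; sign taken from the first estimate
    return np.sign(mean_a) * np.sqrt(np.abs(mean_a * mean_b))

def oscillator_strength(d_ij, delta_e):
    """f_ij = (2/3) * dE * |d_ij|^2 in atomic units (Eq. (2))."""
    return 2.0 / 3.0 * delta_e * np.dot(d_ij, d_ij)
```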
Recently, transition energies and oscillator strengths for a variety of small molecules have been computed using high-order CC calculations, systematically extrapolated to the complete basis set (CBS) limit, and compared to experimental results where possible, in order to supply a comprehensive set of theoretical benchmarks41,42. In that spirit, we now use these results to benchmark the accuracy of oscillator strengths computed using PauliNet. Furthermore, we also compare to multi-reference CC (MR-CC) results where possible43. We compute the first few electronic states for five molecules (BH, CH+, H2O, NH3, CO), such that we obtain the first non-zero oscillator strength (within the dipole approximation) for each. All calculations are performed at the same ground-state equilibrium geometries as refs. 41,42 (see Supplementary Table I) and using the same number of determinants (≤10) as in the section "Nearly exact solutions for small atoms and molecules". (CH+ was not included in the CC calculations of refs. 41,42; we instead compare to (MR-)CC results from ref. 43, using the same ground-state equilibrium geometry, obtained in a split-valence basis augmented with diffuse and polarization functions; see refs. 43,44 for details.)
Our results for all systems are shown in Fig. 3. First, we compute the amount of correlation energy recovered in the ground state and find PauliNet matches high-order CC methods (panel a). Second, we compute the excitation energy for each transition and find this to be close to the TBE, on par with CC and much more consistent than TDDFT where the accuracy depends on the molecule and on the exact TDDFT method used (panel b). Finally, we compare the oscillator strengths (for the 0 → 2 transition) in panel c. Even high-order methods such as CC and MR-CC can produce a spectrum of results depending on the expansion and basis set used, with this exacerbated in cheaper methods such as TDDFT (see the example of CO). In all systems, PauliNet compares well with experimental results, demonstrating the quality of deep VMC wavefunctions with just a minimal number of determinants.
Application to larger molecules
The previous two sections showed that we achieve highly accurate results across a range of small systems. While this is encouraging, well-established traditional high-accuracy methods are readily available for such small systems. In this section, to demonstrate the potential of excited PauliNet, we show that it can be applied in a straightforward manner to significantly larger molecules. For this objective, we choose the example of the benzene molecule (panel a of Fig. 4). Studies of its electronic structure and other properties are plentiful due to its importance in biological and organic chemistry, and with 42 electrons, all-electron calculations are extremely demanding or even intractable for a high-level description of its electronic states, depending on the level of theory used.
Using a PauliNet ansatz with just 10 determinants, the same as in the much smaller systems, and slightly deeper neural networks (see Supplementary Table VII), we obtain very good total energies for the ground state and first excited state (upper left of Fig. 4). We note that the accuracy is better than that of high-level CC calculations, signifying highly accurate wavefunctions that can be used to compute other observables, as demonstrated in the previous section. The computed excitation energy is also shown (right of Fig. 4), with PauliNet compared against several experimental and theoretical results. The lower experimental result45 (dashed black line) quantifies an adiabatic excitation energy, i.e. the energy difference between the ground state and the excited state at the corresponding relaxed geometries. This quantity is corrected to obtain the vertical excitation energy26 (solid black line), which omits nuclear relaxation and vibrational effects. As our calculations are performed at the ground-state equilibrium geometry, we are targeting the vertical excitation energy, and therefore consider this corrected experimental result to be closer to the ground truth. We find it to be slightly underestimated by high-order methods (CC, DMC), and slightly overestimated by PauliNet. In other systems (panel b of Fig. 3) we notice a similar trend when comparing to the TBE.
PauliNet formally scales as O(N^4) with the number of electrons N, and in practice, we observe O(N^3) scaling behavior for the systems investigated so far, which arises from the quadratic scaling of the neural network with an extra factor of N from the evaluation of the local energy. As PauliNet is currently implemented in a research code, which is not optimized for production purposes, the computational time carries a large prefactor, which makes it computationally unfavorable compared to, e.g., CC methods for small molecules. However, its very favorable scaling in N, compared to the N^7 or steeper scaling of high-level electronic-structure methods, dominates for larger molecules, and this is clearly visible in benzene. For instance, ref. 46 used several state-of-the-art methods to obtain accurate benzene ground-state energies, with calculations run on several CPU types in a highly parallel manner (see Supporting Information of ref. 46 for details). PauliNet was run on a single RTX 3090 GPU at a fraction of the number of node hours. Although PauliNet is the computationally cheapest method in this comparison, it provides a significantly better (variational) ground-state energy than all methods (~0.48 Ha lower). As all methods compared in Fig. 4 provide similar excitation energies, these cannot be used to rank the methods by accuracy, but overall these data indicate that PauliNet, and deep VMC methods in general, have a very favorable cost/accuracy trade-off for molecules of the size of benzene and beyond.
Multi-reference application: conical intersections
Molecular configurations that produce electronic states with similar energies are fundamental in photochemical applications. Such configurations can lead to several states mixing, meaning they are all necessary for an accurate description of a particular process. Conical intersections are produced when two states become degenerate and require the computation of excited-state potential energy surfaces. The modeling of energy surfaces near degeneracies is inherently multi-reference with significant electronic correlation and is thus a challenging application for electronic-structure methods.
As a final application of excited PauliNet, we compute ground- and excited-state potential energies for ethylene (H2C=CH2) as a function of its torsion and pyramidalization angles (see inset of Fig. 5). Twisting around the C=C bond raises the energy of the ground state while lowering that of the first-excited singlet state, giving rise to an avoided crossing at a torsion angle τ of 90°. From this twisted structure, the energy gap between the two states is further reduced through the pyramidalization of one of the CH2 groups, leading to a conical intersection. These potential energy curves, whose modeling is often too challenging for single-reference methods47–49, have been characterized using multi-reference configuration interaction (MR-CI) methods50 which we use for comparison.
We choose the same ground-state (planar) geometry as ref. 50 (optimized using a small CAS and the aug-cc-pVDZ basis set; see Supplementary Table I) and find the excitation energy between the ground state and first-excited singlet state to be within a few mHa of the MR-CI results. As we vary τ, while keeping all other geometric parameters fixed, we find the energy curves to be well reproduced by PauliNet, with an avoided crossing at τ = 90° (panel a of Fig. 5; curves symmetric about τ = 90°). Single-reference methods, such as TDDFT (see figure), often overestimate the energy of the ground state at τ = 90° (barrier) and produce an unphysical cusp.
Next, we take the same twisted structure (τ = 90°) as ref. 50 (optimized using a small CAS and the aug-cc-pVDZ basis set; see Supplementary Table I) and vary the pyramidalization angle ϕ, while keeping all other geometric parameters fixed. While there is a small discrepancy between PauliNet and the MR-CI results (panel b of Fig. 5), the trend of the energy curves is well described, including the correct minimum of the excited-state curve (~70°) and the conical intersection (PauliNet: ϕ ~ 100°; MR-CI: ϕ ~ 96°). We note that many single-reference methods are unable to even qualitatively describe the conical intersection, instead predicting spurious features49.
Discussion
We have introduced an approach to compute highly accurate excited-state solutions of the electronic Schrödinger equation for molecules by using deep neural networks that are trained in an unsupervised manner with variational Monte Carlo. We have employed the PauliNet architecture27 to approximate the ground- and excited-state wavefunctions; however, other architectures such as FermiNet28 or second-quantization approaches29 could also be employed, with suitable modifications. As our approach to find excited states only constrains the excited-state wavefunctions, the ability to compute highly accurate and variational absolute ground-state energies is unchanged. In addition, we demonstrate for a number of small molecules containing up to 42 electrons that excited PauliNet can reliably find the first excitation energies with an accuracy that is on par with high-level electronic-structure methods, whereas cheaper methods such as TDDFT are less consistent in approximating these energies. The accuracy of the excited-state wavefunctions is underlined by an accurate match of oscillator strengths, which depend on the transition dipole moment, a quantity that is more sensitive to the exact form of the wavefunction than the energy. For benzene (42 electrons), PauliNet already requires significantly less computational time than higher-order methods, and this advantage will only improve for larger molecules. Formally, a single PauliNet calculation is an O(N^4) method for N electrons, due to the computational cost of the Hartree–Fock or CASSCF baseline; however, in practice we empirically observe an O(N^3) dependency for the system sizes tested, as discussed above. In addition, for excited-state calculations n PauliNet replicas are used, which adds an O(n) cost for the energy terms and an O(n^2) cost for the pairwise overlaps, with the latter term having a much smaller prefactor than the former.
Notably, almost identical excited PauliNet architectures are used across the systems shown in this paper—up to minor modifications such as the budget of Slater determinants and the total number of excited states requested, and a deeper network for benzene to adapt for a potentially more complex wavefunction. Whereas a skilled quantum chemist can usually tune and specialize an existing electronic-structure method to give very high-accuracy results for a given molecule, our aim is the exact opposite: to provide a method that, by leveraging machine-learning tools, is as automated as possible and will work over a wide range of Hamiltonians provided.
We have demonstrated that we can compute ground- and excited-state potential energy surfaces with the example of ethylene where we model an avoided crossing and conical intersection. Here, where single-reference methods often fail, PauliNet performs well against multi-reference CI results. By combining the present approach with recent and ongoing extensions of PauliNet32 and FermiNet33 that variationally compute entire potential energy surfaces, both highly accurate ground- and excited-state energy surfaces are now accessible with deep VMC methods. Future work will investigate the application of PauliNet to other interesting processes where molecular dynamics interacts with excited states.
One of the limitations of the current approach is that it appears difficult to reliably find all excited states up to a given desired number, especially in cases where several excited states have similar energies. This is a complex problem that depends on the Hartree-Fock/CASSCF initialization, on the total number of states requested, on the learning algorithm, and the expressiveness of the architecture and will be studied in more detail elsewhere. However, the first excited state could be reliably found for all molecules studied here, and apart from one exception also the second excited state. This, in combination with the high numerical accuracy and the favorable computational cost, makes deep VMC a promising method to compute both ground- and excited-state properties for small- and medium-sized molecules with dozens or even low hundreds of electrons.
Methods
PauliNet ansatz
At the heart of our approach is the PauliNet ansatz, introduced in ref. 27 and further refined in ref. 51, a multi-determinant Slater–Jastrow-backflow type trial wavefunction that is parametrized by highly expressive deep neural networks:
$$\psi_\theta(\mathbf{r}) = \mathrm{e}^{\gamma(\mathbf{r}) + J_\theta(\mathbf{r})} \sum_p c_p \,\det\!\big[\tilde{\varphi}^{\uparrow}_{\theta,\mu p i}(\mathbf{r})\big]\det\!\big[\tilde{\varphi}^{\downarrow}_{\theta,\mu p i}(\mathbf{r})\big] \tag{3}$$

$$\tilde{\varphi}_{\theta,\mu p i}(\mathbf{r}) = \varphi_\mu(\mathbf{r}_i)\, f^{\mathrm{m}}_{\theta,\mu p i}(\mathbf{r}) + f^{\mathrm{a}}_{\theta,\mu p i}(\mathbf{r}) \tag{4}$$
where r = (r1, . . ., rN) collects the electron coordinates in 3N-dimensional real space. The structure of our ansatz ensures that the correct physics is encoded: the wavefunction obeys the exact asymptotic behavior through the fixed electronic cusps γ, and is antisymmetric with respect to the exchange of like-spin electrons through the use of generalized Slater determinants, guaranteeing that the Pauli exclusion principle is obeyed.
The expressiveness of PauliNet is contained in the Jastrow factor $J_\theta$ and backflow $f_\theta$, which introduce many-body correlation and are both represented by deep neural networks (denoted by trainable parameters θ). $J_\theta$ and $f_\theta$ are constructed in ways that preserve the antisymmetry of the fermionic wavefunction with respect to exchanging like-spin electrons, as well as its cusp behavior. The Jastrow factor is an exchange-symmetric function and captures complex correlation effects by augmenting the Slater-determinant baseline, but is incapable of modifying the nodal surface of the determinant expansion. Changes to the nodal surface are possible through the backflow, which acts on the single-electron orbitals $\varphi_\mu$ directly, transforming them into permutation-equivariant many-electron orbitals $\tilde{\varphi}_{\theta,\mu}$. $f_\theta$ is split into multiplicative (m) and additive (a) components (Eq. (4)) and is designed to be equivariant under the exchange of like-spin electrons.
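As an illustration of this structure, the following sketch evaluates a Slater–Jastrow–backflow wavefunction of the general form of Eqs. (3) and (4) in PyTorch. The callables (`orbitals`, `jastrow`, `backflow_m`, `backflow_a`), the tensor shapes, and the assignment of the first n_up electrons and orbitals to the spin-up block are simplifying assumptions for illustration; the actual PauliNet implementation additionally handles cusp corrections and more general determinant structures.

```python
import torch

def slater_jastrow_backflow(r, orbitals, jastrow, backflow_m, backflow_a, c_p, n_up):
    """Sketch of Eqs. (3)-(4): generalized Slater determinants with a Jastrow factor.

    r          : electron coordinates, shape (batch, N, 3)
    orbitals   : callable -> (batch, N, N) matrix phi_mu(r_i)
    backflow_m : callable -> (batch, n_det, N, N) multiplicative correction f^m
    backflow_a : callable -> (batch, n_det, N, N) additive correction f^a
    c_p        : determinant coefficients, shape (n_det,)
    """
    phi = orbitals(r)[:, None]                         # broadcast over determinants
    phi_tilde = phi * backflow_m(r) + backflow_a(r)    # Eq. (4)
    up = phi_tilde[:, :, :n_up, :n_up]                 # spin-up block (assumption)
    down = phi_tilde[:, :, n_up:, n_up:]               # spin-down block
    dets = torch.linalg.det(up) * torch.linalg.det(down)
    return torch.exp(jastrow(r)) * (c_p * dets).sum(dim=-1)   # Eq. (3), cusp term omitted
```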
Ground-state optimization
Like traditional VMC methods, PauliNet is based on the variational principle, which guarantees that the energy expectation value of a trial wavefunction ψθ is an upper bound to the true ground-state energy:
$$E_0 \le E[\psi_\theta] = \frac{\langle \psi_\theta | \hat{H} | \psi_\theta \rangle}{\langle \psi_\theta | \psi_\theta \rangle} \tag{5}$$
For a given system, a standard quantum chemistry method (Hartree–Fock (HF) for a single determinant; complete active space self-consistent field (CASSCF) for multiple determinants) is performed, with the solution supplemented by the analytically known cusp conditions, thus producing a reasonable baseline wavefunction. We then optimize the PauliNet ansatz by minimizing the total electronic energy (serving directly as the loss), following the standard VMC trick of evaluating it as an expectation value of the local energy, $E_{\mathrm{loc}}(\mathbf{r}) = \hat{H}\psi_\theta(\mathbf{r})/\psi_\theta(\mathbf{r})$, over the probability distribution $|\psi_\theta|^2$:
$$E[\psi_\theta] = \mathbb{E}_{\mathbf{r} \sim |\psi_\theta|^2}\big[E_{\mathrm{loc}}(\mathbf{r})\big] \tag{6}$$
This means that, in practice, we alternate between sampling electron positions generated using a Langevin algorithm with the probability of the trial wavefunction serving as the target distribution, and optimizing the trial wavefunction parameters using stochastic gradient descent. For further details, see ref. 27.
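A minimal sketch of one such sampling step is given below (PyTorch). It implements a Metropolis-adjusted Langevin proposal targeting $|\psi_\theta|^2$, assuming a callable `log_psi` that returns log|ψ| for a batch of electron configurations; the step size and function names are illustrative and not taken from the published code.

```python
import torch

def mala_step(r, log_psi, tau=0.02):
    """One Metropolis-adjusted Langevin step targeting |psi(r)|^2.
    r: electron configurations of shape (batch, N, 3)."""
    def logp_and_grad(x):
        x = x.detach().requires_grad_(True)
        logp = 2 * log_psi(x)                       # log of |psi|^2
        (grad,) = torch.autograd.grad(logp.sum(), x)
        return logp.detach(), grad.detach()

    logp, grad = logp_and_grad(r)
    proposal = r + 0.5 * tau * grad + tau**0.5 * torch.randn_like(r)
    logp_new, grad_new = logp_and_grad(proposal)
    # log densities of the asymmetric proposals q(r'|r) and q(r|r')
    fwd = -((proposal - r - 0.5 * tau * grad) ** 2).flatten(1).sum(1) / (2 * tau)
    rev = -((r - proposal - 0.5 * tau * grad_new) ** 2).flatten(1).sum(1) / (2 * tau)
    log_accept = logp_new - logp + rev - fwd
    accept = torch.rand_like(log_accept).log() < log_accept
    return torch.where(accept[:, None, None], proposal, r).detach()
```

The updated samples are then used to evaluate the energy loss and take a stochastic gradient descent step on the ansatz parameters, as described above.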
Computing excited states
We now introduce the central idea of this paper: a deep VMC method to compute the ground and low-lying excited states of a given electronic system. While we employ PauliNet to represent the individual wavefunctions, the method can also employ FermiNet or other real-space wavefunction representations with suitable modifications.
In a similar spirit to the ground-state optimization process, we first obtain a reasonable baseline for each state by performing a minimal state-averaged CASSCF calculation. This optimizes the energy average for all states in question and yields a single set of orbitals to construct each multi-determinant wavefunction, which in turn are supplemented by the analytically-known cusp conditions. We fix the number of determinants in our ansatz by cutting off the CASSCF expansion based on the absolute values of their determinant coefficients. The choice of the CASSCF baseline ensures that the PauliNet ansatzes for the different excited states are close to orthogonal upon initialization. In contrast to the ground-state calculation, the optimization of excited states requires a more nuanced choice of active space. In principle, we must ensure that the solutions contain determinants with orbitals of the necessary rotational symmetries (the Jastrow factor and backflow correction are rotationally-symmetric modifications of the orbitals) and spin configurations (the choice of the number of spin-up and spin-down electrons does impose restrictions on the states that may be attained by our ansatz). For most systems studied in this paper, a generic choice of the active space was sufficient (see Supplementary Table VIII) and we have not studied the dependence on the CAS initialization in more depth. As shown in previous studies the quality of the orbitals has only a minor effect on the training and does not change the final energy51. If, however, the initialization is not accounted for and the baseline solutions provide a qualitatively wrong spectrum of excited states our ansatzes may be trapped in local minima and miss intermediate excited states (see Fig. 2), even though we keep the Slater-determinant coefficients cp and linear coefficients cμk of the single-electron orbitals φμ(ri) = ∑kcμkϕk(ri) trainable.
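For illustration, a baseline of this kind can be obtained with a standard quantum chemistry package; the sketch below uses PySCF to run a small state-averaged CASSCF calculation. The molecule, basis set, active space, and number of states are placeholders and do not reproduce the settings of Supplementary Table VIII.

```python
from pyscf import gto, scf, mcscf

# Hedged sketch of a minimal state-averaged CASSCF baseline; geometry, basis,
# active space (CAS(4,4)) and the number of states (3) are illustrative only.
mol = gto.M(atom="Li 0 0 0; H 0 0 1.595", basis="6-31g")
mf = scf.RHF(mol).run()

mc = mcscf.CASSCF(mf, 4, 4).state_average_([1 / 3] * 3)  # equal weights, 3 states
mc.kernel()

print(mc.e_tot)       # state-averaged energy; per-state energies in mc.e_states
# The per-state CI vectors can then be expanded into determinants and truncated
# by the magnitude of their coefficients to initialize each baseline wavefunction.
```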
Our objective is to calculate the lowest n eigenstates of a given system, that is, find the set of orthogonal states that minimizes the energy expectation value. We approach this challenge by introducing a penalty term to the energy loss function (Eq. (6)) and optimizing the joint loss for n PauliNet instances:
$$\mathcal{L} = \sum_{i=1}^{n} E[\psi_{\theta,i}] + \alpha \sum_{i=1}^{n} \sum_{j<i} \rho(S_{ij}) \tag{7}$$
where $\rho(S) = S^2/(1 - S^2)$ and $S_{ij}$ is the pairwise overlap between states i and j. The functional form of the overlap penalty is chosen to diverge when two states collapse and behave linearly when states are close to orthogonal (see the next section for details). This allows states to overlap during the optimization procedure while preventing their collapse and eventually driving them to orthogonality once they have settled in a local minimum of the energy. The hyperparameter α weights the two loss terms and can be increased throughout the training to strengthen the orthogonality condition when approaching the final wavefunctions. For a sufficiently large α, the true minimum of the loss function corresponds to the sum of the energies of the lowest-lying excited states with these states having no overlap. Thus, optimizing the penalized loss function (Eq. (7)) leads to an unbiased convergence towards the lowest-lying excited states (see below). In practice a small α is typically sufficient, making a robust choice possible.
To stabilize the training and reduce the computational cost we detach gradients such that each state only receives penalty gradients from its overlaps with the lower-lying states, that is, the ground state is subject to unconstrained energy minimization and the nth excited state introduces n pairwise penalty terms. We compute the overlap of the unnormalized states i and j as the geometric mean of the two Monte Carlo estimates, obtained over the distributions $|\psi_{\theta,i}|^2$ and $|\psi_{\theta,j}|^2$, respectively:
$$S_{ij} = \operatorname{sgn}\!\Big(\mathbb{E}_{|\psi_{\theta,i}|^2}\Big[\tfrac{\psi_{\theta,j}}{\psi_{\theta,i}}\Big]\Big)\sqrt{\mathbb{E}_{|\psi_{\theta,i}|^2}\Big[\frac{\psi_{\theta,j}}{\psi_{\theta,i}}\Big]\;\mathbb{E}_{|\psi_{\theta,j}|^2}\Big[\frac{\psi_{\theta,i}}{\psi_{\theta,j}}\Big]} \tag{8}$$
The sign of the overlap can be obtained from either of the two estimators, which match in the limit of infinite sampling. If the overlap is close to zero and the signs of the two estimates differ due to the statistical noise of the sampling, we consider the states to be orthogonal. Similar to the energy loss, the gradient of the pairwise overlap (we employ gradient clipping to stabilize the training) can be formulated such that it depends only on the first derivative of the log wavefunction with respect to the parameters (see below for details).
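The following PyTorch sketch illustrates how the overlap estimate and the penalized loss fit together. It assumes wavefunction callables and pre-drawn samples for each state (names are illustrative), detaches the lower-lying state as described above, and uses the penalty form ρ(S) = S²/(1−S²) of Eq. (7); it also ignores the dependence of the sampling distribution on the parameters, which the explicit gradient expression (Eq. (16)) accounts for.

```python
import torch

def pair_overlap(r_i, r_j, psi_low, psi_high):
    """Geometric-mean MC estimate of the overlap (Eq. (8)) between a lower state
    psi_low and a higher state psi_high; r_i ~ |psi_low|^2, r_j ~ |psi_high|^2."""
    with torch.no_grad():                    # gradients flow only into psi_high
        low_i, low_j = psi_low(r_i), psi_low(r_j)
    a = (psi_high(r_i) / low_i).mean()       # <psi_high/psi_low> over |psi_low|^2
    b = (low_j / psi_high(r_j)).mean()       # <psi_low/psi_high> over |psi_high|^2
    prod = a * b
    if prod.item() <= 0:                     # signs disagree -> treat as orthogonal
        return torch.zeros(())
    return torch.sign(a.detach()) * prod.sqrt()

def joint_loss(energies, overlaps, alpha=1.0):
    """Eq. (7): summed state energies plus pairwise penalties that diverge if two
    states collapse. `overlaps` holds the S_ij of each state with its lower states."""
    penalty = sum(s**2 / (1 - s**2) for s in overlaps)
    return sum(energies) + alpha * penalty
```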
Finally, we note that different states may be modeled at different levels of quality, which can lead to erroneous excitation energies. In order to improve the error cancellation of our ansatzes we employ a variance-matching technique. As the variance of the energy σ2 can be considered a metric of how close a wavefunction is to a true eigenstate, variance-matching procedures can be useful tools21,52,53. Here, we utilize a simple scheme: for single-state quantities such as total energies, we evaluate all wavefunctions at the end of training. For multi-state quantities, such as excitation energies or transition dipole moments, we match states of a similar variance. That is, if final ψθ,i has a lower variance than final ψθ,j, we take ψθ,i at an earlier point in training. This simply involves computing σ2 of the training energies and applying an exponential moving average at each iteration to monitor convergence (see below for details). We find this procedure typically improves the final results.
Loss function and overlap penalty
There are a number of possible loss functions for the optimization of excited states in quantum Monte Carlo20,26,54. In order to assess the feasibility of excited-state optimization with deep neural network ansatzes in variational Monte Carlo, we conducted a range of experiments with different types of optimization objectives. Our empirical findings showed that employing a penalty method is the conceptually most straightforward approach and gives stable results when combined with our implementation of PauliNet. Initially, we started with an overlap penalty term similar to that of Pathak et al.26. However, we found that our optimization could still collapse even for large prefactors α, and the training could not recover. We therefore switched to an alternative penalty term (Eq. (7)) which diverges upon a collapse of the states. The effect of our penalty term can be illustrated by considering the loss for a two-state system with the exact ground state and a linear combination of the ground and first excited state (see Fig. 6):
$$\mathcal{L}(c) = E[\psi_0] + E[\psi(c)] + \alpha\,\frac{S(c)^2}{1 - S(c)^2}, \qquad \psi(c) = c\,\psi_0 + \sqrt{1 - c^2}\,\psi_1 \tag{9}$$
The overlap and the energy can be obtained as
$$S(c) = \langle \psi_0 | \psi(c) \rangle = c, \qquad E[\psi(c)] = c^2 E_0 + (1 - c^2) E_1 \tag{10}$$
In the vicinity of the orthogonal solution, the Taylor expansion of the penalty term is
$$\alpha\,\frac{S^2}{1 - S^2} = \alpha\big(S^2 + S^4 + \mathcal{O}(S^6)\big) \tag{11}$$
that is, the overlap penalty behaves linearly to first order. This gives rise to a penalty that is locally stable for any prefactor, lower bounded by the $S^2$ penalty term, and diverges if states collapse. For a large enough α parameter the global optimum of the total loss is at zero overlap, that is, the optimization method is incentivized to find exactly orthogonal states without mixing.
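A quick numerical comparison (a sketch, not from the paper's code, assuming the penalty form ρ(S) = S²/(1−S²) used above) makes this behavior explicit: near S = 0 the divergent penalty is indistinguishable from the plain S² penalty, but it grows without bound as the overlap approaches one.

```python
import numpy as np

# Compare the plain S^2 penalty with the divergent penalty S^2/(1-S^2).
for s in (0.0, 0.1, 0.5, 0.9, 0.99):
    plain, divergent = s**2, s**2 / (1 - s**2)
    print(f"S = {s:4.2f}   S^2 = {plain:7.4f}   S^2/(1-S^2) = {divergent:9.4f}")
```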
In practice, for the batch sizes used in our calculations, we have not observed a bias due to the non-linear nature of the penalty when applied to sampled expectation values of the overlap. However, it is expected that this is no longer the case in the limit of small batches. In order to elucidate how our loss function behaves in this regard, we compute the two lowest states of LiH using a range of different batch sizes (see Fig. 7). We find the optimization procedure to be robust for the large batch sizes that we typically employ (≥2000), with the excitation energy within 1 mHa of the exact, and the pairwise overlap remaining small throughout training (panel c). For smaller batch sizes, we observe a larger degree of statistical noise in the pairwise overlap, which leads to a less reliable approximation for the excited state and the corresponding excitation energy (panel b).
Gradient of the loss function
In order to differentiate the loss function we explicitly formulate the gradient. We consider the general case of a mixed observable:
$$O_{ij} = \frac{\langle \psi_i | \hat{O} | \psi_j \rangle}{\sqrt{N_i N_j}} \tag{12}$$

$$O_{ij} = \sqrt{\frac{N_i}{N_j}}\;\mathbb{E}_{\mathbf{r} \sim |\psi_i|^2}\!\left[\frac{\hat{O}\psi_j(\mathbf{r})}{\psi_i(\mathbf{r})}\right] \tag{13}$$
where $N_i = \langle \psi_i | \psi_i \rangle$ and $N_j = \langle \psi_j | \psi_j \rangle$ are the norms of the wavefunctions. Because $\hat{O}$ is Hermitian, $O_{ij} = O_{ji}$, and we can derive an expression that does not depend on the wavefunction norms:
$$O_{ij}^2 = O_{ij}\,O_{ji} = \mathbb{E}_{|\psi_i|^2}\!\left[\frac{\hat{O}\psi_j}{\psi_i}\right]\mathbb{E}_{|\psi_j|^2}\!\left[\frac{\hat{O}\psi_i}{\psi_j}\right] \tag{14}$$

$$O_{ij} = \operatorname{sgn}\!\left(\mathbb{E}_{|\psi_i|^2}\!\left[\frac{\hat{O}\psi_j}{\psi_i}\right]\right)\sqrt{\mathbb{E}_{|\psi_i|^2}\!\left[\frac{\hat{O}\psi_j}{\psi_i}\right]\mathbb{E}_{|\psi_j|^2}\!\left[\frac{\hat{O}\psi_i}{\psi_j}\right]} \tag{15}$$
This expression reduces to the pairwise overlaps (Eq. (8)) upon setting $\hat{O} = \mathbb{1}$. The derivative of this term can be expressed as
$$\nabla_\theta O_{ij} = \frac{O_{ij}}{2}\left(\frac{\mathbb{E}_{|\psi_i|^2}\!\Big[\frac{\hat{O}\psi_j}{\psi_i}\,\nabla_\theta \ln\lvert\psi_i\psi_j\rvert\Big]}{\mathbb{E}_{|\psi_i|^2}\!\Big[\frac{\hat{O}\psi_j}{\psi_i}\Big]} - 2\,\mathbb{E}_{|\psi_i|^2}\big[\nabla_\theta \ln\lvert\psi_i\rvert\big]\right) + (i \Leftrightarrow j) \tag{16}$$
where (i ⇔ j) is an additional term with the two indices interchanged.
By considering the Hamiltonian operator and setting i = j we recover the gradient of the energy loss27:
$$\nabla_\theta E[\psi_\theta] = 2\,\mathbb{E}_{\mathbf{r} \sim |\psi_\theta|^2}\Big[\big(E_{\mathrm{loc}}(\mathbf{r}) - E[\psi_\theta]\big)\,\nabla_\theta \ln\lvert\psi_\theta(\mathbf{r})\rvert\Big] \tag{17}$$
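In an automatic-differentiation framework this gradient is conveniently obtained from a surrogate loss whose autograd derivative equals Eq. (17); the sketch below (PyTorch, with illustrative callables) shows the idea, detaching the local energies so that only the explicit log-wavefunction dependence is differentiated.

```python
import torch

def energy_surrogate_loss(r, log_psi, local_energy):
    """Surrogate whose gradient matches Eq. (17) for samples r ~ |psi_theta|^2.
    log_psi(r) -> log|psi_theta(r)|; local_energy(r) -> (H psi)(r) / psi(r)."""
    with torch.no_grad():
        e_loc = local_energy(r)              # treated as constants w.r.t. theta
    advantage = e_loc - e_loc.mean()         # baseline-subtracted local energies
    return 2.0 * (advantage * log_psi(r)).mean()
```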
Variance matching
As far as relative energies are concerned, most computational chemistry methods rely heavily on the cancellation of errors. While quantum Monte Carlo methods using neural network-based trial wavefunctions provide highly accurate total energies, the flexibility of these ansatzes is difficult to control, which can lead to varying qualities of approximations for different states. In order to account for potential imbalances we utilize the variance of the wavefunctions as a measure of the quality of the approximation (zero-variance principle) and employ a variance-matching scheme. Variance-matching techniques, as well as variance extrapolation, have typically been applied by optimizing a family of ansatzes and comparing variances across the optimized wavefunctions53. Instead of training multiple ansatzes, we checkpoint wavefunctions during the training and compute excitation energies by rewinding the ground state to match the variance of the excited state, as depicted in Fig. 8. The mean and variance of each wavefunction are computed over the batch dimension at each step in training and smoothed with an exponential moving average. For the final estimation of excitation energies, the respective wavefunctions are then sampled exhaustively as in the usual evaluation process. While the variance matching hardly impacts the excitation energies for small systems, it becomes increasingly relevant for larger and harder-to-optimize systems, such as benzene.
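A possible implementation of this matching step is sketched below (NumPy); the checkpoint data structure and smoothing constant are illustrative assumptions, not the published code.

```python
import numpy as np

def variance_matched_checkpoint(history_ground, history_excited, beta=0.99):
    """Pick the ground-state checkpoint whose smoothed energy variance best
    matches the final excited-state variance.

    history_* : list of per-checkpoint arrays of batch local energies
    """
    def smoothed_variances(history):
        ema, out = None, []
        for e_loc in history:
            var = np.var(e_loc)
            ema = var if ema is None else beta * ema + (1 - beta) * var
            out.append(ema)
        return np.array(out)

    var_ground = smoothed_variances(history_ground)
    var_excited_final = smoothed_variances(history_excited)[-1]
    return int(np.argmin(np.abs(var_ground - var_excited_final)))   # checkpoint index
```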
Spin treatment
PauliNet encodes only the spatial part of the wavefunction and its like-spin antisymmetry explicitly12, while the spin part, which guarantees the opposite-spin antisymmetry, is only implicit. Every spin-assigned spatial ansatz such as PauliNet is always an eigenstate of $\hat{S}_z$ with an eigenvalue of $(N_\uparrow - N_\downarrow)/2$, but it may not be an eigenstate of $\hat{S}^2$. The spatial part of eigenstates of $\hat{S}^2$ is characterized by specific sets of permutational symmetries involving opposite-spin electrons55. PauliNet does not enforce these symmetries but instead attempts to learn them through the variational principle, because eigenstates of the (spin-independent) Hamiltonian are also eigenstates of $\hat{S}^2$. Therefore, we do not, in general, control the spin of the eigenstates found in the optimization procedure—they are simply found in the order of increasing energy, independent of spin. The spin of a found eigenstate can in principle be obtained by Monte Carlo sampling56. Whether a particular spin state is found in practice may be influenced by the spin of the CASSCF baseline wavefunction, which we therefore report in Supplementary Table VIII. In special cases, we may wish to target a specific spin state (e.g., see the section "Multi-reference application: conical intersections"), and for that, we can take advantage of the orbital-assigned backflow of PauliNet. Combined with the freezing of the determinant coefficients, this ensures that PauliNet remains in the same spin state as the CASSCF baseline wavefunction.
Acknowledgements
We thank Tim Gould (Griffith U) for early discussions about variational principles for excited states. Funding is gratefully acknowledged from the Berlin mathematics center MATH+ (Projects AA1-6, AA2-8), European Commission (ERC CoG 772230), Deutsche Forschungsgemeinschaft (NO825/3-2), and the Berlin Institute for Foundations in Learning and Data (BIFOLD).
Author contributions
M.T.E., Z.S., J.H., and F.N. designed the research. M.T.E., Z.S., P.A.E., J.H., and F.N. developed the method. M.T.E., Z.S., and J.H. wrote the computer code. M.T.E. and Z.S. carried out the numerical calculations. M.T.E., Z.S., P.A.E., J.H., and F.N. analyzed the data. M.T.E., Z.S., J.H., and F.N. wrote the manuscript.
Peer review
Peer review information
Nature Communications thanks Xiang Li and the anonymous reviewer(s) for their contribution to the peer review of this work.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Data availability
The dataset generated in this study is openly available in Zenodo (10.5281/zenodo.7274855). Source data are provided with this paper.
Code availability
The computer code used in this study is openly available in Zenodo (10.5281/zenodo.7347937).
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: M.T. Entwistle, Z. Schätzle.
Contributor Information
J. Hermann, Email: science@jan.hermann.name
F. Noé, Email: franknoe@microsoft.com
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-022-35534-5.
References
- 1.Lindh, R. & González, L. Quantum Chemistry and Dynamics of Excited States: Methods and Applications (John Wiley & Sons, 2020).
- 2.Kohn W, Sham LJ. Self-consistent equations including exchange and correlation effects. Phys. Rev. 1965;140:A1133. doi: 10.1103/PhysRev.140.A1133.
- 3.Runge E, Gross EKU. Density-functional theory for time-dependent systems. Phys. Rev. Lett. 1984;52:997. doi: 10.1103/PhysRevLett.52.997.
- 4.Elliott P, Fuks JI, Rubio A, Maitra NT. Universal dynamical steps in the exact time-dependent exchange-correlation potential. Phys. Rev. Lett. 2012;109:266404. doi: 10.1103/PhysRevLett.109.266404.
- 5.Fuks JI, Luo K, Sandoval ED, Maitra NT. Time-resolved spectroscopy in time-dependent density functional theory: an exact condition. Phys. Rev. Lett. 2015;114:183002. doi: 10.1103/PhysRevLett.114.183002.
- 6.Suzuki Y, Lacombe L, Watanabe K, Maitra NT. Exact time-dependent exchange-correlation potential in electron scattering processes. Phys. Rev. Lett. 2017;119:263401. doi: 10.1103/PhysRevLett.119.263401.
- 7.Singh N, Elliott P, Nautiyal T, Dewhurst JK, Sharma S. Adiabatic generalized gradient approximation kernel in time-dependent density functional theory. Phys. Rev. B. 2019;99:035151. doi: 10.1103/PhysRevB.99.035151.
- 8.Maitra NT. Charge transfer in time-dependent density functional theory. J. Phys.: Condens. Matter. 2017;29:423001. doi: 10.1088/1361-648X/aa836e.
- 9.Ullrich CA, Tokatly IV. Nonadiabatic electron dynamics in time-dependent density-functional theory. Phys. Rev. B. 2006;73:235102. doi: 10.1103/PhysRevB.73.235102.
- 10.Szalay PG, Müller T, Gidofalvi G, Lischka H, Shepard R. Multiconfiguration self-consistent field and multireference configuration interaction methods and applications. Chem. Rev. 2012;112:108. doi: 10.1021/cr200137a.
- 11.Sneskov K, Christiansen O. Excited state coupled cluster methods. WIREs Comput. Mol. Sci. 2011;2:566. doi: 10.1002/wcms.99.
- 12.Foulkes WMC, Mitas L, Needs RJ, Rajagopal G. Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. 2001;73:33. doi: 10.1103/RevModPhys.73.33.
- 13.Williams KT, et al. Direct comparison of many-body methods for realistic electronic Hamiltonians. Phys. Rev. X. 2020;10:011041.
- 14.Morales MA, McMinis J, Clark BK, Kim J, Scuseria GE. Multideterminant wave functions in quantum Monte Carlo. J. Chem. Theory Comput. 2012;8:2181. doi: 10.1021/ct3003404.
- 15.Benali A, et al. Toward a systematic improvement of the fixed-node approximation in diffusion Monte Carlo for solids—a case study in diamond. J. Chem. Phys. 2020;153:184111. doi: 10.1063/5.0021036.
- 16.Austin BM, Zubarev DY, Lester WA. Quantum Monte Carlo and related approaches. Chem. Rev. 2012;112:263. doi: 10.1021/cr2001564.
- 17.Ceperley DM, Bernu B. The calculation of excited state properties with quantum Monte Carlo. J. Chem. Phys. 1988;89:6316. doi: 10.1063/1.455398.
- 18.Blunt NS, Smart SD, Booth GH, Alavi A. An excited-state approach within full configuration interaction quantum Monte Carlo. J. Chem. Phys. 2015;143:134117. doi: 10.1063/1.4932595.
- 19.Send R, Valsson O, Filippi C. Electronic excitations of simple cyanine dyes: reconciling density functional and wave function methods. J. Chem. Theory Comput. 2011;7:444. doi: 10.1021/ct1006295.
- 20.Dash M, Feldt J, Moroni S, Scemama A, Filippi C. Excited states with selected configuration interaction-quantum Monte Carlo: chemically accurate excitation energies and geometries. J. Chem. Theory Comput. 2019;15:4896. doi: 10.1021/acs.jctc.9b00476.
- 21.Pineda Flores SD, Neuscamman E. Excited state specific multi-Slater Jastrow wave functions. J. Phys. Chem. A. 2019;123:1487. doi: 10.1021/acs.jpca.8b10671.
- 22.Zhao L, Neuscamman E. An efficient variational principle for the direct optimization of excited states. J. Chem. Theory Comput. 2016;12:3436. doi: 10.1021/acs.jctc.6b00508.
- 23.Shea JAR, Neuscamman E. Size consistent excited states via algorithmic transformations between variational principles. J. Chem. Theory Comput. 2017;13:6078. doi: 10.1021/acs.jctc.7b00923.
- 24.Blunt NS, Neuscamman E. Excited-state diffusion Monte Carlo calculations: a simple and efficient two-determinant ansatz. J. Chem. Theory Comput. 2019;15:178. doi: 10.1021/acs.jctc.8b00879.
- 25.Choo K, Carleo G, Regnault N, Neupert T. Symmetries and many-body excitations with neural-network quantum states. Phys. Rev. Lett. 2018;121:167204. doi: 10.1103/PhysRevLett.121.167204.
- 26.Pathak S, Busemeyer B, Rodrigues JNB, Wagner LK. Excited states in variational Monte Carlo using a penalty method. J. Chem. Phys. 2021;154:034101. doi: 10.1063/5.0030949.
- 27.Hermann J, Schätzle Z, Noé F. Deep-neural-network solution of the electronic Schrödinger equation. Nat. Chem. 2020;12:891. doi: 10.1038/s41557-020-0544-y.
- 28.Pfau D, Spencer JS, Matthews AGDG, Foulkes WMC. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Phys. Rev. Res. 2020;2:033429. doi: 10.1103/PhysRevResearch.2.033429.
- 29.Choo K, Mezzacapo A, Carleo G. Fermionic neural-network states for ab-initio electronic structure. Nat. Commun. 2020;11:2368. doi: 10.1038/s41467-020-15724-9.
- 30.Han J, Zhang L, Weinan E. Solving many-electron Schrödinger equation using deep neural networks. J. Comput. Phys. 2019;399:108929. doi: 10.1016/j.jcp.2019.108929.
- 31.Spencer, J. S., Pfau, D., Botev, A. & Foulkes, W. M. C. Better, faster fermionic neural networks. Preprint at https://arxiv.org/abs/2011.07125 (2021).
- 32.Scherbela, M., Reisenhofer, R., Gerard, L., Marquetand, P. & Grohs, P. Solving the electronic Schrödinger equation for multiple nuclear geometries with weight-sharing deep neural networks. Nat. Comput. Sci. 2, 331–341 (2022).
- 33.Gao, N. & Günnemann, S. Ab-initio potential energy surfaces by pairing GNNs with neural wave functions. Preprint at https://arxiv.org/abs/2110.05064v2 (2021).
- 34.Carleo G, Troyer M. Solving the quantum many-body problem with artificial neural networks. Science. 2017;355:602. doi: 10.1126/science.aag2302.
- 35.Westermayr J, Marquetand P. Machine learning for electronically excited states of molecules. Chem. Rev. 2021;121:9873. doi: 10.1021/acs.chemrev.0c00749.
- 36.Johnson III, R. D. NIST Computational Chemistry Comparison and Benchmark Database. NIST Standard Reference Database Number 101 (2021). doi: 10.18434/T47C7Z.
- 37.Brémond E, Savarese M, Adamo C, Jacquemin D. Accuracy of TD-DFT geometries: a fresh look. J. Chem. Theory Comput. 2018;14:3715. doi: 10.1021/acs.jctc.8b00311.
- 38.Tajti A, Szalay PG. Accuracy of spin-component-scaled CC2 excitation energies and potential energy surfaces. J. Chem. Theory Comput. 2019;15:5523. doi: 10.1021/acs.jctc.9b00676.
- 39.Tajti A, Tulipán L, Szalay PG. Accuracy of spin-component scaled ADC(2) excitation energies and potential energy surfaces. J. Chem. Theory Comput. 2020;16:468. doi: 10.1021/acs.jctc.9b01065.
- 40.Crossley R. Fifteen years on—the calculation of atomic transition probabilities revisited. Phys. Scr. 1984;T8:117. doi: 10.1088/0031-8949/1984/T8/020.
- 41.Loos P-F, et al. A mountaineering strategy to excited states: highly accurate reference energies and benchmarks. J. Chem. Theory Comput. 2018;14:4360. doi: 10.1021/acs.jctc.8b00406.
- 42.Chrayteh A, Blondel A, Loos P-F, Jacquemin D. Mountaineering strategy to excited states: highly accurate oscillator strengths and dipole moments of small molecules. J. Chem. Theory Comput. 2021;17:416. doi: 10.1021/acs.jctc.0c01111.
- 43.Bhattacharya D, Vaval N, Pal S. Electronic transition dipole moments and dipole oscillator strengths within Fock-space multi-reference coupled cluster framework: an efficient and novel approach. J. Chem. Phys. 2013;138:094108. doi: 10.1063/1.4793277.
- 44.Olsen J, De Meŕas AM, Jensen HJA, Jørgensen P. Excitation energies, transition moments and dynamic polarizabilities for CH+. A comparison of multiconfigurational linear response and full configuration interaction calculations. Chem. Phys. Lett. 1989;154:380. doi: 10.1016/0009-2614(89)85373-4.
- 45.Doering JP. Low-energy electron-impact study of the first, second, and third triplet states of benzene. J. Chem. Phys. 1969;51:2866. doi: 10.1063/1.1672424.
- 46.Eriksen JJ, et al. The ground state electronic energy of benzene. J. Phys. Chem. Lett. 2020;11:8922. doi: 10.1021/acs.jpclett.0c02621.
- 47.Krylov AI. Spin-flip configuration interaction: an electronic structure model that is both variational and size-consistent. Chem. Phys. Lett. 2001;350:522. doi: 10.1016/S0009-2614(01)01316-1.
- 48.Mališ M, Luber S. Trajectory surface hopping nonadiabatic molecular dynamics with Kohn–Sham ΔSCF for condensed-phase systems. J. Chem. Theory Comput. 2020;16:4071. doi: 10.1021/acs.jctc.0c00372.
- 49.Barbatti, M. & Crespo-Otero, R. Surface hopping dynamics with DFT excited states. In Density-Functional Methods for Excited States (eds Ferré, N., Filatov, M. & Huix-Rotllant, M.) 415–444 (Springer International Publishing, 2014).
- 50.Barbatti M, Paier J, Lischka H. Photochemistry of ethylene: a multireference configuration interaction investigation of the excited-state energy surfaces. J. Chem. Phys. 2004;121:11614. doi: 10.1063/1.1807378.
- 51.Schätzle Z, Hermann J, Noé F. Convergence to the fixed-node limit in deep variational Monte Carlo. J. Chem. Phys. 2021;154:124108. doi: 10.1063/5.0032836.
- 52.Otis L, Craig I, Neuscamman E. A hybrid approach to excited-state-specific variational Monte Carlo and doubly excited states. J. Chem. Phys. 2020;153:234105. doi: 10.1063/5.0024572.
- 53.Robinson PJ, Pineda Flores SD, Neuscamman E. Excitation variance matching with limited configuration interaction expansions in variational Monte Carlo. J. Chem. Phys. 2017;147:164114. doi: 10.1063/1.5008743.
- 54.Garner SM, Neuscamman E. A variational Monte Carlo approach for core excitations. J. Chem. Phys. 2020;153:144108. doi: 10.1063/5.0020310.
- 55.Pauncz, R. Spin Eigenfunctions (Springer, New York, NY).
- 56.Huang C-J, Filippi C, Umrigar CJ. Spin contamination in quantum Monte Carlo wave functions. J. Chem. Phys. 1998;108:8838. doi: 10.1063/1.476330.
- 57.Bande A, Nakashima H, Nakatsuji H. LiH potential energy curves for ground and excited states with the free complement local Schrödinger equation method. Chem. Phys. Lett. 2010;496:347. doi: 10.1016/j.cplett.2010.07.041.
- 58.Jasik P, Sienkiewicz JE, Domsta J, Henriksen NE. Electronic structure and time-dependent description of rotational predissociation of LiH. Phys. Chem. Chem. Phys. 2017;19:19777. doi: 10.1039/C7CP02097J.
- 59.Pitarch-Ruiz J, Sánchez-Marín J, Velasco AM. Full configuration interaction calculation of the low lying valence and Rydberg states of BeH. J. Comput. Chem. 2008;29:523. doi: 10.1002/jcc.20811.
- 60.O'Neill DP, Gill PMW. Benchmark correlation energies for small molecules. Mol. Phys. 2005;103:763. doi: 10.1080/00268970512331339323.
- 61.Giner E, Traore D, Pradines B, Toulouse J. Self-consistent density-based basis-set correction: how much do we lower total energies and improve dipole moments? J. Chem. Phys. 2021;155:044109. doi: 10.1063/5.0057957.
- 62.Larsen H, Hald K, Olsen J, Jørgensen P. Triplet excitation energies in full configuration interaction and coupled-cluster theory. J. Chem. Phys. 2001;115:3015. doi: 10.1063/1.1386415.
- 63.Kowalski K, Piecuch P. Excited-state potential energy curves of CH+: a comparison of the EOMCCSDt and full EOMCCSDT results. Chem. Phys. Lett. 2001;347:237. doi: 10.1016/S0009-2614(01)01010-7.
- 64.Kowalski K, Piecuch P. The active-space equation-of-motion coupled-cluster methods for excited electronic states: full EOMCCSDt. J. Chem. Phys. 2001;115:643. doi: 10.1063/1.1378323.
- 65.Cronstrand P, Jansik B, Jonsson D, Luo Y, Ågren H. Density functional response theory calculations of three-photon absorption. J. Chem. Phys. 2004;121:9239. doi: 10.1063/1.1804175.
- 66.Sałek P, et al. A comparison of density-functional-theory and coupled-cluster frequency-dependent polarizabilities and hyperpolarizabilities. Mol. Phys. 2005;103:439. doi: 10.1080/00268970412331319254.
- 67.Liu F, et al. A parallel implementation of the analytic nuclear gradient for time-dependent density functional theory within the Tamm–Dancoff approximation. Mol. Phys. 2010;108:2791. doi: 10.1080/00268976.2010.526642.
- 68.Adamo C, Scuseria GE, Barone V. Accurate excitation energies from time-dependent density functional theory: assessing the PBE0 model. J. Chem. Phys. 1999;111:2889. doi: 10.1063/1.479571.
- 69.Biglari Z, Shayesteh A, Maghari A. Ab initio potential energy curves and transition dipole moments for the low-lying states of CH+. Comput. Theor. Chem. 2014;1047:22. doi: 10.1016/j.comptc.2014.08.012.
- 70.Barysz M. Fock space multi-reference coupled cluster study of transition moment and oscillator strength. Theor. Chim. Acta. 1995;90:257. doi: 10.1007/BF01113471.
- 71.Lane JR, Vaida V, Kjaergaard HG. Calculated electronic transitions of the water ammonia complex. J. Chem. Phys. 2008;128:034302. doi: 10.1063/1.2814163.
- 72.Tawada Y, Tsuneda T, Yanagisawa S, Yanai T, Hirao K. A long-range-corrected time-dependent density functional theory. J. Chem. Phys. 2004;120:8425. doi: 10.1063/1.1688752.
- 73.Douglass CH, Nelson HH, Rice JK. Spectra, radiative lifetimes, and band oscillator strengths of the A1Π−X1Σ+ transition of BH. J. Chem. Phys. 1989;90:6940. doi: 10.1063/1.456269.
- 74.Mahan, B. & O'Keefe, A. Radiative lifetimes of excited electronic states in molecular ions. Astrophys. J. 248, 1209–1216 (1981).
- 75.Thorn PA, et al. Cross sections and oscillator strengths for electron-impact excitation of the Ã1B1 electronic state of water. J. Chem. Phys. 2007;126:064306. doi: 10.1063/1.2434166.
- 76.Chen T, Liu YW, Du XJ, Xu YC, Zhu LF. Oscillator strengths and integral cross sections of the Ã–X̃1A1 excitation of ammonia studied by fast electron impact. J. Chem. Phys. 2019;150:064311. doi: 10.1063/1.5083933.
- 77.Kang X, et al. Oscillator strength measurement for the A(0–6)–X(0), C(0)–X(0), and E(0)–X(0) transitions of CO by the dipole (γ, γ) method. Astrophys. J. 2015;807:96. doi: 10.1088/0004-637X/807/1/96.
- 78.Loos P-F, Lipparini F, Boggio-Pasqua M, Scemama A, Jacquemin D. A mountaineering strategy to excited states: highly accurate energies and benchmarks for medium sized molecules. J. Chem. Theory Comput. 2020;16:1711. doi: 10.1021/acs.jctc.9b01216.
- 79.Lorentzon J, Malmqvist P-Å, Fülscher M, Roos BO. A CASPT2 study of the valence and lowest Rydberg electronic states of benzene and phenol. Theor. Chim. Acta. 1995;91:91. doi: 10.1007/BF01113865.
- 80.Fdez. Galván I, et al. OpenMolcas: from source code to insight. J. Chem. Theory Comput. 2019;15:5925. doi: 10.1021/acs.jctc.9b00532.