Nat Commun. 2023 Feb 3;14:579. doi: 10.1038/s41467-023-36329-y

Table 3.

Comparison of models on the QM9 dataset, measured by the MAE (in meV) on the targets U0, U, H, and G.

| Model | U0 | U | H | G |
|---|---|---|---|---|
| SchNet [25] | 14 | 19 | 14 | 14 |
| DimeNet++ [77] | 6.3 | 6.3 | 6.5 | 7.6 |
| Cormorant [23] | 22 | 21 | 21 | 20 |
| LieConv [78] | 19 | 19 | 24 | 22 |
| L1Net [79] | 13.5 | 13.8 | 14.4 | 14.0 |
| SphereNet [80] | 6.3 | 7.3 | 6.4 | 8.0 |
| EGNN [40] | 11 | 12 | 12 | 12 |
| ET [38] | 6.2 | 6.3 | 6.5 | 7.6 |
| NoisyNodes [81] | 7.3 | 7.6 | 7.4 | 8.3 |
| PaiNN [27] | 5.9 | 5.7 | 6.0 | 7.4 |
| Allegro, 1 layer | 5.7 (0.3) | 5.3 | 5.3 | 6.6 |
| Allegro, 3 layers | **4.7 (0.2)** | **4.4** | **4.4** | **5.7** |

Allegro outperforms all existing atom-centered message-passing and transformer-based models, notably even with a single layer. Best methods are shown in bold.
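For reference, the MAE values reported in the table would be computed as below. This is an illustrative sketch only: the `mae_mev` helper and the example energy arrays are hypothetical, not data or code from the paper; it simply shows the mean-absolute-error metric with the eV-to-meV conversion implied by the table's units.

```python
import numpy as np

def mae_mev(predicted_ev, reference_ev):
    """Mean absolute error between predicted and reference energies, in meV.

    Inputs are assumed to be in eV; the factor of 1000 converts to meV,
    matching the units used in Table 3.
    """
    predicted_ev = np.asarray(predicted_ev, dtype=float)
    reference_ev = np.asarray(reference_ev, dtype=float)
    return 1000.0 * np.mean(np.abs(predicted_ev - reference_ev))

# Made-up example values (eV), purely for illustration:
pred = [-76.402, -40.519, -113.310]
ref = [-76.400, -40.515, -113.305]
print(round(mae_mev(pred, ref), 1))  # → 3.7
```

A single-number summary like this is what each cell in the table reports, so e.g. the 4.7 meV entry for Allegro (3 layers) on U0 means the model's predicted U0 energies deviate from the reference values by 4.7 meV on average.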