2022 Nov;36(5-6):587–602. doi: 10.1177/10943420221121804

Figure 8.

Weak scaling of molecule language model pre-training on Summit for a constant problem size of 395 thousand molecules per GPU. I/O operations consist of saving checkpoints and trained models.