Patterns. 2023 Feb 10;4(2):100678. doi: 10.1016/j.patter.2023.100678

Figure 6. Optimization workflows for various generative model categories

Note that all model classes, except conditional generation, involve a scoring step and are designed to be iterative. The reward calculation step in reinforcement learning and the selection step in distribution learning and genetic algorithms are analogous to an acquisition function in multi-objective Bayesian optimization. While the termination criterion is not explicitly shown for distribution learning, genetic algorithms, and reinforcement learning, these iterative loops can accommodate various stopping criteria. We also emphasize that while an autoencoder architecture is depicted in both distribution learning and conditional generation, these generators can also be recurrent neural networks or other generative architectures.
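
As a schematic illustration of the shared generate-score-select loop underlying these workflows, a minimal Python sketch is given below. The function names (`generate_candidates`, `score`, `select`) and the stopping threshold are placeholders introduced for illustration, not components specified in the figure; the selection step stands in for the acquisition-function-like role described above.

```python
import random


def generate_candidates(generator, n=100):
    """Placeholder generation step: sample n candidates from any generator
    (autoencoder decoder, RNN, genetic-algorithm mutation, etc.)."""
    return [generator() for _ in range(n)]


def score(candidate):
    """Placeholder scoring/reward step (property predictor, docking score, reward)."""
    return random.random()


def select(scored, k=10):
    """Selection step analogous to an acquisition function:
    keep the k highest-scoring candidates."""
    return [c for c, s in sorted(scored, key=lambda x: x[1], reverse=True)[:k]]


def optimize(generator, n_iters=20, target=0.99):
    """Generic iterative loop shared by the model classes in the figure."""
    best = None
    for _ in range(n_iters):
        candidates = generate_candidates(generator)          # generation
        scored = [(c, score(c)) for c in candidates]          # scoring / reward
        elites = select(scored)                               # selection / acquisition-like step
        best = max(scored, key=lambda x: x[1])
        if best[1] >= target:                                 # stopping criterion (not shown in figure)
            break
        # in a real workflow, `elites` would be used to retrain or bias the generator
    return best


# usage sketch with a dummy generator producing placeholder candidates
best_candidate, best_score = optimize(lambda: "candidate-placeholder")
```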