. Author manuscript; available in PMC: 2023 Mar 1.
Published in final edited form as: IEEE Signal Process Mag. 2022 Feb 24;39(2):28–44. doi: 10.1109/msp.2021.3119273

Fig. 5.

Geometric view of deep generative models. A fixed distribution ζ in Z is pushed forward to μθ in X by the network Gθ, so that the mapped distribution μθ approaches the real distribution μ. In a VAE, Gθ acts as a decoder that generates samples, while Fϕ acts as an encoder that additionally constrains ζϕ to be as close as possible to ζ. Under this geometric view, auto-encoding generative models (e.g., VAE) and GAN-based generative models can be seen as variants of this single illustration.
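The pushforward idea in the caption can be sketched numerically: samples from a fixed latent distribution ζ are mapped through a generator Gθ, and the resulting samples represent the model distribution μθ. The affine generator below is a hypothetical toy stand-in (not the paper's network), chosen so the pushforward distribution is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def G_theta(z, scale=2.0, shift=1.0):
    """Toy generator: pushes latent samples from Z into data space X.

    A real G_theta would be a deep network; an affine map is used here
    only so the pushforward distribution is analytically known.
    """
    return scale * z + shift

# zeta: a fixed standard-normal latent distribution in Z,
# represented by samples.
z = rng.standard_normal(10_000)

# mu_theta: the pushforward distribution G_theta # zeta in X,
# again represented by samples.
x = G_theta(z)

# The pushforward of N(0, 1) under z -> 2z + 1 is N(1, 4),
# so the sample mean and standard deviation should be near 1 and 2.
print(x.mean(), x.std())
```

Training a generative model then amounts to adjusting the parameters of Gθ (here, `scale` and `shift`) so that μθ matches the real data distribution μ, as measured by the chosen divergence or adversarial objective.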