Front Artif Intell. 2020 Jun 9;3:30. doi: 10.3389/frai.2020.00030

Table 5. Proposed correspondences between features of variational autoencoders and predictive processing.

| Variational autoencoder features | Proposed correspondences in predictive processing |
| --- | --- |
| Encoder network | Ascending hierarchy of superficial pyramidal neurons; message-passing at gamma frequencies |
| Generative decoder network | Descending hierarchy of deep pyramidal neurons; beliefs propagated at beta frequencies |
| Reduced-dimensionality bottleneck | Association cortices and deeper portions of generative models; estimates calculated at beta, alpha, and theta frequencies |
| Mean vectors | Activity levels of neuronal populations at different parts of the hierarchy |
| Variance vectors | Variability of neuronal population activity |
| Sampling from latent feature space | Large-scale synchronous complexes at beta, alpha, and theta frequencies; “ignition” events |
| Training: minimizing reconstruction loss between the encoder's input layer and the generative decoder's output layer, while also minimizing divergence from a unit Gaussian, weighted by a disentangling parameter | Training: minimizing precision-weighted prediction errors at all layers simultaneously; precision-weighting as analogous to the disentanglement hyperparameter; many mechanisms involved, including synchronous gain control and diffuse neuromodulatory systems |
| Potential for sequential organization via recurrent network controllers (Ha and Schmidhuber, 2018) | Organization of state transitions by the hippocampal system and frontal cortices (Koster et al., 2018) |
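For concreteness, the training objective referenced in the penultimate row can be written in the standard β-VAE form, where the "disentangling parameter" in the left column corresponds to β; this is the conventional formulation rather than a notation taken from the table itself:

$$
\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\Vert\, \mathcal{N}(0, I)\right)
$$

Maximizing this objective amounts to minimizing reconstruction loss plus a β-weighted KL divergence from the unit Gaussian prior; setting β > 1 encourages disentangled latent factors, the quantity the table analogizes to precision-weighting in predictive processing.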
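A minimal sketch, assuming PyTorch, of a β-VAE exhibiting each feature in the table's left column. All names and layer sizes (BetaVAE, x_dim, h_dim, z_dim) are illustrative assumptions, not the architecture used in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Illustrative beta-VAE; dimensions are hypothetical placeholders."""
    def __init__(self, x_dim=784, h_dim=400, z_dim=20, beta=4.0):
        super().__init__()
        self.beta = beta                       # "disentangling parameter"
        self.enc = nn.Linear(x_dim, h_dim)     # encoder network ("ascending hierarchy")
        self.mu = nn.Linear(h_dim, z_dim)      # mean vectors
        self.logvar = nn.Linear(h_dim, z_dim)  # (log-)variance vectors
        self.dec1 = nn.Linear(z_dim, h_dim)    # generative decoder ("descending hierarchy")
        self.dec2 = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterized sampling from the reduced-dimensionality latent bottleneck
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return x_hat, mu, logvar

    def loss(self, x, x_hat, mu, logvar):
        # Reconstruction loss between encoder input and decoder output
        recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
        # KL divergence from a unit Gaussian, scaled by beta
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + self.beta * kl
```

Adding a recurrent controller over the latents z, as in Ha and Schmidhuber's (2018) world models, would supply the sequential organization mentioned in the table's final row.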