Table 1.
Generic variables and quantities in the free-energy formulation of active inference, under the Laplace assumption (i.e. generalised predictive coding)
| Quantity | Description |
| --- | --- |
| Generative model or agent | In the free-energy formulation, each agent or system is taken to be a model of the environment in which it is immersed. This quantity corresponds to the form (e.g. degrees of freedom) of the model entailed by an agent, which is used to predict sensory signals. |
| Action | These variables are states of the world that correspond to the movement or configuration of an agent (i.e. its effectors). |
| Sensory signals | These generalised sensory signals or samples comprise the sensory states and their velocity, acceleration and temporal derivatives to high order. In other words, they correspond to the trajectory of an agent's sensations. |
| Surprise | This is a scalar function of sensory samples that reports the improbability of sampling some signals, under a generative model of how those signals were caused. It is sometimes called (sensory) surprisal or self-information; in statistics, it is known as the negative log-evidence for the model. |
| Entropy | Sensory entropy is, under ergodic assumptions, proportional to the long-term time average of surprise. |
| Gibbs energy | This is the negative log of the density specified by the generative model; namely, surprise about the joint occurrence of sensory samples and their causes. |
| Free-energy | This is a scalar function of sensory samples and a recognition density, which upper-bounds surprise. It is called free-energy because it is the expected Gibbs energy minus the entropy of the recognition density. Under a Gaussian (Laplace) assumption about the form of the recognition density, free-energy reduces to a simple function of Gibbs energy (see the sketch following the table). |
| Free-action | This is a scalar functional of sensory samples and a recognition density, which upper-bounds the entropy of sensory signals. It is the time or path integral of free-energy. |
| Recognition density | This is also known as a proposal density; it becomes (approximates) the conditional density over the hidden causes of sensory samples when free-energy is minimised. Under the Laplace assumption, it is specified by its conditional expectation and covariance. |
| True (bold) and hidden (italics) causes | These quantities cause sensory signals. The true quantities exist in the environment, while their hidden homologues are those assumed by the generative model of that environment. Both are partitioned into time-dependent variables and time-invariant parameters. |
| Hidden parameters | These are the parameters of the mappings (e.g. equations of motion) that constitute the deterministic part of a generative model. |
| Log-precisions | These parameters control the precision (inverse variance) of the fluctuations that constitute the random part of a generative model. |
| Hidden states | These hidden variables encode the hierarchical states in a generative model of dynamics in the world. |
| Hidden causes | These hidden variables link different levels of a hierarchical generative model. |
| Deterministic mappings | These are the equations, at the i-th level of a hierarchical generative model, that map states at one level to another and map hidden states to their motion within each level. They specify the deterministic part of a generative model. |
| Random fluctuations | These are the random fluctuations on the hidden causes and the motion of hidden states. Gaussian assumptions about these fluctuations furnish the probabilistic part of a generative model. |
| Precision matrices | These are the inverse covariances among the (generalised) random fluctuations on the hidden causes and the motion of hidden states. |
| Roughness matrices | These are the inverses of the matrices encoding serial correlations among the (generalised) random fluctuations on the hidden causes and the motion of hidden states. |
| Prediction errors | These are the prediction errors on the hidden causes and the motion of hidden states, evaluated at their current conditional expectations. |
| Precision-weighted prediction errors | These are the prediction errors weighted by their respective precisions (illustrated in the sketch following the table). |
See main text for details
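The table's symbol column is not reproduced above, so the following is a hedged sketch of how the scalar quantities it describes are conventionally written in the active-inference literature. The symbols $\tilde{s}$ (generalised sensory signals), $\tilde{\psi}$ (hidden causes), $\tilde{\mu}$ (conditional expectations), $q$ (recognition density) and $m$ (generative model) are assumptions of this sketch, not notation taken from the source.

```latex
% Hedged sketch, assuming common active-inference notation
% (not necessarily the source's own symbol column).
\begin{align}
  % Surprise: negative log-evidence for the model
  \mathcal{L}(\tilde{s}) &= -\ln p(\tilde{s} \mid m) \\
  % Sensory entropy: long-term time average of surprise (ergodicity)
  H(S \mid m) &\propto \lim_{T \to \infty} \frac{1}{T}
    \int_{0}^{T} \mathcal{L}\big(\tilde{s}(t)\big)\, dt \\
  % Gibbs energy: surprise about the joint occurrence of samples and causes
  G(\tilde{s}, \tilde{\psi}) &= -\ln p(\tilde{s}, \tilde{\psi} \mid m) \\
  % Free-energy: expected Gibbs energy minus recognition entropy,
  % which upper-bounds surprise
  F &= \mathbb{E}_{q(\tilde{\psi})}\big[G(\tilde{s}, \tilde{\psi})\big]
       - H\big[q(\tilde{\psi})\big] \;\geq\; \mathcal{L}(\tilde{s}) \\
  % Laplace assumption q(\tilde{\psi}) = \mathcal{N}(\tilde{\mu}, C):
  % free-energy reduces to a simple function of Gibbs energy
  % (up to additive constants, with G_{\mu\mu} the curvature of G)
  F &\approx G(\tilde{s}, \tilde{\mu})
       + \tfrac{1}{2}\ln\big|G_{\tilde{\mu}\tilde{\mu}}\big| \\
  % Free-action: path integral of free-energy, bounding sensory entropy
  \bar{F} &= \int F\, dt
\end{align}
```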
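As a complementary numerical illustration, here is a minimal sketch of free-energy minimisation for a toy one-level linear-Gaussian generative model, showing how precision-weighted prediction errors drive the update of the conditional expectation. Everything here (the mapping `g`, the prior mean `eta`, the precisions `p_s` and `p_psi`, and the step size) is an illustrative assumption, not the source's model.

```python
import numpy as np

# Toy one-level generative model (illustrative, not the source's):
#   s = g(psi) + noise,  psi ~ N(eta, 1/p_psi),  noise ~ N(0, 1/p_s)

def g(psi):
    """Deterministic mapping from hidden cause to predicted signal."""
    return 2.0 * psi

def gibbs_energy(s, psi, eta, p_s, p_psi):
    """G(s, psi) = -ln p(s, psi | m), up to additive constants."""
    eps_s = s - g(psi)      # sensory prediction error
    eps_psi = psi - eta     # prediction error on the hidden cause
    # Squared errors weighted by their precisions (inverse variances)
    return 0.5 * (p_s * eps_s**2 + p_psi * eps_psi**2
                  - np.log(p_s) - np.log(p_psi))

def laplace_free_energy(s, mu, eta, p_s, p_psi):
    """F ~= G(s, mu) + 0.5 * ln|G_mumu|, up to additive constants."""
    # Curvature of G w.r.t. the hidden cause (exact for this linear g)
    G_mumu = p_s * 2.0**2 + p_psi
    return gibbs_energy(s, mu, eta, p_s, p_psi) + 0.5 * np.log(G_mumu)

# Gradient descent of the conditional expectation mu on free-energy.
# For linear g the curvature term is constant in mu, so minimising G
# minimises F.
s, eta, p_s, p_psi = 1.0, 0.0, 4.0, 1.0
mu = 0.0
for _ in range(100):
    eps_s, eps_psi = s - g(mu), mu - eta
    # Precision-weighted prediction errors drive the update;
    # the factor -2.0 is -dg/dmu for the linear mapping above.
    dF_dmu = -2.0 * p_s * eps_s + p_psi * eps_psi
    mu -= 0.05 * dF_dmu

print(f"posterior expectation mu = {mu:.3f}")  # analytic optimum: 8/17 ~ 0.471
print(f"free energy F = {laplace_free_energy(s, mu, eta, p_s, p_psi):.3f}")
```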