Author manuscript; available in PMC: 2009 Mar 25.
Published in final edited form as: Synthese. 2007 Dec 1;159(3):417–458. doi: 10.1007/s11229-007-9237-y

Figure 4.

Diagram showing the generative model (left) and the corresponding recognition (i.e., neuronal) model (right) used in the simulations. Left panel: the generative model comprises a single cause v(1), two dynamic states x1(1), x2(1) and four outputs y1, …, y4. The lines denote the dependencies of the variables on each other, summarised by the equations at the top (in this example, both equations were simple linear mappings). This is effectively a linear convolution model, mapping one cause to four outputs, which form the inputs to the recognition model (solid arrow). The recognition model, shown on the right, has a corresponding architecture, but here prediction-error units, ε̃u(i), provide feedback. The combination of forward (red lines) and backward (black lines) influences enables recurrent dynamics that self-organise (according to the recognition equation μ̃u(i) = h(ε̃(i), ε̃(i+1))) to suppress, and ideally eliminate, prediction error, at which point the inferred and real causes should correspond.
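The recognition dynamics described in the caption can be illustrated with a minimal sketch: a static linear generative model mapping one cause to four outputs, and a recognition scheme that descends the prediction error until the inferred cause matches the real one. The mapping `A`, the learning rate, and the iteration count are illustrative assumptions, not the parameters used in the paper's simulations (which also include dynamic states).

```python
import numpy as np

# Hypothetical linear generative mapping: one cause v -> four outputs y.
# A is an arbitrary random matrix, standing in for the paper's linear mapping.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 1))

v_true = np.array([[2.0]])      # the "real" cause
y = A @ v_true                  # the four observed outputs (noiseless here)

# Recognition dynamics: gradient descent on squared prediction error.
# eps plays the role of the prediction-error units (forward influence);
# the update of mu is the backward influence that suppresses the error.
mu = np.zeros((1, 1))           # inferred cause, initialised at zero
lr = 0.1
for _ in range(500):
    eps = y - A @ mu            # prediction error fed forward
    mu = mu + lr * (A.T @ eps)  # adjust inferred cause to reduce error

# At fixed point the prediction error is (near) zero and mu ≈ v_true.
print(float(mu))
```

When the error is fully suppressed, the inferred cause converges to the true cause, which is the correspondence the caption refers to.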