Philos Trans R Soc Lond B Biol Sci. 2014 Nov 5;369(1655):20130483. doi: 10.1098/rstb.2013.0483

Figure 1.

The DAC theory of mind and brain (see [10] for a review). Left: highly abstract representation of the DAC architecture. DAC proposes that the brain is organized as a three-layered control structure with tight coupling within and between these layers, distinguishing the soma (SL) and the reactive (RL), adaptive (AL) and contextual (CL) layers. Across these layers, a columnar organization deals with the processing of states of the world, or exteroception (left, red), of the self, or interoception (middle, blue), and of action (right, green). See text for further explanation.

The reactive layer: the RL comprises dedicated behaviour systems (BS) that combine predefined sensorimotor mappings with drive-reduction mechanisms predicated on the needs of the body (SL). Right lower panel: each BS follows homeostatic principles supporting the self-essential functions (SEF) of the body (SL). In order to map needs into behaviours, the strengths of the essential variables served by the BSs, the SEFs, have a specific distribution in task space called an 'affordance gradient'. In this example, we consider the (internally represented) 'attractive force' of the home position, supporting the security SEF, or of open space, defining the exploration SEF. The value of each SEF is defined by the difference between the sensed value of the affordance gradient (red) and its desired value given the prevailing needs (blue). The regulator of each BS defines the next action so as to perform gradient ascent on the SEF. An integration and action-selection process across the different BSs enforces a strict winner-take-all decision that defines the specific behaviour emitted. The allostatic controller of the RL regulates the internal homeostatic dynamics of the BSs to set priorities defined by needs and environmental opportunities, by modulating the affordance gradients, the desired values of the SEFs and/or the integration process (see the first code sketch after this caption).

The adaptive layer: the AL acquires a state space of the agent–environment interaction and shapes action. The learning dynamics of the AL are constrained by the SEFs of the RL, which define value. The AL crucially contributes to exosensing by allowing the processing of states of distal sensors, e.g. vision and audition, which are not predefined but rather are tuned in somatic time to the properties of the interaction with the environment. Acquired sensor and motor states are in turn associated through the valence states signalled by the RL (see the second sketch below).

The contextual layer: the core processes of the CL are divided between a task-model and a self-model. The CL expands the time horizon in which the agent can operate through the use of sequential short-term and long-term memory (STM and LTM) systems, respectively. These memory systems operate on integrated sensorimotor representations generated by the AL and acquire, retain and express goal-oriented action regulated by the RL. The CL comprises a number of processes (right upper panel; see the third sketch below):
(a) When the discrepancy between predicted and encountered sensory states falls below an STM acquisition threshold, the perceptual predictions (red circle) and motor activity (green rectangle) generated by the AL are stored in STM as a so-called segment. The STM acquisition threshold is defined by the time-averaged reconstruction error of the perceptual learning system of the AL.
(b) If a goal state (blue flag) is reached, e.g. reward or punishment, the content of STM is retained in LTM as a sequence conserving its order, goal state and valence marker, e.g. aversive or appetitive, and STM is reset. Every sequence is thus labelled with respect to the specific goal it pertains to and its valence marker.
(c) If the outputs generated by the RL and AL to action selection are sub-threshold, the AL perceptual predictions are matched against those stored in LTM.
(d) The action selected by the CL is defined as a weighted sum over the segments of LTM.
(e) The contribution of LTM segments to decision-making depends on four factors: perceptual evidence, memory chaining, the distance to the goal state and valence. The working memory (WM) of the CL is defined by the memory dynamics that represent these factors. Active segments that contributed to the selected action are associated with those that were previously active, establishing rules for future chaining.
The self-model component of the CL monitors task performance and develops (re)descriptions of task dynamics anchored in the self. In this way, the system generates meta-representational knowledge that forms autobiographical memory. This aspect of the DAC CL is not considered further in this paper.
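
The reactive-layer account above is essentially an algorithm: each behaviour system compares the sensed value of its affordance gradient with a need-dependent desired value, proposes a move by gradient ascent, and a winner-take-all step decides which behaviour is emitted. The Python sketch below illustrates that loop under simple assumptions; the class and function names, the 1-D finite-difference gradient and the 'largest unmet need wins' rule are illustrative choices, not the authors' implementation.

```python
import numpy as np

class BehaviourSystem:
    """One reactive-layer behaviour system (illustrative sketch, not the authors' code).

    affordance: callable mapping a 1-D position in task space to the sensed
                'attractive force' of its affordance gradient (e.g. home, open space).
    desired:    need-dependent set point supplied by the allostatic controller.
    """
    def __init__(self, name, affordance, desired):
        self.name = name
        self.affordance = affordance
        self.desired = desired

    def sef_error(self, x):
        # SEF value: discrepancy between the desired and the sensed affordance value.
        return self.desired - self.affordance(x)

    def proposed_step(self, x, eps=1e-3, gain=1.0):
        # Climb the affordance gradient in proportion to the unmet need
        # (gradient descent on the squared SEF error; a modelling assumption).
        grad = (self.affordance(x + eps) - self.affordance(x - eps)) / (2 * eps)
        return gain * self.sef_error(x) * grad

def winner_take_all(systems, x):
    """Strict winner-take-all selection: the BS with the largest unmet need wins
    (an assumption about how the integration process ranks the BSs)."""
    winner = max(systems, key=lambda bs: abs(bs.sef_error(x)))
    return winner.name, winner.proposed_step(x)

# Example: security (home at x = 0) competes with exploration (open space at x = 10).
security    = BehaviourSystem('security',    lambda x: np.exp(-0.5 * x ** 2),        desired=1.0)
exploration = BehaviourSystem('exploration', lambda x: np.exp(-0.5 * (x - 10) ** 2), desired=0.8)
print(winner_take_all([security, exploration], x=3.0))  # -> ('security', small negative step toward home)
```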
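
The adaptive layer is described as tuning distal-sensor representations during interaction with the environment and associating them with motor states through RL-signalled valence. Below is a minimal sketch of one such update, assuming a prototype-tuning rule and a value-gated Hebbian association; both are assumptions made here for illustration, not the learning model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, N_PROTOTYPES, N_MOTOR = 16, 8, 4
prototypes = rng.normal(size=(N_PROTOTYPES, N_SENSORS))  # acquired perceptual states
W = np.zeros((N_MOTOR, N_PROTOTYPES))                    # perceptual -> motor couplings

def adaptive_layer_step(sensor_input, rl_motor_output, valence,
                        eta_p=0.05, eta_w=0.1):
    """One AL update (illustrative): tune the best-matching perceptual prototype
    toward the current input ('in somatic time'), then strengthen its coupling to
    the RL-generated motor state, gated by the valence the RL signals."""
    activation = prototypes @ sensor_input
    k = int(np.argmax(activation))                        # best-matching acquired state
    prototypes[k] += eta_p * (sensor_input - prototypes[k])
    W[:, k] += eta_w * valence * rl_motor_output          # value-gated association
    return W[:, k].copy()                                 # AL's motor prediction for this state

# Usage: a positively valenced encounter couples the current percept to the action taken.
percept = rng.normal(size=N_SENSORS)
rl_action = np.array([1.0, 0.0, 0.0, 0.0])
print(adaptive_layer_step(percept, rl_action, valence=+1.0))
```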
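
Processes (a)-(e) of the contextual layer amount to a memory algorithm: low-prediction-error segments enter STM, STM is consolidated into LTM when a goal state is reached, and recalled segments vote for the next action with weights combining perceptual evidence, chaining, goal distance and valence. The sketch below is one simplified reading of that description; the Gaussian matching kernel, the proximity-to-goal factor and the multiplicative weighting are assumptions.

```python
import numpy as np

class ContextualLayer:
    """Simplified STM/LTM dynamics following processes (a)-(e) of the caption
    (illustrative sketch; the weighting and matching functions are assumptions)."""

    def __init__(self, stm_threshold=0.2):
        self.stm_threshold = stm_threshold   # (a) STM acquisition threshold
        self.stm = []                        # ordered (percept, action) segments
        self.ltm = []                        # sequences labelled with goal id and valence

    def observe(self, percept, action, prediction_error):
        # (a) store the AL percept/action pair when the prediction error is low.
        if prediction_error < self.stm_threshold:
            self.stm.append((np.asarray(percept, float), np.asarray(action, float)))

    def goal_reached(self, goal_id, valence):
        # (b) retain STM in LTM as an ordered, goal- and valence-labelled sequence; reset STM.
        if self.stm:
            self.ltm.append({'sequence': self.stm, 'goal': goal_id, 'valence': valence})
        self.stm = []

    def select_action(self, percept, chaining=None, sigma=0.5):
        # (c) match the current AL percept against LTM; (d) return a weighted sum
        # of the stored actions; (e) weights combine perceptual evidence, chaining,
        # distance to the goal state and valence.
        percept = np.asarray(percept, float)
        weighted_sum, total = 0.0, 0.0
        for s_idx, seq in enumerate(self.ltm):
            n = len(seq['sequence'])
            for i, (p, a) in enumerate(seq['sequence']):
                evidence = np.exp(-np.sum((p - percept) ** 2) / (2 * sigma ** 2))
                proximity = 1.0 / (n - i)                      # closer to the goal -> larger
                chain = chaining.get(s_idx, 1.0) if chaining else 1.0
                w = evidence * proximity * chain * seq['valence']
                weighted_sum, total = weighted_sum + w * a, total + abs(w)
        return weighted_sum / total if total > 0 else None
```

In this reading, aversive sequences (negative valence) vote against the actions they stored, which is one possible interpretation of the valence factor in (e); chaining enters as a per-sequence bias supplied by the caller, since the caption only states that co-active segments establish rules for future chaining.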