Philosophical Transactions of the Royal Society B: Biological Sciences. 2025 Nov 13;380(1939):20240301. doi: 10.1098/rstb.2024.0301

Phenomenal interface theory: a model for basal consciousness

Colin Klein 1, Andrew B Barron 2,
PMCID: PMC12612696  PMID: 41229298

Abstract

An increasing number of authors are willing to attribute phenomenal consciousness to relatively simple organisms like insects. Yet it is not at all clear what functional role the substrates of consciousness would play. Here, we argue that phenomenal consciousness is a consequence of how mobile animals with spatial senses and a capacity for goal-directed behaviour resolve the complex problem of action selection. To adjudicate between possible goals an animal must use sensory inputs, representations of internal state and stored knowledge of values to estimate expected value vectors for different options. Brains solve this problem by taking such heterogeneous information and transforming it into a common framework—a phenomenal interface—and then using this to compute multi-objective Q-values. We use insects to flesh out the details of the phenomenal interface. A consequence of this type of processing is that it naturally generates a distinction between self and non-self and a first-person perspective in which external stimuli have a subjective value. We discuss the consequences of this theory for understanding the evolution and distribution of phenomenal consciousness and suggest an underappreciated problem that arises when thinking about how consciousness might have expanded and changed as it evolved from its simplest origins.

This article is part of the theme issue ‘Evolutionary functions of consciousness’.

Keywords: insect, sentience, mushroom body, motivation, central complex

1. Introduction

Which animals are conscious? Scientific consensus is changing. In 2012, the influential Cambridge Declaration on Consciousness [1] recognized consciousness in all mammals and birds and entertained the possibility for ‘many other creatures, including octopuses’. The recent New York Declaration on Animal Consciousness extended the realistic possibility of consciousness to all vertebrates and many invertebrates ‘including, at minimum, cephalopod molluscs, decapod crustaceans and insects’.

The latest declaration relies heavily on a weight-of-evidence approach, gathering proposed behavioural markers of conscious experience in animals. This ‘theory neutral’ [2] approach is appropriate if the objective is to assess whether a certain animal group is conscious. Yet even if we conclude that insects are likely capable of conscious experience, we might still wonder how and why they have this capacity. While we agree with Birch [2] that one of the challenges for consciousness research is that we have no consensus theory of consciousness, we think that without such a theory a theory-neutral approach will always feel wanting [3].

The demand for theory becomes more challenging as the attribution of consciousness grows broader. The hard bodies of invertebrates and the differing demands of their short lifecycles mean that interpreting behavioural signs of consciousness can be fraught. The brains of most invertebrates are minuscule compared with those of birds and mammals and are arranged very differently. Hence, there is a need for a theory that gives some plausible mechanisms for consciousness.

We are concerned in particular with the most basal forms of consciousness: that is, with whether an animal has any conscious mental states at all. Human consciousness is a complex phenomenon [4], and we set these more complex phenomena aside. The question can be asked in many ways; for present purposes, we will treat the following formulations as synonymous. One might ask whether an animal is aware of things [5]. Or, as Nagel [6] puts it, whether ‘there is something that it is like to be that organism—something it is like for that organism’. We assume that there is something it is like to be a bat, and nothing it is like to be a rock. Some use the term sentience [7] or phenomenal consciousness (where the latter is contrasted with phenomena like access consciousness or self-awareness) [8].

We have argued that all vertebrates and some invertebrates likely possess a basal form of consciousness [9,10], and this view is less extreme than it once was. Of course, if you come from a very different starting point—if, for example, you think that the sophisticated higher order cognition unique to humans or neocortical processing is also a precondition for phenomenal consciousness—then a project like ours will have little appeal. Most theories of phenomenal consciousness in humans either tacitly [11,12] or explicitly [13,14] appeal to cortical mechanisms. These theories obviously cannot be extended to invertebrates. Some authors would argue that most animals lacking a human neocortex, or something like it, are capable of perception, but not awareness of what they are perceiving (a state described as transitive creature consciousness) [14]. We, however, are persuaded by arguments that a cortex is not a prerequisite for all forms of conscious experience. Bjorn Merker developed a subcortical theory of phenomenal consciousness [15,16]. He also recognized that the neural systems necessary for phenomenal consciousness have homologs in all vertebrates and consequently argued that consciousness is present throughout vertebrata [15]. Elsewhere, we extended Merker’s argument even further by demonstrating that at least analogues of the key neural systems emphasized by Merker occur also in arthropods [9,10].

Our earlier work relied on a loose generalization of Merker’s claims, but a full defence requires further abstraction. Recent advances in insect neuroscience now let us move beyond analogies and generalize subcortical theories. Here we argue that, at a high level of abstraction, insects and vertebrates possess a similar behavioural control system that brings exteroceptive sensory information and interoceptive information into a common referential framework. We call this a phenomenal interface (PI). Its function is to solve problems common to mobile animals: resolving competing behavioural priorities, selecting a single appropriate action and compensating sensory input for the liabilities of self-motion. An outcome of performing these functions is that a PI processes its sensorium from an egocentric and subjective perspective. This gives both a general theory of consciousness and a specific story about where to look to flesh out the details.

It is important to be clear about the scope of our strategy. We begin by assuming that activity in certain kinds of structure is sufficient for consciousness. Our goal is to characterize that process in a suitably abstract manner. Structural similarities between conscious experience and the computational processes so characterized are evidence that we have got things correct. What we are very much not doing is arguing from first principles that a certain type of computation is identical to the bases for conscious experience. To demand that would be to demand we solve the hard problem—that is a tall order.

One might instead view the strategy as analogous to Morgan’s early work [17] explaining gene linkage via chromosomal position. This required the (reasonable) assumption that chromosomes had something to do with inheritance, and that the physical processes involved in meiosis had effects on the bases of genes. Armed with that, Morgan could give a good characterization of the mechanisms of linkage. He could do so without knowing how chromosomal material was the basis of heredity. Similarly, our goal is to give a plausible theory about the processes that support subjective experience. Like Morgan, we think we can do so in the absence of a full story, and that doing so is a step along the way to telling a full story.

2. The core problem

Any mobile animal with a reasonably complex body and sensorium faces a difficult decision-making problem. Consider the task faced by a foraging bee. First, the bee has multiple needs and corresponding sources of resources: she must choose between nectar and pollen; she must balance the energetic demands of foraging and decide when to return home; and she must do so while avoiding predators and dangerous obstacles. The value of different actions is context-dependent: different foods have different utility when she is personally hungry or when her colony needs specific nutrition.

While very simple organisms may get by with a fixed lexical ordering over priorities [18], a fixed ordering lacks the flexibility needed to accommodate multiple context-sensitive demands [19,20]. Nor is it clear that multiple sources of reward can or should be linearly combined into a single scalar value [21,22]. The bee also faces a classic problem of sparse reward [23]: successful foraging only pays off upon return to the hive, which means that proper behaviour must be shaped in the absence of immediate reward. Finally, the bee faces a complex sequential decision-making task: each action affects the ability to perform future actions, both in terms of where the bee ends up and simply because foraging for food itself takes energy [24,25].

The task faced by the bee is thus best modelled as a nonlinear multi-objective Markov decision process (nl-MOMDP) [26,27]. nl-MOMDPs are widely recognized in the reinforcement learning literature as particularly difficult to solve [26,27], and may admit of no computationally tractable exact solutions. Nevertheless, the bee does solve it.

The underlying computational problem required to solve the nl-MOMDP task is threefold. First, the bee must use sensory inputs, representations of internal state and stored knowledge of value to output pairs of actions and corresponding expected value vectors (Q-values [22,28]). Second, the bee must adjudicate between Q-values in order to pick a single action to perform (or continue performing). Third, upon executing an action, the bee must update her estimates of internal state and the value of different external states using any obtained reward.
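The first of these tasks can be made concrete with a toy sketch of our own devising: all action names, objectives, and numbers below are hypothetical, and the multiplicative need-weighting is one simple illustrative choice among many, not a claim about insect circuitry.

```python
import numpy as np

# Hypothetical objectives and actions for a foraging bee; illustrative only.
OBJECTIVES = ["energy", "colony_nutrition", "safety"]
ACTIONS = ["visit_nectar", "visit_pollen", "return_home"]

def estimate_q_vectors(sensory, internal_state, stored_values):
    """Map sensory input, internal state and stored knowledge of value
    to a dict of action -> expected value vector (one entry per objective)."""
    q = {}
    for action in ACTIONS:
        base = np.array([stored_values[action][obj] for obj in OBJECTIVES])
        need = np.array([internal_state[obj] for obj in OBJECTIVES])
        # One simple choice: weight stored payoffs by current need,
        # then discount by the sensed cost of reaching the target.
        q[action] = base * need - sensory[action]["cost"]
    return q

stored = {
    "visit_nectar": {"energy": 1.0, "colony_nutrition": 0.2, "safety": 0.0},
    "visit_pollen": {"energy": 0.3, "colony_nutrition": 1.0, "safety": 0.0},
    "return_home":  {"energy": 0.0, "colony_nutrition": 0.0, "safety": 1.0},
}
state = {"energy": 0.9, "colony_nutrition": 0.4, "safety": 0.2}
sensed = {"visit_nectar": {"cost": 0.1},
          "visit_pollen": {"cost": 0.3},
          "return_home":  {"cost": 0.2}}

qs = estimate_q_vectors(sensed, state, stored)
```

The output is a set of action and value-vector pairs, not a single scalar per action; adjudication over these vectors is a separate, downstream step.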

The focus of phenomenal interface theory (PIT) is on the first of these tasks. We suggest that the computational function of a phenomenal interface is, first and foremost, to take pre-processed sensory input and use that to estimate the value of actions (in the sense of expected reward). Doing so requires solving a series of interconnected problems. The particular computation involved must be sensitive to each of these, and we suggest that the biological function of a PI is to solve each efficiently by doing so simultaneously (figure 1). We call it an interface to emphasize the role it plays in bringing diverse sources of information together, and we claim that it is phenomenal because it has the right structural features to underlie subjective experience. Both of these claims will be elaborated as we go.

Figure 1.


A schema of the computation performed by a phenomenal interface. (Left) The four domains that must be integrated. Blue arrows mean ‘does not make sense without’. The phenomenal interface, π, must provide a simultaneous solution. (Centre) The output of the phenomenal interface is a set of pairs of action and value-vectors. (Right) Action selection adjudicates between Q-values. Action selection is downstream of phenomenal consciousness but provides feedback required both for updating action values and for correction of self-motion.

The computational problem solved by an interface has a number of aspects. Sensory input cannot be meaningfully interpreted without correcting for self-motion [29–32]: an insect is both moving through the world and also moved by it (such as by gusts of wind), and parsing out the difference is crucial [33]. Different sensory systems and the motor system itself use fundamentally different coordinate systems that require nontrivial alignment [34]. Determining the value of actions requires combining information about current state and goals with what is perceived [35,36]. The functional role of a PI is to take this heterogeneous information and transform it into a common framework and then use this to compute multi-objective Q-values.

A crucial feature of PIT is that this common framework need not be either pre-determined or simple. One need not assume (as Merker sometimes suggests) that the representation takes the form of a simple map of the organism in the world. The representational basis of the common framework is functionally specified and takes whatever form is necessary to most effectively solve the core problems [37]. The function of a PI is in part to create a common representational framework within which Q-values can be determined.

It is important that the process of creating such a framework is flexible over both developmental and evolutionary timescales. The particular details of the PI for a bee and ant will be very different, because their abilities and needs differ. But they can share the same computational mechanism. We envision a phenomenal interface as a classifier that can be described at a relatively high level of abstraction, but which primarily learns the details of a specific animal’s body, needs and capacities in order to generate appropriate Q-values.

The downstream adjudication between Q-values is beyond the scope of PIT (aside from the assertion that there should not be a simple linear mapping to a single scalar value). We think it can be done in a variety of context-sensitive ways [21]. There is a large literature on dealing with multi-objective decision problems [38]. Some of this will be heuristic (e.g. continuing an action unless there is compelling reason to switch). Different animals might use different heuristics, and decision heuristics might depend on, for example, mood or overall arousal. The PI makes a variety of strategies possible but does not demand one; the point is that none of this adjudication can be done until one can calculate Q-values in the first place, and doing that requires a common representational framework that spans world, value and action. Similarly, learning and updating of stored value are outside the scope of PIT, but the problem of sparse reward and credit assignment is facilitated by representation of actions in a common framework.
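As an illustration of how such heuristic, non-linear adjudication might look (a sketch of ours, not a model of any particular animal), consider a rule combining a safety veto with a bias toward continuing the current action; the thresholds, indices and action names are all hypothetical.

```python
import numpy as np

def adjudicate(q_vectors, current_action=None,
               safety_index=2, safety_floor=-0.5, switch_margin=0.2):
    """Pick one action from a dict of action -> per-objective value vectors."""
    # Step 1 (non-linear): veto any action whose safety component falls
    # below the floor, rather than trading safety off linearly.
    safe = {a: v for a, v in q_vectors.items() if v[safety_index] >= safety_floor}
    if not safe:
        # Nothing is safe enough: take the least dangerous option.
        return max(q_vectors, key=lambda a: q_vectors[a][safety_index])
    # Step 2 (hysteresis): a challenger must beat the current action by a
    # margin -- "continue unless there is compelling reason to switch".
    def score(a):
        return safe[a].sum() + (switch_margin if a == current_action else 0.0)
    return max(safe, key=score)
```

Because of the veto and the margin, this rule is not equivalent to collapsing each vector to a fixed weighted sum, which is the kind of flexibility the text argues a linear scalarization lacks.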

3. Phenomenal interface theory proper

The system we call the PI solves the basic action selection problem for complex mobile animals. Why think, however, that this has anything to do with subjective experience? We suggest that possession of a PI naturally gives rise to key features of subjectivity in particular: a first-person perspective in which stimuli have a subjective value. We flesh this out by reference to the insect central complex and associated systems, but we emphasize [10] that one finds similar integrated systems, playing a similar role, across phyla, especially vertebrate subcortical systems [16,39]. While we do not reiterate the argument here, arguments for the subcortical basis of subjective experience in humans [16,40–43] provide important convergent evidence for the role of a PI in subjective experience [9,16].

In the insect brain, there are just two multisensory integration regions: the central complex and the mushroom bodies (MB). These operate together as a system of systems to constitute the insect PI. The ancestral function of the insect central complex (CX) is correcting for the effect of self-motion in order to hold a heading towards a goal [44,45]. Insects are small and easily moved by the medium in which they travel. Without correction for self-motion, information about the world becomes noisy and temporally smeared [46–48], limiting the possibilities for effective target acquisition and navigation. The central complex is thus crucial for these functions. Insects can effectively factor out the consequences of self-motion generated by turning from their visual input using an efference copy of their predicted motor displacement to cancel out the visual consequences of the turn [49]. Note that this is partly a function of how self-movement disrupts sensory inputs and the consequences of such disruption for action: chemotaxis along a gradient (as found in bacteria and simple worms) may not need much correction for self-movement, but image-forming eyes are useless without it [48,50].
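The logic of efference-copy correction can be sketched in a few lines; the one-dimensional ‘retina’ and the numbers below are purely illustrative stand-ins, not a model of the actual visual circuitry.

```python
import numpy as np

def predicted_flow(turn_rate, n_pixels=8):
    # A pure rotation shifts the whole (toy, 1-D) visual field uniformly.
    return np.full(n_pixels, turn_rate)

def corrected_input(observed_flow, commanded_turn_rate):
    """Subtract the motion predicted from one's own commanded turn,
    leaving only the externally caused component."""
    return observed_flow - predicted_flow(commanded_turn_rate, len(observed_flow))

# A commanded turn of 2.0 deg/frame, plus a gust adding 0.5 deg/frame:
observed = np.full(8, 2.0) + np.full(8, 0.5)
residual = corrected_input(observed, commanded_turn_rate=2.0)
```

What remains after the subtraction is the externally imposed motion, which is exactly the quantity needed to distinguish moving through the world from being moved by it.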

Processing within the central complex enables an insect to establish and hold a heading relative to an external navigational reference [45,51]. The neural representation of that heading can be maintained even if the fix on the external reference is temporarily lost [51,52], which enables insects to maintain a heading and to store vectors of travelled paths in memory [53]. This shows at least an elementary form of spatial representation and forward modelling of behaviour and its outcomes.
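A minimal path-integration sketch shows how a heading and a stored travel vector can be maintained from self-motion alone; this is a geometric toy of our own, not a circuit model of the central complex.

```python
import math

class HeadingIntegrator:
    """Keep a heading and an accumulated travel vector from self-motion alone."""
    def __init__(self):
        self.heading = 0.0        # radians, relative to an external reference
        self.x = self.y = 0.0     # displacement from the start point

    def turn(self, dtheta):
        # The heading estimate persists and updates even while the
        # external reference is temporarily out of view.
        self.heading = (self.heading + dtheta) % (2 * math.pi)

    def step(self, distance):
        self.x += distance * math.cos(self.heading)
        self.y += distance * math.sin(self.heading)

    def home_vector(self):
        """Bearing and distance back to the start: a stored path vector."""
        return math.atan2(-self.y, -self.x), math.hypot(self.x, self.y)

nav = HeadingIntegrator()
nav.step(10)              # travel 10 units along the reference direction
nav.turn(math.pi / 2)     # turn 90 degrees
nav.step(10)              # travel 10 more
bearing, dist = nav.home_vector()
```

The stored home vector is an elementary forward model: it predicts the outcome of a course of action (heading home) before that action is taken.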

We suggest that the ability to correct for self-motion in order to hold a heading, when done consistently, can form the basis of a representation. As Poincaré [54,55] argued, the algebraic structure determined by the basic operations of self-correction is sufficient to define a geometry with an origin [54]. We will not make assumptions in what follows about the geometry of the space so defined; as Poincaré and later authors [56–58] noted, the geometry so recovered might be Euclidean, affine or other possibilities. However, we will assume that the geometry is enough to support a perspective structure.

The relationship between self-motion and passive changes in environment allows the animal to triangulate its location in a world that contains both perception and action [59]. The properties of distal senses provide further perspective structure when combined with self-motion [47,60,61]. Visual objects occlude one another in ways systematically related to motion [34,62]. In insects, phenomena like optic flow [46,63] and visual parallax [60,64,65] partition the moving world into near and far [66]. Each of these depends constitutively on the ability to distinguish self-motion via action [64,66], giving what is known as an agentive view of self-location and perspective [34].

We stress that this perspective structure remains lightweight and tacit. A PI need not give a full-fledged representation of oneself as a self—it does not, in philosophical terms, provide a de se representation [67]. Instead, following Schellenberg, a PI gives a de hinc representation: a representation of the world ‘from here’ [68]. A de hinc representation is arguably the minimal way in which a conscious animal can represent its world.

In addition to appropriate spatial structure, a phenomenal interface also attaches subjective value to stimuli, so that stimuli become attractive or aversive or otherwise valenced. In insects, small populations of aminergic cells signal aspects of the state of the insect and also provide reinforcement signals to the learning and memory centres of the mushroom bodies [69,70], so that the subjective meaning of objects in the environment to the organism can be learnt [70–72]. The mushroom bodies and the CX work together as a system such that what has been learnt about an object (via mushroom body output circuits) will influence whether the insect orients toward or away from an object (via the spatially structured representation of the object in the CX) [73].

The perceived valence of an object depends not only on past encounters with a stimulus but also the present physiological state of the insect. For example, other aminergic systems associated with the mushroom bodies modify the neural output of the mushroom bodies such that food-associated stimuli are only attractive if an insect is food-deprived [74]. Further specific aminergic circuits will activate the mushroom body outputs when entering environments associated with feeding [75], but flies can only learn about food stimuli if they are hungry [69].

Taken together, we see in insects a system of systems involving the mushroom bodies, the CX and associated neuromodulatory networks that attach subjective value to stimuli and relate those values to physiological or subjective state. This provides a mechanism by which the world may be given meaning relative to the needs of the organism.

It is well established that Hebbian-like plasticity in connections at the output of the mushroom body can support various forms of reinforcement learning [71,76–78]. More recent work [23] exploring the theoretical capacities of gap junctions in the mushroom body network suggests the mushroom bodies could have the capacity to solve the so-called ‘sparse reward’ problem: this being that in nature the actions needed to obtain a reward may be long and convoluted, with actual reward obtained only rarely. The behaviour of foraging honey bees demonstrates they can overcome this problem. Bees connect up long foraging flights and specific flower-handling strategies to the eventual consummatory act of obtaining a floral reward. Experience-dependent modulation of gap junctions connecting the neurons of the mushroom bodies could support the mapping of states to outcomes such that a sequence of actions can be learned when the reward is obtained only on completion of the sequence [23]. In this way, the mushroom bodies could support the mapping of objectives to actions and targets, even if the actions involved are a complex sequence.
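How a single terminal reward can reinforce a whole preceding sequence can be illustrated with a standard eligibility-trace scheme. The sketch below is textbook TD-style credit assignment, not a model of the gap-junction mechanism proposed in [23]; the state names and parameters are hypothetical.

```python
def learn_sequence(episode, values, alpha=0.5, decay=0.8, reward=1.0):
    """episode: ordered list of states visited; reward arrives only at the end."""
    trace = {}
    for state in episode:
        for s in trace:            # older entries fade...
            trace[s] *= decay
        trace[state] = 1.0         # ...while the current state becomes fully eligible
    # The single terminal reward is credited to every eligible state at once.
    for s, eligibility in trace.items():
        values[s] = values.get(s, 0.0) + alpha * eligibility * reward
    return values

# Hypothetical foraging episode rewarded only at the final, consummatory step:
v = learn_sequence(["leave_hive", "fly_out", "handle_flower", "drink"], {})
```

States closer to the reward receive more credit, so repeated episodes shape the whole sequence even though only its last step is ever directly rewarded.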

Finally, the mechanisms that support reinforcement learning and motion correction in insects appear to be sophisticated enough to allow forward modelling and prediction of the behaviour of the world, including of other entities. For example, neurons in the dragonfly visual lobes sensitive specifically to small moving targets show a modulated gain of function that predicts the anticipated position of the target based on its current trajectory [79]. Together these capacities to connect sequences of events to objectives, maintain headings and forward-model actions enable insects to robustly map functional connections between objectives and targets and to maintain and execute courses of actions to achieve those targets. This could enable ants to successfully reorientate having been blown away by a gust of wind [33], or construct complex routes or new motor sequences, such as has been documented in bees, spiders and grasshoppers [80].
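The predictive component can be illustrated with the simplest possible stand-in, a constant-velocity extrapolation of a target's recent track; the real neural computation is certainly richer, and this sketch makes no claim about how it is implemented.

```python
def predict_position(track, lookahead=1):
    """Extrapolate the next position from the last two points of a track."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0      # estimated velocity per time step
    return (x1 + vx * lookahead, y1 + vy * lookahead)

# A target moving steadily up and to the right:
path = [(0, 0), (1, 2), (2, 4)]
```

Any system that boosts sensitivity at the predicted location, rather than the current one, is doing a minimal form of forward modelling of the world's behaviour.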

In summary, we see in the processing performed by the insect brain the type of structural features that matter for an animal to have a subjective experience (figure 2). The insect brain establishes a first-person perspective on the world. This differentiates self from non-self (in a simple but fundamental way), and the external world is represented with sufficient coherence, meaning and structure that the insect can operate with agency within that world. It can effectively assess the world around it and accordingly plan toward achieving personal goals. That planning is enabled because stimuli and objects in the world carry an entirely subjective weighting or valence, which is moderated by the unique state and prior experience of the insect. Hence, how any insect perceives the world will be unique to that individual insect.

Figure 2.


Simplified schematic of the functions and connections between the multimodal integration centres of the insect brain. Shown (not to scale) are the mushroom bodies, neuropil of the central complex and the lateral accessory lobes. Arrows show information flow. The output of these interacting systems is a steering command determined relative to subjectively weighted objects in the environment.

The insect example has let us ground PIT in biological reality, but abstracting away from it shows why the type of processing performed by a phenomenal interface is a good candidate mechanism for phenomenal consciousness. This example also illustrates that a claim of phenomenal consciousness does not require that an animal produce a rich veridical three-dimensional representation of the world in its head, within which it moves about to plan actions. Phenomenal consciousness is not about the richness of mental imagery. It is about representing a structured version of the world that is interfaced with and influenced by the animal’s subjective state.

4. Phenomenal interface theory as a neurocomputational framework

We have described the phenomenal interface at a computational level [81] and suggested reasons to think that it has the right sort of properties to support the basic capacity for subjective experience. PIT itself, however, is not a fully computational theory of consciousness. Instead, we take PIT to be a form of what philosophers call realizer functionalism: PIT specifies the role that an interface plays, but the claim is about an interface as instantiated in biological systems [82].

The basis of conscious experience thus relies on a particular, as yet unknown, transformation function—call it π—which serves as a phenomenal interface. Variation among individuals and species means that π itself must be understood at a suitably abstract level. Visual input is an important modality in our insect example of π. But the visually impaired are conscious too, and their blindness will give rise to a different way of solving the transformation problem, which means the particulars of their phenomenal interface will be different.

This distinction between the computational gloss and the details of π is important for two reasons. First, it re-emphasizes that π solves a particular kind of problem in an evolvable way. We have proposed that the development of a PI is an evolvable solution to the core problem of behavioural control for mobile animals. For example, the central complex is extremely well-conserved across insect lineages and features in other arthropods too [83]. This suggests that the computation it performs is relatively agnostic to the sorts of interoceptive information available. The right level at which to understand π, then, is one on which it can function as a computational invariant across very different biologies—and can therefore integrate information as a system evolves. The possession of a phenomenal interface allows the problems identified above to be efficiently solved even as phenotypes change in drastic ways. Incorporating a new sense organ, or a new pair of appendages, does not require solving the same problems from scratch.

What is fed into a PI need not be just more elaborate sensory information but can also include arbitrarily processed information. Merker [16,39,84] recognized a version of a PI instantiated by subcortical structures in the mammalian brain. In the typical mammalian brain, however, recurrent loops between the subcortex and regions of the cortex are a vital and almost universal feature of processing [85]. Via these recurrent loops the subcortical PI is connected to the cortex, and the cortex adds input to the mammalian PI. The cortex would add to mammalian conscious experience modelling of the consequences of action that has greater spatial or temporal extent: projecting models of the consequences of actions across larger associative spaces. The output of this modelling returns as an input to the PI to refine action selection.

Note that this means that, in more complex brains, there will be portions of cortex that perform computations similar to those of a PI, or which handle PI-like functions. These are not necessarily conscious on their own: cortical information becomes part of conscious processing only insofar as it is able to enter as input into the more basic structures that compute π.

This fits well with the sort of model elaborated by, for example, Cisek [86], on which control of behaviour was shifted to the forebrain in early mammals, allowing for an elaboration of the behavioural repertoire. On the sort of model that Cisek [86] has proposed, older circuits like the basal ganglia (which we identify as part of the basis of computing π) remain responsible for between-action adjudication and the general problems of learning and resolving nl-MOMDP problems, but cortex can learn to detect new opportunities for action and select more appropriate fine-grained motor patterns. The ability to offload in this way is one of the features that makes a phenomenal interface consistently evolvable.

Second, the particular form of realization is important to distinguish organisms with the capacity for subjective experience from other things which might, at some very abstract computational level, do the things that we have attributed to a phenomenal interface. On the one hand, there is a close (and non-accidental) relationship between what a PI does and various forms of so-called deep reinforcement learning [87,88]. Yet extant forms of deep RL are, architecturally speaking, mostly feedforward. They also appear, from the point of view of processing nodes and computational expense, far less efficient than the recurrent biological solutions instantiated as π. We think that these architectures might end up mutually illuminating, but it is important to keep in mind that PIT is not committed to consciousness in any system that solves the problems of self-motion and action selection; it must also do so in the right way.

There is also a lower bound in the biological world: whatever slime moulds and pea plants are doing, it is unlikely that they are instantiating π. Similarly so with, for example, room-cleaning robots. That is true even if there is a way to describe what they are doing in the same terms in which the phenomenal interface is described. That is because the relevant question is not the abstract description of what they do—movement and self-location and world-mapping and so forth—but the similarity between how they solve these problems and the transformation function given by π.

So, for example, there is a sense in which Caenorhabditis elegans must integrate information: the motivational value of (say) food cues is downgraded after satiation, and different motivational drives mutually inhibit one another (see [89] for a good review). These processes largely operate in parallel with mutual inhibition, rather than transformation into a common evaluative framework as required by π. There is also no reason to think that such a setup would be capable of solving nl-MOMDPs in a meaningful way (parallel weighting implies a linear aggregation function; the absence of this is what makes nonlinear MOMDPs difficult [21,22]).

That said, it is important to emphasize that PIT is compatible with a range of possible positions, corresponding to the level of abstraction at which π is understood. The level of abstraction that includes the visually impaired will almost certainly cover other mammals as well. We claim that the insect central complex, while working with very different circuitry, also instantiates π. It is an open, and empirical, question whether any non-biological computational systems instantiate π. The right level of description of π will almost surely be an information-processing one, which means that PIT is not bound to biological systems. Conversely, it is not at all guaranteed that any extant artificial systems instantiate a suitable PI, and it is not obvious that any plausible extensions of them would either [90].

5. Conclusion

PIT is a theory of the mechanisms of consciousness, not a solution to the so-called ‘hard problem’ [91]. It fits with a standard empiricist line on the matter—a PI in humans ultimately performs a severe dimensionality reduction; hence, what we are aware of is much simpler than the processes that go into being aware [92,93]—but no more than that.

However, we think that PIT sheds light on a non-hard but extremely interesting problem, one that seems to have received relatively little attention in the literature on consciousness. Nagel pointed out that the conscious experience of a bat must be different from that of humans, in part because bats have the wholly different modality of echolocation [6]. This is meant to make vivid the difficulties of conceiving what it is like to be a very different organism. Yet it also raises a more general issue: how might new properties get added to phenomenal space at all as organisms evolve and complexify?

If one thinks that only humans are conscious, the problem does not arise. But once one thinks that consciousness can evolve by becoming richer (and the experience of a human is certainly richer than that of a fly), then there is a corresponding question about how the mechanisms of consciousness might support such a gradual expansion. This is, in some ways, to turn our original problem on its head. We started with the theoretical expansion of consciousness on behavioural grounds and asked about the mechanisms by which very simple organisms can end up conscious. Yet by focusing on simple organisms, we get a corresponding question of how it is that the richness of our own experience might have developed in evolutionary time.

One of the attractions of PIT is that it makes this problem salient in the course of offering a solution to it. Having a phenomenal interface means that body and cortex can expand while the resulting system remains able to adjudicate between competing courses of action. PIT is not necessarily the only theory that has this property, but by emphasizing the evolvability of conscious experience, it opens a new direction for the scientific study of consciousness.

Contributor Information

Colin Klein, Email: Colin.Klein@anu.edu.au.

Andrew B. Barron, Email: andrew.barron@mq.edu.au.

Ethics

This work did not require ethical approval from a human subject or animal welfare committee.

Data accessibility

This article has no additional data.

Declaration of AI use

We have not used AI-assisted technologies in creating this article.

Authors’ contributions

C.K.: conceptualization, writing—original draft; A.B.: conceptualization, writing—original draft.

Both authors gave final approval for publication and agreed to be held accountable for the work performed therein.

Conflict of interest declaration

We declare we have no competing interests.

Funding

This work was supported by Templeton World Charity Foundation project grant no. TWCF-2020-20539 and Australian Research Council Discovery Projects nos DP240100400 and DP230100006.

References

  • 1. Low P, Panksepp J, Reiss D, Edelman D, van Swinderen B, Koch C. 2012. The Cambridge Declaration on Consciousness. In Proc. Francis Crick Memorial Conference. Churchill College, Cambridge, UK.
  • 2. Birch J. 2022. The search for invertebrate consciousness. Noûs 56, 133–153. (doi:10.1111/nous.12351)
  • 3. Halina M, Harrison D, Klein C. 2022. Evolutionary transition markers and the origins of consciousness. J. Conscious. Stud. 29, 62–77. (doi:10.53765/20512201.29.3.077)
  • 4. Bayne T, Hohwy J, Owen AM. 2016. Are there levels of consciousness? Trends Cogn. Sci. 20, 405–413. (doi:10.1016/j.tics.2016.03.009)
  • 5. Morin A. 2006. Levels of consciousness and self-awareness: a comparison and integration of various neurocognitive views. Conscious. Cogn. 15, 358–371. (doi:10.1016/j.concog.2005.09.006)
  • 6. Nagel T. 1974. What is it like to be a bat? Phil. Rev. 83, 435–450.
  • 7. Browning H, Birch J. 2022. Animal sentience. Phil. Compass 17, e12822. (doi:10.1111/phc3.12822)
  • 8. Block N. 2011. Perceptual consciousness overflows cognitive access. Trends Cogn. Sci. 15, 567–575. (doi:10.1016/j.tics.2011.11.001)
  • 9. Barron AB, Klein C. 2016. What insects can tell us about the origins of consciousness. Proc. Natl Acad. Sci. USA 113, 4900–4908. (doi:10.1073/pnas.1520084113)
  • 10. Klein C, Barron AB. 2016. Insects have the capacity for subjective experience. Animal Sentience 1, 100. (doi:10.51291/2377-7478.1113)
  • 11. Baars BJ, Ramsøy TZ, Laureys S. 2003. Brain, conscious experience and the observing self. Trends Neurosci. 26, 671–675. (doi:10.1016/j.tins.2003.09.015)
  • 12. Baars BJ. 2005. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Prog. Brain Res. 150, 45–53. (doi:10.1016/S0079-6123(05)50004-9)
  • 13. Michel M. 2022. Conscious perception and the prefrontal cortex: a review. J. Conscious. Stud. 29, 115–157. (doi:10.53765/20512201.29.7.115)
  • 14. Carruthers P, Gennaro R. 2023. Higher-order theories of consciousness. In The Stanford encyclopedia of philosophy (eds Zalta EN, Nodelman U). Stanford, CA: Metaphysics Research Lab, Stanford University.
  • 15. Merker B. 2005. The liabilities of mobility: a selection pressure for the transition to consciousness in animal evolution. Conscious. Cogn. 14, 89–114. (doi:10.1016/s1053-8100(03)00002-3)
  • 16. Merker B. 2007. Consciousness without a cerebral cortex: a challenge for neuroscience and medicine. Behav. Brain Sci. 30, 63–81. (doi:10.1017/s0140525x07000891)
  • 17. Shine I, Wrobel S. 1976. Thomas Hunt Morgan: pioneer of genetics. Lexington, KY: The University Press of Kentucky.
  • 18. Brooks RA. 1999. Cambrian intelligence: the early history of the new AI. Cambridge, MA: MIT Press.
  • 19. Hartley R, Pipitone F. 1991. Experiments with the subsumption architecture. In Proc. 1991 IEEE Int. Conf. Robotics and Automation, vol. 2, pp. 1652–1658. Sacramento, CA: IEEE.
  • 20. Arkin RC. 1998. Behavior-based robotics. Cambridge, MA: MIT Press.
  • 21. Hayden BY, Niv Y. 2021. The case against economic values in the orbitofrontal cortex (or anywhere else in the brain). Behav. Neurosci. 135, 192–201. (doi:10.1037/bne0000448)
  • 22. Dulberg Z, Dubey R, Berwian IM, Cohen JD. 2022. Modularity benefits reinforcement learning agents with competing homeostatic drives. arXiv. (doi:10.48550/arXiv.2204.06608)
  • 23. Wei T, Guo Q, Webb B. 2024. Learning with sparse reward in a gap junction network inspired by the insect mushroom body. PLoS Comput. Biol. 20, e1012086. (doi:10.1371/journal.pcbi.1012086)
  • 24. Seeley TD. 1985. Honeybee ecology. Princeton, NJ: Princeton University Press.
  • 25. Cassano J, Naug D. 2022. Metabolic rate shapes differences in foraging efficiency among honeybee foragers. Behav. Ecol. 33, 1188–1195. (doi:10.1093/beheco/arac090)
  • 26. Roijers DM, Vamplew P, Whiteson S, Dazeley R. 2013. A survey of multi-objective sequential decision-making. J. Artif. Intell. Res. 48, 67–113. (doi:10.1613/jair.3987)
  • 27. Hayes CF, et al. 2022. A practical guide to multi-objective reinforcement learning and planning. Auton. Agents Multi Agent Syst. 36, 26. (doi:10.1007/s10458-022-09552-y)
  • 28. Russell SJ, Zimdars A. 2003. Q-decomposition for reinforcement learning agents. In Proc. 20th Int. Conf. Machine Learning (ICML-2003), pp. 656–663. AAAI Press.
  • 29. Lappe M, Bremmer F, van den Berg AV. 1999. Perception of self-motion from visual flow. Trends Cogn. Sci. 3, 329–336. (doi:10.1016/s1364-6613(99)01364-9)
  • 30. Gibson JJ. 1950. Perception of the visual world. Boston, MA: Houghton Mifflin.
  • 31. Warren WH. 2003. Optic flow. In The visual neurosciences (eds Chalupa LM, Werner JS), pp. 1247–1259. Cambridge, MA: MIT Press.
  • 32. Fajen BR, Matthis JS. 2013. Visual and non-visual contributions to the perception of object motion during self-motion. PLoS One 8, e55446. (doi:10.1371/journal.pone.0055446)
  • 33. Wystrach A, Schwarz S. 2013. Ants use a predictive mechanism to compensate for passive displacements by wind. Curr. Biol. 23, R1083–R1085. (doi:10.1016/j.cub.2013.10.072)
  • 34. Alsmith AJT. 2017. Perspectival structure and agentive self-location. In The subject’s matter: self-consciousness and the body (eds de Vignemont F, Alsmith AJT). Cambridge, MA: MIT Press.
  • 35. Sutton RS, Barto AG. 2018. Reinforcement learning: an introduction. Cambridge, MA: MIT Press.
  • 36. Haas J. 2023. The evaluative mind. In Mind design III (eds Haugeland J, Craver CF, Klein AC), pp. 295–314. Cambridge, MA: MIT Press.
  • 37. Klein C. 2021. Do we represent peripersonal space? In The world at our fingertips: a multidisciplinary exploration of peripersonal space (eds de Vignemont F, Serino A, Wong HY, Farnè A), pp. 3–16. Oxford, UK: Oxford University Press. (doi:10.1093/oso/9780198851738.003.0001)
  • 38. Hedden B, Muñoz D. 2023. Dimensions of value. Noûs 58, 291–305. (doi:10.1111/nous.12454)
  • 39. Merker B. 2013. The efference cascade, consciousness, and its self: naturalizing the first person pivot of action control. Front. Psychol. 4, 501. (doi:10.3389/fpsyg.2013.00501)
  • 40. Parvizi J, Damasio A. 2001. Consciousness and the brainstem. Cognition 79, 135–159. (doi:10.1016/s0010-0277(00)00127-x)
  • 41. Damasio A, Damasio H, Tranel D. 2012. Persistence of feelings and sentience after bilateral damage of the insula. Cereb. Cortex 23, 833–846. (doi:10.1093/cercor/bhs077)
  • 42. Philippi CL, Feinstein JS, Khalsa SS, Damasio A, Tranel D, Landini G, Williford K, Rudrauf D. 2012. Preserved self-awareness following extensive bilateral brain damage to the insula, anterior cingulate, and medial prefrontal cortices. PLoS One 7, e38413. (doi:10.1371/journal.pone.0038413)
  • 43. Mashour GA, Alkire MT. 2013. Evolution of consciousness: phylogeny, ontogeny, and emergence from general anesthesia. Proc. Natl Acad. Sci. USA 110, 10357–10364. (doi:10.1073/pnas.1301188110)
  • 44. Pfeiffer K, Homberg U. 2014. Organization and functional roles of the central complex in the insect brain. Annu. Rev. Entomol. 59, 165–184. (doi:10.1146/annurev-ento-011613-162031)
  • 45. Honkanen A, Adden A, da Silva Freitas J, Heinze S. 2019. The insect central complex and the neural basis of navigational strategies. J. Exp. Biol. 222, jeb188854. (doi:10.1242/jeb.188854)
  • 46. Egelhaaf M. 2006. The neural computation of visual motion information. In Invertebrate vision (eds Nilsson DE, Warrant E), pp. 399–461. Cambridge, UK: Cambridge University Press.
  • 47. Egelhaaf M, Kern R, Lindemann JP. 2014. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains. Front. Neural Circuits 8, 127. (doi:10.3389/fncir.2014.00127)
  • 48. Land M. 2019. Eye movements in man and other animals. Vis. Res. 162, 1–7. (doi:10.1016/j.visres.2019.06.004)
  • 49. Kim AJ, Fitzgerald JK, Maimon G. 2015. Cellular evidence for efference copy in Drosophila visuomotor processing. Nat. Neurosci. 18, 1247–1257. (doi:10.1038/nn.4083)
  • 50. Ribak G, Egge AR, Swallow JG. 2009. Saccadic head rotations during walking in the stalk-eyed fly (Cyrtodiopsis dalmanni). Proc. R. Soc. B 276, 1643–1649. (doi:10.1098/rspb.2008.1721)
  • 51. Seelig JD, Jayaraman V. 2015. Neural dynamics for landmark orientation and angular path integration. Nature 521, 186–191. (doi:10.1038/nature14446)
  • 52. Stone T, et al. 2017. An anatomically constrained model for path integration in the bee brain. Curr. Biol. 27, 3069–3085. (doi:10.1016/j.cub.2017.08.052)
  • 53. Le Moël F, Stone T, Lihoreau M, Wystrach A, Webb B. 2019. The central complex as a potential substrate for vector based navigation. Front. Psychol. 10, 690. (doi:10.3389/fpsyg.2019.00690)
  • 54. Poincaré H. 1898. On the foundations of geometry. Monist 9, 1–43. (doi:10.5840/monist1898915)
  • 55. Poincaré H. 1913. The foundations of science: science and hypothesis, the value of science, science and method. Cambridge, UK: Cambridge University Press.
  • 56. Suppes P. 1977. Is visual space Euclidean? Synthese 35, 397–421. (doi:10.1007/BF00485624)
  • 57. Rudrauf D, Bennequin D, Granic I, Landini G, Friston K, Williford K. 2017. A mathematical model of embodied consciousness. J. Theor. Biol. 428, 106–131. (doi:10.1016/j.jtbi.2017.05.032)
  • 58. Rudrauf D, Sergeant-Perthuis G, Tisserand Y, Poloudenny G, Williford K, Amorim MA. 2023. The projective consciousness model: projective geometry at the core of consciousness and the integration of perception, imagination, motivation, emotion, social cognition and action. Brain Sci. 13, 1435. (doi:10.3390/brainsci13101435)
  • 59. Brewer B. 1992. Self-location and agency. Mind 101, 17–34. (doi:10.1093/mind/101.401.17)
  • 60. Horridge GA. 1986. A theory of insect vision: velocity parallax. Proc. R. Soc. Lond. B 229, 13–27. (doi:10.1098/rspb.1986.0071)
  • 61. Eckert MP, Zeil J. 2001. Towards an ecology of motion vision. In Motion vision: computational, neural, and ecological constraints (eds Zanker JM, Zeil J), pp. 333–369. Berlin, Heidelberg: Springer. (doi:10.1007/978-3-642-56550-2_18)
  • 62. Alsmith A. 2020. The structure of egocentric space. In The world at our fingertips: a multidisciplinary exploration of peripersonal space (eds de Vignemont F, Serino A, Wong HY, Farnè A), pp. 3–16. Oxford, UK: Oxford University Press. (doi:10.1093/oso/9780198851738.003.0001)
  • 63. Esch HE, Burns JE. 1995. Honeybees use optic flow to measure the distance of a food source. Naturwissenschaften 82, 38–40. (doi:10.1007/BF01167870)
  • 64. Ruiz C, Theobald JC. 2020. Ventral motion parallax enhances fruit fly steering to visual sideslip. Biol. Lett. 16, 20200046. (doi:10.1098/rsbl.2020.0046)
  • 65. Stewart FJ, Kinoshita M, Arikawa K. 2015. The roles of visual parallax and edge attraction in the foraging behaviour of the butterfly Papilio xuthus. J. Exp. Biol. 218, 1725–1732. (doi:10.1242/jeb.115063)
  • 66. Srinivasan MV, Lehrer M, Zhang SW, Horridge GA. 1989. How honeybees measure their distance from objects of unknown size. J. Comp. Physiol. A 165, 605–613.
  • 67. Lewis D. 1986. Attitudes de dicto and de se. In Philosophical papers, vol. 1 (ed. Lewis D), pp. 133–159. New York, NY: Oxford University Press.
  • 68. Schellenberg S. 2016. De se content and de hinc content. Analysis 76, 334–345. (doi:10.1093/analys/anw019)
  • 69. Krashes MJ, DasGupta S, Vreede A, White B, Armstrong JD, Waddell S. 2009. A neural circuit mechanism integrating motivational state with memory expression in Drosophila. Cell 139, 416–427. (doi:10.1016/j.cell.2009.08.035)
  • 70. Burke CJ, et al. 2012. Layered reward signalling through octopamine and dopamine in Drosophila. Nature 492, 433–437. (doi:10.1038/nature11614)
  • 71. Heisenberg M. 2003. Mushroom body memoir: from maps to models. Nat. Rev. Neurosci. 4, 266–275. (doi:10.1038/nrn1074)
  • 72. Perry CJ, Barron AB. 2013. Neural mechanisms of reward in insects. Annu. Rev. Entomol. 58, 543–562. (doi:10.1146/annurev-ento-120811-153631)
  • 73. Goulard R, Buehlmann C, Niven JE, Graham P, Webb B. 2021. A unified mechanism for innate and learned visual landmark guidance in the insect central complex. PLoS Comput. Biol. 17, e1009383. (doi:10.1371/journal.pcbi.1009383)
  • 74. Tsao CH, Chen CC, Lin CH, Yang HY, Lin S. 2018. Drosophila mushroom bodies integrate hunger and satiety signals to control innate food-seeking behavior. eLife 7, e35264. (doi:10.7554/elife.35264)
  • 75. Weaver KJ, Raju S, Rucker RA, Chakraborty T, Holt RA, Pletcher SD. 2023. Behavioral dissection of hunger states in Drosophila. eLife 12, e84537. (doi:10.7554/elife.84537)
  • 76. Schwaerzel M, Monastirioti M, Scholz H, Friggi-Grelin F, Birman S, Heisenberg M. 2003. Dopamine and octopamine differentiate between aversive and appetitive olfactory memories in Drosophila. J. Neurosci. 23, 10495–10502. (doi:10.1523/JNEUROSCI.23-33-10495.2003)
  • 77. Szyszka P, Galkin A, Menzel R. 2008. Associative and non-associative plasticity in Kenyon cells of the honeybee mushroom body. Front. Syst. Neurosci. 2, 3. (doi:10.3389/neuro.06.003.2008)
  • 78. Strube-Bloss MF, Nawrot MP, Menzel R. 2011. Mushroom body output neurons encode odor–reward associations. J. Neurosci. 31, 3129–3140. (doi:10.1523/jneurosci.2583-10.2011)
  • 79. Wiederman SD, Fabian JM, Dunbier JR, O’Carroll DC. 2017. A predictive focus of gain modulation encodes target trajectories in insect vision. eLife 6, e26478. (doi:10.7554/elife.26478)
  • 80. Perry CJ, Chittka L. 2019. How foresight might support the behavioral flexibility of arthropods. Curr. Opin. Neurobiol. 54, 171–177. (doi:10.1016/j.conb.2018.10.014)
  • 81. Marr D. 1982. Vision. San Francisco, CA: WH Freeman.
  • 82. McLaughlin BP. 2006. Is role-functionalism committed to epiphenomenalism? J. Conscious. Stud. 13, 39–66.
  • 83. Strausfeld NJ. 2012. Arthropod brains: evolution, functional elegance, and historical significance. Cambridge, MA: Belknap Press.
  • 84. Merker B. 2013. Body and world as phenomenal contents of the brain’s reality model. In The unity of mind, brain and world: current perspectives on a science of consciousness (eds Pereira JA, Lehmann D), pp. 7–42. Cambridge, UK: Cambridge University Press. (doi:10.1017/CBO9781139207065.002)
  • 85. Suzuki M, Pennartz CMA, Aru J. 2023. How deep is the brain? The shallow brain hypothesis. Nat. Rev. Neurosci. 24, 778–791. (doi:10.1038/s41583-023-00756-z)
  • 86. Cisek P. 2021. Evolution of behavioural control from chordates to primates. Phil. Trans. R. Soc. B 377, 20200522. (doi:10.1098/rstb.2020.0522)
  • 87. Mnih V, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 529–533. (doi:10.1038/nature14236)
  • 88. Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D. 2015. Continuous control with deep reinforcement learning. arXiv. (doi:10.48550/arXiv.1509.02971)
  • 89. Zalucki O, Brown DJ, Key B. 2023. What if worms were sentient? Insights into subjective experience from the Caenorhabditis elegans connectome. Biol. Philos. 38, 34. (doi:10.1007/s10539-023-09924-y)
  • 90. Butlin P, et al. 2023. Consciousness in artificial intelligence: insights from the science of consciousness. arXiv. (doi:10.48550/arXiv.2308.08708)
  • 91. Chalmers D. 1996. The conscious mind: in search of a fundamental theory. New York, NY: Oxford University Press.
  • 92. Hilbert DR. 1987. Color and color perception: a study in anthropocentric realism. Stanford, CA: Center for the Study of Language and Information.
  • 93. Armstrong DM. 1999. The mind-body problem: an opinionated introduction. Abingdon, UK: Routledge.


