Abstract
Complex brains evolved in order to comprehend and interact with complex environments in the real world. Despite significant progress in our understanding of perceptual representations in the brain, our understanding of how the brain carries out higher level processing remains largely superficial. This disconnect is understandable, since the direct mapping of sensory inputs to perceptual states is readily observed, while mappings between (unknown) stages of processing and intermediate neural states are not. We argue that testing theories of higher level neural processing on robots in the real world offers a clear path forward, since (1) the complexity of the neural robotic controllers can be staged as necessary, avoiding the almost intractable complexity apparent in even the simplest current living nervous systems; (2) robotic controller states are fully observable, avoiding the enormous technical challenge of recording from complete intact brains; and (3) unlike computational modelling, the real world can stand for itself when using robots, avoiding the computational intractability of simulating the world at an arbitrary level of detail. We suggest that embracing the complex and often unpredictable closed‐loop interactions between robotic neuro‐controllers and the physical world will bring about deeper understanding of the role of complex brain function in the high‐level processing of information and the control of behaviour.

Abbreviations
- AEC: active efficient encoding
- CPG: central pattern generator
- HD: head direction
- STDP: spike timing‐dependent plasticity
Introduction
Despite decades of progress in neuroscience, it remains an enigma how the brain performs what is fundamentally its primary function of turning sensory percepts into coherent behaviour (Mante et al. 2013; Marcus et al. 2014). How do neurophysiological processes account for the adaptability, flexibility and consistency of behaviour through time? Even in C. elegans, which possesses the simplest nervous system known and in which all 302 neurons have been individually identified, we have only incomplete ideas of how each of those neurons affects the others to drive behaviour, and therefore of how its nervous system functions overall (Schafer, 2005). In the mammalian brain, we have theories and general notions regarding how information may be represented at the sensorimotor periphery (Sanes & Donoghue, 2000; Ivry & Spencer, 2004; Olshausen & Field, 2004; Jazayeri & Movshon, 2006) and during working memory and decision tasks (Platt & Glimcher, 1999; Ivry & Spencer, 2004; Mante et al. 2013). However, the ways in which sensory percepts are integrated with internal state, how plans for future actions are made, and how those actions are coordinated through time to result in goal‐oriented behaviour remain largely as vexing and mysterious to us today as they were decades ago (Marcus et al. 2014).
Significant parallels can be drawn between the problems that must be solved by nervous systems and those that are faced by roboticists in the engineering of robotic systems. In fact, robotics has a significant history of taking inspiration from, and providing inspiration for, biology (Webb, 2000). Bio‐inspired robots are those for which inspiration is drawn from biology in order to solve an engineering challenge, without necessarily faithfully incorporating all (or even any) biological constraints. Early examples include using reinforcement learning in a quadruped robot that autonomously learns how to walk (Kimura et al. 2001), and using principles of insect vision to sense distances and move through cluttered environments (Srinivasan et al. 1999). Biomimetic robots, in contrast, are those that are constructed specifically to model biological structures and processes, with the aim of improving our understanding of the embodied biological principles (sometimes with a secondary aim of advancing the state‐of‐the‐art in robotics). Early examples include examination of quadruped gaits (Raibert, 1990), and reconstruction of honeybee landing trajectories using a robotic gantry (Srinivasan et al. 2000) (see Fig. 1).
Figure 1. Using a robotic gantry to reconstruct honeybee landing trajectories.

Reproduced with permission from Srinivasan et al. (2000).
In the following sections we focus on biomimetics, and begin by summarising the kinds of biomimetic robots that have been constructed, the range of physiological issues they have already tackled, and some of the biological principles that they have helped to clarify. Interestingly, while these biomimetic robots have contributed to neuroscientific progress, a large divide remains between their functional capabilities and the capabilities of robots that, while being bio‐inspired, do not place great emphasis on modelling biology with high fidelity. This divide arises due to the requirement for ‘engineered’ robots (even those that use bio‐inspiration) to perform high‐level processing such as planning paths to target locations, for which we currently have complete algorithmic, but only incomplete neural solutions. Subsequently, in the section ‘What we don't know about high level neural processing’, we identify the large gaps in our understanding of how neural activity could implement cognitive functions such as planning and goal‐directed action. Fittingly, these gaps correspond closely to the typically poor capacity of our engineered robotics systems to appropriately operate in complex real‐world settings. The impetus to drive further advances in this field therefore comes from both neuroscience and robotics engineering, and we discuss some of the most interesting and most promising recent developments.
Neural function is defined by a combination of physical neural structure and its ongoing dynamical activity. Neural dynamics, particularly in higher brain regions, are known to be complex; complexity is defined as being at the ‘edge of chaos’ where dynamics are not uniform and also not entirely chaotic, but instead reside in the thin transition zone between the two, alternating unpredictably between periods of transient stability and apparent randomness. Complex neural activity is being thoroughly investigated from a nonlinear dynamical systems perspective (Chialvo, 2010), but how complex dynamics can implement neural function and control ongoing behaviour is a considerably more difficult problem to study. In the final section, ‘A way forward: embracing complexity’, we conclude that artificial neural systems implemented on robotic hardware present a real opportunity to investigate and understand not just how the brain represents sensorimotor and spatial information, but how it can utilise complexity to transform such information into goal‐directed action. Support for this argument comes from the strong synergy between, on the one hand, the complex physical world and, on the other, complex brains that have evolved to represent, process and interact with information from the world in a meaningful way (Koch & Laurent, 1999; Chialvo, 2010).
Current state‐of‐the‐art in biomimetic robotics
Since Webb's seminal review (Webb, 2000), biologist–roboticist collaborations have continued to clarify and extend our knowledge of principles of sensorimotor mechanics and neural representations. Examples abound (too many to cover in this short review, but see Floreano et al. 2014); however, below we cover a number that are relevant to our theme. These studies show that, in essence, the world is the best model of itself, and that in many cases attempting to simplify or control for the complexities of the world actually changes the fundamental issues that are involved. In subsequent sections, we suggest that this very same complexity principle also applies to brains and how they interact with the world through the body.
We begin with examples from fluid dynamics and its effects on the perception of odours. Because the physics of turbulent flows in fluids is still not fully understood, the dispersion of chemical plumes through fluids like air and water cannot be modelled with complete accuracy, and the models that we do have (which provide approximate solutions) are computationally intensive and prohibitively slow. To fully understand how insects and other animals track odours to their sources at real world scales using computational modelling alone is therefore difficult, and the validity of the results is uncertain. In addition to gaining an understanding of how animals accomplish odour tracking, the reliable tracking of odours and other chemicals in the air has significant potential for the recovery of dangerous substances, as well as for saving lives in search‐and‐rescue operations. To this end, instead of trying to create a model of how odour plumes disperse through the world, an often easier and always more accurate solution is to use the world itself. Many studies have therefore examined the tracking of plumes using robots (for a review see Kowadlo & Russell, 2008). These studies have characterised the conditions under which different odour‐tracking strategies work optimally, and related these strategies to animal behaviours under these conditions (Vergassola et al. 2007). The studies have shown, for example, that the sensor capabilities dictate, to a large extent, the best tracking strategies, and therefore are also a strong determinant of animal behaviour. The utility of using the real world in place of a model is a theme to which we will frequently return.
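One classic odour‐tracking strategy from this literature, observed in moths and implemented on several plume‐tracking robots, is 'cast and surge': surge upwind while the plume is detected, and cast from side to side across the wind to reacquire it when it is lost. A minimal sketch of the decision rule (the state names and step structure are our own illustration, not taken from any of the cited studies):

```python
def track_plume(odour_detected, heading):
    """One step of a moth-style cast-and-surge strategy: surge upwind
    while the plume is detected, cast crosswind when it is lost."""
    if odour_detected:
        return "upwind"  # surge towards the source
    # Plume lost: cast side to side across the wind to reacquire it
    return "crosswind-left" if heading == "crosswind-right" else "crosswind-right"

# A robot losing and then reacquiring the plume alternates casting directions
steps = []
heading = "upwind"
for detected in (True, False, False, True):
    heading = track_plume(detected, heading)
    steps.append(heading)
print(steps)  # ['upwind', 'crosswind-right', 'crosswind-left', 'upwind']
```

Even a rule this simple only succeeds or fails depending on real plume intermittency, which is exactly the property that is hard to simulate and free to measure on a robot.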
Robots allow us to appreciate the capabilities of sensors for which we have no intuitive understanding. The whiskers of rodents and antennae of insects are exceptionally sensitive and provide a wealth of information to the animal. Simulations of whisker dynamics can be helpful, but raise the familiar problem of only being able to include in a simulation that which is already known. Robots constructed to use whiskers for sensing have led to new analogies and new understanding of whisking capability (Prescott et al. 2009) (see Fig. 2 A); one study likens the information that can be gleaned from whisking to an ‘optical flow’ – this flow is clear enough to estimate local curvature of objects even in the presence of real‐world friction and slip (Schroeder & Hartmann, 2012). Another study shows that not only can the distance to objects be determined by insect‐like antennae (using the contact‐induced vibration frequency) but the object material can be classified by the subsequent damping profile of the vibration (Patanè et al. 2012). Such deep insights are significantly more difficult to gain through either simulation or purely theoretical considerations, due to the uncertainty as to the exact mechanical properties of the colliding materials, and also of the levels of noise vs. the size of the measurable effects in the real world.
Figure 2. Using robots to study real‐world sensory and motor capabilities.

A, a robot with whiskers called Scratchbot (reproduced from Prescott et al. 2009). B, a quadruped robot [reproduced from Wikipedia (Creative Commons license)].
In the domain of motor control, we have long been aware of the role of central pattern generators (CPGs) in creating repetitive movements such as are used for locomotion. We are also well aware of the tendency of oscillators like CPGs to entrain each other and synchronise, depending on the relative strengths of the internal oscillatory drive vs. the external coupling (for CPGs, at least some of the external coupling occurs through the sensorimotor loop that is formed with the world). However, a robot study has shown the unexpected full range of collective behaviours made possible by a distributed group of CPGs combined with corrective reflexes for controlling a quadruped robot (Kimura et al. 2007). Despite having no centralised control, the robot walks effectively over difficult natural outdoor terrain. In this study, the advantages of tuning the neural dynamics to the temporal characteristics of the body–world interaction are emphasised (Fig. 2 B depicts another quadruped robot). In a fascinating conceptual shift, an inherently chaotic neural CPG controller has been used to generate flexible, adaptive patterns of behaviour in a robotic hexapod walker (Steingrube et al. 2010). In this study, unstable periodic orbits of the neural controller circuit were stabilised by sensory input, where orbits of different periods resulted in the generation of different gaits. The chaotic ground state of the controller was applied to extricate the robot from difficult circumstances such as the trapping of a leg in a hole; without chaos, such self‐rescue was not possible. This study introduced the concept of chaos control for sensorimotor systems, and showed that chaotic motor patterns can be exploited for real‐world benefit. The insights that have been obtained in these studies are arguably impossible without the incorporation of the real‐world interaction afforded by robotics.
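The tendency of CPG‐like oscillators to entrain one another, depending on the balance between their intrinsic frequencies and the coupling strength, can be sketched with a minimal Kuramoto‐style phase model. The frequencies, coupling value and network size below are illustrative only and are not drawn from the cited robot studies:

```python
import math

def simulate(k, n=4, steps=10000, dt=0.001):
    """Simulate n phase oscillators with different natural frequencies,
    coupled all-to-all with strength k (a Kuramoto-style model)."""
    freqs = (1.0, 1.13, 1.31, 1.47)          # Hz; deliberately irregular
    omega = [2 * math.pi * f for f in freqs[:n]]
    theta = [0.5 * i for i in range(n)]      # staggered initial phases
    for _ in range(steps):
        dtheta = [
            (omega[i] + (k / n) * sum(math.sin(theta[j] - theta[i])
                                      for j in range(n))) * dt
            for i in range(n)
        ]
        theta = [t + d for t, d in zip(theta, dtheta)]
    # Order parameter r: 1 = fully synchronised, near 0 = incoherent
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

print(f"uncoupled r = {simulate(0.0):.2f}")
print(f"coupled   r = {simulate(4.0):.2f}")
```

With zero coupling the oscillators drift apart; with sufficient coupling they phase‐lock despite their different natural frequencies. In a walking robot, part of this coupling is carried not by neural connections but by the mechanics of the body and the ground.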
Sensorimotor couplings can be useful in other, less obvious ways; for example, they can give rise to new opportunities for, and robustness of, sensory learning. We have shown how stereotypical head movements made by rat pups can calibrate their head direction (HD) systems, and tested the proposed synaptic learning mechanisms by implementing a spiking‐neuron HD network on a mobile robot (Stratton et al. 2011). In another study, it has been shown that the efficient coding principle, combined with reinforcement learning, allows a robot to learn to control two entirely independent camera ‘eyes’ as one unified system (Zhao et al. 2012). Efficient coding is a hypothesis stating that sensory encoding works to minimise the number of spikes required to transmit a given amount of information (or equivalently, to maximise the information transmitted by a given number of spikes) (Barlow, 1961), and has been shown to apply in at least several sensory modalities (Olshausen & Field, 2004). The most efficient encoding of a signal depends upon the signal statistics, which in turn depend to a large extent on movement of the sensory apparatus through the world. In these elegant studies investigating efficient coding, dual cameras learn to proficiently track targets together (Zhang et al. 2014) and recalibrate after physical misalignment (Lonini et al. 2013). Notably, there is no explicit drive for self‐calibration; the only innate drives for both the sensory and reward systems are to learn an efficient encoding of the visual stimuli, and because of information redundancy, efficiency is improved when the cameras move to align the images. Due to the reliance of the formation of the efficient code on action in the world, the authors call this principle ‘active efficient encoding’ (AEC). 
AEC shows that sensorimotor coupling not only supports learning, but may actually embody a basic general principle by which the nervous system functions in the world – which is to actively reward behaviour that improves sensory representations. Moreover, these studies show that this principle can also generate robustness in the face of unexpected sensory perturbations, through reward‐driven updates of the policy that controls the movement. The studies demonstrate the potentially tight dependence of not just sensory input, but sensory representations, on action in the world. Ultimately, the studies may represent one of the first times that an entirely new neural principle (AEC) has been established, rather than just confirmed or extended, through robotics.
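The intuition behind AEC can be illustrated with a toy calculation: when the two eyes are aligned, their inputs are redundant, so a joint code for both costs little more than a code for one. The four‐symbol 'feature' alphabet and the 95% overlap figure below are invented for illustration; only the redundancy argument is taken from the cited studies:

```python
import math
import random

def entropy_bits(samples):
    """Empirical Shannon entropy (bits per symbol) of a sequence."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
n = 100_000
left = [random.randrange(4) for _ in range(n)]  # 4 possible local image features

# Aligned 'eyes': the right camera usually sees the same feature as the left
right_aligned = [f if random.random() < 0.95 else random.randrange(4) for f in left]
# Misaligned 'eyes': the right camera sees an unrelated feature
right_misaligned = [random.randrange(4) for _ in range(n)]

# A joint code for both eyes is cheaper when the images are redundant
joint_aligned = entropy_bits(list(zip(left, right_aligned)))
joint_misaligned = entropy_bits(list(zip(left, right_misaligned)))
print(f"aligned:    {joint_aligned:.2f} bits per joint symbol")    # ≈ 2.3
print(f"misaligned: {joint_misaligned:.2f} bits per joint symbol") # ≈ 4.0
```

A system rewarded for coding efficiency therefore has an incentive to move its cameras into alignment, which is precisely the behaviour that emerged in the AEC studies without any explicit calibration drive.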
What we don't know about high level neural processing
In the previous section, a common thread amongst the studies was that the usefulness of the robot was attributable to the sensory and/or motor representations that were related directly to the perception of, or the interaction with, the real world. However, brains are also adept at the maintenance of an ongoing state and the generation of adaptive, flexible, coherent behaviour through time. Whilst we have some understanding of how the brain represents information at the sensorimotor periphery, we have very little idea how shifting, dynamic patterns of neural activity can proficiently process information, coordinate diverse brain regions, and capably maintain this behavioural coherence. Examples of these ongoing dynamics include the brain's ability to store transient activity patterns for extended periods (working memory), the maintenance of goals over long time frames and over changing perceptual environments, the rapid reconfiguration of neural circuits to process different elements in a stream of perceptual inputs (attention), and the ability to multitask and switch at will between tasks with different goals and subgoals. Even rodents have the ability to actively plan paths to specific locations, and deduce the existence of shortcuts, through dynamic environments (Alvernhe et al. 2008). The importance of these internally generated dynamic states is emphasised by the fact that, in the human brain, it is estimated that perceptual processing accounts for less than 5% of its energy consumption, with the remaining 95% or more spent on the creation and subsequent processing of entirely self‐generated activity (Raichle & Mintun, 2006).
The dynamical correlates of flexible behaviour in the brain are seen in the rapid changes in functional connectivity between brain regions observed using functional magnetic resonance imaging (fMRI), electro‐encephalography (EEG) and other activity‐imaging methodologies. Indeed it is now widely accepted that behavioural flexibility is due entirely to the brain's ability to self‐generate complex patterns of dynamical activity (Buzsáki & Draguhn, 2004; Chialvo, 2010; Tognoli & Kelso, 2014). From this perspective, understanding complex neural dynamics is the key to understanding high level brain function.
Accordingly, recent research is extending the standard idea of simply modelling biology using robotics, into the interfacing of brains with machines. Many of these studies are aimed at understanding neural representations, including neural dynamics, sufficiently well to control actuated prosthetic devices. In Kositsky et al. (2009), for example, the dynamical degrees of freedom of a lamprey brainstem are assessed by setting up closed‐loop (bi‐directional) interactions with an external device. The authors note that ‘the signals generated by a population of neurons are expected to depend in a complex way upon poorly understood neural dynamics’. Similarly, in Kocaturk et al. (2015), real cortical motor neurons are interfaced in real time with simulated neurons to implement a neural controller trained using reward‐modulated spike timing‐dependent plasticity (STDP). These studies are motivated by the need not only to understand neural dynamics, but to gain control of them for motion control.
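The core idea of reward‐modulated STDP is that pre/post spike timing builds an eligibility trace at each synapse, and a reward signal then decides whether that eligibility is converted into strengthening or weakening. A minimal pairwise sketch (time constants and amplitudes are generic textbook values, not those of Kocaturk et al.):

```python
import math

def stdp_eligibility(pre_spikes, post_spikes, a_plus=1.0, a_minus=1.0,
                     tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP eligibility: sum an exponential kernel over all
    pre/post spike pairs (times in ms). Pre-before-post pairs contribute
    positively, post-before-pre pairs negatively."""
    e = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                e += a_plus * math.exp(-dt / tau_plus)
            elif dt < 0:
                e -= a_minus * math.exp(dt / tau_minus)
    return e

def reward_modulated_update(w, pre_spikes, post_spikes, reward, lr=0.01):
    """Weight change = learning rate x reward x eligibility, so the same
    spike pattern strengthens the synapse only when it is rewarded."""
    return w + lr * reward * stdp_eligibility(pre_spikes, post_spikes)

w = 0.5
# Causal firing (pre at 10 ms, post at 15 ms) paired with positive reward
w_up = reward_modulated_update(w, [10.0], [15.0], reward=+1.0)
# The same causal firing paired with negative reward weakens the synapse
w_down = reward_modulated_update(w, [10.0], [15.0], reward=-1.0)
print(w_up > w > w_down)  # True
```

Because the reward term gates an otherwise unsupervised timing rule, the same circuitry can be steered towards behaviourally useful dynamics, which is what makes this rule attractive for closed‐loop prosthetic control.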
In animals, particularly rodents, a well‐studied high‐level neural function is that of path planning. In path planning experiments, animals typically must find their way through mazes or around open arenas to reach rewarding locations. Here, we use path planning as an exemplar high‐level cognitive function because (1) it is a well‐defined and well‐understood problem from a computational perspective; (2) we know, both in principle and from behavioural studies, that animals such as rodents perform path planning; (3) we understand neural representations of space in mammals well enough to observe what may be the neural correlates of path planning as they occur in physiological recordings from the hippocampus of rodents during navigational experiments; and (4) during path planning, complex patterns of neural activity are often observed. In mammals, neurons known as grid cells are found in the entorhinal cortex and fire in repeated hexagonal tessellations of space (Hafting et al. 2005). A recent study has shown that robots equipped with grid cell representations could navigate successfully through environments that lacked uniquely identifying cues (Milford et al. 2010), suggesting that these cells may assist in cases of real world positional uncertainty. Other neurons, found in the hippocampus and known as place cells, exhibit firing patterns that correlate with unique locations in the environment (O'Keefe & Dostrovsky, 1971). Each of these neurons fires as the rodent passes through the specific place that it represents, known as its place field. Place cell representations have been shown to emerge in studies using robots that learned to solve navigation tasks (Krichmar et al. 2005 a,b). These place representations were not directly configured, but instead emerged due to constraints of, and interactions between, the various components of the neural controllers and the robots’ sensorimotor associations.
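The hexagonal firing pattern of an idealised grid cell is commonly modelled as the rectified sum of three plane waves whose wave vectors are 60 degrees apart. A sketch of that standard construction (the spacing and phase values are arbitrary illustrative parameters):

```python
import math

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealised grid-cell firing rate at position (x, y): the rectified
    mean of three cosine plane waves with wave vectors 60 degrees apart,
    which produces firing peaks on a hexagonal lattice."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wavenumber for the chosen spacing
    total = 0.0
    for angle in (0.0, math.pi / 3, 2 * math.pi / 3):
        kx, ky = k * math.cos(angle), k * math.sin(angle)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return max(0.0, total / 3.0)  # rectified; peak rate normalised to 1

# The rate peaks at the phase offset and repeats periodically across space
print(round(grid_rate(0.0, 0.0), 2))  # 1.0 (on a firing peak)
```

Shifting `phase` slides the whole lattice, and changing `spacing` rescales it; populations of such cells with different spacings and phases can, in principle, encode position uniquely over large distances, which is the property exploited in the robot navigation study above.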
What is interesting in the context of complex dynamics and navigation planning is that place cells fire not only when a rodent occupies a specific place. They also sometimes fire when a rodent is at choice points in navigation tasks – in which case they fire in sequences representing paths (contiguous places) that emanate from those points (Johnson & Redish, 2007) (see Fig. 3 A). This process is known as preplay. Somewhat astoundingly, place cells have been seen to fire in novel sequences that rodents have never travelled but that represent new routes between places that they know (Pfeiffer & Foster, 2013) (see Fig. 3 B). These cases of preplay and novel sequence generation are predictive of the rodents’ future behaviours, and are therefore readily interpretable as representing upcoming actions that the rodent could take – that is, of the rodent ‘thinking ahead’ or planning future routes.
Figure 3. Episodes of preplay recorded from rat hippocampus.

A, the preplay process is testing alternative paths in a T‐maze (reproduced with permission from Johnson & Redish, 2007). B, preplay is searching for a path to a goal location in an open foraging arena (reproduced with permission from Pfeiffer & Foster, 2013).
Two pertinent questions raised by these discoveries are (1) how are these future‐probing sequences activated and controlled (the spatial problem)? And (2) how are they initiated at the right times (the temporal problem)? Several computational models explain how recurrent connections in the hippocampus could support preplay sequences (Azizi et al. 2013; Romani & Tsodyks, 2015) and therefore at least partially answer question (1). However, since these sequences occur only when rodents are goal seeking, they are not simply stimulus triggered, but instead appear in the context of long‐term, possibly top‐down influences from other brain regions. Some models do try to incorporate more of the brain than just the hippocampus (Fleischer et al. 2007; Chersi & Pezzulo, 2012; Erdem & Hasselmo, 2012), but these models typically incorporate simple associations or inputs which trigger specific parts of the network at specific times or places. In these models, such triggering always occurs regardless of context (and therefore loses the context‐sensitivity of preplay), or else it relies on externally generated signals (and therefore invokes the ‘homunculus’ problem and leaves question (2) incompletely explained). These studies, and many others that use similar methodologies, are therefore valuable for indicating, in general, the potential roles that functionally and anatomically distinct brain regions could play in controlling adaptive behaviour, but cannot speak to the specifics of brain dynamics, or the precise mechanisms by which information is encoded, held, shifted and transformed throughout the brain. The burning question is this: How does the brain function as a single, integrated, homunculus‐free system where its own instantaneous internal dynamics, and nothing more, organises coherent behaviour from moment to moment? This is the question that we believe robotics is well placed to help answer. 
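The recurrent‐connectivity answer to question (1) can be caricatured very simply: if place cells along a learned route are linked by asymmetric excitatory weights, a brief cue at the current location regenerates the whole sequence internally, with no further input. The chain, weights and threshold below are our own minimal illustration, not any of the cited models:

```python
def preplay(n=10, steps=9, threshold=0.5):
    """A chain of rate neurons with asymmetric feed-forward weights
    ('place cell' i excites place cell i+1). A transient cue to the
    first cell regenerates the entire stored sequence internally."""
    w_forward = 1.0
    activity = [0.0] * n
    activity[0] = 1.0  # brief cue at the animal's current location
    visited = [0]
    for _ in range(steps):
        new = [0.0] * n
        for i in range(n - 1):
            if activity[i] > threshold:
                new[i + 1] = w_forward * activity[i]
        activity = new
        visited.extend(i for i, a in enumerate(activity) if a > threshold)
    return visited

print(preplay())  # the stored path is replayed in order: [0, 1, ..., 9]
```

What this caricature conspicuously lacks is exactly what question (2) demands: a context‐sensitive mechanism that launches the sequence only when the animal is goal seeking, rather than whenever the first cell happens to fire.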
Robotics may in fact be invaluable for this task, as we discuss next, due to our ability to construct and then fully observe complete neurorobotic systems that inherit much of their computational complexity from closed‐loop interactions with the complex world.
A way forward – embracing complexity
Much of the natural world, biology included, can be interpreted according to the rules of complex systems (Bak, 1996). Complex systems theory has shown that when elements of a system interact with simple non‐linear rules repeated over space and through time, non‐trivial patterns often emerge from those interactions. These patterns are generally not predictable, either quantitatively or qualitatively, from the forms of the underlying rules. The complex patterns of activity that we observe in the brain are collective, emergent phenomena that arise from the nonlinear interactions of large numbers of neurons. Neural complexity is evident in both the physical structure of the brain (characterised by massive recurrent connectivity, and an uncountable number of nested structural loops between perceptual input and behavioural output), and its dynamical activity (which is often turbulent and itinerant).
Two of the key characteristics of complex systems are their sensitivity to initial conditions and their unpredictability – complex systems cannot be ‘solved’ in a traditional analytical sense. Adding and removing components, or even slightly changing the way existing components interact, can have large, even catastrophic effects on the overall behaviour of the system. Brain components – synapses, neurons, local microcircuits, macro‐regions, entire brains, the brain embedded in the body and, indeed, multiple brains and bodies interacting through and with the environment – therefore cannot be easily understood in isolation.
Our current reductionist approaches, where typically we observe local brain activity in controlled experimental settings, are undoubtedly powerful and valuable techniques for understanding brain function that have spearheaded impressive advances in neuroscience. These approaches are clearly critical for gaining detailed knowledge of specific brain circuits; however, they also present certain problems, not the least of which is the enormous technical difficulty of reliably recording neural activity en masse. Moreover, real brains are already so complex that attempting to tease apart experimental effects from intrinsic dynamics has always been, and will likely continue for some time to be, a major challenge. Until recently, trial‐to‐trial variability has been effectively dismissed by averaging the results of many trials; recently though, there has been an increasing recognition that averaging away the variability may actually be concealing the very fundamental processes that brains are using to manipulate information and control behaviour.
Why have animals evolved to have complex brains? The answer is simultaneously obvious and insightful. Animals have complex brains because the world is complex (Chialvo, 2010). If the world were either entirely uniform or entirely random, then having a complex brain would confer no advantage – there would be nothing to learn and no reason to adapt. Only in a complex world, where events are predictable at some times but not at others, and where the statistics of the environment are non‐stationary, is a complex adaptive brain of any value. Trying to understand, outside of the context of the complex physical world, how neural dynamics controls behaviour disregards the reason for the existence of the brain in the first place. The reductionist approach is therefore poorly suited to understanding the function of the whole brain, which has been sculpted by evolution and plasticity to function effectively in the real world and which, as a system dominated by complex non‐linear interactions, has collective properties that extend beyond the sum of the properties of its components.
How could robots benefit our understanding of complex brains? The potential advantages are numerous. (1) Robots’ states are fully observable, avoiding the technical challenges of recording from real brains. (2) Experiments with robots can be repeatable from similar initial conditions each time (notwithstanding environmental changes), eliminating many of the confounding factors. When sources of randomness (such as sensorimotor noise) cannot be eliminated, they are at least constrained, and can be characterised and understood under conditions that are transparent and easier to control than in animal experiments. (3) Complexity in robots’ neural controllers can be staged as desired, allowing us to add and remove components and observe the resulting effects, rather than being constrained to the current set of living creatures that for the most part are already unfathomably complex. (4) Finally, robots are the ‘whole iguana’ (Dennett, 1978), which is vital because the complex brain/body/environment is irreducible, behaviour and neural dynamics are intricately linked through the body, and any models (robotic, computational, or animal) that try to isolate behaviour or dynamics in components or subsystems are missing the collective picture (see Fig. 4). Robot studies can never replace experiments with real animals (nor even high‐level cognitive‐style models – at least not yet), but do offer an outstanding complement to them, with abundant opportunities for testing some hypotheses, developing new ones, and providing vital, fundamental information on the workings of complex brains in complex worlds.
Figure 4. Through interactions with the complex world, robots can aid our understanding of the functioning of complex brains.

The ‘simple’ task of grasping a piece of fruit involves multiple neural systems (vision, attention, representation of space, sensorimotor transformations, motor planning and motor execution) and multiple feedback loops through the world (visual, tactile and proprioceptive). Additionally, the complexity of the real world ensures that no two pieces of fruit are the same shape or can be grasped in exactly the same way. Robotic systems can help us understand the intricacies involved in seemingly simple tasks: how multiple neural subsystems can effectively perform their individual functions, and how they must interact in order to accomplish high level tasks over time (from Chris Lehnert).
Additional information
Competing interests
The authors declare they have no competing interests.
Author contributions
All authors have approved the final version of the manuscript and agree to be accountable for all aspects of the work. All persons designated as authors qualify for authorship, and all those who qualify for authorship are listed.
Funding
This work was supported by the Queensland Brain Institute and the UQ Centre for Clinical Research for P.S., an Australian Research Council Future Fellowship FT140101229 awarded to M.M. and an Office of Naval Research ONR MURI N00014‐10‐1‐0936 and Silvio O. Conte Centre Grant P50 NIMH MH094263 to M.E.H.
Biographies
Peter Stratton obtained his Ph.D. in computer science and artificial intelligence from the University of Queensland in Australia. He is currently a Research Fellow at the Queensland Brain Institute, where he has been analysing micro‐electrode recording data from patients undergoing electrode implantation for deep brain stimulation for the treatment of brain disorders. One of his main aims is to understand the computational principles which are supported by brain activity.
Michael Hasselmo received the Ph.D. in experimental psychology from Oxford University and a degree with special concentration in Behavioral Neuroscience from Harvard College. He is currently a Professor at Boston University in the Department of Psychological and Brain Science, Director of the Center for Systems Neuroscience and Associate Director of the Centre for Memory and Brain. He conducts research to analyse modulatory effects and oscillatory dynamics in brain slice preparations of cortical structures, extracellular recording of multiple single units in the entorhinal cortex and place cells in the hippocampus, and modelling including network level models and detailed biophysical models.
Michael Milford received his Ph.D. in Electrical Engineering and his Bachelor of Mechanical and Space Engineering from the University of Queensland (UQ), Brisbane, Australia. He is currently an Associate Professor and Australian Research Council Future Fellow at the Queensland University of Technology, Brisbane, Australia, and a Chief Investigator of the Australian Centre of Excellence for Robotic Vision. He was awarded an inaugural Australian Research Council Discovery Early Career Researcher Award in 2012 and became a Microsoft Faculty Fellow in 2013. He conducts interdisciplinary research into navigation across the fields of robotics, neuroscience and computer vision.

This review was presented at the symposium “Spatial Computation: from neural circuits to robot navigation”, which took place at the University of Edinburgh on 11 April 2015.
References
- Alvernhe A, Van Cauter T, Save E & Poucet B (2008). Different CA1 and CA3 representations of novel routes in a shortcut situation. J Neurosci 28, 7324–7333.
- Azizi AH, Wiskott L & Cheng S (2013). A computational model for preplay in the hippocampus. Front Comput Neurosci 7, 161.
- Bak P (1996). How Nature Works: The Science of Self‐organized Criticality. Copernicus, New York.
- Barlow HB (1961). Possible principles underlying the transformations of sensory messages. In Sensory Communication, ed. Rosenblith W, pp. 217–234. MIT Press, Cambridge, MA.
- Buzsáki G & Draguhn A (2004). Neuronal oscillations in cortical networks. Science 304, 1926–1929.
- Chersi F & Pezzulo G (2012). Using hippocampal‐striatal loops for spatial navigation and goal‐directed decision‐making. Cogn Process 13, 125–129.
- Chialvo DR (2010). Emergent complex neural dynamics. Nat Phys 6, 744–750.
- Dennett DC (1978). Why not the whole iguana? Behav Brain Sci 1, 103–104.
- Erdem UM & Hasselmo M (2012). A goal‐directed spatial navigation model using forward trajectory planning based on grid cells. Eur J Neurosci 35, 916–931.
- Fleischer JG, Gally JA, Edelman GM & Krichmar JL (2007). Retrospective and prospective responses arising in a modeled hippocampus during maze navigation by a brain‐based device. Proc Natl Acad Sci USA 104, 3556–3561.
- Floreano D, Ijspeert AJ & Schaal S (2014). Robotics and neuroscience. Curr Biol 24, R910–R920.
- Hafting T, Fyhn M, Molden S, Moser M‐B & Moser EI (2005). Microstructure of a spatial map in the entorhinal cortex. Nature 436, 801–806.
- Ivry RB & Spencer RM (2004). The neural representation of time. Curr Opin Neurobiol 14, 225–232.
- Jazayeri M & Movshon JA (2006). Optimal representation of sensory information by neural populations. Nat Neurosci 9, 690–696.
- Johnson A & Redish AD (2007). Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. J Neurosci 27, 12176–12189.
- Kimura H, Fukuoka Y & Cohen AH (2007). Adaptive dynamic walking of a quadruped robot on natural ground based on biological concepts. Int J Robotics Res 26, 475–490.
- Kimura H, Yamashita T & Kobayashi S (2001). Reinforcement learning of walking behavior for a four‐legged robot. In Proceedings of the 40th IEEE Conference on Decision and Control, pp. 411–416. IEEE, Piscataway, NJ.
- Kocaturk M, Gulcur HO & Canbeyli R (2015). Towards building hybrid biological/in silico neural networks for motor neuroprosthetic control. Front Neurorobot 9, 8.
- Koch C & Laurent G (1999). Complexity and the nervous system. Science 284, 96–98.
- Kositsky M, Chiappalone M, Alford ST & Mussa‐Ivaldi FA (2009). Brain‐machine interactions for assessing the dynamics of neural systems. Front Neurorobot 3, 1.
- Kowadlo G & Russell RA (2008). Robot odor localization: a taxonomy and survey. Int J Rob Res 27, 869–894.
- Krichmar JL, Nitz DA, Gally JA & Edelman GM (2005a). Characterizing functional hippocampal pathways in a brain‐based device as it solves a spatial memory task. Proc Natl Acad Sci USA 102, 2111–2116.
- Krichmar JL, Seth AK, Nitz DA, Fleischer JG & Edelman GM (2005b). Spatial navigation and causal analysis in a brain‐based device modeling cortical‐hippocampal interactions. Neuroinformatics 3, 197–221.
- Lonini L, Forestier S, Teulière C, Zhao Y, Shi BE & Triesch J (2013). Robust active binocular vision through intrinsically motivated learning. Front Neurorobot 7, 20.
- Mante V, Sussillo D, Shenoy KV & Newsome WT (2013). Context‐dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84.
- Marcus G, Marblestone A & Dean T (2014). The atoms of neural computation. Science 346, 551–552.
- Milford MJ, Wiles J & Wyeth GF (2010). Solving navigational uncertainty using grid cells on robots. PLoS Comput Biol 6, e1000995.
- O'Keefe J & Dostrovsky J (1971). The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely‐moving rat. Brain Res 34, 171–175.
- Olshausen BA & Field DJ (2004). Sparse coding of sensory inputs. Curr Opin Neurobiol 14, 481–487.
- Patanè L, Hellbach S, Krause AF, Arena P & Dürr V (2012). An insect‐inspired bionic sensor for tactile localization and material classification with state‐dependent modulation. Front Neurorobot 6, 8.
- Pfeiffer BE & Foster DJ (2013). Hippocampal place‐cell sequences depict future paths to remembered goals. Nature 497, 74–79.
- Platt ML & Glimcher PW (1999). Neural correlates of decision variables in parietal cortex. Nature 400, 233–238.
- Prescott TJ, Pearson MJ, Mitchinson B, Sullivan JCW & Pipe AG (2009). Whisking with robots: from rat vibrissae to biomimetic technology for active touch. IEEE Robot Autom Mag 16, 42–50.
- Raibert MH (1990). Trotting, pacing and bounding by a quadruped robot. J Biomech 23, 79–98.
- Raichle ME & Mintun MA (2006). Brain work and brain imaging. Annu Rev Neurosci 29, 449–476.
- Romani S & Tsodyks M (2015). Short‐term plasticity based network model of place cells dynamics. Hippocampus 25, 94–105.
- Sanes JN & Donoghue JP (2000). Plasticity and primary motor cortex. Annu Rev Neurosci 23, 393–415.
- Schafer WR (2005). Deciphering the neural and molecular mechanisms of C. elegans behavior. Curr Biol 15, R723–R729.
- Schroeder CL & Hartmann MJ (2012). Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”. Front Neurorobot 6, 9.
- Srinivasan MV, Chahl JS, Weber K, Venkatesh S, Nagle MG & Zhang S‐W (1999). Robot navigation inspired by principles of insect vision. Rob Auton Syst 26, 203–216.
- Srinivasan MV, Zhang S‐W, Chahl JS, Barth E & Venkatesh S (2000). How honeybees make grazing landings on flat surfaces. Biol Cybern 83, 171–183.
- Steingrube S, Timme M, Wörgötter F & Manoonpong P (2010). Self‐organized adaptation of a simple neural circuit enables complex robot behaviour. Nat Phys 6, 224–230.
- Stratton P, Milford M, Wyeth G & Wiles J (2011). Using strategic movement to calibrate a neural compass: a spiking network for tracking head direction in rats and robots. PLoS ONE 6, 1–15.
- Tognoli E & Kelso JS (2014). The metastable brain. Neuron 81, 35–48.
- Vergassola M, Villermaux E & Shraiman BI (2007). ‘Infotaxis’ as a strategy for searching without gradients. Nature 445, 406–409.
- Webb B (2000). What does robotics offer animal behaviour? Anim Behav 60, 545–558.
- Zhang C, Zhao Y, Triesch J & Shi BE (2014). Intrinsically motivated learning of visual motion perception and smooth pursuit. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1902–1908. IEEE, Piscataway, NJ.
- Zhao Y, Rothkopf CA, Triesch J & Shi BE (2012). A unified model of the joint development of disparity selectivity and vergence control. In 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), pp. 1–6. IEEE, Piscataway, NJ.
