Author manuscript; available in PMC: 2020 Dec 1.
Published in final edited form as: Neural Netw. 2019 Aug 18;120:158–166. doi: 10.1016/j.neunet.2019.08.007

Neural dynamics of emotion and cognition: From trajectories to underlying neural geometry

Luiz Pessoa 1
PMCID: PMC6899176  NIHMSID: NIHMS1539873  PMID: 31522827

Abstract

How can we study, characterize, and understand the neural underpinnings of cognitive-emotional behaviors as inherently dynamic processes? In the past 50 years, Stephen Grossberg has developed a research program that embraces the themes of dynamics, decentralized computation, emergence, selection and competition, and autonomy. The present paper discusses how these principles can be heeded by experimental scientists to advance the understanding of the brain basis of behavior. It is suggested that a profitable way forward is to focus on investigating the dynamic multivariate structure of brain data. Accordingly, central research problems involve characterizing ‘‘neural trajectories’’ and the associated geometry of the underlying ‘‘neural space.’’ Finally, it is argued that, at a time when the development of neurotechniques has reached a fever pitch, neuroscience needs to redirect its focus and invest comparable energy in the conceptual and theoretical dimensions of its research endeavor. Otherwise we run the risk of being able to measure ‘‘every atom’’ in the brain in a theoretical vacuum.

Keywords: Emotion, Cognition, Dynamics, Trajectories, Manifold

1. Introduction

How can we study, characterize, and understand the neural underpinnings of cognitive-emotional behaviors as inherently dynamic processes?

As previously discussed, I propose eliminating the distinction between emotion and cognition (Pessoa, 2018a). What is emotion? What are its defining characteristics? Are emotions distinct from feelings? Researchers have debated, and in fact agonized over, such questions for a very long time. And the debate continues. For example, nine essays are dedicated to the topic in the latest edition of The Nature of Emotion (Fox, Lapate, Shackman, & Davidson, 2018); and additional suggestions continue appearing (see, Fox, 2018). Such pursuit of the ‘‘essence of emotion’’ appears misguided. What researchers of the mind and brain are interested in, it could be argued, is understanding behaviors. Mind scientists seek to understand the structure of behaviors, their inherent logic. Brain scientists strive to unravel how the two domains, mental and neural, map to one another during behaviors.

The framework described here is influenced by many lines of research and thinking, and strongly so by Steve Grossberg’s research. I was a graduate student at the Department of Cognitive and Neural Systems at Boston University from 1990 to 1995, and worked closely with Steve during the last year of my PhD. This work continued for a few years after I returned to Brazil. Steve also taught an enormously inspiring informal seminar during my second or third year of graduate school in which he outlined his research program. His infinite energy and untiring guidance have been constant sources of inspiration in my career.

Grossberg’s research has been enormously influential across both theoretical and empirical brain and behavioral sciences. The present paper is written with the experimentalist in mind. By embracing the Grossbergian themes discussed below, I believe that considerable progress could be attained.

2. Grossbergian themes

Grossberg has developed his theoretical framework for over 50 years. The breadth of his thinking is so enormous as to defy summary. In this section, I will describe a series of themes that permeate his work, sometimes very explicitly, at times less so. Although the remainder of the paper will build more directly on only a few of the themes – and centrally on dynamics – all of them are viewed as essential to building an understanding of the cognitive-emotional brain.

2.1. Dynamics

This theme is so central to Grossberg’s work that it is fair to say that without it the work would not exist. In Grossberg’s very first publication,1 he states:

Fundamental to the motivation of the new theory is the realization that the dynamics of many psychological problems may be viewed from a unified point of view once the geometrical substrates that characterize each separate problem are elaborated and distinguished

[Grossberg, 1964; italics added].

The very first equation of his opus (Grossberg, 1964) reads as follows:

ds_k/dt = α(M − s_k)T_k − D_k,

where the ‘‘activation’’ s_k is defined via a growth process whereby s_k increases toward M at a rate determined by α and its total input T_k (itself dependent on other activations), while also being subject to a simple exponential decay, D_k.
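The dynamics of this single unit can be simulated directly. The following is a minimal sketch using forward Euler integration; it assumes, purely for illustration, that the decay term is proportional to the activation itself (D_k = β·s_k), and all parameter values are arbitrary choices rather than values from Grossberg (1964):

```python
import numpy as np

def simulate_unit(alpha=1.0, M=1.0, T=0.8, beta=0.2,
                  s0=0.0, dt=0.01, steps=1000):
    """Forward Euler integration of ds/dt = alpha*(M - s)*T - beta*s.

    The decay D = beta*s is an illustrative assumption standing in for
    the ''simple exponential decay'' D_k of the original equation.
    """
    s = s0
    trace = [s]
    for _ in range(steps):
        s += dt * (alpha * (M - s) * T - beta * s)
        trace.append(s)
    return np.array(trace)

trace = simulate_unit()
```

With these values the activation rises monotonically and saturates at the equilibrium α·M·T/(α·T + β) = 0.8, below the ceiling M, reflecting the balance between input-gated growth and decay.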

At first, it would appear that one would hardly have to emphasize dynamics as an important principle. Yet, experimental brain research is frequently, and even preponderantly, quasistatic. Data from almost any measurement modality (physiology, functional MRI, etc.) are epoched in terms of trials or segments that largely discard most temporal information.

2.2. Behavior

Consideration of a very extensive body of behavioral data is essential. Contrast this to a sort of ‘‘tunnel vision’’ that is unfortunately widespread, as all too often researchers break into cliques that focus on apparently distinct sets of phenomena. For example, research addressing ‘‘appetitive’’ and ‘‘aversive’’ processing has been carried out by largely separate communities. More generally, researchers focus on ‘‘motivation’’ or ‘‘emotion’’, on ‘‘cognition’’ or ‘‘emotion’’, and so on. But behaviors do not obey boundaries, and thinking about diverse sources of data is necessary for deeper understanding.

Behavior is the founding pillar for explaining the brain. Although this sentence reads like a truism, unfortunately it is not, as suggested, for example, by the popularity of the recent paper by Krakauer, Ghazanfar, Gomez-Marin, MacIver, and Poeppel (2017) in the journal Neuron (see also Gomez-Marin, 2017; Gomez-Marin, Paton, Kampff, Costa, & Mainen, 2014). Their general call to arms to embrace behavior and eschew a neuronal reductionist bias has resonated with those who believe that ever more sophisticated measurement techniques are not enough to dissect the brain.

What if the activity of every neuron could be recorded in the brain of an animal during a certain behavior (see Ahrens et al., 2012; Lovett-Barron et al., 2017)? What would be gained by doing so? Consider a device that can measure the exact state of a modern Airbus 380 aircraft (which weighs more than a million pounds), say an image of every atom (aircraft are mostly made of aluminum) at millisecond resolution. An adequate level of description of the aircraft and its parts is in terms of fluid dynamics and related aerodynamics, where issues related to compressible flow, turbulence, and boundary layers are important. Therefore, although it is conceivable that this future device could provide some useful information, the point made here is that additional data are only minimally useful without more advanced theoretical understanding — of both mind and brain.

Returning to the theme of dynamics, in parallel with the way data are analyzed, behavior is frequently conceptualized in terms of discrete trials of relatively short duration. This approach is understandable from the perspective of experimental scientists who need trial averaging to handle noise. But behavior itself is inherently temporal, and neglecting that aspect seriously limits research progress.

2.3. Decentralization, heterarchy

Understanding systems in terms of the interactions between their parts fosters a way of thinking that favors decentralized organization. It is the coordination between the multiple parts that leads to the behaviors of interest, not a ‘‘controller’’ that dictates the function of the system. In many sophisticated systems, and the brain is no exception, it is natural to think that many of its chief functions depend on centralized processes. For example, the prefrontal cortex may be viewed as a uniquely positioned brain sector where multiple types of information converge, allowing it to then guide behavior (Fuster, 2001; Miller & Cohen, 2001). A contrasting view is one in which processing takes place in a distributed fashion via the interactions of constituent parts (see Grossberg, 2018). Accordingly, instead of information flowing hierarchically to an ‘‘apex region’’ where all the pieces are combined, information flows in multiple directions without a strict hierarchy. An organization of this sort is termed a heterarchy to emphasize the notion that the flow of information is multidirectional (McCulloch, 1945).

A vivid illustration of the problem of centralization involves ‘‘executive control’’ processes. Early models of executive function were built around the notion of a ‘‘controller’’ – essentially a homunculus – that regulates lower-level systems when needed (e.g., Baddeley, 1996; see Shallice, 1988). The inherent problems with such an approach were eventually recognized by several investigators, who called for a ‘‘fractionation’’ of the executive into more manageable (that is, less intelligent) units (Monsell & Driver, 2000). Functions such as ‘‘shifting’’, ‘‘updating’’, and ‘‘inhibition’’ (Miyake et al., 2000) became more prevalent when describing the executive. However, time and again the use of such constructs has amounted to a way of redescribing the object of study rather than actual explanation. To this day, the goal of ‘‘banishing the homunculus’’ remains a formidable challenge (Verbruggen, McLaren, & Chambers, 2014).

The historical conceptualization of the hypothalamus provides another useful example (for an excellent discussion, see Morgane, 1979). This structure is generally referred to as the ‘‘head ganglion of the autonomic nervous system’’. This rubric encapsulates a hierarchical theoretical view based on the idea of ‘‘descending’’ control: the area functions as a central controller of structures along the extent of the brainstem. Indeed, the hypothalamus has robust projections to multiple brainstem sites. However, no area is simply an outflow region (and thus a ‘‘head’’); all areas receive multiple inputs. In the case of the hypothalamus, multiple brainstem sites that receive projections from the hypothalamus project back to it, following the general tendency of connections to be bidirectional. More critically, the hypothalamus is extensively and bidirectionally connected with most sectors of the cortex (Nieuwenhuys, Voogd, & Huijzen, 2008; Pessoa, 2017a). Far from a master controller, the hypothalamus is an integral node of cortical–subcortical communication.

2.4. Emergence

This is a central concept in all of Grossberg’s work, as captured by this recent statement:

Brain circuits give rise to these distinct psychological functions as emergent properties that arise from interactions among brain regions that work together as functional systems. (Grossberg, 2018, p. 2; italics in the original).

von Bertalanffy (1950, p. 135), one of the chief early proponents of complex systems theory, famously asserted ‘‘the necessity of investigating not only parts but also relations of organization resulting from a dynamic interaction and manifesting themselves by the difference in behavior of parts in isolation and in the whole organism’’. But what does it mean to say ‘‘difference in behavior of parts in isolation and in the whole organism’’? Hence emergence,2 a term originally coined in the 1870s to describe instances in chemistry and physiology where new and unpredictable properties appear that are not clearly ascribable to the elements from which they arise.

But what does emergence mean? At the most basic level it reflects the notion that ‘‘something new appears’’. While fascinating, this proposition sits uncomfortably with experimental scientists. As presciently stated by von Bertalanffy (1950, p. 142) himself, the ‘‘exact scientist therefore is inclined to look at these conceptions with justified mistrust’’. Unfortunately, the picture has not appreciably changed, despite stunning developments in mathematics and physics in understanding nonlinear dynamical systems in the last 50 years. Today, emergence can be defined precisely, and in ways that leave no room for vague allusions to ‘‘wholeness’’ or ‘‘system properties’’. In the present context, the body of work by Grossberg provides a clear demonstration of how ‘‘emergent properties’’ can be precisely defined.

2.5. Selection and competition

Selection of information for further analysis is a key problem that needs to be solved for effective behavior. Indeed:

How can a limited-capacity information processing system that receives a constant stream of diverse inputs be designed to selectively process those inputs that are most significant to the objectives of the system?

[(Grossberg & Levine, 1987), p. 5015]

If selection is the ubiquitous problem that must be effectively solved, competition is the mechanism by which stimuli, objects, actions, and so forth are selected.

2.6. Autonomy

Central to Grossberg’s theoretical framework is the notion of autonomy:

Brains look the way that they do because they embody computational designs whereby individuals autonomously adapt to changing environments in real time.

[Grossberg (2018), p. 4; italics in the original)]

To understand the cognitive-emotional brain, it is necessary to consider that all animals need to function independently in diverse and challenging conditions and environments — they need to be autonomous.

All vertebrates have a brain architecture that allows a considerable amount of communication and integration of signals (Pessoa, 2018b; Pessoa et al. in preparation). Why this kind of architecture? One possibility is that it confers a high degree of flexibility that allows animals to cope with the complex interactions in their changing habitats, involving predators, prey, potential mates, and so on. Survival may benefit from circuits that can form in a combinatorial fashion, as the number of conditions related to the internal and external worlds of the animal is exceedingly high.

Consider a key system for both appetitive and defensive behaviors, the superior colliculus in the midbrain (Dean, Redgrave, & Westby, 1989; Peek & Card, 2016; Pereira & Moita, 2016). It receives retinal inputs and has outputs that give it access to movements of head and neck, for example. In rodents, the superior colliculus could be involved in implementing the following rule: If unexpected movement is overhead, flee; otherwise, if movement is in the lower field, consider further exploration. However, simple rules based on stimulus features do not capture the flexibility of rodent behavior (think how hard it is to catch a rat! see Dean et al., 1989). In particular, rats freeze more frequently to novel stimuli in unfamiliar environments, such as an open field. Clearly, the context in which a stimulus occurs is essential (Peek & Card, 2016; Pereira & Moita, 2016).

More generally, one way to view the more elaborate architecture of birds and mammals (Striedter, 2005) is in terms of the enhanced potential for combinatorial interactions that they afford, such that the manner in which different signals can influence one another is considerably expanded — accordingly expanding the range of behaviors. This overall type of architecture may produce circuits with local specificity but relatively large-scale sensitivity, a type of global-within-local design, which likely contributes to more plastic and sophisticated behaviors. Yet, integration is evolutionarily ancient – it is a hallmark of the vertebrate brain – and could explain the existence of complex behaviors now recognized in all vertebrate taxa.

2.7. Computational theory

Brain research is a strongly empirical scientific enterprise. To be sure, research is inspired and guided by conceptual/theoretical thinking, although mostly in a qualitative fashion. But as Rabinovich, Huerta, and Laurent (2008, p. 48) state: ‘‘Neural networks [both natural and artificial] are complicated dynamical entities, whose properties are understood only in the simplest cases’’. Can the complex architecture that supports the cognitive-emotional brain be investigated without formal/mathematical tools? Given the richness of the multi-level interactions, does neuroscience need to migrate to a model that is closer to that of physics? Experimental physicists are not lacking in mathematical sophistication. Neuroscience, in contrast, has evolved into extremely sophisticated ‘‘laboratory techniques’’ that are often divorced from formal approaches. How should we train future generations of brain scientists? Grossberg’s position on these questions is easy to predict, as he and colleagues created the Department of Cognitive and Neural Systems at Boston University in 1989 exactly to address this issue.

3. Decentralized computing: Top-down control versus circuit interactions

Interactions between emotion and cognition are frequently viewed in terms of the top-down modulation of emotion by cognition. For example, during emotion regulation, prefrontal regions are suggested to inhibit the amygdala, thereby dampening emotion-related responses (Ochsner & Gross, 2005). For an extensive review of ‘‘segregationist’’ approaches to emotion and cognition, see Pessoa (2008, 2013). Here, we briefly illustrate how bidirectional interactions are important to understanding cognitive-emotional interactions by considering fear extinction (Fig. 1A).

Fig. 1.


Fear extinction and structure-function mapping. (A) Fear extinction. (B) Conceptualization of fear extinction in terms of the top-down regulation of the amygdala by the medial prefrontal cortex, with additional variables influencing the process. (C) Schematic representation of the connections between some of the brain regions involved, emphasizing a non-hierarchical view of the processes leading to extinction. The descriptors ‘‘valence’’, ‘‘regulation’’, and so on, are not tied to brain areas in any straightforward one-to-one fashion. Abbreviations: CS, conditioned stimulus; MPFC, medial prefrontal cortex; OFC, orbitofrontal cortex. Source: Reprinted with permission from Pessoa (2018a).

When a conditioned stimulus (CS) no longer predicts the unconditioned stimulus (UCS) to which it was paired at some point in the past, a new relationship needs to be learned, namely that the CS is no longer associated with the UCS — this type of learning is called ‘‘extinction’’. The medial prefrontal cortex (PFC) plays an important role during extinction, as initially revealed via lesioning (Morgan, Romanski, & LeDoux, 1993) and subsequently by chemical manipulation of this area (for review, see Dunsmoor, Niv, Daw, & Phelps, 2015). As the medial PFC is extensively interconnected with the amygdala, an early idea was that the former would exert an inhibitory influence on the latter, thereby enabling the extinction of the conditioned response. At this level of description, fear extinction fits the scheme of separate entities interacting to generate a new behavior: cognition (tied to the medial PFC) controlling emotion (tied to the amygdala) in a top-down fashion.

Yet, considering the PFC as ‘‘top’’ and the amygdala as ‘‘down’’ does not take into account the richness of the existing neuronal interactions. It is well known that the amygdala plays a critical role in aversive learning, that is, the initial CS–UCS learning. The amygdala also plays a critical role in the acquisition and consolidation of fear extinction itself. Chemical blockade of amygdala mechanisms (in the basolateral amygdala) either impairs or entirely prevents the acquisition of extinction (Herry et al., 2008). In addition, consolidation of extinction is supported by morphological changes in amygdala synapses (in the basolateral amygdala; see Tovote, Fadok, & Lüthi, 2015). These findings, together with the existence of amygdala pathways to the medial PFC, have led some investigators to suggest that the amygdala actually should be viewed as the ‘‘top’’ region in the relationship with the medial PFC (Herry et al., 2008; see also Do-Monte, Quinones-Laracuente, & Quirk, 2015). In fact, multiple cell groups in the amygdala project to the medial PFC, whose outputs in turn influence amygdala signals.

The extinction of conditioned responses is one of the oldest and most widely known findings from psychological science (Dunsmoor et al., 2015). Despite this long history, recent research has greatly expanded our knowledge about this phenomenon. Extinction is now understood to be not a simple inhibitory mechanism of the ‘‘fear response’’ but a nuanced form of learning (of the new relationship between the CS and UCS) that itself involves acquisition, retrieval, and consolidation. In particular, following extinction, contextual information plays a critical role in determining whether the original fear memory or the new ‘‘extinction memory’’ controls behavior — should the animal fear the CS or not? Accordingly, an elaborate set of neural interactions is needed to support such context sensitivity.

When the CS no longer predicts an aversive outcome, it behooves the animal to take into account that information, such that features of the new environment are learned so as to predict safety. The hippocampus plays a key role in establishing context dependence during extinction learning. There are at least two anatomical routes by which the hippocampus contributes to these processes (Herry et al., 2008; Maren, Phan, & Liberzon, 2013). The first involves direct projections from the hippocampus to the amygdala. The hippocampus is part of a circuit that involves amygdala neurons that are engaged when the behavioral context is different from the extinction context, thus promoting fear. The second, indirect contribution involves dense projections from the hippocampus to the medial prefrontal cortex, from which fibers to the amygdala contribute to signaling safety (this pathway is linked to extinction behaviors; Adhikari et al., 2015).

Another region in the circuit determining if fear should be switched ‘‘on or off’’ is the thalamus, which is a major player in the processing of biologically significant stimuli (Heimer, van Hoesen, Trimble, & Zahm, 2007), as well as a key subcortical–cortical connectivity hub (Pessoa, 2017b). In the past few years, the paraventricular nucleus of the thalamus (PVT) has been established as a thalamic node that interacts with cortico-amygdala circuits for the establishment, retrieval, and maintenance of long-term fear memories (Do-Monte, Manzano-Nieves, Quiñones Laracuente, Ramos-Medina, & Quirk, 2015; Penzo et al., 2015). Neurons in the PVT are robustly activated by behaviorally relevant events, including novel stimuli, as well as reinforcing stimuli and their predicting cues (Ren et al., 2018). Notably, PVT responses are influenced by changes in homeostatic state and behavioral context, and inhibition of the PVT suppresses appetitive and aversive learning (Ren et al., 2018). Given that the PVT is bidirectionally connected with the medial PFC, and projects throughout the extended amygdala (central amygdala plus bed nucleus of the stria terminalis), this region is well placed to further refine processing during behavioral conditions eliciting fear extinction.

More generally, during fear extinction – and, in fact, fear acquisition and expression – signals from the amygdala, medial PFC, hippocampus, and thalamus, among others, collectively determine behavioral responses. These multi-region interactions afford greater behavioral malleability when responding to threat. A more standard approach to explaining fear extinction would be to label each brain region in the following manner, for example: amygdala-valence, medial PFC-regulation, hippocampus-context, thalamus-biological significance, and so on. One could then describe observed behaviors in terms of ‘‘standard interactions’’ (that is, those involving separate entities) between the putative processes (valence, regulation, etc.) (Fig. 1B). But if these processes are not separable, they do not encode stable variables that are simply modulated by other variables. In the end, explanations in terms of standard interactions will be found wanting — integration, with the accompanying emergent processes, is needed (Fig. 1C).

4. Transient brain dynamics

For the computational modeler, the challenge posed by the multi-region interactions discussed is to propose formal mechanisms that replicate observed findings and, importantly, that provide novel predictions. But how should the experimentalist proceed? A potential direction is to move research efforts toward describing the dynamic multivariate structure of brain data. In other words, one is interested in describing the joint state of a set of brain regions, and how their joint state evolves temporally. A major goal is then to work out how groups of regions dynamically coalesce into coherent functional units and how they dissolve when their assembly is no longer needed to meet behavioral demands.

Consider a system of neurons, neuronal populations, or brain regions, which is characterized by their activation strengths as a function of time: x1(t), x2(t), … , xn(t). The vector x describes the current joint state of the system, and x(t) describes how this joint state evolves through time. A popular approach to thinking about brain dynamics was based on the notion of steady-state attractors, in which activity levels would converge to an equilibrium (for at least some period of time). For example, when started at state xI, the system would evolve temporally and settle in state xA, where xA is the stable state closest to xI (Cohen & Grossberg, 1983; see also Hopfield, 1982, 1984). In such networks, an input pattern will cause activity changes until it settles into one pattern, the so-called attractor state (here, xA). The input is thus associated with the properties of the entire, and specific, attractor state, which can be viewed as its representation. However, the type of dynamics in ‘‘attractor networks’’ is limited in the sense that the key element is the state into which the network settles (which can be represented formally by, for example, a minimum in an energy function). Importantly, the path taken to reach the attractor state does not matter.
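The settling behavior described above can be illustrated with a toy Hopfield-style network (a sketch of the general idea, not any specific model from the papers cited): a corrupted input pattern evolves until it reaches the nearest stored attractor state, and only the final state — not the path taken — serves as the representation. The patterns and network size here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two orthogonal stored patterns (+1/-1) in a small network.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian weight matrix, with self-connections removed.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def settle(x, max_iters=20):
    """Asynchronous updates until the state stops changing (an attractor)."""
    x = x.copy()
    for _ in range(max_iters):
        prev = x.copy()
        for i in rng.permutation(n):
            x[i] = 1 if W[i] @ x >= 0 else -1
        if np.array_equal(x, prev):
            break
    return x

# A noisy version of the first pattern settles back onto it:
noisy = patterns[0].copy()
noisy[0] *= -1
recovered = settle(noisy)
```

The input (here, the corrupted pattern) is thereby associated with the attractor it converges to; two different corruptions of the same pattern yield the same final state, which is exactly the sense in which the path does not matter.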

This idea of ‘‘computing with attractors’’ should be contrasted with that of computing with transient dynamics (Buonomano & Maass, 2009; Rabinovich et al., 2008). Transient dynamics do not require waiting for the system to reach equilibrium; rather, the succession of states visited by the system provides the representation of the event in question. The temporal window considered is arbitrary; for example, 300 ms after an input stimulus, 500 ms prior to movement initiation, or 20 s during a mental event. Fig. 2A illustrates the idea in the context of recordings from neurons in the antennal lobe of the locust (Broome et al., 2006), showing the succession of states associated with the presentation of two distinct odors when projected onto a three-dimensional space (those less familiar with this type of plot may benefit from Fig. 3AB). The original measurements were performed on 87 neurons, and the projection here is simply for illustrative purposes (the issue of dimensionality is discussed further below). Whereas the trajectories might come arbitrarily close at several time points,3 the entire trajectory provides a potentially unique signature for the task in question, such that the transients are input specific and contain information about what initiated them. Furthermore, the trajectories are assumed to be stable: they are resistant to noise in that they are robust to relatively small variations in initial conditions.
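The contrast between endpoint and path can be made concrete with a toy linear system (a sketch of the principle, not a model of the locust data). Here two ‘‘inputs’’ are implemented, as a simplifying assumption, as two different initial states; both decay toward the same equilibrium, yet their transient trajectories remain distinguishable, so the transient — not the final state — carries the input-specific signature:

```python
import numpy as np

# Two-unit linear system: dx/dt = A (x - x_eq).
# A has complex eigenvalues with negative real part (a stable spiral),
# so every trajectory converges to the same equilibrium x_eq.
A = np.array([[-1.0, -0.5],
              [ 0.5, -1.0]])
x_eq = np.array([1.0, 1.0])

def trajectory(x0, dt=0.01, steps=500):
    """Forward Euler integration, returning the visited states."""
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for _ in range(steps):
        x += dt * (A @ (x - x_eq))
        traj.append(x.copy())
    return np.array(traj)

traj_a = trajectory([0.0, 0.0])   # ''input A'': one initial state
traj_b = trajectory([2.0, 0.0])   # ''input B'': a different initial state

# The endpoints nearly coincide, but the paths are well separated early on,
# so the full trajectory distinguishes the two inputs.
endpoint_gap = np.linalg.norm(traj_a[-1] - traj_b[-1])
max_path_gap = np.max(np.linalg.norm(traj_a - traj_b, axis=1))
```

An attractor-based readout of the final state cannot tell the two conditions apart; a readout of the transient can.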

Fig. 2.


Neural trajectories. Trajectories represent the activation state of the system at every point in time. (A) Recordings were performed in 87 principal neurons (PNs) of the antennal lobe of the locust during exposure to two odors (citral: cit; geraniol: ger). Local linear embedding (LLE) was employed to reduce the dimensionality of the data. (B) Recordings were performed in premotor and motor cortex during reaching movements in the macaque monkey. A principal components analysis-based algorithm was used to determine the two-dimensional representation displayed. Individual trials are represented by trajectories colored based on the extent of preparatory/pre-movement activity (from red to green). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Source: (A) Adapted with permission from Broome, Jayaraman, and Laurent (2006). (B) Adapted with permission from Churchland et al. (2012).

Fig. 3.


Neural trajectories during threat processing. (A) Evoked responses at the onset of a safe block (green) or a threat block (red); x1 and x2 represent the activity of two brain regions. (B) The responses can be jointly plotted as a function of time, (x1(t), x2(t)), to show safe and threat activation trajectories. The two dots in panel A correspond to the ones here (schematically only). (C) Trajectories for safe and threat conditions when evoked responses are comparable but are more correlated during threat. (D) We can think of the trajectories for threat and safe as evolving through cylinders of different diameters (corresponding to trajectory variance). (E) Individual differences in anxiety can be understood in terms of trajectories during the safe condition becoming more similar to the ones usually observed during threat.

Thinking in terms of trajectories moves the emphasis away from simple causal interpretations. Instead of, for example, statements such as ‘‘x1(t) causes x2(t + 1)’’ (say, ‘‘PFC activation inhibits the amygdala’’), the framework encourages a description that summarizes the temporal evolution of the system of interest. Experimentally, a central goal then becomes estimating trajectories robustly from available data. At this point, computational models can be tested against the data, or possibly developed to explain the data. In other words, what kind of system, and what kind of interactions between system elements, generate similar trajectories, given similar inputs and conditions?

5. Temporal trajectories during threat processing

Consider a functional MRI participant experiencing alternating ‘‘safe’’ and ‘‘threat’’ blocks, where they lie passively in the former, and may experience mild shocks during the latter. At the onset of threat blocks, brain regions of the so-called salience network (including the anterior insula and medial PFC) would be expected to respond vigorously compared to the period prior to the block transition (Menon & Uddin, 2010). Regions of the salience network would also respond to the onset of the ‘‘safe’’ block, but suppose that these responses are less vigorous than during the transition to threat. In terms of evoked responses, this scenario can be illustrated as in Fig. 3A. For two hypothetical regions, if we diagram the temporal evolution of the responses during the block transitions, the trajectories for the two conditions can be illustrated as in Fig. 3B; the state-space plot describes the activity levels (x1(t), x2(t)). Now, suppose that a group of high-anxious individuals is investigated and that they partially generalize the aversiveness experienced during threat blocks to safe ones; that is, they treat every block onset as a potential transition into a threat condition (indeed, high-anxious individuals generalize conditions associated with conditioned fear; see for example Lissek et al. (2008)). In this case, the trajectory observed during safe periods would look more like threat trajectories, and the more so for individuals with higher levels of anxiety (Fig. 3E).

In an actual functional MRI study, we found that, somewhat surprisingly, responses evoked when transitioning into safe and threat blocks were comparable in magnitude (McMenamin, Langeslag, Sirbu, Padmala, & Pessoa, 2014). Presumably, both safe and threat blocks were motivationally significant, thus evoking similar salience-related responses (perhaps, in the context of encountering threat periods, safe periods are quite noteworthy). However, although evoked responses were comparable, signals were more cohesive during threat relative to safe; that is, transitions to threat blocks were associated with evoked responses that were more correlated (for a given pair of regions in the salience network) (see also Najafi, Kinnison, & Pessoa, 2017). The respective trajectories for our experiment thus can be illustrated as in Fig. 3C (the trajectory linked to threat stays closer to the diagonal (x1 = x2) than the one linked to safe). And if we consider multiple trials, the trajectories during threat will remain in a part of the space closer to the diagonal (Fig. 3D).
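The geometric intuition here — that more cohesive (correlated) signals trace state-space trajectories hugging the x1 = x2 diagonal — can be illustrated with a minimal simulation. The signals, noise levels, and time courses below are entirely hypothetical stand-ins, not the study data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(60)

# Hypothetical evoked response shared by two salience-network regions
evoked = np.exp(-((t - 10) ** 2) / 40.0)

def simulate_pair(noise_sd):
    """Two regions driven by a common evoked response plus independent noise.
    Less independent noise -> more cohesive signals, i.e., a state-space
    trajectory (x1(t), x2(t)) that stays closer to the x1 = x2 diagonal."""
    x1 = evoked + noise_sd * rng.standard_normal(t.size)
    x2 = evoked + noise_sd * rng.standard_normal(t.size)
    return x1, x2

threat = simulate_pair(noise_sd=0.1)   # more cohesive pair
safe = simulate_pair(noise_sd=0.5)     # less cohesive pair

r_threat = np.corrcoef(*threat)[0, 1]
r_safe = np.corrcoef(*safe)[0, 1]
```

Plotting the two pairs against each other in the (x1, x2) plane would reproduce the qualitative picture of Fig. 3C–D: the higher inter-regional correlation during threat keeps that trajectory nearer the diagonal.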

6. Dimensionality reduction of neural measurements

Neuronal measurements are inherently high dimensional. Consider, for example, simultaneous recordings across 10–10² locations in electrophysiological grids, 10² sensors with MEG/EEG, 10²–10³ neurons with calcium imaging, or 10⁴–10⁵ spatial locations with functional MRI. Is it possible that the information across, say, hundreds of measurements could be captured in fewer dimensions without substantial loss of information? Techniques such as principal components analysis are commonplace in data analysis (and can be used, for example, for noise reduction). However, aside from practical concerns, understanding the dimensionality of the data is also important conceptually. For example, it may help uncover relationships that are not apparent in higher dimensions, thus helping to elucidate the mapping from structure to function. In particular, a parsimonious description of the data may uncover stronger relationships with experimentally manipulated variables or other behaviorally relevant variables (see Santhanam et al., 2009). In addition, the number of dimensions of a dynamical system is an enormously important topic in mathematics (Packard, Crutchfield, Farmer, & Shaw, 1980; Takens, 1981; Sauer, Yorke, & Casdagli, 1991).
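To see concretely why such reduction can proceed with little loss, consider synthetic data in which hundreds of measurement channels are driven by only a few latent signals; a simple principal components analysis via the singular value decomposition recovers the low dimensionality. The channel counts and noise level below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 'measurement channels' driven by only 3 latent signals plus small noise
T, n_channels, n_latent = 500, 200, 3
latents = rng.standard_normal((T, n_latent))
mixing = rng.standard_normal((n_latent, n_channels))
data = latents @ mixing + 0.05 * rng.standard_normal((T, n_channels))

# PCA via the singular value decomposition of the mean-centered data
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s ** 2 / np.sum(s ** 2)

# The first three components capture nearly all of the variance
top3 = var_explained[:3].sum()
```

Here the 200-channel recording is, for practical purposes, three-dimensional: projecting onto the top three components discards almost nothing, which is the situation dimensionality-reduction methods exploit.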

One of the most studied systems in terms of temporal trajectories involves odor processing in invertebrates. In the locust, odors generate distributed responses across the antennal lobe, and such responses evolve in an odor-specific manner (Broome et al., 2006). In Fig. 2A, the lower-dimensional representation was obtained by nonlinear dimensionality reduction (Roweis & Saul, 2000). While the dimensionality reduction technique applied was somewhat arbitrary, it helped the investigators gain insight into the following theoretical question: what happens when one odor is being experienced and a second one is presented? One possibility is that the system would ‘‘reset’’, namely responses would return to baseline, then start to evolve in the direction of the new odor (Broome et al., 2006). An alternative possibility would be for the first trajectory (associated with the first odor) to deviate from its ongoing evolution and progress along a path corresponding to the mixture of the two odors. Based on the trajectories observed under these experimental conditions, Broome and colleagues were able to rule out the first possibility, while obtaining some support for the second. In this case, dimensionality reduction helped uncover mechanisms that would be potentially hard to derive in higher dimensions.
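A sketch of this kind of nonlinear reduction, using the scikit-learn implementation of the Roweis & Saul (2000) locally linear embedding algorithm, is shown below. The toy trajectory stands in for odor-evoked population activity; it is not the locust data:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(2)

# Toy stand-in for population activity: a smooth temporal trajectory
# embedded in a 50-dimensional 'neural' space
t = np.linspace(0.0, 1.0, 300)
latent = np.column_stack([np.sin(3 * t), np.cos(3 * t), t])
high_dim = latent @ rng.standard_normal((3, 50))
high_dim += 0.01 * rng.standard_normal(high_dim.shape)

# Locally linear embedding: reconstruct each point from its neighbors,
# then find a low-dimensional layout that preserves those local
# reconstruction weights (capturing global geometry from local patches)
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
low_dim = lle.fit_transform(high_dim)
```

Plotting `low_dim` over time would display the trajectory in two dimensions, analogous to the odor trajectories in Fig. 2A.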

Neuronal dynamics has been investigated in nonhuman primates, too. In one study, Churchland et al. (2012) recorded responses in motor and premotor cortex as monkeys performed reaching movements. Data from 50–200 recordings were projected onto two dimensions, revealing a rotational structure to neural trajectories (Fig. 2B). Their analysis uncovered processes at the level of the population of neurons, according to which preparatory activity (that is, prior to movement initiation) sets the initial state of a dynamical process that unfolds during movement execution. The authors proposed that motor cortex expresses a dynamical system that generates and controls movements, which can be expressed as

dr/dt = f(r(t)) + u(t)

where r is a vector describing the firing rate of all neurons (the population response or neural state), f is an unknown function, and u is an external input. As in the example of the locust data, dimensionality reduction helped uncover processes that would not have been evident in higher dimensions.
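A minimal numerical instance of the equation above can make the dynamical-system view concrete. Purely for illustration, f is taken to be linear and skew-symmetric, so the population state rotates, echoing the rotational structure Churchland et al. (2012) reported; this is a toy system, not their model:

```python
import numpy as np

# Linear, skew-symmetric f: dr/dt = A r + u, so the population state rotates
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def simulate(r0, dt=0.01, steps=628, u=None):
    """Forward-Euler integration of dr/dt = f(r(t)) + u(t).
    The initial condition r0 plays the role of the preparatory state
    that is set before movement initiation."""
    r = np.array(r0, dtype=float)
    traj = [r.copy()]
    for k in range(steps):
        drive = np.zeros_like(r) if u is None else u(k * dt)
        r = r + dt * (A @ r + drive)
        traj.append(r.copy())
    return np.array(traj)

traj = simulate([1.0, 0.0])             # roughly one full rotation (t ~ 2*pi)
radius = np.linalg.norm(traj, axis=1)   # stays near 1 (Euler drifts slightly outward)
```

Different preparatory states (different `r0`) launch the same dynamics from different points, yielding the family of rotational trajectories seen at the population level.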

7. Geometry of the underlying neural space

If a neural dataset is acquired in a high-dimensional space and subsequently reduced to a lower dimensionality, what should be the geometry of this space? For simplicity, the original high-dimensional space is frequently considered Euclidean, at least implicitly. But given that not all information can be preserved in fewer dimensions, the question of the nature of the lower-dimensional space comes to the fore. For example, in the case of the locust data, a local linear embedding algorithm was employed that attempts to capture information about global geometry in fewer dimensions (by collectively analyzing overlapping local neighborhoods; Roweis & Saul, 2000). In the case of the monkey data, a PCA-based method was applied.

A similar approach could be applied to the functional MRI data of safe and threat periods discussed above (Section 5; McMenamin et al., 2014). The study focused on 51 brain regions of the so-called salience, executive, and task-negative (also called ‘‘default’’) networks, in addition to the amygdala and the bed nucleus of the stria terminalis (the latter two are particularly important during threat-related processing). We performed dimensionality reduction with the local linear embedding algorithm (Roweis & Saul, 2000) and plotted the mean trajectories for the two conditions, together with an indication of their variance (Fig. 4). The two trajectories initially overlap but are quite distinct overall.

Fig. 4.

Fig. 4.

Temporal trajectories based on functional MRI data. The original data were from safe and threat periods in the study by McMenamin et al. (2014). Trajectories for safe and threat conditions are fairly distinct in the lower-dimensional space determined by local linear embedding. (A) Mean trajectories across individuals. The colored circles indicate the starting point. (B) Surfaces provide an indication of the underlying space of trajectories, or manifold, and were created by considering the variance of the trajectories across individuals. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

More generally, the geometry of the underlying neural space will depend on a combination of the properties of the data and the task condition of interest. It is thus possible to entertain the following neural-dynamics space hypothesis: behaviors can be described via classes of trajectories within specific neural spaces (see also Gao et al., 2017). Consider the example of the citral-related trajectory in the locust antennal lobe (Fig. 2A). Multiple instances of experiencing this odor would reside within the surface schematically represented in Fig. 5A, which defines the space within which trajectories linked with this odor evolve. Such surfaces, also called manifolds, thus serve as representations of the stimuli, tasks, or conditions in question.

Fig. 5.

Fig. 5.

Trajectory manifolds. A manifold is a surface (more precisely, a topological space) that near each point (that is, locally) resembles Euclidean space (circles and spheres are among the simplest manifolds in two and three dimensions). A non-Euclidean (Riemannian) metric on a manifold allows distances and angles to be measured. (A) Example manifold. The neural-dynamics space working hypothesis suggests that system behaviors can be characterized via classes of trajectories within neural spaces with particular geometry. (B) Neural manifolds associated with transient dynamics are, by definition, non-periodic. (C) Based on individual-level trajectories (orange), a group-level manifold needs to be estimated (green); the red trajectory illustrates the group average. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Whereas a purely data-driven dimensionality reduction method can be employed when studying high-dimensional data, knowledge of the problem domain can guide this process, too. For example, in the case of the Churchland et al. (2012) study, a PCA-based method was developed to explicitly extract rotational information because such a coordinate system was anticipated to be relevant to understanding the topology of neural trajectories in motor cortex during reaching movements.4 To further illustrate the use of domain knowledge, consider a hypothetical study that records multiple cells in each of three distinct brain areas, and suppose that different properties are thought to be important for their function. In this case, one can plot the dynamics of the system in terms of these properties (Fig. 6).

Fig. 6.

Fig. 6.

Geometry of the neural space. The activity of brain Areas 1–3 can be mapped onto distinct properties believed to reflect their function, including univariate properties and multivariate/network-level properties. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
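The strategy of building domain knowledge into the reduction, as in the rotation-seeking method of Churchland et al. (2012), can be sketched by fitting a linear dynamics matrix to trajectory data and keeping only its skew-symmetric (rotational) part. This is a simplified stand-in for that approach (jPCA), not the published algorithm:

```python
import numpy as np

# Trajectories generated by a known rotational system r' = A r (A skew-symmetric)
A = np.array([[0.0, -2.0],
              [2.0,  0.0]])
dt, T = 0.01, 400
X = np.zeros((T, 2))
X[0] = [1.0, 0.5]
for k in range(T - 1):
    X[k + 1] = X[k] + dt * (A @ X[k])

# Fit a linear dynamics matrix M to (state, state-derivative) pairs by least
# squares, then split M into rotational and expansion/contraction components
dX = np.diff(X, axis=0) / dt
M, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
M_skew = (M - M.T) / 2   # rotational part (the part a jPCA-style method keeps)
M_symm = (M + M.T) / 2   # symmetric part (expansion/contraction)
```

Because the data here are genuinely rotational, the fitted dynamics are dominated by the skew-symmetric part; projecting onto the planes defined by that part is what reveals rotational structure in population recordings.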

In the safe/threat study discussed (McMenamin et al., 2014), we found that a network-level graph-theory measure called global efficiency was altered during threat processing. Efficiency provides a measure of how effectively a network exchanges information (Latora & Marchiori, 2001). In particular, small-world networks are systems that are both locally and globally efficient (Watts & Strogatz, 1998). Another graph-theory property of interest in our study was node centrality (Newman, 2018). Increased centrality indicates that a node participates more heavily in the interactions between other nodes — that is, it becomes more of a hub. In addition, as stated, the motivationally significant block transitions produced stronger responses in regions of the salience network. Accordingly, it could prove informative to describe the evolution of the system along three axes: global efficiency, node centrality, and salience network activation.
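Both graph-theoretic quantities are readily computed with standard tools. The sketch below uses networkx on synthetic Watts–Strogatz graphs; degree centrality stands in for the centrality index (the study itself may have used a different measure), and the graph sizes are arbitrary:

```python
import networkx as nx

# Regular ring lattice vs. a small-world rewiring of the same lattice
ring = nx.watts_strogatz_graph(n=51, k=4, p=0.0, seed=0)            # no shortcuts
small_world = nx.connected_watts_strogatz_graph(n=51, k=4, p=0.1, seed=0)

# Global efficiency (Latora & Marchiori, 2001): the average of the inverse
# shortest-path lengths over all node pairs
eff_ring = nx.global_efficiency(ring)
eff_sw = nx.global_efficiency(small_world)  # rewired shortcuts raise efficiency

# Degree centrality as a simple index of how hub-like each node is
centrality = nx.degree_centrality(small_world)
hub = max(centrality, key=centrality.get)
```

Tracking such quantities in sliding windows over a recording would yield the per-timepoint coordinates needed to plot trajectories along the efficiency and centrality axes proposed here.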

Consider a different experimental paradigm in which threat ‘‘level’’ is manipulated dynamically. For example, two circles move on the screen in a quasi-random manner and, if they collide, a mild electrical shock is administered to the participant (Meyer, Padmala, & Pessoa, 2019). In this paradigm, there are periods of increased anxious anticipation (circles approaching each other) and periods of relative safety (circles retreating from each other). Fig. 7 illustrates hypothetical trajectories during approach and retreat in terms of the three dimensions above. The overall framework is also fruitful for describing trait- or temperament-like phenotypes. For example, for high-anxious individuals one could hypothesize that periods of approach would be associated with higher activation of salience-network regions, higher network efficiency, and increased centrality of regions such as the bed nucleus of the stria terminalis and amygdala.

Fig. 7.

Fig. 7.

Trajectories during threat processing. Threat level varies dynamically as it increases (approach) and decreases (retreat). The temporal evolution of the system can be described in terms of the global efficiency of the salience network, the activation (evoked responses) in the same network, as well as the centrality of the amygdala/bed nucleus of the stria terminalis regions.

Dispositional negativity refers to a basic dimension of childhood temperament and adult personality and constitutes a prominent risk factor for the development of pediatric and adult anxiety disorders (Hur, Stockbridge, Fox, & Shackman, 2018). Behaviorally, dispositional negativity is associated with threat-related attentional bias and deficits in executive control. Key brain systems proposed to underpin dispositional negativity include the amygdala, as well as the frontoparietal and cingulo-opercular networks (Hur et al., 2018). To further investigate the neural substrates of dispositional negativity, one could examine performance of cognitively demanding tasks in the presence of threat. For example, during the execution of an executive task, the threat level could be increased from low to high. In terms of neural trajectories, one could hypothesize that there would be a shift in the state-space region occupied by the two conditions (Fig. 8). In addition, for individuals with higher dispositional negativity, it could be predicted that the transition from one region to another would take place faster, and the extent of the change would be greater (that is, the two regions would be farther apart). Irrespective of the potential of these particular predictions to advance the understanding of dispositional negativity, they illustrate how hypotheses can be formulated and tested according to the present ideas. Finally, this framework also encourages a move away from amygdala-centric proposals that dominate the literature.

Fig. 8.

Fig. 8.

Dispositional negativity and neural trajectories. Participants perform a cognitively challenging task for an extended period of time. In one condition, there is a lower level of background threat, whereas a higher level is present in the second; the latter is anticipated to impair performance to a greater extent. Hypothetical neural trajectories are shown for the two conditions: the two trajectories will reside in separate sectors of state space, with the separation between them depending on an individual’s level of dispositional negativity.

8. Conclusions for a science of emotion and cognition

Neuroscience strives to elucidate the neural underpinnings of interesting behaviors. Modern experimental neuroscience has done so in a preponderantly reductionistic fashion for over a century. The time is ripe for the field to transition into a period when Grossbergian themes are embraced more explicitly. The themes of dynamics, decentralized computation, emergence, selection and competition, and autonomy can help advance empirical neuroscience in major ways.

At a time when the development of neurotechniques has attained a fever pitch, neuroscience needs to take stock and invest comparable energy in conceptual and theoretical dimensions. Otherwise we run the risk of being able to measure ‘‘every atom’’ in the brain in a theoretical vacuum. If experimental physicists could measure every atom of a given galaxy, how would that advance understanding if not for a theory of gravitation? The current obsession in the field with causation is equally problematic. Without theory, ‘‘causal explanations’’ add little to current understanding.

Ultimately, to explain the cognitive-emotional brain, we need to eliminate boundaries within the brain – perception, cognition, action, etc. – as well as outside the brain, as we bring down the walls between biology, ecology, mathematics, computer science, philosophy, and so on. Let us hope that the body of work by Steve Grossberg can inspire us all in this endeavor.

Acknowledgments

The author’s research is supported in part by the National Institute of Mental Health, United States (R01 MH071589 and R01 MH112517). The author thanks Anastasiia Khibovska for assistance with figures. The author also thanks Joyneel Misra and Govinda Surampudi for generating Fig. 5.

Footnotes

1

A monograph published while in graduate school with over 400 pages.

2

The term emergence appears to have been first proposed in the 1870s when used by George Henry Lewes in his book Problems of Life and Mind and taken up by Wilhelm Wundt in his Introduction to Psychology.

3

The issue of the proximity of trajectories will depend on the dimensionality of the system in question (which is usually unknown) and the dimensionality of the space where data are being considered (say, after dimensionality reduction). Naturally, points projected onto a lower-dimensional representation might be closer than in the original higher-dimensional space.

4

The rotational structure was not due to primary features of neuronal responses, such as tuning to reach direction (Elsayed & Cunningham, 2017).

References

  1. Adhikari A, Lerner TN, Finkelstein J, Pak S, Jennings JH, Davidson TJ, et al. (2015). Basomedial amygdala mediates top-down control of anxiety and fear. Nature, 527(7577), 179. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Ahrens MB, Li JM, Orger MB, Robson DN, Schier AF, Engert F, et al. (2012). Brain-wide neuronal dynamics during motor adaptation in zebrafish. Nature, 485(7399), 471. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Baddeley A (1996). Exploring the central executive. The Quarterly Journal of Experimental Psychology Section A, 49(1), 5–28. [Google Scholar]
  4. von Bertalanffy L (1950). An outline of general system theory. The British Journal for the Philosophy of Science, 1(2), 134–165. [Google Scholar]
  5. Broome BM, Jayaraman V, & Laurent G (2006). Encoding and decoding of overlapping odor sequences. Neuron, 51(4), 467–482. [DOI] [PubMed] [Google Scholar]
  6. Buonomano DV, & Maass W (2009). State-dependent computations: spatiotemporal processing in cortical networks. Nature Reviews Neuroscience, 10(2), 113. [DOI] [PubMed] [Google Scholar]
  7. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, et al. (2012). Neural population dynamics during reaching. Nature, 487(7405), 51. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Cohen MA, & Grossberg S (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 0(5), 815–826. [Google Scholar]
  9. Dean P, Redgrave P, & Westby GW (1989). Event or emergency? Two response systems in the mammalian superior colliculus. Trends in Neurosciences, 12(4), 137–147. [DOI] [PubMed] [Google Scholar]
  10. Do-Monte FH, Manzano-Nieves G, Quiñones Laracuente K, Ramos-Medina L, & Quirk GJ (2015). Revisiting the role of infralimbic cortex in fear extinction with optogenetics. Journal of Neuroscience, 35(8), 3607–3615. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Do-Monte FH, Quinones-Laracuente K, & Quirk GJ (2015). A temporal shift in the circuits mediating retrieval of fear memory. Nature, 519(7544), 460. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Dunsmoor JE, Niv Y, Daw N, & Phelps EA (2015). Rethinking extinction. Neuron, 88(1), 47–63. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Elsayed GF, & Cunningham JP (2017). Structure in neural population recordings: an expected byproduct of simpler phenomena? Nature Neuroscience, 20(9), 1310. [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Fox E (2018). Perspectives from affective science on understanding the nature of emotion. Brain and Neuroscience Advances, 2, 2398212818812628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Fox AS, Lapate RC, Shackman AJ, & Davidson RJ (Eds.), (2018). The nature of emotion: Fundamental questions. Oxford University Press. [Google Scholar]
  16. Fuster JM (2001). The prefrontal cortex—an update: time is of the essence. Neuron, 30(2), 319–333. [DOI] [PubMed] [Google Scholar]
  17. Gao P, Trautmann E, Byron MY, Santhanam G, Ryu S, Shenoy K, et al. (2017). A theory of multineuronal dimensionality, dynamics and measurement. bioRxiv, 214262.
  18. Gomez-Marin A (2017). Causal circuit explanations of behavior: Are necessity and sufficiency necessary and sufficient? In Decoding neural circuit structure and function (pp. 283–306). Cham: Springer. [Google Scholar]
  19. Gomez-Marin A, Paton JJ, Kampff AR, Costa RM, & Mainen ZF (2014). Big behavioral data: psychology, ethology and the foundations of neuroscience. Nature Neuroscience, 17(11), 1455. [DOI] [PubMed] [Google Scholar]
  20. Grossberg S (1964). The theory of embedding fields with applications to psychology and neurophysiology. Rockefeller Institute for Medical Research, monograph, 451 pp. [Google Scholar]
  21. Grossberg S (2018). Desirability, availability, credit assignment, category learning, and attention: Cognitive-emotional and working memory dynamics of orbitofrontal, ventrolateral, and dorsolateral prefrontal cortices. Brain and Neuroscience Advances, 2, 2398212818772179. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Grossberg S, & Levine DS (1987). Neural dynamics of attentionally modulated Pavlovian conditioning: blocking, interstimulus interval, and secondary reinforcement. Applied optics, 26(23). [DOI] [PubMed] [Google Scholar]
  23. Heimer L, van Hoesen GW, Trimble M, & Zahm DS (2007). Anatomy of neuropsychiatry: The new anatomy of the basal forebrain and its implications for neuropsychiatric illness. Burlington, MA: Academic Press. [Google Scholar]
  24. Herry C, Ciocchi S, Senn V, Demmou L, Müller C, & Lüthi A (2008). Switching on and off fear by distinct neuronal circuits. Nature, 454(7204), 600. [DOI] [PubMed] [Google Scholar]
  25. Hopfield JJ (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8), 2554–2558. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Hopfield JJ (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81(10), 3088–3092. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Hur J, Stockbridge MD, Fox AS, & Shackman AJ (2018). Dispositional negativity, cognition, and anxiety disorders: An integrative translational neuroscience framework. PsyArXiv. December, 11. [DOI] [PMC free article] [PubMed]
  28. Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, & Poeppel D (2017). Neuroscience needs behavior: correcting a reductionist bias. Neuron, 93(3), 480–490. [DOI] [PubMed] [Google Scholar]
  29. Latora V, & Marchiori M (2001). Efficient behavior of small-world networks. Physical Review Letters, 87(19), 198701. [DOI] [PubMed] [Google Scholar]
  30. Lissek S, Biggs AL, Rabin SJ, Cornwell BR, Alvarez RP, Pine DS, & Grillon C (2008). Generalization of conditioned fear-potentiated startle in humans: experimental validation and clinical relevance. Behaviour Research and Therapy, 46(5), 678–687. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Lovett-Barron M, Andalman AS, Allen WE, Vesuna S, Kauvar I, Burns VM, et al. (2017). Ancestral circuits for the coordinated modulation of brain state. Cell, 171(6), 1411–1423. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Maren S, Phan KL, & Liberzon I (2013). The contextual brain: implications for fear conditioning, extinction and psychopathology. Nature Reviews Neuroscience, 14(6), 417. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. McCulloch WS (1945). A heterarchy of values determined by the topology of nervous nets. The Bulletin of Mathematical Biophysics, 7(2), 89–93. [DOI] [PubMed] [Google Scholar]
  34. McMenamin BW, Langeslag SJ, Sirbu M, Padmala S, & Pessoa L (2014). Network organization unfolds over time during periods of anxious anticipation. Journal of Neuroscience, 34(34), 11261–11273. [DOI] [PMC free article] [PubMed] [Google Scholar]
  35. Menon V, & Uddin LQ (2010). Saliency, switching, attention and control: a network model of insula function. Brain Structure and Function, 214(5–6), 655–667. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Meyer C, Padmala S, & Pessoa L (2019). Dynamic threat processing. Journal of Cognitive Neuroscience, 31(4), 522–542. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Miller EK, & Cohen JD (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1), 167–202. [DOI] [PubMed] [Google Scholar]
  38. Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, & Wager TD (2000). The unity and diversity of executive functions and their contributions to complex frontal lobe tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100. [DOI] [PubMed] [Google Scholar]
  39. Monsell S, & Driver J (2000). Banishing the control homunculus In Control of cognitive processes: Attention and performance: Vol. XVIII, (pp. 3–32). [Google Scholar]
  40. Morgan MA, Romanski LM, & LeDoux JE (1993). Extinction of emotional learning: contribution of medial prefrontal cortex. Neuroscience Letters, 163(1), 109–113. [DOI] [PubMed] [Google Scholar]
  41. Morgane PJ (1979). Historical and modern concepts of hypothalamic organization and function. Anatomy of the Hypothalamus, 1, 1–64. [Google Scholar]
  42. Najafi M, Kinnison J, & Pessoa L (2017). Dynamics of intersubject brain networks during anxious anticipation. Frontiers in Human Neuroscience, 11(552). [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Newman M (2018). Networks. Oxford University Press. [Google Scholar]
  44. Nieuwenhuys R, Voogd J, & Huijzen CV (2008). The human central nervous system (4th ed.). Springer. [Google Scholar]
  45. Ochsner Kevin N., & Gross James J. (2005). The cognitive control of emotion. Trends in Cognitive Sciences, 24, 2–249. [DOI] [PubMed] [Google Scholar]
  46. Packard Norman H., Crutchfield James P., Farmer J. Doyne, & Shaw Robert S. (1980). Geometry from a time series. Physical Review Letters, 45(9), 712. [Google Scholar]
  47. Peek MY, & Card GM (2016). Comparative approaches to escape. Current Opinion in Neurobiology, 41, 167–173. [DOI] [PubMed] [Google Scholar]
  48. Penzo MA, Robert V, Tucciarone J, De Bundel D, Wang M, Van Aelst L, et al. (2015). The paraventricular thalamus controls a central amygdala fear circuit. Nature, 519(7544), 455. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Pereira AG, & Moita MA (2016). Is there anybody out there? Neural circuits of threat detection in vertebrates. Current Opinion in Neurobiology, 41, 179–187. 10.1016/j.conb.2016.09.011. [DOI] [PubMed] [Google Scholar]
  50. Pessoa L (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9(2), 148–158. [DOI] [PubMed] [Google Scholar]
  51. Pessoa L (2013). The cognitive-emotional brain: From interactions to integration. MIT press. [Google Scholar]
  52. Pessoa L (2017a). The emotional brain In Conn’s translational neuroscience (pp. 635–656). Academic Press. [Google Scholar]
  53. Pessoa L (2017b). A network model of the emotional brain. Trends in Cognitive Sciences, 21(5), 357–371. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Pessoa L (2018a). Embracing integration and complexity: placing emotion within a science of brain and behaviour. Cognition and Emotion, 10.1080/02699931.2018.1520079. [DOI] [PMC free article] [PubMed]
  55. Pessoa L (2018b). Emotion and the interactive brain: Insights from comparative neuroanatomy and complex systems. Emotion Review, 10(3), 204–216. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Rabinovich M, Huerta R, & Laurent G (2008). Transient dynamics for neural processing. Science, 321(5885), 48–50. [DOI] [PubMed] [Google Scholar]
  57. Ren S, Wang Y, Yue F, Cheng X, Dang R, Qiao Q, et al. (2018). The paraventricular thalamus is a critical thalamic area for wakefulness. Science, 362(6413), 429–434. [DOI] [PubMed] [Google Scholar]
  58. Roweis ST, & Saul LK (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326. [DOI] [PubMed] [Google Scholar]
  59. Santhanam G, Yu BM, Gilja V, Ryu SI, Afshar A, Sahani M, et al. (2009). Factor-analysis methods for higher-performance neural prostheses. Journal of Neurophysiology, 102(2), 1315–1330. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Sauer T, Yorke JA, & Casdagli M (1991). Embedology. Journal of Statistical Physics, 65(3–4), 579–616. [Google Scholar]
  61. Shallice T (1988). From neuropsychology to mental structure. New York: Cambridge University Press. [Google Scholar]
  62. Striedter GF (2005). Principles of brain evolution. Sinauer Associates. [Google Scholar]
  63. Takens F (1981). Detecting strange attractors in turbulence In Dynamical systems and turbulence, Warwick 1980 (pp. 366–381). Berlin, Heidelberg: Springer. [Google Scholar]
  64. Tovote P, Fadok JP, & Lüthi A (2015). Neuronal circuits for fear and anxiety. Nature Reviews Neuroscience, 16(6), 317–331. [DOI] [PubMed] [Google Scholar]
  65. Verbruggen F, McLaren IP, & Chambers CD (2014). Banishing the control homunculi in studies of action control and behavior change. Perspectives on Psychological Science, 9(5), 497–524. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Watts DJ, & Strogatz SH (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440. [DOI] [PubMed] [Google Scholar]
