Abstract
Most human neuroscience research to date has focused on statistical approaches that describe stationary patterns of localized neural activity or blood flow. While these patterns are often interpreted in light of dynamic, information-processing concepts, the static, local, and inferential nature of the statistical approach makes it challenging to directly link neuroimaging results to plausible underlying neural mechanisms. Here, we argue that dynamical systems theory provides the crucial mechanistic framework for characterizing both the brain’s time-varying quality and its partial stability in the face of perturbations, and hence, that this perspective can have a profound impact on the interpretation of human neuroimaging results and their relationship with behavior. After briefly reviewing some key terminology, we identify three key ways in which neuroimaging analyses can embrace a dynamical systems perspective: by shifting from a local to a more global perspective, by focusing on dynamics instead of static snapshots of neural activity, and by embracing modeling approaches that map neural dynamics using “forward” models. Through this approach, we envisage ample opportunities for neuroimaging researchers to enrich their understanding of the dynamic neural mechanisms that support a wide array of brain functions, both in health and in the setting of psychopathology.
Keywords: fMRI, Dynamics, Attractor landscapes, Neuroscience, Bifurcations
Author Summary
The study of dynamical systems offers a powerful framework for interpreting neuroimaging data from a range of different contexts; however, as a field, we have yet to fully embrace the power of this approach. Here, we offer a brief overview of some key terms from the dynamical systems literature, and then highlight three ways in which neuroimaging studies can begin to embrace the dynamical systems approach: by shifting from local to global descriptions of activity, by moving from static to dynamic analyses, and by transitioning from descriptive to generative models of neural activity patterns.
INTRODUCTION
Making sense of the inner workings of the human brain is a daunting task. Whole-brain neuroimaging represents a crucial device for reducing our uncertainty about how the brain works. But what if the assumptions inherent within traditional neuroimaging analyses have us on the wrong track? In many ways, neuroscience is relatively preparadigmatic (Kuhn, 1962), akin to the field of biology before the insights of Charles Darwin, or chemistry before atomic theory. With this in mind, how then should we approach modeling the brain? We suggest that a dynamical systems perspective provides a path for scientists to break out of the piecemeal progress circumscribed by traditional, static data-fitting statistical procedures. This modeling approach is also ideally suited to mechanistic accounts of the emergence of actions, emotions, and thoughts. We argue that dynamical systems theory (DST) is naturally suited to discussing the temporal aspects of neural and behavioral phenomena, as well as how interactions—within the brain and between the brain and external phenomena—unfold over time.
Since the cognitive revolution, neural processes have been routinely described in terms of manipulations of discrete “states,” “symbols,” or “codes” (Brette, 2019). The prevailing analogy used by this approach is the notion of “digital computing”: The brain is argued to “process information” by flexibly rearranging between different states. This approach naturally leads to a view of the brain as a mosaic of disjoint, independent functional units—consider the oversimplified conception of the amygdala as exclusively devoted to processing “fear” (Pessoa & Adolphs, 2010). This strategy has generated a “parts list” for neural processes, but only rarely pays close attention to how the parts interact in order to mediate the behavior of the system as a whole. Moreover, the information-processing framework contains latent anthropomorphic thinking: coding, message-passing, and communication are metaphors that rely on the intuitive familiarity of social interactions—their neurobiological underpinnings are often left unstated (Brette, 2019).
In contrast to the view of the brain as a mosaic of quasi-independent functional units or agents, DST frames neural phenomena in terms of trajectories governed by coupled differential equations (Beurle, 1956; Caianiello, 1961; Corchs & Deco, 2004; Freeman, 1975; Griffith, 1963; Grossberg, 1967; Jirsa et al., 1994; Schoner & Kelso, 1988; Wilson & Cowan, 1972; Zeeman, 1973). These equations naturally lend themselves to causal and mechanistic interpretations, thereby cashing out anthropomorphic metaphors in terms of simpler biophysical processes such as excitation and inhibition. While the mathematical research behind DST has a long history, nonlinear dynamical systems exhibit behavior difficult to analyze without simulation. Advances in computational power have rendered DST much more tractable as a tool for neuroimaging (Breakspear, 2017; Cabral et al., 2014; Deco et al., 2009, 2011, 2013a, 2013b, 2015, 2021; Deco & Jirsa, 2012; Ghosh et al., 2008; Gollo et al., 2015; Hlinka & Coombes, 2012; Pillai & Jirsa, 2017; Sanz Perl et al., 2021; Shine et al., 2019a). Further, the DST modeling framework has enabled simulations of neural dynamics that are predictive and generative: simulated trajectories can be used to fit specific datasets (beim Graben et al., 2019; Golos et al., 2015; Hansen et al., 2015; Koppe et al., 2019; Vyas et al., 2020), but can also point researchers beyond data, for example, by contributing to experimental design and facilitating integration of findings from different paradigms and species.
An exhaustive survey of DST is beyond the scope of this review, but the key concepts have been described in depth in books accessible to neuroscientists (Durstewitz, 2017; Izhikevich, 2006; Rolls & Deco, 2010; Strogatz, 2015). Several neuroscience papers also serve as introductions to DST (Breakspear, 2017; Csete & Doyle, 2002; Favela, 2020, 2021; Miller, 2016; Shine et al., 2021), so here we will focus on how to integrate these modes of thinking with a functional, adaptive account of the brain. We will argue that DST is a lens that brings into sharp focus certain aspects of neural processing that are left somewhat blurred through the lens of the information-processing framework, including the importance of stability, flexibility, nonlinearity, and history dependence. Dynamical modes of description are particularly expressive for describing how humans and other animals pursue survival goals in ever-changing situations in ways that are both stable and fluid. More specifically, we argue that human neuroimaging, due to the availability of whole-brain sampling of brain dynamics, is especially suited to leverage concepts from DST (Deco et al., 2015; Galadí et al., 2021; Kringelbach & Deco, 2020). Importantly, beneath the surface-level complexity and abstraction of differential equations, DST enables a visual style of thinking that all neuroscientists can make use of in order to uncover causal and functional mechanisms (Daunizeau et al., 2012; Golos et al., 2015; Izhikevich, 2006; McIntosh & Jirsa, 2019; Rabinovich et al., 2006, 2015, 2020; Rabinovich & Varona, 2011; Shine et al., 2021; Wong & Wang, 2006).
In the first section of this review, we outline key concepts from DST that serve as building blocks for intuitive models of neural function. We then go on to suggest three ways in which current neuroimaging techniques can be productively combined with DST, thereby creating a powerful new vantage point from which to view the brain.
A VIEW OF THE BRAIN THROUGH THE DYNAMICAL SYSTEMS PRISM
Traditional functional analyses of brain areas have allowed researchers to identify statistically reliable neural “puzzle pieces.” These methods give us insight into what a brain area or network may functionally mediate, but not how this mediation unfolds in time, or better yet, how coordinated interactions between the identified neural regions manifest as behavior. Our claim is that DST is the ideal framework for piecing together this brain-behavior puzzle, given that it foregrounds interaction and timing (McIntosh & Jirsa, 2019). Moreover, a dynamical systems perspective may suggest principled ways to reformulate psychiatric conceptions (Durstewitz et al., 2021) and “folk psychological” terms used to describe behavior, such as “attention,” “memory,” “emotion,” and “cognition,” and the functions of a given region may be better understood as integrated network-level trajectories rather than modular and localizable processes (Hommel et al., 2019). Conversely, the functions of some localized areas may be better conceived in terms of their effects on network dynamics, rather than in terms of psychological concepts.
DST characterizes how a system—a neuron, a circuit, or even the whole brain—changes over time. A dynamical system is defined by its state space (or phase space), which characterizes the configurations available to the system. The dimensions of the state space specify the system’s possible dynamics. For example, each dimension could be the firing rate of a neuron, or the metabolic activity of voxels, or the intensity of a stimulus. At any instant in time, the system is understood as occupying a point in its state space; a trajectory is a path through the state space, mapping how the values for each dimension change over time (Figure 1). Differential equations stipulate how the system’s trajectory will evolve over time from a chosen starting point (the initial conditions).
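To make these terms concrete, here is a minimal numerical sketch (our own illustrative toy, using simple Euler integration; the damped-oscillator equations are not a model of any neural system). Each point of the trajectory is a state, and the differential equations determine where the system moves next:

```python
def simulate(f, state, dt=0.01, steps=2000):
    """Euler-integrate d(state)/dt = f(state), returning the trajectory."""
    traj = [tuple(state)]
    for _ in range(steps):
        deriv = f(state)
        state = [s + dt * d for s, d in zip(state, deriv)]
        traj.append(tuple(state))
    return traj

def damped(s):
    # A two-dimensional state space: position x and velocity v of a damped
    # oscillator, with dx/dt = v and dv/dt = -x - 0.5*v.
    return (s[1], -s[0] - 0.5 * s[1])

# From the initial conditions (x = 1, v = 0) the trajectory spirals in
# toward the fixed point at the origin.
traj = simulate(damped, [1.0, 0.0])
x_final, v_final = traj[-1]
print(x_final, v_final)  # both close to 0 after the transient decays
```

Plotting x against v, rather than either variable against time, is exactly the state-space view described above.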
DST enables concise descriptions of families of trajectories that share qualitative properties. For example, if a family of trajectories all tend toward a particular region of state space, then that region is called an attractor (the simplest of which is called a fixed point attractor). The parts of state space from which the system finds itself “drawn” to an attractor form the corresponding basin of attraction. The term “basin” here alludes to a valley in a mountain range — a ball placed on any slope of a valley will roll to the bottom. Understanding a state space as a landscape is an analogy that holds even in high-dimensional systems that cannot be visualized. The idea of an attractor provides an intuitive, mechanistic account of stability: a system in an attractor can be bumped or perturbed, but as long as the system stays within the attractor basin, it will eventually return to the bottom of the basin, like a marble rolling to the bottom of a bathtub. In contrast, a repeller is an inverted attractor, and therefore analogous to the top of a hill or a ridge: a system precariously balanced on a repeller will be driven away from it by even the smallest perturbation.
The topography of fixed points isn’t always so clear cut. Indeed, fixed points can combine attractive and repulsive properties, as is the case with a saddle point, which can be thought of topographically as similar to a mountain pass—unstable in one direction (i.e., you could just as easily move backward or forward along the path) but stable in another (i.e., it’s hard to climb the mountains on either side). Features such as saddle points inherently increase the potential complexity of emergent dynamics; however, it is important to point out that these qualitative features can only be identified when the differential equations of a system are posited. This implies that assigning terms such as “attractor” or “saddle” to a family of dynamic trajectories derived from data is necessarily dependent on the choice of model and cannot be inferred directly from data.
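These landscape intuitions can be checked directly on the simplest system that has them. The following sketch (our own toy example) uses the one-dimensional flow dx/dt = x - x^3, which has point attractors at x = -1 and x = +1 and a repeller at x = 0 acting as the ridge between the two basins:

```python
def settle(x, dt=0.01, steps=2000):
    """Follow dx/dt = x - x**3 until the system settles."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Any initial condition right of the ridge (x = 0) flows to the attractor
# at +1; anything left of it flows to -1. The ridge itself is a repeller:
# an exactly balanced system stays put, but is unstable to perturbations.
print(round(settle(0.5), 3))    # 1.0  (right-hand basin)
print(round(settle(-0.2), 3))   # -1.0 (left-hand basin)
print(round(settle(0.0), 3))    # 0.0  (balanced exactly on the repeller)
```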
The set of all possible motivational states of an animal is an example of an attractor landscape (Deco & Jirsa, 2012; Shine, 2021) or “energy” landscape (though the term “energy” here rests on a mathematical analogy, and the quantity involved need not possess the physical dimensions of energy). The attractor basin of any given goal-oriented state must not be too deep: if an animal becomes so unwavering in its search for food that it is not perturbed by the appearance of a predator, then it is unlikely to survive for very long. Thus, behavioral flexibility requires that certain stimuli can nudge the system from one attractor basin to another. In other words, the trajectories of a flexible neural system are likely to traverse regions of state space that are repellers, since such regions are poised to enter nearby attractor basins. Another example of an attractor landscape is the space of perceptual targets that can capture attention (Rabinovich et al., 2013). Focused, unwavering attention on a target might correspond to the system being in a valley that is much deeper than neighboring ones, and from which the system cannot easily be dislodged by distractors. Similarly, high distractibility should correspond to a landscape of shallow attractors. Depending on the modeling goal, DST can be used to simulate how individual psychological constructs change over time (e.g., anger; Hoeksma et al., 2007), or how mental states shift across a landscape of multiple competing mental states, jostled by environmental forces (Jirsa & Kelso, 2004; Riley & Holden, 2012; Tognoli & Kelso, 2014). Beyond attractors, there are more subtle qualitative patterns, such as those associated with transient dynamics, that may be required to characterize trajectories exhibiting both recurring phases and variability or flexibility (Rabinovich et al., 2008; Rabinovich & Varona, 2011).
These external transient stimuli can be considered using the language of DST: for a system residing in state space, the only way for the system to move against the direction prescribed by its intrinsic dynamics is through a perturbation. In fact, determining whether a perturbation is considered “small,” or an attractor basin is considered “deep,” depends on their relative scales, as well as the exact position of the system within the attractor basin. For a system occupying the deepest point in a given attractor, perturbations below a certain scale will never push the system out of the attractor basin. If a system has already been perturbed so that it is near the ridge separating an attractor basin from that of an adjacent attractor, a relatively small push may be all that is needed to disrupt stability (Figure 1C). In the case of attention, this implies that, however focused an attentional state may be, there will be a distractor or combination of distractors that will have sufficient magnitude to push the system out of the corresponding attractor basin. Difficulties in maintaining attentional focus may arise from neural disruptions or developmental abnormalities that change the attractor depth of a target relative to the magnitude of perturbations, rendering attention easily captured by distractors (Duch, 2019; Iravani et al., 2021; John et al., 2018).
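The scale-dependence of perturbations can be illustrated with the same double-well toy system (dx/dt = x - x^3, attractors at x = -1 and x = +1, ridge at x = 0). The kick sizes below are arbitrary choices of ours; the point is that an identical kick is absorbed near the basin bottom but crosses the ridge when the system is already displaced:

```python
def settle(x, dt=0.01, steps=2000):
    # dx/dt = x - x**3: attractors at x = -1 and x = +1, ridge at x = 0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

x_bottom = settle(1.0)                       # resting at the valley bottom
print(round(settle(x_bottom - 0.2), 2))      # 1.0: the kick is absorbed
x_near_ridge = 0.1                           # already displaced near the ridge
print(round(settle(x_near_ridge - 0.2), 2))  # -1.0: the same kick crosses over
```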
DST also provides theoretical tools that motivate segmenting the brain into quasi-independent subsystems; we will argue that this parcellation is far more illuminating than the traditional mosaic of functions. DST is not simply a taxonomy of attractors, repellers, and other qualitative features of trajectories. Important insights are derived from the study of bifurcations: qualitative changes to state space that arise from smooth parameter changes. Parameters, also referred to as “codimensions,” are distinct from the dimensions that define the state space. A typical example of a bifurcation is the transition from quiescence to stable repetitive spiking in the two-dimensional FitzHugh–Nagumo model and its descendants (FitzHugh, 1955; Izhikevich, 2006). In this simplification of the Hodgkin–Huxley model of the action potential, the excitatory input to the model neuron serves as a parameter, while the two dimensions are voltage and recovery, which characterize the spiking behavior. Increasing the input can trigger a Hopf bifurcation, in which a point attractor, the stable quiescent state, becomes unstable and an attractive limit cycle forms, as is the case for periodic action potentials. As with all concepts in DST, bifurcations have a precise meaning only when we specify the model equations. But awareness of the general idea may point researchers toward mathematical models and theoretical insight. For example, in the case of the motivational attractor landscape discussed above, a bifurcation could occur if the environment affords only one salient goal initially, but affords two, say, eating and mating, after a parameter change, such as a decrease in perceived danger—the shift from one to two motivational attractors constitutes a bifurcation. Bifurcations have also been used to model the development of psychiatric disorders such as depression (Ramirez-Mahaluf et al., 2017).
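The FitzHugh–Nagumo transition described above is easy to reproduce numerically. The sketch below uses the classic parameter values (a = 0.7, b = 0.8, eps = 0.08); the step size, run length, and peak-to-peak amplitude heuristic are our own choices:

```python
def fhn_amplitude(I, dt=0.05, steps=8000):
    """Euler-integrate the FitzHugh-Nagumo model for input current I and
    return the peak-to-peak amplitude of v over the second half of the run."""
    v, w = -1.2, -0.6                   # start near the resting state
    vs = []
    for i in range(steps):
        dv = v - v**3 / 3 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        if i > steps // 2:
            vs.append(v)
    return max(vs) - min(vs)

# Below the bifurcation the quiescent point attractor is stable; above it,
# an attractive limit cycle (repetitive spiking) appears.
print(fhn_amplitude(0.0))   # near zero: quiescence
print(fhn_amplitude(0.5))   # large: sustained oscillations
```

Sweeping I finely between these two values would localize the bifurcation, where the qualitative behavior of the model changes.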
NEUROMODULATING THE MANIFOLD
What kinds of neural phenomena can deform the multidimensional attractor landscapes of the brain? Viewing neuromodulatory ligands such as dopamine, noradrenaline, and serotonin as parameters of subnetworks in the brain may provide fresh perspectives on how the brain flexibly alters its own low-dimensional neural dynamics. There is long-standing evidence that neuromodulatory tone is tightly coupled to cognitive function, often by way of an inverted U-shaped relationship (Arnsten, 1998)—for example, increasing noradrenergic tone can shift an individual from a disengaged state into an engaged mindset, while further increases tip the system back toward disengagement. To test whether these capacities were linked to attractor landscape dynamics, Shine et al. (2018) mimicked the effects of neuromodulatory tone on neuronal activity by altering neural gain—effectively tuning how much influence individual populations in the network have over one another. Increasing neural gain at intermediate levels of excitability caused an abrupt, nonlinear increase in interregional synchrony that overlapped with empirical network topological signatures observed when analyzing task-based fMRI data (Shine et al., 2016). This same model was used to demonstrate a gain-mediated increase in interregional transfer entropy (Li et al., 2019). Given the similarity in the mechanisms by which neuromodulatory chemicals impact neural gain (Shine et al., 2021), we expect other neuromodulatory ligands to have similar effects on network dynamics, with idiosyncrasies that betray their unique functions (Kringelbach et al., 2020).
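The abrupt, nonlinear onset of synchrony as a coupling parameter increases can be illustrated with a deliberately minimal stand-in for the gain-modulated networks discussed above: two Kuramoto-style phase oscillators (our own toy example, not the model used by Shine et al., 2018). Their phase difference obeys d(phi)/dt = dw - 2K sin(phi), which locks only once K exceeds dw/2:

```python
import math

def phase_drift(K, dw=1.0, dt=0.01, steps=5000):
    """Total drift of the phase difference between two oscillators with
    natural-frequency difference dw and coupling strength K."""
    phi = 0.0
    for _ in range(steps):
        phi += dt * (dw - 2 * K * math.sin(phi))
    return phi

print(phase_drift(K=0.2))  # below threshold: the phase difference keeps growing
print(phase_drift(K=1.0))  # above threshold: locks near asin(0.5), about 0.52
```

The transition at K = dw/2 is sharp: an arbitrarily small parameter change near the threshold flips the system between drifting and phase-locked regimes, the qualitative signature of a bifurcation.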
Neuromodulatory ligands can also enact more subtle effects on state space dynamics (Figure 2). For instance, Munn et al. (2021) used a combination of 7T fMRI and statistical physics to demonstrate that the activity patterns in key hubs of the ascending arousal system differentially affect the brain’s attractor landscape. Specifically, activity in the locus coeruleus (the primary source of noradrenaline for the brain) was found to precede a flattening of the attractor landscape and hence allowed the system to leave an attractor with a smaller perturbation than was previously necessary. In contrast, blood flow in the basal nucleus of Meynert (the primary source of cholinergic inputs to the cortex) was found to precede moments in which the brain remained “stuck” in a deep well with a greatly diminished ability to escape. Importantly, these changes are also tied to alterations in phenomenological states. By analyzing fMRI data obtained during breath awareness meditation, Munn and colleagues found similar attractor landscape dynamics linked to alterations in internal awareness—specifically, the moments when meditators noticed that their thoughts had “wandered” from their breath. This phenomenon is also highly reminiscent of the notion of a noradrenaline-mediated “network reset” (Sara & Bouret, 2012), which has also been used to explain switches in perceptual stability associated with bistable images (Einhäuser et al., 2008), and hence may represent a fundamental feature of the intersection between neuromodulatory tone and network-level dynamics.
DYNAMICAL SYSTEMS THEORY FOR HUMAN NEUROIMAGING
Reframing neuroimaging data in the language of DST offers an exciting opportunity to investigate the brain using a precise language tailor-made for describing the distributed, dynamic, and highly integrated nature of the brain. Following in the footsteps of pioneering studies in the field that combined neuroimaging, computational modeling, and cognitive neuroscience tasks to advance our understanding of the rules that govern dynamical activity in the brain (Box 1), we identify three key principles through which neuroimaging researchers can adopt a dynamical systems perspective: zooming out from the local to the global level, trading off static for more dynamic descriptions of the brain, and moving from description to simulation (Figure 3). By designing neuroimaging approaches that embrace each of these aspects, we hope to entice the field toward more “ideal” experiments that will not only expose the inner workings of the brain, but also identify more sensitive means for interacting with the complex, adaptive, and dynamic nature of the brain.
Box 1. A spectrum of dynamical systems approaches in neuroimaging.
Differential equations are becoming increasingly popular in DST modeling of neuroimaging data (beim Graben et al., 2019; Kringelbach & Deco, 2020; Wang et al., 2019). However, as in the case of data-oriented modeling techniques represented schematically in Figure 1, differential equation-based methods occupy a continuous “feature space of models,” not all of which use the full suite of DST concepts. Three key features have helped us make sense of the ever-expanding literature on dynamical modeling and DST: (1) the extent of focus on qualitative or mechanistic explanations using qualitative patterns like attractors and bifurcations, (2) the extent of focus on quantitative fitting of data, and (3) the degree to which characterization of data is employed to explain behavior (cognition, emotion, and other processes).
While it is tempting to view qualitative and quantitative modeling as mutually exclusive extremes on a continuum, it is possible for a single model to excel at both. Recent work demonstrates that close attention to data and precise mechanistic models can go hand in hand (Breakspear, 2017; Deco & Jirsa, 2012; Kringelbach & Deco, 2020; Shine et al., 2021; Wang et al., 2019). Nevertheless, the sheer complexity of data, as well as the plurality of research goals, means that there cannot be a “one-size-fits-all” approach to dynamical modeling of the brain. Ideally, models that perform quantitative fitting and those that focus more on qualitative characterization can mutually constrain and inspire each other.
The third highlighted feature of DST models—the mapping between brain dynamics and behavior—in our view has the most scope for growth. Given the complexity of the brain, it is tempting to treat it as a phenomenon on its own, rather than as a central part of a wider set of behavioral phenomena: cognition, emotion, and action. Given that these phenomena can themselves be described in terms of dynamics, a key goal of DST in neuroimaging must be to show, beyond mere correlation, how specific patterns of neural dynamics give rise to specific patterns of behavioral dynamics. In other words, the neuroimaging field will benefit from DST models that not only generate accurate simulations and interface with lower level neural mechanisms, but also provide a causal and functional account of the dynamics of emotions or broad cognitive modes. Early steps in this direction include studies of meditation and sleep that map DST concepts directly onto neuroimaging data (Deco et al., 2019; Galadí et al., 2021; Melnychuk et al., 2018; Munn et al., 2021). Neuroimaging studies of clinical and psychiatric conditions are beginning to be viewed through the DST lens, including epilepsy (McIntosh & Jirsa, 2019), migraine (Dahlem & Isele, 2013), and schizophrenia (Loh et al., 2007). There are many opportunities for close integration between DST as a way to study neuroimaging data and DST as a perspective on how symptoms are generated, such as in attention deficit hyperactivity disorder (Iravani et al., 2021), autism (Duch, 2019), and depression (Ramirez-Mahaluf et al., 2017).
Zooming Out to View the Whole Network
The popular “massively univariate” statistical parametric mapping (SPM; Figure 3) approach employed in most fMRI research precludes a deep understanding of the dynamic brain, with its interconnections influencing each other and changing over time. In this traditional approach, following careful preprocessing steps (Esteban et al., 2019), independent statistical models fit a behavioral task paradigm (convolved with a hemodynamic response function or finite impulse response model to account for hemodynamic delay) to the time course of either a single voxel or an averaged, summary time series calculated from a (hopefully predefined) region of interest. Such approaches have been successful in identifying regions with particular functions (such as the fusiform face area), via the clustering of voxels independently identified with statistical models that typically involve task contrasts (such as activation during face vs. scene viewing). The early success of these methods has entrenched a relatively static mindset among academics that hinders more detailed explanations involving multiple regions interacting over time. While there are numerous examples of pioneering work linking whole-brain neuroimaging with circuit-level explanations, we maintain that purely stationary statistical models are insufficient for a mechanistic understanding of cognitive phenomena in both healthy and diseased states.
In contrast, the DST approach has an inherent and direct link to underlying mechanisms. For example, instead of performing a univariate analysis and reporting that a face viewing task “activates” the fusiform face area, researchers could report how whole-brain activation patterns shift from one state (while viewing scenes) to another (while viewing faces) and back again over time. Even with a univariate analysis, this perspective could be supported by routinely including animations of fMRI activity, and by using unthresholded surface maps for improved visualization. Multiecho sequences may even allow for sufficient denoising (Kundu et al., 2017) to examine individual trials, obviating the need for the trial averaging that obscures how ongoing network states shape evoked activity. Unthresholded animation, especially denoised, could then hint at a trajectory between states. Crucially, this approach would then offer additional steps, such as interrogating the neural processes most likely to have caused the differences between cognitive states (assuming a good observational model), or prediction of how the dynamics should change, given an intervention such as transcranial magnetic stimulation or a suitably chosen pharmacological agent.
Multivariate analyses have been steadily growing in popularity over recent years. These approaches begin with the assumption that neural representations are nonlocal: that is, that the functional capacities of the brain rely on distributed patterns of activity that reflect the influences that neural regions have over one another. The most widely adopted reverse (i.e., data fitting) multivariate approaches for measuring these effects are functional connectivity fMRI (FC), seed-based and independent component analysis (ICA), multivoxel pattern analysis (MVPA), and the effective connectivity approaches of psychophysiological interactions (PPI) and Granger causality (Figure 3). These methods provide insight into systems-level brain organization: for instance, the idea of a set of modular communities (derived using functional connectivity) that loosely relate to distinct functional capacities (Smith et al., 2009). However, it is important to note that these methods remain primarily focused on fitting data rather than specifying a generative model. As such, a substantial theoretical gap still remains between the appearance of these patterns and the mechanistic processes that could give rise to them. As we mentioned above, this problem can be mitigated in large part by grounding our investigations of neuroimaging data in a dynamical systems framework.
Other popular methods are based on the justified assumption that neural activity is low dimensional: the inherent degrees of freedom of neuroimaging data are typically far fewer than the number of different recordings that sample the brain (Churchland et al., 2012; Durstewitz, 2017; Gallego et al., 2020; Gotts et al., 2020; Shine et al., 2019a, 2019b). Embracing this assumption—using popular approaches such as principal component analysis (PCA) and ICA (Figure 3)—means that experimenters can reduce the number of independent variables that they need to track, a process that makes both interpretation and modeling substantially easier. In neuroimaging, the goal is typically to reduce the dimensionality of voxels or electrodes such that what was once an unwieldy dataset can now be effectively tracked (and visualized) in low-dimensional (“state”) space. In a recent fMRI study, Shine et al. (2019a) used PCA to reduce regional BOLD activity across multiple tasks to a set of low-dimensional components that were then shown to link clearly to analyses based on cognitive neuroscience, network neuroscience, DST, and neuromodulatory receptor expression. Crucially, certain critical assumptions of the dimensionality reduction approach are incompatible with aggressive preprocessing steps often used to “clean” data (Gotts et al., 2020)—careful modeling clearly shows that these strategies often “throw out the baby with the bathwater,” and hence should be applied with abundant caution. Regardless, this approach only scratches the surface of the potential for dimensionality reduction in systems neuroscience, as evidenced by the many examples from nonhuman studies (Chaudhuri et al., 2019; Mastrogiuseppe & Ostojic, 2018; Stringer et al., 2016).
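The low-dimensionality assumption can be demonstrated on synthetic data. In this sketch (entirely simulated; power iteration stands in for a full PCA), ten “regions” are driven by a single shared latent signal plus noise, and the leading component recovers almost all of the variance:

```python
import math, random

random.seed(0)
T, N = 200, 10                                           # time points x "regions"
latent = [math.sin(0.3 * t) for t in range(T)]           # one shared latent signal
loadings = [random.uniform(0.5, 1.5) for _ in range(N)]  # regional weights
data = [[loadings[i] * latent[t] + random.gauss(0, 0.05)
         for i in range(N)] for t in range(T)]

# Covariance matrix across regions
means = [sum(col) / T for col in zip(*data)]
cov = [[sum((data[t][i] - means[i]) * (data[t][j] - means[j])
            for t in range(T)) / T for j in range(N)] for i in range(N)]

# Power iteration: the leading eigenvalue of the covariance matrix is the
# variance captured by the top component
v = [1.0] * N
for _ in range(200):
    v = [sum(cov[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
top_eig = sum(v[i] * sum(cov[i][j] * v[j] for j in range(N)) for i in range(N))
explained = top_eig / sum(cov[i][i] for i in range(N))
print(round(explained, 3))   # close to 1: a single component dominates
```

Ten recorded channels here have effectively one degree of freedom, which is the situation the dimensionality reduction literature argues holds (less extremely) for real neuroimaging data.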
Graph theory provides another means for embracing the distributed nature of neural activity patterns (Sporns, 2015), enabling a more harmonious integration with DST. One such approach treats regions of the brain as nodes of a network (or graph), and then defines the edges between these nodes according to the strength of temporal similarity (for instance, using a Pearson’s correlation or wavelet coherence). Following this step, mathematical tools (Fornito et al., 2016) can be used to infer topological properties of the network, that is, those features that are present in the data, irrespective of the specific implementation (Sporns, 2013), and how these properties change as a function of factors such as the cognitive demands of the task (Shine & Poldrack, 2018). The approach is not without pitfalls, as seemingly trivial choices (such as the presence and extent of edge thresholding) can have substantial impacts on the conclusions inferred about particular cognitive capacities (Hallquist & Hillary, 2019). In addition, there is also evidence that the ability to decipher stable nodes can vary substantially as a function of different cognitive tasks (Salehi et al., 2020). Despite these concerns, these approaches do reveal important aspects of the systems-level dynamics of the brain, and hence are capable of generating predictions about how neural activity is grounded in the underlying neurobiology. Two pertinent examples from recent work involve linking brain network integration to the diffuse projections of the ascending noradrenergic system (Munn et al., 2021; Shine et al., 2016, 2018) and the matrix regions of the thalamus (Müller et al., 2020a, 2020b).
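A minimal version of the pipeline described above (time series as nodes, Pearson correlation as edge weights, and an explicit thresholding step) can be sketched as follows; the two “modules” and all parameter values are our own fabrications for illustration:

```python
import math, random

random.seed(1)

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# Two simulated "modules": regions 0-2 share one driving signal,
# regions 3-5 share another, plus independent noise.
T = 300
s1 = [random.gauss(0, 1) for _ in range(T)]
s2 = [random.gauss(0, 1) for _ in range(T)]
regions = [[s[t] + random.gauss(0, 0.5) for t in range(T)]
           for s in (s1, s1, s1, s2, s2, s2)]

# Edge weights: pairwise temporal correlation between nodes
corr = [[pearson(a, b) for b in regions] for a in regions]

def density(threshold):
    """Fraction of possible edges surviving a given threshold."""
    edges = sum(1 for i in range(6) for j in range(i + 1, 6)
                if corr[i][j] > threshold)
    return edges / 15

print(density(0.5))   # only the within-module edges survive this threshold
```

Which edges survive depends jointly on the correlations and the chosen threshold; as noted above, this seemingly trivial choice can reshape downstream topological conclusions.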
Shifting From Static to Dynamic
An organism is a constantly changing web of biophysical and electrochemical interactions. A natural consequence of this organization is that the manner in which stimuli are processed depends on the state of the organism at the precise moment that a stimulus arrives. In other words, the brain is inherently dynamic, and cannot be understood with mere static descriptions. For instance, it is essential not only to examine how activity levels in voxels change over time, but also to model how voxels influence each other. Unfortunately, the majority of approaches used in modern neuroimaging contain a hidden assumption of stationarity—when viewed through the lens of DST, this amounts to assuming that the brain is always in the same position in state space when a stimulus arrives, which is difficult to justify.
One simple way to incorporate dynamics into modern neuroimaging approaches is to extend analyses beyond the typical assumptions of zero-lag correlation that permeate the field. This is not to say that zero-lag patterns are uninterpretable—for example, the robustness and relative invariance of static network parcellations derived from long fc-fMRI scans suggest a form of slow dynamic stability, rather than an artifact of averaging. However, there is also evidence that, by calculating functional connectivity patterns across an entire scan, investigators potentially average across reconfigurations that occur over shorter time scales (Faskowitz et al., 2020; Honey et al., 2007; Karahanoğlu & Van De Ville, 2015). Fortunately, methods exist to soften these constraints (Robinson et al., 2021). For instance, tracking time-shifted correlations in fMRI showed that the well-known zero-lag temporal correlation structure of intrinsic activity emerges from propagating neural trajectories, as revealed by their lag structure (Mitra et al., 2015) (Figure 3). At their extreme, these patterns can be interpreted as spatiotemporal traveling waves (Raut et al., 2021) or eigenmodes (Robinson et al., 2021), which are amenable to dynamical systems modeling (Koch, 2021). Traveling wave models are an example of a broad class of coarse-graining approaches in DST that include neural field, neural mass and mean field models (Bojak et al., 2010; Byrne et al., 2020; Deco et al., 2013b; Müller et al., 2020a; Shine et al., 2021; Wang et al., 2019). Another pertinent example comes from the field of time-varying functional connectivity, which typically breaks a standard neuroimaging scan into smaller windows and then characterizes fluctuations in correlation patterns over time (Lurie et al., 2020). In both cases, embracing the dynamics inherent in interregional coordination can pave the way to more powerful generative models of the human brain and its mediation of behavior.
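The lag-structure idea can be illustrated with a toy example (synthetic signals and numpy only; this is not the actual method of Mitra et al., 2015): when one "region" follows another with a temporal delay, scanning correlations across lags recovers structure that the zero-lag correlation understates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy regional signals: region B follows region A with a 3-sample lag.
n = 500
a = np.convolve(rng.standard_normal(n + 10), np.ones(10) / 10, mode="valid")[:n]
b = np.roll(a, 3) + 0.2 * rng.standard_normal(n)

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

# Scan a window of lags and find where the correlation peaks.
lags = range(-10, 11)
r = [lagged_corr(a, b, k) for k in lags]
best_lag = list(lags)[int(np.argmax(r))]   # recovers the built-in 3-sample lag
```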
A common criticism of fMRI is that the typical temporal resolution is slower than the time scales of most perceptual and behavioral changes. While this is true for fast behavioral choices, homeostatic processes in humans and other organisms necessarily take place at a variety of temporal scales. The fastest perceptions and reactions are embedded in slow dynamical trajectories that may correspond to phenomena such as mood, affect, or cognitive mode, which in turn are embedded in even slower trajectories such as hormonal/circadian rhythms and so on. The temporally and spatially coarse-grained nature of whole-brain functional imaging makes it well suited to characterizing “quasi-invariants”—neural contexts within which perception, thinking, and action are framed. Neural dynamics is organized across an intertwined temporal hierarchy, with causal relationships operating in both directions. For example, slower oscillations modulate fast oscillations (Tort et al., 2010), and, psychologically, a sudden fright may cause a lasting change of mood. As a first approximation, it is useful to think of slower fMRI findings as a window into slow processes that set the context for faster processing. Further, clever task designs can identify faster responses, on the order of hundreds of milliseconds (Lewis et al., 2018), so even faster dynamics can be studied.
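The "slow modulates fast" relationship can be sketched in a few lines (a synthetic illustration of phase-amplitude coupling in the spirit of Tort et al., 2010, not their modulation-index method; frequencies and coupling strength are arbitrary choices):

```python
import numpy as np

# A temporal hierarchy in miniature: the phase of a slow (2 Hz) rhythm
# modulates the amplitude of a fast (40 Hz) rhythm.
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
slow_phase = 2 * np.pi * 2 * t
fast = (1 + 0.8 * np.cos(slow_phase)) * np.sin(2 * np.pi * 40 * t)

# Crude coupling estimate: mean fast-rhythm amplitude within slow-phase bins.
phase = np.angle(np.exp(1j * slow_phase))            # wrap to (-pi, pi]
amplitude = np.abs(fast)
bins = np.digitize(phase, np.linspace(-np.pi, np.pi, 9)) - 1
profile = np.array([amplitude[bins == k].mean() for k in range(8)])

# Amplitude peaks near slow-phase zero and dips near +/- pi.
coupling = profile.max() - profile.min()
```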
Another potential barrier to application of dynamical analysis of fMRI is the fact that most fMRI paradigms involve analysis of data from predetermined epochs, whether they are blocks of stimuli or collections of rapidly presented events. While traditionally considered important for ensuring effective signal-to-noise properties, the constraints imposed by these approaches can limit the conclusions made about the dynamical processes at play. Moreover, a pure task-based division of neural recordings will average out any functional variability that is independent of the task structure. In other words, the underlying assumption is that all functionally relevant neural dynamics are strongly correlated to the temporal division assumed by the experimenter. Fortunately, newer task structures such as movie watching (Finn & Bandettini, 2020; Meer et al., 2020) and videogames (Richlan et al., 2018) do not impose the event structures that are typically used in signal-averaging approaches. Instead, dynamical models can be constructed that predict how the trajectory of brain states will change in concert with the videogame, and these simulations can then be compared with the fMRI data acquired.
The notion of attractor landscapes provides enticing links to whole-brain neuroimaging and suggests a set of neural trajectory analyses that can be applied to neuroimaging data. In this framing, brain states evolve along the attractor landscape topography, much like a ball that rolls down a valley under the influence of gravity but requires energy to traverse up a hill; these two movements correspond to evolution toward an attractive brain state and away from a repulsive one, respectively. This technique can resolve what might otherwise be obscured states of attraction (and repulsion) in a multistable system and has been successfully applied to the dynamics of spiking neurons (Tkačik et al., 2015), BOLD fMRI (Munn et al., 2021; Watanabe et al., 2013, 2014), and MEG (Krzemiński et al., 2020). The approach offers several conceptual advances, but perhaps most importantly, it renders the otherwise daunting task of systems-level interpretation relatively intuitive. Importantly, this framework extends beyond mere analogy, as the topography of the attractor landscape shares a 1-to-1 correspondence with the generative equations required to synthesize realistic neural time series data (Breakspear, 2017). For example, Munn et al. (2021) compared trajectories of BOLD activity following phasic bursts of subcortical regions of the ascending arousal system, and by leveraging the attractor landscape approach it was apparent that adrenergic and cholinergic neuromodulation actively modulated the strength of an attractor state.
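One simple way to make the landscape metaphor computational (a toy sketch, not the specific energy-landscape estimators used in the cited studies) is to simulate noisy dynamics on a known double-well potential and then reconstruct the landscape from how often each state is occupied:

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdamped noisy dynamics on a double-well potential U(x) = x**4/4 - x**2/2:
#   dx/dt = -U'(x) + noise
# The wells at x = -1 and x = +1 act as attractors; the hilltop at x = 0 is
# a repeller separating their basins of attraction.
dt, steps = 0.01, 100_000
x = np.empty(steps)
x[0] = 0.1
for i in range(steps - 1):
    drift = -(x[i] ** 3 - x[i])                        # -U'(x)
    x[i + 1] = x[i] + drift * dt + 0.7 * np.sqrt(dt) * rng.standard_normal()

# Empirical "landscape": negative log of the state-occupancy histogram.
# Frequently visited states (wells) sit low; rarely visited states sit high.
counts, edges = np.histogram(x, bins=60, range=(-2.0, 2.0), density=True)
landscape = -np.log(counts + 1e-12)
centres = 0.5 * (edges[:-1] + edges[1:])
```

The reconstructed landscape shows a barrier near x = 0 between two valleys, mirroring the potential that generated the data; this is the sense in which landscape topography and generative equations correspond.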
Moving From Description to Simulation
All computational models in biology can be situated on a continuum from “reverse” to “forward,” based on their relationship with experimental data (Gunawardena, 2014). Statistical models proceed in the “reverse” direction: the modeling begins with experimental data and then “reverse engineers” the causal mechanisms that generated the data. In contrast, “forward” modeling starts with known or hypothetical causal mechanisms, which are used to generate patterns that mirror key aspects of experimental data (Breakspear, 2017). These two approaches were combined in what is arguably the most successful model in neuroscience, the Hodgkin–Huxley model of action potential generation (Hodgkin & Huxley, 1952): the data fitting facilitated the discovery of a system of differential equations that pointed toward the mechanisms underlying action potential generation.
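To make the "forward" direction concrete, here is a minimal simulation of the FitzHugh–Nagumo equations, a classic two-variable reduction of the Hodgkin–Huxley model (parameter values are standard textbook choices, not drawn from the works cited here): the hypothesized mechanism is written down first, and data are generated from it rather than fitted to it.

```python
import numpy as np

def simulate_fhn(I, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=20_000):
    """Euler integration of the FitzHugh-Nagumo equations:
       dv/dt = v - v**3/3 - w + I,   dw/dt = (v + a - b*w) / tau."""
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for i in range(steps):
        v += dt * (v - v ** 3 / 3 - w + I)
        w += dt * (v + a - b * w) / tau
        vs[i] = v
    return vs

# Qualitative predictions from the mechanism, not fits to data:
quiet = simulate_fhn(I=0.0)    # trajectory settles onto a stable fixed point
spiking = simulate_fhn(I=0.5)  # input pushes the system through a bifurcation
                               # into a limit cycle (sustained "spiking")
```

Note that the interesting claim here is qualitative, in the spirit of the argument below: a bifurcation from quiescence to oscillation as input current increases, rather than a quantitative fit to any particular recording.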
At scales larger than the single neuron, forward modeling becomes increasingly underconstrained by experimental data. There is also no consensus on the neurobiological underpinnings of neuroimaging techniques (Breakspear, 2017). But the lack of constraint by data does not mean that forward models cannot be built: careful analysis of anatomy, behavior, and evolutionary history can provide modelers with well-justified mechanisms that can be captured by differential equations. Further, given the variability of neural and behavioral data, it does not make sense for generative models to cleave too closely to specific quantitative recordings. Qualitative descriptions and predictions can be more robust than quantitative data fits, as they generalize more easily, being less sensitive to idiosyncratic features of specific experiments. For instance, the notion that acetylcholine and noradrenaline can modulate attractor landscape topography (Munn et al., 2021) can be imported into the design of future experiments, not only in the context of meditation, but also in studies of attention more broadly construed. It also creates bridges with nonhuman research techniques that can directly manipulate these neuromodulators.
There are existing software programs for simulating dynamical systems, such as the Brain Dynamics Toolbox (Breakspear & Heitmann, 2010; Heitmann & Breakspear, 2018) and the Virtual Brain (Ritter et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2021; Spiegler et al., 2016). Using these tools, DST concepts can be directly tested through comparison of model outputs with fMRI data. However, because the field of DST in neuroimaging is rapidly evolving, software packages may be less flexible than custom simulations written in programming languages like MATLAB, Python, or Julia. For example, custom code can be used to construct layer-specific models that incorporate the precise, compartment-specific connectivity principles that are present in the cerebral cortex (Braitenberg & Lauria, 1960; Du et al., 2012; Havlicek & Uludağ, 2020; Stephan et al., 2019). Regardless of the computational approach taken, the activity dynamics for each of the regions or neurons can be simulated, and the activity can then be convolved with a canonical hemodynamic response function, or better yet, with more advanced models of hemodynamics (Aquino et al., 2012; Pang et al., 2016). The output of this simulation can then be compared qualitatively with fMRI data collected during an experiment, with further iterations of the model bringing theory into closer contact with empirical data. This approach will be particularly powerful when combined with advances in fast sampling-rate (Polimeni & Lewis, 2021) and layer-resolved fMRI recordings (Huber et al., 2021; Polimeni et al., 2010), both of which will increase the precision with which models can be integrated with neuroimaging data.
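The simulate-then-convolve logic can be sketched in a few lines (a toy, one-node illustration with hypothetical parameters, using numpy rather than the toolboxes above, and a gamma-function approximation to the canonical double-gamma HRF):

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(3)

dt = 0.1                                     # seconds
t = np.arange(0, 120, dt)
task = ((t % 30) < 15).astype(float)         # 15 s on / 15 s off block design

# One-node neural mass: tau * dr/dt = -r + sigmoid(task input + noise)
r = np.zeros_like(t)
for i in range(len(t) - 1):
    drive = 4.0 * task[i] - 2.0 + 0.3 * rng.standard_normal()
    r[i + 1] = r[i] + (dt / 0.5) * (-r[i] + 1.0 / (1.0 + np.exp(-drive)))

# Double-gamma HRF: response peaking near 5 s with an undershoot near 15 s.
ht = np.arange(0, 30, dt)
hrf = (ht ** 5 * np.exp(-ht) / gamma(6)
       - ht ** 15 * np.exp(-ht) / gamma(16) / 6.0)
hrf /= hrf.sum()

# Convolve neural activity with the HRF, then sample at a typical TR of 2 s.
bold = np.convolve(r - r.mean(), hrf)[: len(t)]
tr_bold = bold[:: int(2 / dt)]
```

The simulated BOLD signal lags the underlying neural activity by several seconds, so comparisons against empirical fMRI must account for this hemodynamic delay; in real applications the cited hemodynamic models (Aquino et al., 2012; Pang et al., 2016) would replace the canonical kernel used here.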
It is important to note that a key constraint imposed by computational models is their degree of abstraction from the “veridical”: the adult human brain is vastly more complex than any tractable neural model, such that even the most detailed computational model will lack the degrees of freedom to characterize the true nature of the dynamical system with complete fidelity. One way to mitigate this issue is to design modeling architectures that express a particular feature of neuroanatomy, and then, after investigating any interesting implications of the feature, compare the outputs of the model with empirical neuronal recordings. The Virtual Brain (Ritter et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2021) is an excellent example of a toolbox that affords access to this approach, and has been used to demonstrate important links between structure and function across many spatiotemporal scales. In these approaches, users define the network structure and computational model of interest, and then manipulate whichever parameters are of experimental interest. A complementary approach is to design more bespoke neural architectures, such as those that embrace interactions between the cerebral cortex and thalamus, and then work to determine what the benefits and costs of such an architecture might be. For instance, the presence of a population of relatively diffuse thalamocortical projections (as is the case for matrix thalamic nuclei; Jones, 2001; Müller et al., 2020a) can shift a network of corticothalamic neural masses into a quasi-critical regime characterized by the continual formation and dissolution of neuronal ensembles in a way that maximizes a trade-off between network integration and segregation (Müller et al., 2020b).
Although these approaches can be quite insightful, it is important to remember to pick a scale of modeling that matches both the mechanism of interest, and the particular imaging technique that the researcher is interested in interrogating.
A point worth stressing is that DST goes beyond the use of differential equations to fit data. For example, some variations of DCM (Cao et al., 2019; Friston et al., 2019) focus on data fitting but do not employ qualitative concepts such as attractor landscapes, limit cycles, or bifurcations, partly because they restrict themselves to the linear domain (Sadeghi et al., 2020), whereas more sophisticated nonlinear variations do (Daunizeau et al., 2012; Roberts et al., 2017a, 2017b). Models based on differential equations, whether linear or nonlinear, are also generative, and can simulate hypothetical BOLD data. In addition to the capacity for quantitative fits and simulations, DST offers conceptual tools that create bridges between data and neural mechanisms. In principle, any neuroimaging outcome measure can be generated by a well-designed forward model, but measures that embrace the complex, dynamical features of biological data (Bizzarri et al., 2019; Juarrero, 2002) will likely lead to a more rich causal understanding. Further, as we have mentioned at various points in this manuscript, the qualitative tools of DST—attractors, bifurcations, metastability, etc.—not only help account for data and neural processes, but also create natural links with the dynamics of behavior and cognition (also see Box 1).
CONCLUSIONS
In this Review, we have argued that the DST framework has the potential to revolutionize the analysis of neuroimaging data and how these data account for behavior, both in artificial task-based protocols and in more naturalistic situations such as movie watching. We have argued that embracing this perspective will enable the discovery of otherwise latent links between neural mechanisms and the patterns that we measure with standard imaging approaches, which in turn can be used to rapidly augment our understanding of the brain, both in health and disease. For instance, we argue that a renewed focus on time-varying dynamics, whether through the identification of qualitative but well-characterized dynamical phenomena (such as stability and limit cycles) or, ideally, through the geometric interpretation of results emergent in whole-brain neuroimaging data (e.g., in terms of attractor basins or saddles), will lead to rapid progress in systems neuroscience. This paradigm shift is already well underway, as evidenced by numerous papers that have used neuroimaging to derive measures of stability, entropy, and low-dimensional attractor manifolds as a function of different task contexts (Chaudhuri et al., 2019; Koppe et al., 2019; Müller et al., 2020b; Munn et al., 2021).
There is much work to be done. Fortunately, a major benefit of the DST approach is that there exists a large corpus of fMRI data that can be reanalyzed within the frame imposed by dynamical systems, potentially leading to major new insights into the brain bases of higher order mental phenomena. To this end, we strongly recommend that interested neuroscientists reach out to and actively collaborate with computational modelers in order to construct models that can make predictions and deepen intuition about, and explanation of, the data already acquired. Of course, the advent of higher spatial and temporal resolution data, and interventional datasets like those that combine optogenetics with fMRI (Ryali et al., 2016), will undoubtedly further accelerate progress. Nonlinear dynamical systems must generally be studied through numerical simulation, so advances in computational power fuel advances in what can be understood with DST. The synergistic interactions that will emerge between DST and imaging are a crucial step toward the maturation of the field of systems neuroscience.
AUTHOR CONTRIBUTIONS
Yohan J. John: Conceptualization; Writing – original draft; Writing – review & editing. Kayle S. Sawyer: Conceptualization; Writing – original draft; Writing – review & editing. Karthik Srinivasan: Conceptualization; Writing – original draft; Writing – review & editing. Eli J. Müller: Conceptualization; Visualization; Writing – original draft; Writing – review & editing. Brandon R. Munn: Conceptualization; Writing – original draft; Writing – review & editing. James Shine: Conceptualization; Visualization; Writing – original draft; Writing – review & editing.
FUNDING INFORMATION
James Shine, National Health and Medical Research Council (https://dx.doi.org/10.13039/501100000925), Award ID: 1193857.
TECHNICAL TERMS
- State space:
A representation of all possible states that can be attained by the system; each state corresponds to a point in state space.
- Trajectory:
The time course of a system given a particular set of initial conditions.
- Attractor:
A region of one or more fixed points that trajectories move towards.
- Fixed point:
A point in state space where the system is stationary (i.e., the derivative with respect to time is zero).
- Basin of attraction:
An area of state space from which systems will evolve towards a particular attractor.
- Repeller:
A region of one or more fixed points that trajectories move away from.
- Attractor landscape:
A state space containing multiple basins of attraction.
- Perturbation:
A small extrinsic change in the position of the system in state space (not governed by the system’s differential equations).
- Bifurcation:
A qualitative change in the behavior of the system produced by a change in a parameter of the differential equations.
- Limit cycle:
A region of state space that takes the form of a closed, cyclic trajectory.
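Several of these terms can be illustrated numerically with the textbook one-dimensional system dx/dt = r − x² (our illustration, not a model drawn from the literature reviewed above):

```python
import numpy as np

def trajectory(x0, r, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = r - x**2 from initial condition x0."""
    x = x0
    for _ in range(steps):
        x += dt * (r - x ** 2)
        if x < -50.0:            # trajectory has escaped toward -infinity
            return -np.inf
    return x

# For r = 1 there is an attractor (stable fixed point) at x = +1 and a
# repeller (unstable fixed point) at x = -1; the basin of attraction of
# the attractor is every initial condition x0 > -1.
inside_basin = trajectory(x0=0.0, r=1.0)      # converges to the attractor
outside_basin = trajectory(x0=-1.5, r=1.0)    # starts past the repeller

# Lowering r through 0 is a bifurcation: the attractor and repeller
# collide and annihilate, and no fixed points remain.
post_bifurcation = trajectory(x0=0.0, r=-0.5)
```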
REFERENCES
- Aquino, K. M., Schira, M. M., Robinson, P. A., Drysdale, P. M., & Breakspear, M. (2012). Hemodynamic traveling waves in human visual cortex. PLoS Computational Biology, 8(3), e1002435. 10.1371/journal.pcbi.1002435, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Arnsten, A. F. T. (1998). The biology of being frazzled. Science, 280(5370), 1711–1712. 10.1126/science.280.5370.1711, [DOI] [PubMed] [Google Scholar]
- beim Graben, P., Jimenez-Marin, A., Diez, I., Cortes, J. M., Desroches, M., & Rodrigues, S. (2019). Metastable resting state brain dynamics. Frontiers in Computational Neuroscience, 13, 62. 10.3389/fncom.2019.00062, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Beurle, R. (1956). Properties of a mass of cells capable of regenerating pulses. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 240(669), 55–94. 10.1098/rstb.1956.0012 [DOI] [Google Scholar]
- Bizzarri, M., Brash, D. E., Briscoe, J., Grieneisen, V. A., Stern, C. D., & Levin, M. (2019). A call for a better understanding of causation in cell biology. Nature Reviews Molecular Cell Biology, 20(5), 261–262. 10.1038/s41580-019-0127-1, [DOI] [PubMed] [Google Scholar]
- Bojak, I., Oostendorp, T. F., Reid, A. T., & Kötter, R. (2010). Connecting mean field models of neural activity to EEG and fMRI data. Brain Topography, 23(2), 139–149. 10.1007/s10548-010-0140-3, [DOI] [PubMed] [Google Scholar]
- Braitenberg, V., & Lauria, F. (1960). Toward a mathematical description of the grey substance of nervous systems. Il Nuovo Cimento (1955–1965), 18(2), 149–165. 10.1007/bf02783537 [DOI] [Google Scholar]
- Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3), 340–352. 10.1038/nn.4497, [DOI] [PubMed] [Google Scholar]
- Breakspear, M., & Heitmann, S. (2010). Generative models of cortical oscillations: Neurobiological implications of the Kuramoto model. Frontiers in Human Neuroscience, 4, 190. 10.3389/fnhum.2010.00190, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brette, R. (2019). Is coding a relevant metaphor for the brain? Behavioral and Brain Sciences, 42, e215. 10.1017/s0140525x19000049, [DOI] [PubMed] [Google Scholar]
- Byrne, Á., O’Dea, R. D., Forrester, M., Ross, J., & Coombes, S. (2020). Next-generation neural mass and field modeling. Journal of Neurophysiology, 123(2), 726–742. 10.1152/jn.00406.2019, [DOI] [PubMed] [Google Scholar]
- Cabral, J., Kringelbach, M. L., & Deco, G. (2014). Exploring the network dynamics underlying brain activity during rest. Progress in Neurobiology, 114, 102–131. 10.1016/j.pneurobio.2013.12.005, [DOI] [PubMed] [Google Scholar]
- Caianiello, E. R. (1961). Outline of a theory of thought-processes and thinking machines. Journal of Theoretical Biology, 1, 204–235. 10.1016/0022-5193(61)90046-7, [DOI] [PubMed] [Google Scholar]
- Cao, X., Sandstede, B., & Luo, X. (2019). A functional data method for causal dynamic network modeling of task-related fMRI. Frontiers in Neuroscience, 13, 127. 10.3389/fnins.2019.00127, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chaudhuri, R., Gerçek, B., Pandey, B., Peyrache, A., & Fiete, I. (2019). The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nature Neuroscience, 22(9), 1512–1520. 10.1038/s41593-019-0460-x, [DOI] [PubMed] [Google Scholar]
- Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., & Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487(7405), 51–56. 10.1038/nature11129, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Corchs, S., & Deco, G. (2004). Feature-based attention in human visual cortex: Simulation of fMRI data. NeuroImage, 21(1), 36–45. 10.1016/j.neuroimage.2003.08.045, [DOI] [PubMed] [Google Scholar]
- Csete, M. E., & Doyle, J. C. (2002). Reverse engineering of biological complexity. Science, 295(5560), 1664–1669. 10.1126/science.1069981, [DOI] [PubMed] [Google Scholar]
- Dahlem, M. A., & Isele, T. M. (2013). Transient localized wave patterns and their application to migraine. The Journal of Mathematical Neuroscience, 3(1), 7. 10.1186/2190-8567-3-7, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Daunizeau, J., Stephan, K. E., & Friston, K. J. (2012). Stochastic dynamic causal modelling of fMRI data: Should we care about neural noise? NeuroImage, 62(1), 464–481. 10.1016/j.neuroimage.2012.04.061, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deco, G., Cruzat, J., Cabral, J., Tagliazucchi, E., Laufs, H., Logothetis, N. K., & Kringelbach, M. L. (2019). Awakening: Predicting external stimulation to force transitions between different brain states. Proceedings of the National Academy of Sciences, 116(36), 18088–18097. 10.1073/pnas.1905534116, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deco, G., & Jirsa, V. K. (2012). Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. Journal of Neuroscience, 32(10), 3366–3375. 10.1523/jneurosci.2523-11.2012, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deco, G., Jirsa, V. K., & McIntosh, A. R. (2011). Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience, 12(1), 43–56. 10.1038/nrn2961, [DOI] [PubMed] [Google Scholar]
- Deco, G., Jirsa, V. K., & McIntosh, A. R. (2013a). Resting brains never rest: Computational insights into potential cognitive architectures. Trends in Neurosciences, 36(5), 268–274. 10.1016/j.tins.2013.03.001, [DOI] [PubMed] [Google Scholar]
- Deco, G., Kringelbach, M. L., Arnatkeviciute, A., Oldham, S., Sabaroedin, K., Rogasch, N. C., Aquino, K. M., & Fornito, A. (2021). Dynamical consequences of regional heterogeneity in the brain’s transcriptional landscape. Science Advances, 7(29), eabf4752. 10.1126/sciadv.abf4752, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deco, G., Ponce-Alvarez, A., Mantini, D., Romani, G. L., Hagmann, P., & Corbetta, M. (2013b). Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations. Journal of Neuroscience, 33(27), 11239–11252. 10.1523/jneurosci.1091-13.2013, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Deco, G., Rolls, E. T., & Romo, R. (2009). Stochastic dynamics as a principle of brain function. Progress in Neurobiology, 88(1), 1–16. 10.1016/j.pneurobio.2009.01.006, [DOI] [PubMed] [Google Scholar]
- Deco, G., Tononi, G., Boly, M., & Kringelbach, M. L. (2015). Rethinking segregation and integration: Contributions of whole-brain modelling. Nature Reviews Neuroscience, 16(7), 430–439. 10.1038/nrn3963, [DOI] [PubMed] [Google Scholar]
- Du, J., Vegh, V., & Reutens, D. C. (2012). The laminar cortex model: A new continuum cortex model incorporating laminar architecture. PLoS Computational Biology, 8(10), e1002733. 10.1371/journal.pcbi.1002733, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Duch, W. (2019). Autism spectrum disorder and deep attractors in neurodynamics. In Cutsuridis V. (Ed.), Multiscale Models of Brain Disorders (pp. 135–146). Springer International Publishing. 10.1007/978-3-030-18830-6_13 [DOI] [Google Scholar]
- Durstewitz, D. (2017). Advanced data analysis in neuroscience: Integrating statistical and computational models. New York, NY: Springer. 10.1007/978-3-319-59976-2 [DOI] [Google Scholar]
- Durstewitz, D., Huys, Q. J. M., & Koppe, G. (2021). Psychiatric illnesses as disorders of network dynamics. Biological Psychiatry: Cognitive Neuroscience and NeuroImaging, 6(9), 865–876. 10.1016/j.bpsc.2020.01.001, [DOI] [PubMed] [Google Scholar]
- Einhäuser, W., Stout, J., Koch, C., & Carter, O. (2008). Pupil dilation reflects perceptual selection and predicts subsequent stability in perceptual rivalry. Proceedings of the National Academy of Sciences of the United States of America, 105(5), 1704–1709. 10.1073/pnas.0707727105, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik, A. I., Erramuzpe, A., … Gorgolewski, K. J. (2019). fMRIPrep: A robust preprocessing pipeline for functional MRI. Nature Methods, 16(1), 111–116. 10.1038/s41592-018-0235-4, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Faskowitz, J., Esfahlani, F. Z., Jo, Y., Sporns, O., & Betzel, R. F. (2020). Edge-centric functional network representations of human cerebral cortex reveal overlapping system-level architecture. Nature Neuroscience, 23(12), 1644–1654. 10.1038/s41593-020-00719-y, [DOI] [PubMed] [Google Scholar]
- Favela, L. H. (2020). Dynamical systems theory in cognitive science and neuroscience. Philosophy Compass, 15(8), e12695. 10.1111/phc3.12695 [DOI] [Google Scholar]
- Favela, L. H. (2021). The dynamical renaissance in neuroscience. Synthese, 199(1), 2103–2127. 10.1007/s11229-020-02874-y [DOI] [Google Scholar]
- Finn, E. S., & Bandettini, P. A. (2020). Movie-watching outperforms rest for functional connectivity-based prediction of behavior. bioRxiv. 10.1101/2020.08.23.263723 [DOI] [PMC free article] [PubMed] [Google Scholar]
- FitzHugh, R. (1955). Mathematical models of threshold phenomena in the nerve membrane. The Bulletin of Mathematical Biophysics, 17(4), 257–278. 10.1007/bf02477753 [DOI] [Google Scholar]
- Fornito, A., Zalesky, A., & Bullmore, E. T. (2016). Fundamentals of brain network analysis. Amsterdam, the Netherlands: Elsevier/Academic Press. [Google Scholar]
- Freeman, W. J. (1975). Mass action in the nervous system: Examination of the neurophysiological basis of adaptive behavior through the EEG. New York, NY: Academic Press. 10.1016/C2009-0-03145-6 [DOI] [Google Scholar]
- Friston, K. J., Preller, K. H., Mathys, C., Cagnan, H., Heinzle, J., Razi, A., & Zeidman, P. (2019). Dynamic causal modelling revisited. NeuroImage, 199, 730–744. 10.1016/j.neuroimage.2017.02.045, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Galadí, J. A., Silva Pereira, S., Sanz Perl, Y., Kringelbach, M. L., Gayte, I., Laufs, H., Tagliazucchi, E., Langa, J. A., & Deco, G. (2021). Capturing the non-stationarity of whole-brain dynamics underlying human brain states. NeuroImage, 244, 118551. 10.1016/j.neuroimage.2021.118551, [DOI] [PubMed] [Google Scholar]
- Gallego, J. A., Perich, M. G., Chowdhury, R. H., Solla, S. A., & Miller, L. E. (2020). Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience, 23(2), 260–270. 10.1038/s41593-019-0555-4, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ghosh, A., Rho, Y., McIntosh, A. R., Kotter, R., & Jirsa, V. K. (2008). Noise during rest enables the exploration of the brain’s dynamic repertoire. PLoS Computational Biology, 4(10), e1000196. 10.1371/journal.pcbi.1000196, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gollo, L. L., Zalesky, A., Hutchison, R. M., van den Heuvel, M., & Breakspear, M. (2015). Dwelling quietly in the rich club: Brain network determinants of slow cortical fluctuations. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668), 20140165. 10.1098/rstb.2014.0165, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Golos, M., Jirsa, V., & Daucé, E. (2015). Multistability in large scale models of brain activity. PLoS Computational Biology, 11(12), e1004644. 10.1371/journal.pcbi.1004644, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gotts, S. J., Gilmore, A. W., & Martin, A. (2020). Brain networks, dimensionality, and global signal averaging in resting-state fMRI: Hierarchical network structure results in low-dimensional spatiotemporal dynamics. NeuroImage, 205, 116289. 10.1016/j.neuroimage.2019.116289, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Griffith, J. S. (1963). A field theory of neural nets: I. Derivation of field equations. The Bulletin of Mathematical Biophysics, 25, 111–120. 10.1007/BF02477774, [DOI] [PubMed] [Google Scholar]
- Grossberg, S. (1967). Nonlinear difference-differential equations in prediction and learning theory. Proceedings of the National Academy of Sciences of the United States of America, 58(4), 1329–1334. 10.1073/pnas.58.4.1329, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gunawardena, J. (2014). Models in biology: “Accurate descriptions of our pathetic thinking.” BMC Biology, 12, 29. 10.1186/1741-7007-12-29, [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hallquist, M. N., & Hillary, F. G. (2019). Graph theory approaches to functional network organization in brain disorders: A critique for a brave new small-world. Network Neuroscience, 3(1), 1–26. 10.1162/netn_a_00054
- Hansen, E. C. A., Battaglia, D., Spiegler, A., Deco, G., & Jirsa, V. K. (2015). Functional connectivity dynamics: Modeling the switching behavior of the resting state. NeuroImage, 105, 525–535. 10.1016/j.neuroimage.2014.11.001
- Havlicek, M., & Uludağ, K. (2020). A dynamical model of the laminar BOLD response. NeuroImage, 204, 116209. 10.1016/j.neuroimage.2019.116209
- Heitmann, S., & Breakspear, M. (2018). Putting the “dynamic” back into dynamic functional connectivity. Network Neuroscience, 2(2), 150–174. 10.1162/netn_a_00041
- Hlinka, J., & Coombes, S. (2012). Using computational models to relate structural and functional brain connectivity. European Journal of Neuroscience, 36(2), 2137–2145. 10.1111/j.1460-9568.2012.08081.x
- Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4), 500–544. 10.1113/jphysiol.1952.sp004764
- Hoeksma, J. B., Oosterlaan, J., Schipper, E., & Koot, H. (2007). Finding the attractor of anger: Bridging the gap between dynamic concepts and empirical data. Emotion, 7(3), 638–648. 10.1037/1528-3542.7.3.638
- Hommel, B., Chapman, C. S., Cisek, P., Neyedli, H. F., Song, J.-H., & Welsh, T. N. (2019). No one knows what attention is. Attention, Perception, & Psychophysics, 81(7), 2288–2303. 10.3758/s13414-019-01846-w
- Honey, C. J., Kötter, R., Breakspear, M., & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences, 104(24), 10240–10245. 10.1073/pnas.0701519104
- Huber, L., Finn, E. S., Chai, Y., Goebel, R., Stirnberg, R., Stöcker, T., Marrett, S., Uludağ, K., Kim, S.-G., Han, S., Bandettini, P. A., & Poser, B. A. (2021). Layer-dependent functional connectivity methods. Progress in Neurobiology, 207, 101835. 10.1016/j.pneurobio.2020.101835
- Iravani, B., Arshamian, A., Fransson, P., & Kaboodvand, N. (2021). Whole-brain modelling of resting state fMRI differentiates ADHD subtypes and facilitates stratified neuro-stimulation therapy. NeuroImage, 231, 117844. 10.1016/j.neuroimage.2021.117844
- Izhikevich, E. M. (2006). Dynamical systems in neuroscience: The geometry of excitability and bursting. Cambridge, MA: MIT Press. 10.7551/mitpress/2526.001.0001
- Jirsa, V. K., Friedrich, R., Haken, H., & Kelso, J. A. S. (1994). A theoretical model of phase transitions in the human brain. Biological Cybernetics, 71(1), 27–35. 10.1007/BF00198909
- Jirsa, V. K., & Kelso, J. A. S. (Eds.). (2004). Coordination dynamics: Issues and trends. New York, NY: Springer. 10.1007/978-3-540-39676-5
- John, Y. J., Zikopoulos, B., Bullock, D., & Barbas, H. (2018). Visual attention deficits in schizophrenia can arise from inhibitory dysfunction in thalamus or cortex. Computational Psychiatry, 2, 223–257. 10.1162/cpsy_a_00023
- Jones, E. G. (2001). The thalamic matrix and thalamocortical synchrony. Trends in Neurosciences, 24(10), 595–601. 10.1016/S0166-2236(00)01922-6
- Juarrero, A. (2002). Dynamics in action: Intentional behavior as a complex system. Cambridge, MA: MIT Press.
- Karahanoğlu, F. I., & Van De Ville, D. (2015). Transient brain activity disentangles fMRI resting-state dynamics in terms of spatially and temporally overlapping networks. Nature Communications, 6, 7751. 10.1038/ncomms8751
- Koch, J. (2021). Data-driven modeling of nonlinear traveling waves. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(4), 043128. 10.1063/5.0043255
- Koppe, G., Toutounji, H., Kirsch, P., Lis, S., & Durstewitz, D. (2019). Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI. PLoS Computational Biology, 15(8), e1007263. 10.1371/journal.pcbi.1007263
- Kringelbach, M. L., Cruzat, J., Cabral, J., Knudsen, G. M., Carhart-Harris, R., Whybrow, P. C., Logothetis, N. K., & Deco, G. (2020). Dynamic coupling of whole-brain neuronal and neurotransmitter systems. Proceedings of the National Academy of Sciences, 117(17), 9566–9576. 10.1073/pnas.1921475117
- Kringelbach, M. L., & Deco, G. (2020). Brain states and transitions: Insights from computational neuroscience. Cell Reports, 32(10), 108128. 10.1016/j.celrep.2020.108128
- Krzemiński, D., Masuda, N., Hamandi, K., Singh, K. D., Routley, B., & Zhang, J. (2020). Energy landscape of resting magnetoencephalography reveals fronto-parietal network impairments in epilepsy. Network Neuroscience, 4(2), 374–396. 10.1162/netn_a_00125
- Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
- Kundu, P., Voon, V., Balchandani, P., Lombardo, M. V., Poser, B. A., & Bandettini, P. A. (2017). Multi-echo fMRI: A review of applications in fMRI denoising and analysis of BOLD signals. NeuroImage, 154, 59–80. 10.1016/j.neuroimage.2017.03.033
- Lewis, L. D., Setsompop, K., Rosen, B. R., & Polimeni, J. R. (2018). Stimulus-dependent hemodynamic response timing across the human subcortical-cortical visual pathway identified through high spatiotemporal resolution 7T fMRI. NeuroImage, 181, 279–291. 10.1016/j.neuroimage.2018.06.056
- Li, M., Han, Y., Aburn, M. J., Breakspear, M., Poldrack, R. A., Shine, J. M., & Lizier, J. T. (2019). Transitions in information processing dynamics at the whole-brain network level are driven by alterations in neural gain. PLoS Computational Biology, 15(10), e1006957. 10.1371/journal.pcbi.1006957
- Loh, M., Rolls, E. T., & Deco, G. (2007). A dynamical systems hypothesis of schizophrenia. PLoS Computational Biology, 3(11), e228. 10.1371/journal.pcbi.0030228
- Lurie, D. J., Kessler, D., Bassett, D. S., Betzel, R. F., Breakspear, M., Keilholz, S., … Calhoun, V. D. (2020). Questions and controversies in the study of time-varying functional connectivity in resting fMRI. Network Neuroscience, 4(1), 30–69. 10.1162/netn_a_00116
- Mastrogiuseppe, F., & Ostojic, S. (2018). Linking connectivity, dynamics, and computations in low-rank recurrent neural networks. Neuron, 99(3), 609–623. 10.1016/j.neuron.2018.07.003
- McIntosh, A. R., & Jirsa, V. K. (2019). The hidden repertoire of brain dynamics and dysfunction. Network Neuroscience, 3(4), 994–1008. 10.1162/netn_a_00107
- Meer, J. N. van den, Breakspear, M., Chang, L. J., Sonkusare, S., & Cocchi, L. (2020). Movie viewing elicits rich and reliable brain state dynamics. Nature Communications, 11(1), 5004. 10.1038/s41467-020-18717-w
- Melnychuk, M. C., Dockree, P. M., O’Connell, R. G., Murphy, P. R., Balsters, J. H., & Robertson, I. H. (2018). Coupling of respiration and attention via the locus coeruleus: Effects of meditation and pranayama. Psychophysiology, 55(9), e13091. 10.1111/psyp.13091
- Miller, P. (2016). Dynamical systems, attractors, and neural circuits. F1000Research, 5, F1000 Faculty Rev-992. 10.12688/f1000research.7698.1
- Mitra, A., Snyder, A. Z., Blazey, T., & Raichle, M. E. (2015). Lag threads organize the brain’s intrinsic activity. Proceedings of the National Academy of Sciences, 112(17), E2235–E2244. 10.1073/pnas.1503960112
- Müller, E. J., Munn, B., Hearne, L. J., Smith, J. B., Fulcher, B., Cocchi, L., & Shine, J. M. (2020a). Core and matrix thalamic sub-populations relate to spatio-temporal cortical connectivity gradients. bioRxiv. 10.1101/2020.02.28.970350
- Müller, E. J., Munn, B. R., & Shine, J. M. (2020b). Diffuse neural coupling mediates complex network dynamics through the formation of quasi-critical brain states. Nature Communications, 11(1), 6337. 10.1038/s41467-020-19716-7
- Munn, B., Müller, E. J., Wainstein, G., & Shine, J. M. (2021). The ascending arousal system shapes low-dimensional neural dynamics to mediate awareness of intrinsic cognitive states. Nature Communications, 12, 6016. 10.1038/s41467-021-26268-x
- Pang, J. C., Robinson, P. A., & Aquino, K. M. (2016). Response-mode decomposition of spatio-temporal haemodynamics. Journal of the Royal Society Interface, 13(118), 20160253. 10.1098/rsif.2016.0253
- Pessoa, L., & Adolphs, R. (2010). Emotion processing and the amygdala: From a “low road” to “many roads” of evaluating biological significance. Nature Reviews Neuroscience, 11(11), 773–782. 10.1038/nrn2920
- Pillai, A. S., & Jirsa, V. K. (2017). Symmetry breaking in space-time hierarchies shapes brain dynamics and behavior. Neuron, 94(5), 1010–1026. 10.1016/j.neuron.2017.05.013
- Polimeni, J. R., Fischl, B., Greve, D. N., & Wald, L. L. (2010). Laminar analysis of 7T BOLD using an imposed spatial activation pattern in human V1. NeuroImage, 52(4), 1334–1346. 10.1016/j.neuroimage.2010.05.005
- Polimeni, J. R., & Lewis, L. D. (2021). Imaging faster neural dynamics with fast fMRI: A need for updated models of the hemodynamic response. Progress in Neurobiology, 207, 102174. 10.1016/j.pneurobio.2021.102174
- Rabinovich, M. I., Huerta, R., Varona, P., & Afraimovich, V. S. (2008). Transient cognitive dynamics, metastability, and decision making. PLoS Computational Biology, 4(5), e1000072. 10.1371/journal.pcbi.1000072
- Rabinovich, M. I., Simmons, A. N., & Varona, P. (2015). Dynamical bridge between brain and mind. Trends in Cognitive Sciences, 19(8), 453–461. 10.1016/j.tics.2015.06.005
- Rabinovich, M. I., Tristan, I., & Varona, P. (2013). Neural dynamics of attentional cross-modality control. PLoS One, 8(5), e64406. 10.1371/journal.pone.0064406
- Rabinovich, M. I., & Varona, P. (2011). Robust transient dynamics and brain functions. Frontiers in Computational Neuroscience, 5, 24. 10.3389/fncom.2011.00024
- Rabinovich, M. I., Varona, P., Selverston, A. I., & Abarbanel, H. D. I. (2006). Dynamical principles in neuroscience. Reviews of Modern Physics, 78(4), 1213–1265. 10.1103/RevModPhys.78.1213
- Rabinovich, M. I., Zaks, M. A., & Varona, P. (2020). Sequential dynamics of complex networks in mind: Consciousness and creativity. Physics Reports, 883, 1–32. 10.1016/j.physrep.2020.08.003
- Ramirez-Mahaluf, J. P., Roxin, A., Mayberg, H. S., & Compte, A. (2017). A computational model of major depression: The role of glutamate dysfunction on cingulo-frontal network dynamics. Cerebral Cortex, 27(1), 660–679. 10.1093/cercor/bhv249
- Raut, R. V., Snyder, A. Z., Mitra, A., Yellin, D., Fujii, N., Malach, R., & Raichle, M. E. (2021). Global waves synchronize the brain’s functional systems with fluctuating arousal. Science Advances, 7(30), eabf2709. 10.1126/sciadv.abf2709
- Richlan, F., Schubert, J., Mayer, R., Hutzler, F., & Kronbichler, M. (2018). Action video gaming and the brain: fMRI effects without behavioral effects in visual and verbal cognitive tasks. Brain and Behavior, 8(1), e00877. 10.1002/brb3.877
- Riley, M. A., & Holden, J. G. (2012). Dynamics of cognition. WIREs Cognitive Science, 3(6), 593–606. 10.1002/wcs.1200
- Ritter, P., Schirner, M., McIntosh, A. R., & Jirsa, V. K. (2013). The virtual brain integrates computational modeling and multimodal neuroimaging. Brain Connectivity, 3(2), 121–145. 10.1089/brain.2012.0120
- Roberts, J. A., Friston, K. J., & Breakspear, M. (2017a). Clinical applications of stochastic dynamic models of the brain, part I: A primer. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 2(3), 216–224. 10.1016/j.bpsc.2017.01.010
- Roberts, J. A., Friston, K. J., & Breakspear, M. (2017b). Clinical applications of stochastic dynamic models of the brain, part II: A review. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 2(3), 225–234. 10.1016/j.bpsc.2016.12.009
- Robinson, P. A., Henderson, J. A., Gabay, N. C., Aquino, K. M., Babaie-Janvier, T., & Gao, X. (2021). Determination of dynamic brain connectivity via spectral analysis. Frontiers in Human Neuroscience, 15, 655576. 10.3389/fnhum.2021.655576
- Rolls, E. T., & Deco, G. (2010). The noisy brain: Stochastic dynamics as a principle of brain function. Oxford, UK: Oxford University Press. 10.1093/acprof:oso/9780199587865.001.0001
- Ryali, S., Shih, Y.-Y. I., Chen, T., Kochalka, J., Albaugh, D., Fang, Z., Supekar, K., Lee, J. H., & Menon, V. (2016). Combining optogenetic stimulation and fMRI to validate a multivariate dynamical systems model for estimating causal brain interactions. NeuroImage, 132, 398–405. 10.1016/j.neuroimage.2016.02.067
- Sadeghi, S., Mier, D., Gerchen, M. F., Schmidt, S. N. L., & Hass, J. (2020). Dynamic causal modeling for fMRI with Wilson-Cowan-based neuronal equations. Frontiers in Neuroscience, 14, 593867. 10.3389/fnins.2020.593867
- Salehi, M., Greene, A. S., Karbasi, A., Shen, X., Scheinost, D., & Constable, R. T. (2020). There is no single functional atlas even for a single individual: Functional parcel definitions change with task. NeuroImage, 208, 116366. 10.1016/j.neuroimage.2019.116366
- Sanz Perl, Y., Pallavicini, C., Pérez Ipiña, I., Demertzi, A., Bonhomme, V., Martial, C., … Tagliazucchi, E. (2021). Perturbations in dynamical models of whole-brain activity dissociate between the level and stability of consciousness. PLoS Computational Biology, 17(7), e1009139. 10.1371/journal.pcbi.1009139
- Sanz-Leon, P., Knock, S. A., Spiegler, A., & Jirsa, V. K. (2015). Mathematical framework for large-scale brain network modeling in The Virtual Brain. NeuroImage, 111, 385–430. 10.1016/j.neuroimage.2015.01.002
- Sara, S. J., & Bouret, S. (2012). Orienting and reorienting: The locus coeruleus mediates cognition through arousal. Neuron, 76(1), 130–141. 10.1016/j.neuron.2012.09.011
- Schirner, M., Domide, L., Perdikis, D., Triebkorn, P., Stefanovski, L., Pai, R., … Ritter, P. (2021). Brain modelling as a service: The Virtual Brain on EBRAINS. arXiv. https://arxiv.org/abs/2102.05888v2
- Schöner, G., & Kelso, J. A. (1988). Dynamic pattern generation in behavioral and neural systems. Science, 239(4847), 1513–1520. 10.1126/science.3281253
- Shine, J. M. (2021). The thalamus integrates the macrosystems of the brain to facilitate complex, adaptive brain network dynamics. Progress in Neurobiology, 199, 101951. 10.1016/j.pneurobio.2020.101951
- Shine, J. M., Aburn, M. J., Breakspear, M., & Poldrack, R. A. (2018). The modulation of neural gain facilitates a transition between functional segregation and integration in the brain. eLife, 7, e31130. 10.7554/eLife.31130
- Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski, K. J., Moodie, C. A., & Poldrack, R. A. (2016). The dynamics of functional brain networks: Integrated network states during cognitive task performance. Neuron, 92(2), 544–554. 10.1016/j.neuron.2016.09.018
- Shine, J. M., Breakspear, M., Bell, P. T., Ehgoetz Martens, K. A., Shine, R., Koyejo, O., Sporns, O., & Poldrack, R. A. (2019a). Human cognition involves the dynamic integration of neural activity and neuromodulatory systems. Nature Neuroscience, 22(2), 289–296. 10.1038/s41593-018-0312-0
- Shine, J. M., Hearne, L. J., Breakspear, M., Hwang, K., Müller, E. J., Sporns, O., Poldrack, R. A., Mattingley, J. B., & Cocchi, L. (2019b). The low-dimensional neural architecture of cognitive complexity is related to activity in medial thalamic nuclei. Neuron, 104(5), 849–855. 10.1016/j.neuron.2019.09.002
- Shine, J. M., Müller, E. J., Munn, B., Cabral, J., Moran, R. J., & Breakspear, M. (2021). Computational models link cellular mechanisms of neuromodulation to large-scale brain dynamics. Nature Neuroscience, 24(6), 765–776. 10.1038/s41593-021-00824-6
- Shine, J. M., & Poldrack, R. A. (2018). Principles of dynamic network reconfiguration across diverse brain states. NeuroImage, 180, 396–405. 10.1016/j.neuroimage.2017.08.010
- Smith, S. M., Fox, P. T., Miller, K. L., Glahn, D. C., Fox, P. M., Mackay, C. E., Filippini, N., Watkins, K. E., Toro, R., Laird, A. R., & Beckmann, C. F. (2009). Correspondence of the brain’s functional architecture during activation and rest. Proceedings of the National Academy of Sciences, 106(31), 13040–13045. 10.1073/pnas.0905267106
- Spiegler, A., Hansen, E. C. A., Bernard, C., McIntosh, A. R., & Jirsa, V. K. (2016). Selective activation of resting-state networks following focal stimulation in a connectome-based network model of the human brain. eNeuro, 3(5). 10.1523/ENEURO.0068-16.2016
- Sporns, O. (2013). Network attributes for segregation and integration in the human brain. Current Opinion in Neurobiology, 23(2), 162–171. 10.1016/j.conb.2012.11.015
- Sporns, O. (2015). Cerebral cartography and connectomics. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 370(1668), 20140173. 10.1098/rstb.2014.0173
- Stephan, K. E., Petzschner, F. H., Kasper, L., Bayer, J., Wellstein, K. V., Stefanics, G., Pruessmann, K. P., & Heinzle, J. (2019). Laminar fMRI and computational theories of brain function. NeuroImage, 197, 699–706. 10.1016/j.neuroimage.2017.11.001
- Stringer, C., Pachitariu, M., Steinmetz, N. A., Okun, M., Bartho, P., Harris, K. D., Sahani, M., & Lesica, N. A. (2016). Inhibitory control of correlated intrinsic variability in cortical networks. eLife, 5, e19695. 10.7554/eLife.19695
- Strogatz, S. H. (2015). Nonlinear dynamics and chaos: With applications to physics, biology, chemistry, and engineering (2nd ed.). Boca Raton, FL: CRC Press.
- Tkačik, G., Mora, T., Marre, O., Amodei, D., Palmer, S. E., Berry, M. J., & Bialek, W. (2015). Thermodynamics and signatures of criticality in a network of neurons. Proceedings of the National Academy of Sciences, 112(37), 11508–11513. 10.1073/pnas.1514188112
- Tognoli, E., & Kelso, J. A. S. (2014). The metastable brain. Neuron, 81(1), 35–48. 10.1016/j.neuron.2013.12.022
- Tort, A. B. L., Komorowski, R., Eichenbaum, H., & Kopell, N. (2010). Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. Journal of Neurophysiology, 104(2), 1195–1210. 10.1152/jn.00106.2010
- Vyas, S., Golub, M. D., Sussillo, D., & Shenoy, K. V. (2020). Computation through neural population dynamics. Annual Review of Neuroscience, 43(1), 249–275. 10.1146/annurev-neuro-092619-094115
- Wang, P., Kong, R., Kong, X., Liégeois, R., Orban, C., Deco, G., van den Heuvel, M. P., & Thomas Yeo, B. T. (2019). Inversion of a large-scale circuit model reveals a cortical hierarchy in the dynamic resting human brain. Science Advances, 5(1), eaat7854. 10.1126/sciadv.aat7854
- Watanabe, T., Hirose, S., Wada, H., Imai, Y., Machida, T., Shirouzu, I., Konishi, S., Miyashita, Y., & Masuda, N. (2013). A pairwise maximum entropy model accurately describes resting-state human brain networks. Nature Communications, 4(1), 1370. 10.1038/ncomms2388
- Watanabe, T., Masuda, N., Megumi, F., Kanai, R., & Rees, G. (2014). Energy landscape and dynamics of brain activity during human bistable perception. Nature Communications, 5(1), 4765. 10.1038/ncomms5765
- Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1), 1–24. 10.1016/S0006-3495(72)86068-5
- Wong, K.-F., & Wang, X.-J. (2006). A recurrent network mechanism of time integration in perceptual decisions. Journal of Neuroscience, 26(4), 1314–1328. 10.1523/jneurosci.3733-05.2006
- Zeeman, E. C. (1973). Catastrophe theory in brain modelling. International Journal of Neuroscience, 6(1), 39–41. 10.3109/00207457309147186