Human Brain Mapping. 2021 May 20;42(11):3680–3711. doi: 10.1002/hbm.25462

Principles and open questions in functional brain network reconstruction

Onerva Korhonen 1,2, Massimiliano Zanin 3, David Papo 4,5
PMCID: PMC8249902  PMID: 34013636

Abstract

Graph theory is now becoming a standard tool in system‐level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network representation involves often covert theoretical assumptions and methodological choices which affect the way networks are reconstructed from experimental data, and ultimately the resulting network properties and their interpretation. Here, we review some fundamental conceptual underpinnings and technical issues associated with brain network reconstruction, and discuss how their mutual influence concurs in clarifying the organization of brain function.

Keywords: brain dynamics, brain topology, edges, functional imaging, functional networks, nodes, resting state, structure–function relationship, temporal networks



1. INTRODUCTION

The introduction of complex network theory in neuroscience has represented a profound methodological but also in many ways conceptual revolution, promoting new research avenues (Bullmore & Sporns, 2009). A network is a collection of nodes and pairwise relations between them, called edges or links (Newman, 2003). Endowing a system with a network structure means identifying some of its parts with the former, and physical or more abstract relations between them with the latter. In spite of its apparent straightforwardness, such an operation is highly non‐trivial from both a conceptual and a practical viewpoint and comes with a set of often implicit assumptions.

Representing a system with a network structure does not necessarily entail that the system's properties are those of the associated network, or that the system actually works as a network. Thus, at a fundamental level, network neuroscience must address the question of whether network structure reflects genuine aspects of brain phenomenology or is an epiphenomenon of coordinated dynamical activity, much as spatio-temporal electrical field fluctuations are sometimes regarded. But even supposing that brain function emerges from some network property of anatomy and of the dynamics taking place on it, a no less fundamental question for neuroscientists is how to extract this structure from data. Before even addressing the ontological question, it is therefore necessary to ascertain that the reconstruction is carried out properly. What "properly" means is of course highly non-trivial, context-specific, and crucially depends on the way the data on brain activity are collected, analysed and interpreted.

Network reconstruction from empirical data involves discretionary choices at all steps, to which graph theory per se provides no direction (Papo, Zanin, & Buldú, 2014; Zanin et al., 2012). For instance, there are no criteria for the choice of the space to be represented with a network structure, or for the definition of its boundaries, nodes and edges (Papo, Zanin, & Buldú, 2014). The reconstruction process in general, and these choices in particular, involve assumptions about the characteristics of the studied system. For instance, endowing brain anatomy and dynamics with a network structure may seem prima facie rather similar processes, but they differ in some fundamental (not merely technical) ways. An obvious difference lies in the definition of edges, which is far more straightforward in the former case than in the latter, but the most fundamental difference has to do with the definition of functional brain imaging's object, namely, functional brain activity.

In the remainder, we review the conceptual bases of functional network reconstruction from standard system‐level neuroimaging recordings (Section 2). In particular, we show that defining functional brain activity, extracting it from neuroimaging data and representing it with bona fide functional networks involve essentially similar conceptual steps. This theoretical introduction analyses issues seldom examined in other neuroscience reviews, but which turn out to be essential to understanding the technical aspects of the reconstruction process (Section 3). The interaction between functional brain activity characterization and network representation, and the extent to which the definition of what is functional in brain activity depends on the particular way in which brain networks are reconstructed is discussed throughout.

2. PROLEGOMENA TO NETWORK RECONSTRUCTION

System‐level functional neuroimaging techniques afford discrete time‐varying images of some aspect of neural physiology, typically associated with some physiological or cognitive function. Neuroimaging data therefore constitute a coarse‐grained version of “true” brain activity, implicitly meaning that there exists some map between the former and the latter.

The first important issue is determining the conditions under which, and the extent to which, neuroimaging data and, more specifically, the variables used to quantify them allow recovering the system and make it observable, that is, allow the internal states of the whole system to be reconstructed from the system's outputs (Kalman, 1961). Thus, neuroimaging data analysis can be thought of as a reconstruction or inverse problem (Nguyen, Zecchina, & Berg, 2017) (cf. Section 2.2.2). Supposing brain activity in fact has a structure of some kind, for example, a symmetry, the aim of neuroimaging data analysis should be to preserve at least some context-specific properties of the underlying structure. The presence of structure induces specific equivalence classes, that is, sets whose elements are equivalent in terms of some relation, and allows measuring quantities over the considered spaces. These should ideally be preserved in the mapping.

A second important issue is that what needs to be quantified is in general not brain dynamics, but functional brain activity. However, the activity recorded with neuroimaging techniques such as functional magnetic resonance imaging (fMRI), electro- (EEG) or magneto-encephalography (MEG), which is generally called functional, is not genuinely functional per se. Functional brain activity can be thought of as a particular structure of brain dynamics, that is, a particular set of relations among the elements composing dynamics which reflect a specific function. Extracting function from bare dynamics represents a non-trivial though often implicit process (Papo, 2019). As a consequence, functional brain imaging should provide a map between the respective structures of the true and the coarse-grained spaces.

Finally, network neuroscience aims at characterizing brain anatomy, dynamics and ultimately function by endowing them with a network representation, and describing them in terms of properties of this representation. A network can be thought of as a discrete version of a continuous space, equipped with a particular structure. Thus, in network reconstruction, the codomain of the map from true brain structure is a coarse-grained and discrete image of (some aspect of) brain anatomy and dynamics. Both its objects (i.e., the nodes) and the relations among them (i.e., the edges) are endowed with basic properties. Altogether, network analysis of system-level neuroimaging data involves a complex characterization of the system's functional space, via neuroimaging and network coarse-graining. Crucially, these properties depend on the way functional brain activity is defined. Defining functional brain activity, extracting it from neuroimaging data and representing it with bona fide functional networks correspond to as many coarse-graining processes which, though implying a reduction of information in the temporal and spatial domains, involve essentially similar conceptual steps.

2.1. From brain dynamics to functional brain activity

Perhaps the most common way of representing system-level brain activity, and therefore the time-varying data produced by standard neuroimaging techniques, is as the output of an underlying spatially-extended dynamical system embedded in the 3D anatomical space. The space Φ associated with the dynamics is typically treated as a scalar, vector, or tensor field, 1 either in the time domain, ranging from experimental to developmental or evolutionary time scales, or, in somewhat equivalent ways, in the frequency domain or in phase space.

Whatever the domain in which it is defined, the space has in general some additional structure, that is, some relationship among its elements. The space Φ is often identified with the anatomical space itself and treated as a smooth Euclidean space. This means that on such a space a distance is defined, that is, a rule to calculate the length of curves connecting points of the space, and that the tools of standard calculus can be used to carry out operations within the space and to compare or evaluate differences across conditions. However, when considered at the whole-brain spatial scale, neither anatomy nor global dynamics can in general be thought of as a simple Euclidean space. The folded structure resulting from brain gyrification produces an object with non-trivial geometry. Perhaps more importantly, anatomically contiguous brain areas may radically differ in terms of dynamics and function. Φ can nonetheless be equipped with some geometry providing a way of defining distances. This can be done by assuming the anatomically-embedded dynamical space to be locally Euclidean, an approximation typically adopted in anatomical data analysis. The resulting space is a manifold M, that is, a geometric object consisting of a collection of Euclidean patches, which are local descriptions of Φ covering the space. Such a construct is akin to a standard geographical atlas, which is nothing else but a collection of local charts projected on the plane. Overall, the resulting geometry is Euclidean within patches, but of a different nature at longer spatial scales (cf. Section 3.3.1). The main problems with such a space are understanding the conditions under which its parts are distinguishable, how the charts are related to each other, and how to treat overlaps between two separate charts and changes in the description of the same set in different coordinates (Robinson, 2013a) (see Figure 1).

FIGURE 1

An n-dimensional manifold M can be described locally by the n-dimensional real space ℝⁿ. A local chart (φ, U) is an open subset U ⊆ M of the manifold together with a one-to-one map φ: U → ℝⁿ from this subset to an open set of the Euclidean space. The piecewise one-to-one mapping to the Euclidean space allows generalizing Euclidean space properties onto manifolds. A transition map between two open subsets of ℝⁿ provides a way to compare two charts of an atlas

While representing brain activity in a metric space equipped with a distance may seem to facilitate neuroscientists' life, it may not capture essential non‐metric aspects of functional brain organization (Petri et al., 2014; Reimann et al., 2017). It may then be useful to represent Φ as a space the elements of which do not derive internal relations from a metric (Petri et al., 2014). The space Φ may for instance be endowed with a topological structure (Lee, 2010). A topological space is a set of elements together with a topology, that is, a collection of open sets satisfying some basic properties. Intuitively, a set is open if, starting from any of its points and going in any direction, it is possible to move a little and still lie inside the set. The notion of open set provides a fundamental way to define nearness, and hence properties such as continuity, connectedness, and closeness, without explicitly resorting to distances. Nearness is maintained as the space is stretched without tearing. This naturally allows comparing systems of different metric size and differing local properties, a desirable characteristic given the intrinsic variability of individual brains. Thus, altogether, a topological representation affords two important advantages over a usual metric space: a more flexible notion of distance, and a robust way to compare conditions (Ghrist, 2014).

If brain activity is treated as the output of a dynamical system, it can be modelled as a topological dynamical system in which, in addition to rules prescribing the matching conditions between overlapping charts, one defines some function accounting for the temporal evolution of such a structure. Considering the dynamics of such systems complicates the picture. This is because brain activity has non-random structure not just in the anatomical space but also in its dynamics (Papo, 2013). For instance, at long time scales, brain fluctuations are characterized by non-trivial properties such as scale invariance (Fraiman & Chialvo, 2012; Novikov, Novikov, Shannahoff-Khalsa, Schwartz, & Wright, 1997). The presence of such properties induces, for instance, a particular geometry (fractal geometry) in the time domain. This structure interacts with the spatial structure (Papo, 2014), potentially giving rise to arbitrarily complex topological properties (Zaslavsky, 2002).

2.1.1. Notion of functional space

Neuroimaging can help pursue cognitive neuroscience's dual goal: understanding on the one hand how brain anatomical structure and the dynamics unfolding on it control function, and on the other hand, how the demands of cognitive or physiological tasks act on brain anatomy and dynamics, producing functional subdivisions in the brain. This can be accomplished by mapping a space Ψ of cognitive functions {ψ1, ψ2, …, ψJ}, non-observable when using a given system-level neuroimaging technique, onto a finite set of functions {ϕ1, ϕ2, …, ϕK} ∈ Φ of aspects of brain anatomy or physiology associated with observable fitness or performance measures {γ1, γ2, …, γL} ∈ Γ from subjects carrying out given tasks or just resting. When using Φ to make sense of Ψ, one ultimately aims at defining the space of equivalence classes under some relation defined on Φ, for example, the space of points for which brain activity has the same amplitude. In the opposite case, Φ is partitioned into functionally meaningful units using cognitive tasks as probes. In general, one seeks the best way to project one space onto the other, inducing partitions as accurate and fine as possible. Thus, defining functional brain activity using neuroimaging techniques involves partitioning two complex spaces, respectively made observable by behaviour and brain recording techniques, putting a structure on the set of equivalence classes, and mapping the corresponding structures, thereby parameterizing one space by another space. How to construct these partitions, what form the corresponding space may take, and therefore what may be regarded as functional, depends on the way the structured spaces (Ψ, SΨ) and (Φ, SΦ), where S denotes a generic structure, are defined and mapped onto each other through some map π (or its counterpart in the opposite direction). Within such a space, coordinates and transitions between charts in one space are defined by corresponding charts in the other space used to parameterize it.
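
As a toy illustration of the quotient construction just described (ours, not an analysis from the paper), the sketch below partitions a discretized activity field into equivalence classes under the relation "same binned amplitude"; all values, bin counts and names are hypothetical.

# Toy sketch: equivalence classes of points of Phi under "same (binned) amplitude".
import numpy as np

rng = np.random.default_rng(0)
phi = rng.standard_normal(1000)               # activity values at 1000 points of Phi

bins = np.linspace(phi.min(), phi.max(), 6)   # 5 amplitude classes (arbitrary choice)
labels = np.digitize(phi, bins[1:-1])         # class index for every point

# The quotient space Phi/~ : the set of points belonging to each class
equivalence_classes = {c: np.flatnonzero(labels == c) for c in np.unique(labels)}
print({c: len(idx) for c, idx in equivalence_classes.items()})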

Structure of the functional space

Subdivisions of the functional space depend on the way Φ and Ψ are defined, on their respective structure, and on the way they are mapped onto each other through some function π (or its counterpart in the opposite direction) (see Figure 2). π can be thought of as a structure-preserving map, as ideally one would like subdivisions in one space to be mapped onto subdivisions in the other, though its nature, its properties (e.g., invertibility, continuity) and the properties it preserves may be context-specific. Carving functional equivalence classes from brain activity ultimately involves moving from dynamical equivalence classes, with identical dynamical properties and specific phase and parameter space symmetries, to functional ones comprising patterns of neural activity that can achieve given functional properties (Lizier & Rubinov, 2012; Ma, Trusina, El-Samad, Lim, & Tang, 2009). This in turn requires considering the structure induced by the time evolution of the space produced by the map π. Functional structure results from the combination of two aspects: on the one hand, the accessibility structure in the neurophysiological space, that is, which observable variations are realizable in the neighbourhood of underlying neuronal configurations at scales below the observed ones; on the other hand, the neurophysiological neutrality structure of observable variations in the space in which these are evaluated, that is, those changes in one space which have no consequence on the space onto which they are projected. The combination of these two factors may give rise to a rather non-trivial structure. Notably, various important properties, for example, nearness and neighbourhood, may qualitatively change when considering function rather than bare dynamics, the former possibly turning out not to be a metric or even a topological space (Stadler, Stadler, Wagner, & Fontana, 2001). The very definition of other properties, such as path dependence and robustness, may also vary in the dynamics-to-function transition.

FIGURE 2

Genuinely functional activity results from a complex relation between the structure SΦ of the neurophysiological space Φ and the structure SΨ of the abstract space Ψ of cognitive functions made observable by performance measures Γ (see text above). Thus subdivisions in one space are used to define subdivisions in the other

2.2. From functional activity to functional networks

2.2.1. Network structure

An increasingly popular way to equip neuroimaging data with structure consists in endowing them with a network structure (Bullmore & Sporns, 2009). A network is a pair G = (V, E), where V is a finite set of nodes and E ⊆ V × V is a set of ordered pairs of elements of V called edges (or links). For simple networks, E is a symmetric and antireflexive relation on V; for directed ones, it is an anti-symmetric relation (Estrada, 2011).
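
A minimal sketch of this definition in practice (our illustration, with arbitrary synthetic signals and an arbitrary threshold): a simple network G = (V, E) built from a thresholded connectivity matrix.

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
C = np.abs(np.corrcoef(rng.standard_normal((8, 200))))  # 8 hypothetical node signals
np.fill_diagonal(C, 0.0)                                 # antireflexive: no self-loops

threshold = 0.2
A = (C > threshold).astype(int)      # symmetric adjacency matrix -> simple network
G = nx.from_numpy_array(A)           # V = {0, ..., 7}; E = pairs (i, j) with A_ij = 1
print(G.number_of_nodes(), G.number_of_edges())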

In a sense, network analysis operates in the same way as neuroimaging itself. Neuroimaging data can prima facie be thought of as kinetic models with a noise term averaging over brain activity at scales that are not detectable by a given recording device (Zaslavsky, 2002). This means that there exists a map π̃: ΦNObs → ΦObs between non-observable and observable activity. Recovering the hidden structure at microscopic scales would require finding a generating partition, an arduous task often impossible in experimental contexts (Kantz & Schreiber, 2004; Schulman & Gaveau, 2001). Associating the brain space with a network structure constitutes a particular coarse-graining wherein each portion of ΦObs, an essentially continuous space (though empirical data are of course discrete), is identified with a discrete point. This process bears similarities with the way the centre of mass summarizes a whole mechanical system, an operation made possible by the system's symmetries. Furthermore, insofar as in a discrete space all points are isolated, 2 a network structure, however defined, induces a separation property in ΦObs, even when the spaces summarized by each node are not separated 3 ones. In analogy with the general way of defining brain function, such networks, which are associated with the structure of bare dynamics, should be called dynamical, while the term functional networks should be reserved for structures inducing partitions of ΦObs through behavioural measures Γ. In this sense, defining network nodes, the starting point of functional network analysis, already incorporates a specification of functional brain activity.

ΦObs can loosely be thought to emerge from the renormalization of neural activity at scales not observable with a given neuroimaging technique (Allefeld, Atmanspacher, & Wackermann, 2009). The way microscopic scales renormalize into macroscopic ones and the properties the π̃ map induces are poorly understood, but could help determine the scale at which Φ is locally isomorphic to ℝⁿ and can effectively be treated as a topological manifold. Likewise, the level of neural operation at which connectivity becomes functionally relevant determines the scales at which a system can effectively be considered a network. At this scale, which may be induced by permutation symmetry with respect to a given property at microscopic scales, connectivity and collectivity are equivalent. Macroscopic parcellations in the anatomical space may then consist of topographical regions for which such symmetry holds.

Important functional elements are also incorporated in the relations between network nodes. In network neuroscience, the relation E is predicated upon connectedness, and correlation lato sensu is usually used as a proxy for neighbourhood in the relevant space. Connectedness is one of the most important properties of topological spaces, expressing the intuitive idea that an entity cannot be represented as the sum of two parts separated from each other or, more precisely, as the sum of two non-empty disjoint open-closed subsets. Connectedness is preserved under mappings preserving the topological properties of a given space. The choice of connectedness is consistent with the proposed role of dynamical connectivity in healthy brain function (Varela, Lachaux, Rodriguez, & Martinerie, 2001) and in several neurological and psychiatric conditions (Alderson-Day, McCarthy-Jones, & Fernyhough, 2015; Friston, 1998; Hahamy, Behrmann, & Malach, 2015; Hillary & Grafman, 2017; Hohenfeld, Werner, & Reetz, 2018; Schmidt et al., 2013; Stephan, Friston, & Frith, 2009). The relations among the component nodes induce both metric (though not Euclidean) and topological properties.

Altogether, while a network representation should in principle clarify key aspects of functional brain activity, in turn, the assumptions on what should be regarded as functional have a profound impact on the associated networks, introducing circularity between definition and quantification of functional brain activity.

2.2.2. Reconstruction‐related principles

The network neuroscience endeavour is a particular inverse problem (Nguyen et al., 2017), involving the reconstruction of connectivity kernels given a prescribed dynamics of the activity field (Coombes, beim Graben, Potthast, & Wright, 2014) (cf. Section 3.5). Insofar as the activity field is discretized and the key aspect is not dynamics per se but function, characterizing functional brain activity using network reconstruction involves determining the set of all networks that generate a given function. Inverse problems are by definition ill-posed in the absence of boundary conditions. Specifying these conditions involves choices of varying degrees of arbitrariness. Reconstruction should ideally fulfil some partially inter-related criteria:

  1. Reducibility. A fundamental question relates to whether brain activity can indeed be reduced to a network representation. The brain is a disordered spatially extended system with complex dynamics and incompletely understood functional organization. While complex networks' properties are well‐equipped to reflect many aspects of such a system, for instance its strong disorder (Dorogovtsev, Goltsev, & Mendes, 2008), how much information is lost as a result of discretization, and the extent to which information loss depends on the particular way a network is reconstructed and the scale at which this happens are still poorly understood issues.

  2. Observability. An issue related to reducibility and which is still poorly understood, is to what extent and under what conditions a network representation enables good observability, that is, allows recovering the states (Letellier & Aguirre, 2002) of the underlying high‐dimensional system (Aguirre, Portes, & Letellier, 2018).

  3. Structural similarity. Ideally, the network structure should reflect that of the underlying functional space, that is, there should be a map between them that preserves structure.

  4. Property preservation. An adequate structure should therefore preserve fundamental dynamical and structural properties of the underlying space. These properties include (a) the ability to obtain a dynamical rule for the system (Allefeld et al., 2009), and (b) symmetries (Cross & Gilmore, 2010). At least in its classical formulation, a network representation introduces symmetries, which may not be intrinsic to the system. For instance, nodes are typically taken to be essentially equal, implying a global symmetry of the space on which the network is defined.

  5. Intrinsicality. Network properties should be intrinsic, that is, they should show some invariance with respect to the way the network structure through which they are identified is reconstructed.
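
As a minimal numerical illustration of the intrinsicality criterion (our sketch, not a procedure from the literature reviewed here), the same synthetic voxel-level data can be coarse-grained with two alternative parcellations and a global network property compared across them; all names, sizes and parameter values are arbitrary.

import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
voxels = rng.standard_normal((200, 300))          # 200 hypothetical voxel time series

def network_from_parcellation(data, labels, density=0.2):
    """Average voxel signals within parcels, correlate, keep the strongest edges."""
    rois = np.array([data[labels == k].mean(axis=0) for k in np.unique(labels)])
    C = np.abs(np.corrcoef(rois))
    np.fill_diagonal(C, 0.0)
    cut = np.quantile(C[np.triu_indices_from(C, 1)], 1 - density)
    return nx.from_numpy_array((C >= cut).astype(int))

for seed in (0, 1):                               # two alternative coarse-grainings
    labels = np.random.default_rng(seed).permutation(200) % 20   # 20 parcels of 10 voxels
    G = network_from_parcellation(voxels, labels)
    print(seed, nx.transitivity(G), nx.density(G))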

Which properties of brain dynamics and function networks can, actually do, or should capture, and what this implies in terms of network properties, are fundamental issues that need to be addressed in network reconstruction.

3. BRAIN NETWORK RECONSTRUCTION

Endowing brain dynamics with a network representation is consistent with models representing global brain activity as emerging from the coupling of oscillating neuronal ensembles (Ashwin, Coombes, & Nicks, 2016; Hoppensteadt & Izhikevich, 1997; Sreenivasan, Menon, & Sinha, 2017). In this sense, network neuroscience can be seen as a particular neural field theory (Coombes & Byrne, 2019), wherein a finite number of neural masses interact according to a given context‐dependent topology (Cabral, Hugues, Sporns, & Deco, 2011).
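
To make the oscillator picture concrete, the following is a minimal toy sketch (our illustration, not a model from the cited works) of phase oscillators coupled through a fixed connectivity matrix standing in for an anatomical coupling scheme; all parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
N = 30
A = (rng.random((N, N)) < 0.1).astype(float)       # hypothetical structural coupling
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-coupling

omega = rng.normal(2 * np.pi * 10, 1.0, N)         # natural frequencies (~10 Hz)
theta = rng.uniform(0, 2 * np.pi, N)
K, dt = 5.0, 1e-3

for _ in range(5000):                              # Euler integration of the Kuramoto model
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K / N * coupling)

order = np.abs(np.exp(1j * theta).mean())          # global synchrony (order parameter)
print(f"order parameter r = {order:.2f}")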

However, not only is writing equations for brain dynamics based on empirical data an arduous task (Brückner, Ronceray, & Broedersz, 2020; Crutchfield & McNamara, 1987; Friedrich, Peinke, Sahimi, & Tabar, 2011), but at the scales typical of standard system-level non-invasive neuroimaging techniques, it is not trivial to define oscillators. At these scales, the definition of a node is far less intuitive than, for example, at the single-neuron scale, where units are clearly defined. Nodes then ought to be identified in a different way. This often involves some form of functional projection on the anatomical space, whereby nodes map spatially local characteristics of the system's microscopic scales. However, the anatomy-to-function map is complex and poorly understood, as dynamical patterns of brain activity emerge in a spatially and temporally non-local way from brain connectivity at all scales (Kozma & Freeman, 2016). Furthermore, no clear recipe exists to define relationships among nodes or to choose among the many available alternatives (Pereda, Quiroga, & Bhattacharya, 2005).

Functional network reconstruction is typically presented as a process involving node and edge definition and comprising a sequence of discrete steps. This division is in large part both heuristic, as discretionary decisions at each step crucially depend on choices made at the others, and incomplete, as node and edge characterization is, for instance, logically preceded by the choice of the space on which the network structure is defined. In the remainder of this section, we provide an account of the various aspects of the reconstruction process, their reciprocal relationships, their dependence on often covert assumptions about functional brain activity, as well as their specificity to particular brain recording techniques.

3.1. Space identification

Typically the space to be endowed with a network structure is isomorphic to the anatomical space on which the recorded dynamics takes place. Often the anatomical space would constitute both the embedding and the configuration space 4 for the dynamics. This characterization presents features that may simplify the analysis. For instance, the space can be endowed with the usual Euclidean metric. In addition, it makes interpretations in physiological terms and comparisons between anatomical and dynamical networks straightforward.

In fMRI studies, the anatomical space is not only the space where the brain dynamics take place but also the space where the imaging data are collected. The case of MEG and EEG is more complicated: while the data originate from the electric dynamics of source points located on the brain surface, that is, in the source space, the signals are recorded by magnetometers, gradiometers, and electrodes outside the skull, in the sensor space. Functional networks can be constructed in both spaces. Sensor-space analysis directly investigates the temporal similarity between signals from different sensors. For source-space analysis, on the other hand, the dynamics of sources on the brain surface are first reconstructed via electromagnetic inverse modelling, also known as source reconstruction (for reviews of source reconstruction approaches, cf. Hämäläinen, Hari, Ilmoniemi, Knuutila, & Lounasmaa, 1993; Grech et al., 2008; Schoffelen & Gross, 2009; He, Sohrabpour, Brown, & Liu, 2018). While sensor-space analysis is often selected for its relative simplicity and reduced computational cost, source-space analysis offers higher spatial resolution, facilitating the interpretation of results in a neurophysiological context (Palva & Palva, 2012; Schoffelen & Gross, 2009). Source-space analysis is also less prone to errors originating from signal mixing (cf. Section 3.5.2).
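
The contrast between the two spaces can be sketched with MNE-Python as below; this is a hedged illustration only, in which the file names, the subject, the 'aparc' parcellation and all parameter values are placeholders rather than the pipeline of any study cited here.

import mne
import numpy as np

raw = mne.io.read_raw_fif("sub-01_meg.fif", preload=True)   # hypothetical recording
fwd = mne.read_forward_solution("sub-01-fwd.fif")           # hypothetical forward model
cov = mne.read_cov("sub-01-cov.fif")                        # hypothetical noise covariance

# Source space: inverse modelling, then ROI (label) time courses as node signals
inv = mne.minimum_norm.make_inverse_operator(raw.info, fwd, cov)
stc = mne.minimum_norm.apply_inverse_raw(raw, inv, lambda2=1.0 / 9.0, method="dSPM")
labels = mne.read_labels_from_annot("sub-01", parc="aparc", subjects_dir="subjects")
roi_ts = mne.extract_label_time_course(stc, labels, inv["src"], mode="mean_flip")

# Sensor space: correlate the sensor signals directly
sensor_adjacency = np.corrcoef(raw.get_data(picks="meg"))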

The anatomical space is not the same for all brains. In particular, fMRI data are collected in the so-called native space of each subject, and MEG and EEG data may also be source-modelled in native space. However, the brain regions used as functional network nodes (cf. Section 3.2.1) are defined in some standard space, the most commonly used being the Montreal Neurological Institute space (Collins, 1994; Collins, Neelin, Peters, & Evans, 1994; Evans, Collins, & Milner, 1992; Evans, Marrett, et al., 1992). Therefore, neuroimaging data are typically transformed to a standard space prior to network construction. The transformation aims to map homologous areas of different subjects onto a single area in the standard space (Brett, Johnsrude, & Owen, 2002). Depending on the assumed connection between function and anatomy, this can be done by matching brain size and outline or more detailed anatomical structures such as sulci (Brett et al., 2002). Another, rarer, option is to map the ROI definitions to each subject's native space and construct networks there. Some studies report no difference in network metrics between spaces (van den Heuvel, Stam, Boersma, & Hulshoff Pol, 2008); according to others, however, native-space networks have more local structure and clearer local hubs than standard-space ones (Magalhães, Marques, Soares, Alves, & Sousa, 2015). Furthermore, network metrics calculated in the native space and normalized by metrics obtained from random networks of corresponding size are better predictors of the IQ of children suffering from epilepsy than their standard-space counterparts, although the standard-space metrics outperform the non-normalized native-space ones (Paldino, Golriz, Zhang, & Chu, 2019). However, the effects of the standard-space transformation on network structure and metrics are not fully known, and the selection of the optimal space probably depends on multiple factors, including the definition of network nodes (Magalhães et al., 2015).
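
The normalization against random networks mentioned above can be sketched as follows (our illustration, not the cited study's procedure): a graph metric is divided by its mean value over degree-preserving random surrogates of the same network; the example graph and all parameters are arbitrary.

import networkx as nx
import numpy as np

G = nx.watts_strogatz_graph(90, 6, 0.1, seed=0)    # stand-in for an empirical brain network

def normalized_clustering(G, n_surrogates=20, seed=0):
    """Average clustering divided by its mean over degree-preserving rewired surrogates."""
    c_obs = nx.average_clustering(G)
    c_rand = []
    for i in range(n_surrogates):
        R = G.copy()
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges(), seed=seed + i)
        c_rand.append(nx.average_clustering(R))
    return c_obs / np.mean(c_rand)

print(normalized_clustering(G))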

The anatomical space is only one of the many classes of spaces that can in principle be endowed with a network structure. For instance, network theory may be used to describe the phase space in which brain activity lives (Baiesi, Bongini, Casetti, & Tattini, 2009; Thurner, 2005). Such a representation seems particularly appropriate in regard to the phase space characteristics of complex systems such as the brain, where not all possible states are homogeneously populated, and microscopic dynamics is restricted to some states and the paths uniting them (Bianco et al., 2007; Sherrington, 2010). In a multilayer network approach (Boccaletti et al., 2014; Kivelä et al., 2014) to brain activity (Buldú & Papo, 2018), the relevant space may be the frequency domain (Brookes et al., 2016; Buldú & Porter, 2018; Guillon et al., 2017). In this approach, network nodes are identified with signal frequency bands, and edges with their particular relationship, reflecting the frequency‐specific aspect of long‐range interactions associated with cognitive function (Siegel, Donner, & Engel, 2012). Finally, the network structure need not be isomorphic to the anatomical structure (Papo, Zanin, & Buldú, 2014; Papo, Zanin, Pineda, et al., 2014). The space may for instance be of a more abstract nature, for example, the space of pathological features of a given disease or of the relationships between different diseases (Borsboom & Cramer, 2013; Zanin et al., 2014). This would reflect the fact that the relevant space may qualitatively differ, depending on the goals of a given research, which could range from simply finding differences between conditions to modelling brain activity.
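
One way to make the frequency-domain, multilayer view concrete is sketched below (our illustration): the same regional signals are band-pass filtered in a few canonical bands and one adjacency matrix is computed per band; the band limits, amplitude-envelope correlation measure and synthetic signals are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0
rng = np.random.default_rng(4)
x = rng.standard_normal((30, 5000))                      # 30 hypothetical regional signals

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
layers = {}
for name, (lo, hi) in bands.items():
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x, axis=1)                       # band-limited signals
    env = np.abs(hilbert(xf, axis=1))                    # amplitude envelopes
    layers[name] = np.corrcoef(env)                      # one network layer per band

print({k: v.shape for k, v in layers.items()})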

3.2. Space partitioning

Nodes are the basic objects of a network structure and constitute the microscopic scale of network analysis. Network theory prescribes nothing as to their properties other than their pointwise nature.

Defining nodes from dynamical brain imaging data implies partitioning the space, quotienting it by a given property and identifying the open sets of the topology thus induced with discrete points. This is achieved through a complex renormalization process which involves a number of discretionary choices to define the following partially interrelated properties:

  1. General construction criteria/principles. These may include anatomy‐based rules, resorting for instance to available atlases, or dynamics‐based ones. The various methods differ in the stage at which function enters the picture.

    One class of methods, referred to as data-driven in the remainder of this article, tries to commit as little as possible to prior theory, letting function emerge from the dynamics. The maximally non-committal possibility would involve a one-to-one map to the microscopic scales induced by the brain recording device's precision. For non-invasive system-level electrophysiological techniques, this ground partition could prima facie coincide with the sensor space, and the main issue is how well the sensors sample the underlying dynamical system. Working on a source-reconstructed space would allow far more reliable interpretations in terms of activity in anatomically defined brain regions; however, both the accuracy of the inverse model and the partition of the source points affect analysis outcomes (Palva et al., 2018) and tend to limit the size of the reconstructable network. In fMRI, voxels induce a partition of the anatomical space and the main issue is finding a functionally meaningful covering of this ground partition.

    Another class of methods, a priori atlases, uses prior knowledge, for example, anatomical or histological landmarks, to define partitions of the anatomical space, taking into account the disordered nature of the functional space, the general idea being to directly carve functionally meaningful parcellations. Here, the problem lies in the complex anatomy‐dynamics‐function relationship.

  2. Membership rules, for example, topographical localization in the anatomical space or statistical criteria, as in clustering methods (Jain, Murty, & Flynn, 1999) directly reflect the chosen reconstruction principles.

  3. Space partition rules, for example, partitioning stricto sensu, fuzzy (Simas & Rocha, 2015) or overlapping (Palla, Derényi, Farkas, & Vicsek, 2005) parcellations, or size rules, enforcing in different ways separation on the relevant space. Furthermore, space partitions need not be time‐invariant and nodes may be time‐varying entities in the space in which they are defined; for instance, nodes could be spatially non‐stationary in the anatomical space (cf. Section 3.4.3).

  4. Geometric or topological metarules. Typically, parcellating the anatomical space involves forming macronodes, called regions of interest (ROIs). In this case, other important properties such as locality, compactness and connectedness in the anatomical space are often required. These properties are motivated by a classical anatomy-to-function projection, but also by the need to perform operations in the relevant space, such as comparing different parts of the space (cf. Section 3.3.1). A well-behaved space would for instance permit using the powerful tools of calculus and differential geometry, and handling scalar or vector fields allowing operations such as transport within the underlying manifold. Relaxing these properties may allow non-trivial properties of the underlying space to emerge, and would require tools, for example from computational topology (Robinson, 2013a), capable of handling such systems. This would also help conceive of observed brain dynamics in terms of a function space (Papo, 2019) and therefore afford a more flexible and intuitive way of representing brain function.

Independent of the exact criteria applied to reduce the number of nodes in the transition from the voxel or source-point level to the level of ROIs, the process involves delineating functionally separate brain units, a task that goes under the name of parcellation (Stanley et al., 2013). As a basic requirement, a parcellation should minimize the amount of information lost in the transition from voxels or source points to ROIs. To this end, ROIs must be functionally homogeneous or, in other words, comprise voxels or source points similar enough to be represented by a single ROI time series (Stanley et al., 2013). Functional homogeneity can be measured as the similarity of, for example, voxel or source-point time series (Göttlich et al., 2013; Korhonen, Saarimäki, Glerean, Sams, & Saramäki, 2017; Ryyppö, Glerean, Brattico, Saramäki, & Korhonen, 2018; Stanley et al., 2013), voxel or source-point connectivity profiles (Craddock, James, Holzheimer, Hu, & Mayberg, 2012; Gordon et al., 2016), general linear model parameters describing voxel activation (Thirion et al., 2006), or the observed activity z scores (Schaefer et al., 2017). The selection of the appropriate measure of functional homogeneity depends on how ROI time series are observed (cf. "Renormalization of ROIs' internal properties" section below).
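
A minimal sketch of the first of these measures (our illustration, with synthetic data): functional homogeneity quantified as the mean pairwise Pearson correlation between the time series of the voxels, or source points, assigned to one ROI.

import numpy as np

def roi_homogeneity(voxel_ts):
    """voxel_ts: array of shape (n_voxels, n_timepoints) for a single ROI."""
    C = np.corrcoef(voxel_ts)
    iu = np.triu_indices_from(C, k=1)
    return C[iu].mean()           # mean of the upper-triangular correlations

rng = np.random.default_rng(5)
common = rng.standard_normal(200)                        # shared ROI signal
voxels = common + 0.5 * rng.standard_normal((50, 200))   # 50 noisy voxels around it
print(f"homogeneity = {roi_homogeneity(voxels):.2f}")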

In addition to functional homogeneity, the goodness of a parcellation can be evaluated in terms of agreement with brain microstructure (cytoarchitecture and myelination), performance in simple network‐based classification tasks (e.g., gender classification), and in the case of data‐driven parcellations (cf. “A priori atlases or data‐driven parcellations?” section below) also reproducibility, that is, the ability to produce similar ROIs from different datasets of the same subject (Arslan et al., 2018).

Creating a parcellation optimal in all these measures is challenging (Arslan et al., 2018). Therefore, what parcellation scheme to adopt hinges on the study's general purpose. For instance, if the purpose is characterizing brain function or modelling brain activity, then nodes should closely reflect the properties that one intends to model. However, network analysis may simply be used as a convenient tool to achieve less ambitious goals, though often of primary importance, such as discriminating between populations or conditions along some feature.

3.2.1. Defining ROIs

Motivation for using ROIs

A typical whole-brain fMRI protocol is associated with ~10⁶ voxels, while in MEG/EEG, when source reconstruction is applied, the data collected by some hundreds of sensors are typically inverse-modelled as time series of ~10⁶ source points. As described above (cf. Section 3.2), these voxels or source points may appear as natural node candidates for functional brain network analysis. However, network nodes often correspond to larger, spatially continuous clusters of voxels or source points, referred to as ROIs. The appropriate definition of ROIs is equally important for the analysis of fMRI and of source-modelled MEG and EEG data. Often, the same ROI definition approaches can be used for analysing both imaging modalities, in particular if the MEG or EEG source reconstruction is based on anatomical information from MRI (Cottereau, Ales, & Norcia, 2015). In contrast, the problem of ROI definition does not arise in sensor-space MEG and EEG analysis, where network nodes naturally correspond to the measurement sensors.

There are several reasons for using ROIs as network nodes. The most important is dimensionality reduction: the large number of nodes may lead to a noisy adjacency matrix, in particular since the signal-to-noise ratio (SNR) of voxel and source-point time series is often not particularly high (de Reus & van den Heuvel, 2013; Zalesky, Fornito, Harding, et al., 2010). In general, interpreting relations or even providing a graphical representation may prove arduous for voxel- and source-point-level networks (Papo, Buldú, Boccaletti, & Bullmore, 2014). Besides, the large number of nodes in voxel- and source-point-level networks increases the computational cost of obtaining higher-order topological properties. Furthermore, cognitive functions are known to cover cortical areas larger than single voxels or source points (Shen, Tokoglu, Papademetris, & Constable, 2013; Wig et al., 2011). Therefore, outcomes of ROI-level analysis may be easier to interpret in a neurophysiological context than those of voxel- or source-point-level analyses.

Unlike nodes of many other networks, ROIs are not spatially pointwise. Consequently, their reconstruction involves defining both boundaries (cf. “Renormalization of ROI boundaries” section below) and internal properties, which determine the way each of these regions interacts with other ones (cf. “Renormalization of ROI's internal properties” section below). The latter introduces an intrinsic relationship between node and edge definition.

A priori atlases or data‐driven parcellations?

The lack of a standard method for brain parcellation into regions (Eickhoff, Thirion, Varoquaux, & Bzdok, 2015) has led to a wide variety of definitions of functional brain network nodes (Zalesky, Fornito, Harding, et al., 2010).

From a methodological viewpoint, the various ROI definition methods can be divided into two categories. Most parcellation techniques are based on mapping a priori atlases (e.g., Desikan et al., 2006; Fan et al., 2016; Fischl et al., 2004; Power et al., 2011), defined in terms of, for example, anatomy or function, onto the subject's brain. On the other hand, data‐driven techniques parcellate the brain using the features of the present data (Honnorat et al., 2015; Parisot, Arslan, Passerat‐Palmbach, Wells, & Rueckert, 2016).

Brain functions range from highly localized to highly extended (Robinson, 2013b). The atlas approaches are based on the assumption that a relatively small number of localized ROIs, representing the underlying dominant modes, can accurately capture distributed brain dynamics (Robinson, 2013b). However, this assumption is not guaranteed to hold, and checking its validity with experimental data is hard. Indeed, data‐driven parcellations outperform a priori atlases in terms of functional homogeneity (Craddock et al., 2012; Gordon et al., 2016) and amount of information maintained in the transition from voxels to ROIs (Thirion, Varoquaux, Dohmatob, & Poline, 2014). Besides, data‐driven parcellations yield higher prediction accuracy in the classification of fMRI data collected during different tasks (Sala‐Lloch, Smith, Woolrich, & Duff, 2019; Shirer, Ryali, Rykhlevskaia, Menon, & Greicius, 2012) or from different subject cohorts, for example, patients and healthy control subjects (Dadi et al., 2019). Despite the evidence supporting data‐driven parcellations, a priori atlases are still commonly used, since they are easy to apply and may allow more straightforward interpretation of results than the data‐driven approaches.

From a conceptual viewpoint, it is interesting to compare how these methods differ in the way they allow function to emerge (cf. Section 2.1.1). These two methods, which belong to two qualitatively different approaches, respectively theory- and data-driven, differ in the space used to parameterize brain activity. Atlas-based approaches use a local projection of function on anatomy, whereas data-driven approaches typically use bare dynamics. Thus, the former approach already contains a parcellation of the space on which the dynamics is defined. What is studied is an ensemble of oscillators interacting according to the coupling scheme imposed by the anatomical network (Cabral et al., 2011; Deco, Jirsa, & McIntosh, 2011). Interestingly, function features both as an a priori ingredient of the space in which dynamics takes place and as a subset of the associated space which emerges from network dynamics at appropriate coupling values (Pillai & Jirsa, 2017). In data-driven approaches, parcellations, and therefore ultimately function, emerge from dynamics. However, data-driven approaches may in fact contain geometrical and topological constraints on how brain function is projected onto brain anatomy (cf. Sections 3.2 and "Renormalization of ROI boundaries"). Ultimately this may contribute to reducing differences in the parcellations produced by the two methods.

Furthermore, while in the former method the parameterizing space is static by construction, in the latter it can potentially be time-varying. However, as the time axis is collapsed, the two approaches lose one of the dimensions in which they differ. These factors help explain why, although the difference between atlases and data-driven approaches may look fundamental, the two methods may, under some conditions, yield overlapping results. Many parcellation strategies first introduced as data-driven ones have led to static sets of ROIs that are used as a priori atlases (e.g., Craddock et al., 2012; Schaefer et al., 2017; Shen et al., 2013). Such an approach obviously saves time and computational power. On the other hand, there is no reason to assume that the accuracy of a parcellation strategy remains intact when it is used as an atlas instead of a data-driven approach.

Renormalization of ROI boundaries

In the following sections, we review parcellation strategies grouped by the brain features they make use of and the ROI properties they aim to optimize. Some of these strategies produce by definition a priori atlases while others can be both used to define atlases and applied as data‐driven techniques. Most of the techniques can be applied to both functional neuroimaging and electrophysiological recordings.

Microstructural parcellations

Parcellating the brain based on cell-level microstructure has a long tradition, tracing back to the seminal work of Brodmann on brain cytoarchitecture (Brodmann, 1909) and of Vogt and Vogt (1919) on myeloarchitecture. These parcellation strategies are based on the diversity of cell types in the brain: different cells are assumed to specialize in different tasks, and the boundaries of functionally homogeneous ROIs should therefore follow the boundaries between different cell types. The first microstructural atlases were defined in 2D and excluded the intrasulcular surface, which means that these areas need to be translated to a 3D space before they can be used as network nodes (Zilles & Amunts, 2010). The best-known example of such a translation is the Talairach–Tournoux atlas, a 3D generalization of the Brodmann areas (Talairach & Tournoux, 1988). Some Brodmann areas are still commonly used as ROIs in both fMRI and MEG and EEG analysis, and they also serve as a naming reference for creating new parcellations.

The early microstructural parcellations were based on light microscopy studies and carried no reference to anatomical landmarks of the brain, while modern approaches combine cell-level staining methods with large-scale structural neuroimaging (Amunts & Zilles, 2015). For example, Ding et al. (2016) applied Nissl staining and NFP and PV immunolabelling, together with MRI and diffusion-weighted imaging, to label 862 grey and white matter structures in the brain of a 34-year-old female. The JuBrain atlas (Amunts, Schleicher, & Zilles, 2007; Caspers, Eickhoff, Zilles, & Amunts, 2013) combines cell staining with macroanatomical landmarks to create a probabilistic parcellation or, in other words, a set of maps giving, for each voxel, the probability of belonging to each of the 106 regions of the atlas. The Julich–Brain project (Amunts, Mohlberg, Bludau, & Zilles, 2020) combines probabilistic cytoarchitectural maps from different sub-studies; these maps are obtained with modified Merker staining and anatomical information from MRI, using post-mortem data of 10 subjects selected from a 23-subject pool. The Julich–Brain atlas is available as maximum probability maps, where each voxel is assigned to the ROI it has the highest probability of belonging to (Eickhoff et al., 2005; Eickhoff, Heim, Zilles, & Amunts, 2006), as well as probabilistic maps of individual ROIs that allow more detailed, distribution-based localization of brain activation (Eickhoff et al., 2007). At the moment, the Julich–Brain atlas contains 120 areas per hemisphere, covering around 80% of the cortical volume; however, the atlas is continuously updated with new areas as new sub-studies are published (Amunts et al., 2020).

Microstructure-based parcellation strategies rely on post mortem data, which obviously limits their use to the construction of a priori atlases. Furthermore, the availability of such data limits the number of brains used in microstructure-based parcellation strategies; the atlases may even be based on a single brain, and their generalizability to other subject populations is rarely addressed.

Anatomical parcellations

In many commonly‐used parcellation strategies, the criteria for defining a cortical area are based on structure–function associations at the level of cortical areas in the anatomical space (Amunts & Zilles, 2015). Anatomical parcellation strategies use data collected with non‐invasive imaging methods, typically structural MRI. Therefore, these parcellations may use a larger number of subjects than the microstructural ones, yielding improved generalizability. However, the anatomical parcellation processes are typically time‐consuming and require significant amounts of manual work, which means that these parcellation strategies are rarely used in a data‐driven way.

Anatomical ROIs are commonly used as nodes of functional brain networks constructed from both fMRI and source-modelled MEG and EEG data. Probably the most commonly used is the automated anatomical labeling (AAL) atlas (Rolls, Huang, Lin, Feng, & Joliot, 2020; Rolls, Joliot, & Tzourio-Mazoyer, 2015; Tzourio-Mazoyer et al., 2002), whose ROIs are obtained by manually labelling a high-resolution single-subject MR image based on the main sulci. The latest version of AAL, AAL3 (Rolls et al., 2020), contains 166 ROIs. Another commonly used anatomical parcellation, the Desikan–Killiany atlas (Desikan et al., 2006), was constructed by manually labelling the cortex of 40 subjects of varying age and health status into 34 areas per hemisphere and turning these areas into a cortical atlas using a probabilistic algorithm. Besides being used as an atlas of its own, the Desikan–Killiany atlas forms a part of the probabilistic Harvard–Oxford (HO) parcellation that combines multiple atlases (Desikan et al., 2006; Frazier et al., 2005; Goldstein et al., 2007; Makris et al., 2006). In HO, each voxel is given, separately, the probability of belonging to each of the 48 cortical and 21 subcortical ROIs.

HO, Desikan–Killiany, and AAL all offer an atlas for assigning ROI labels to voxels after transforming the data from subjects' native space to some standard space. An opposite approach is used by, for example, the Automated Nonlinear Image Matching and Anatomical Labelling (ANIMAL) parcellation (Collins, Holmes, Peters, & Evans, 1995) and the Destrieux parcellation (Destrieux, Fischl, Dale, & Halgren, 2010; Fischl et al., 2004), also known as the FreeSurfer parcellation after the commonly used FreeSurfer analysis software (Fischl, 2012). In these parcellations, the a priori atlas of ROI labels is first transformed to the native space and voxels are assigned ROI labels before transformation to the standard space. ANIMAL comprises an a priori atlas derived from the averaged MR images of 305 subjects, an iterative, hierarchical multiscale algorithm for mapping the atlas to native space, and a linear transformation from the native space back to a standard space for group-level analysis. In the Destrieux parcellation, the atlas consists of voxels' probabilities of belonging to certain ROIs (74 per hemisphere), given the anatomical location and ROI labels of neighbouring voxels, and the transformation from standard to native space is done using anisotropic non-stationary Markov random fields (MRFs).
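
In practice, using such an anatomical atlas for node definition can be sketched with nilearn as below; this is a hedged illustration in which the functional image path is a hypothetical placeholder and the parameters are illustrative choices, not a pipeline from the studies cited here.

from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
import numpy as np

aal = datasets.fetch_atlas_aal()                       # AAL atlas image and ROI labels
masker = NiftiLabelsMasker(labels_img=aal.maps, standardize=True)

# Average the voxel signals within each AAL region -> one time series per node
roi_ts = masker.fit_transform("sub-01_task-rest_bold.nii.gz")   # shape (time, n_ROIs)
adjacency = np.corrcoef(roi_ts.T)                      # ROI-level connectivity matrix
print(adjacency.shape)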

The size of the ROIs created by anatomical parcellations tends to vary widely. While this variation may be a genuine property of the brain (Wig et al., 2011), it may also bias the outcome of network analysis, depending on how the network edges are defined. To eliminate this bias, some studies have further fine-tuned anatomical ROIs by splitting them into sub-areas along the axis with the largest variance in voxel or source-point location (Palva, Kulashekhar, Hämäläinen, & Palva, 2011; Palva, Monto, Kulashekhar, & Palva, 2010).

In microstructural and anatomical parcellation methods, Ψ is mapped onto the anatomical Euclidean space, inheriting the functional partition defined on this space based on average anatomical structure, physiology, or cytoarchitecture (Brodmann, 1909). Both Ψ and Φ are then assumed to have modular structure (Fodor, 1983), the underlying assumption being that information is (locally) compact in the anatomical support. However, higher-level cognitive functions, for example, executive functions, reasoning or thinking, are associated with complex spatio-temporal organization and correspondingly complex phenomenology (Papo, 2015), and emerging function is spatially and temporally non-local (Kozma & Freeman, 2016). These functions are typically supported by redundant and degenerate 5 systems wherein a number of brain structures can generate functionally equivalent behaviour (Price & Friston, 2002). Φ may therefore turn out to be too low-dimensional to capture the complexity of both Φ and Ψ, and this severely limits the ability to account for the complex phenomenology of executive function and its disruption in pathology (cf. Section 2.1.1). Degeneracy may reflect a higher-dimensional input–output space and combinatorial complexity (Brezina, 2010; Brezina & Weiss, 1997) but may also be a purely dynamical effect of a system with non-linear and history-dependent interactions (cf. "Structure of the functional space" section above).

Functional parcellations

In functional parcellation approaches, ROIs are defined as functional equivalence classes, that is, as groups of voxels or source points with similar functional profile. The definition of these parcellations depends on recording techniques and the temporal scales of brain activity. For fast sensory processes, which typically have a characteristic duration and a topographically more stereotyped identity, functional ROIs can be defined by stimulus properties and response functions, for example, the dynamical range, that is, the range of stimulus intensities resulting in distinguishable neural responses, or the dynamical repertoire, that is, the number of distinguishable responses. On the other hand, for processes lacking a characteristic duration and stereotypical topography such as thinking or reasoning (Papo, 2015), defining functional parcellations is conceptually and technically arduous. This approach suffers from some of the issues encountered by anatomy‐based methods (cf. “Anatomical parcellations” section above), namely a spatially local and time‐invariant vision of brain function, an approximation which may be useful in some cases but untenable in others, and further illustrates a degree of circularity in the definition of function in network neuroscience (cf. Section 2.2.1).

Historically, the term ROI has referred to a part of the brain, typically a set of fMRI voxels, subject to a specific interest because of its observed activation during a certain task. This is still the standard way to define functional ROIs: ROI centroids are defined as the peak coordinates of activation maps related to a task or a set of tasks and the ROI is formed by setting a relatively small sphere or cube around the centroid (Power et al., 2011; Stanley et al., 2013; Wang et al., 2011). This approach produces ROIs of at least approximately uniform size.

Typically, the spherical functional ROIs cover only a part, even as little as 1%, of grey matter (Stanley et al., 2013), which inevitably leads to losing information from the excluded voxels. Furthermore, activation-based parcellations are limited by the fact that the tasks used in the activation mapping scans cover only a minor part of the brain's functional repertoire (Eickhoff et al., 2018).

Using parcellations based on activation maps in a data-driven way requires additional activation mapping scans. Therefore, these parcellations are typically used as a priori atlases and cannot account for individual variation between subjects. The parcellation of Blumensath et al. (2013) addresses this problem by maximizing the similarity of voxel time series inside ROIs instead of localizing activity peaks. The approach proceeds in two stages: first, small parcels are grown around numerous (up to several thousand) seed voxels; next, hierarchical clustering is used to combine these parcels into final ROIs, and cutting the clustering tree at different stages produces different numbers of ROIs (see the sketch below).
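As an illustration of the clustering stage, the following minimal sketch groups parcel time series by their correlation structure; the toy data, the correlation-based distance, the average-linkage criterion, and the requested numbers of ROIs are illustrative assumptions rather than details of the published implementation.

```python
# A minimal sketch of a Blumensath-style second stage: hierarchical
# clustering of parcel time series, with the tree cut at several levels
# to obtain parcellations at different resolutions. Toy data only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
parcel_ts = rng.standard_normal((500, 200))   # (n_parcels, n_timepoints), toy data

# Correlation-based distance between parcel time series
corr = np.corrcoef(parcel_ts)
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)

# Agglomerative clustering; cutting the tree at different points
# yields parcellations at different resolutions
tree = linkage(squareform(dist, checks=False), method='average')
for n_rois in (50, 100, 200):
    labels = fcluster(tree, t=n_rois, criterion='maxclust')
    print(n_rois, 'requested ->', labels.max(), 'ROIs')
```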

Another commonly used functional parcellation approach, especially in fMRI analysis, is independent component analysis (ICA) (Calhoun, Liu, & Adali, 2009). The numerous ICA approaches can be divided into two classes: temporal ICA (tICA) (Biswal & Ulmer, 1999; Smith et al., 2012) and spatial ICA (sICA) (Calhoun, Liu, & Adali, 2009; McKeown et al., 1998). tICA identifies temporally independent signal components, possibly originating from spatially overlapping areas. These components are, by definition, not correlated, which precludes their use as network nodes (Pervaiz, Vidaurre, Woolrich, & Smith, 2020). sICA, on the other hand, divides the data into a set of spatially independent components, that is, components originating from non-overlapping voxels. For group-level analysis, there are several approaches that first perform group sICA and then register the detected components back to each subject's native space (Calhoun, Liu, & Adali, 2009). Despite the spatial independence requirement of sICA, temporal dependencies between the components are possible, allowing network edges to be defined as the temporal similarity between sICA components. The spatial independence requirement ensures that sICA-based nodes do not overlap; however, depending on the selected number of components, the nodes may be spatially discontinuous.
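The following minimal sketch illustrates the general logic of sICA-based node definition on a toy voxels-by-time data matrix; the hard assignment of each voxel to its dominant spatial map, as well as all dimensions and parameter values, are illustrative simplifications rather than a description of any published group-ICA pipeline.

```python
# A minimal sketch of spatial ICA (sICA) for defining network nodes from
# a voxels-by-time data matrix. The data are random toy values, so the
# resulting components carry no physiological meaning.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
data = rng.standard_normal((5000, 300))        # (n_voxels, n_timepoints), toy data

n_components = 20
ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(data)         # (n_voxels, n_components)
time_courses = ica.mixing_                     # (n_timepoints, n_components)

# Non-overlapping "nodes": assign each voxel to its dominant spatial map
node_labels = np.argmax(np.abs(spatial_maps), axis=1)

# Edges: temporal similarity between component time courses
connectivity = np.corrcoef(time_courses.T)     # (n_components, n_components)
```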

The ICA-based parcellation approaches search for components of brain activity that are independent either in time or in space. However, the brain activity components are unlikely to be fully independent in either domain, which calls into question the accuracy of ICA-based ROIs (Harrison et al., 2015; Pervaiz et al., 2020). The PROFUMO approach (Harrison et al., 2015) addresses this problem by dividing the brain activity into probabilistic functional modes (PFMs) using a Bayesian inference model. While PROFUMO maximizes the joint independence of PFMs in space and time, there is no strict condition for independence in either domain alone. Therefore, PFMs may overlap spatially and be correlated, which allows investigating the functional connectivity between them. Similar approaches, that of Abraham, Dohmatob, Thirion, Samaras, and Varoquaux (2013) and the DiFuMo parcellation (Dadi et al., 2020), use dictionary learning to obtain soft, mostly non-overlapping functional modes of brain activity.

Unlike many other functional and connectivity‐based parcellation approaches (see below), sICA and functional mode approaches are rarely used for obtaining ROI atlases. Instead, they are typically applied to construct ROIs from the present data in a truly data‐driven manner.

Functional parcellation approaches are more common in fMRI studies than in the analysis of MEG and EEG data. However, both ROIs around activity peaks (Cottereau et al., 2015) and ICA approaches (Chen, Ros, & Gruzelier, 2013) have been successfully applied to source-modelled MEG and EEG.

Connectivity‐based parcellations

Connectivity-based parcellations aim to produce ROIs that contain voxels or source points with maximally similar connectivity profiles. This approach resorts to a combination of topological (connectivity, contiguity, compactness) and geometrical (local continuity) criteria as a proxy for function (Varela et al., 2001), and as a means to define ROIs (cf. Sections 2.2.1 and 3.2). Note that, while in principle allowing a certain degree of non-locality, such a principle is mitigated by these a priori assumptions.

These parcellations can operate either at the level of single subjects, producing individual ROIs, or at the group level, combining connectivity observed in multiple subjects (Arslan et al., 2018). Connectivity‐based parcellation approaches may be roughly divided into two classes: local gradient approaches and global similarity approaches (Eickhoff et al., 2018; Schaefer et al., 2017).

The local gradient approaches detect ROI boundaries as sudden changes in the connectivity landscape between two neighbouring voxels (Schaefer et al., 2017). For example, the approach introduced by Cohen et al. (2008) and further developed by Nelson et al. (2010) creates, for each seed point on a 3-mm grid, a map of the similarity of its connectivity profile to those of the remaining seeds. An edge detection algorithm then detects the potential ROI boundaries in each of these similarity maps. The group-level average of these boundaries gives the probability of each voxel being part of a ROI boundary, and ROIs can be detected by applying a watershed algorithm to this probability map. Later, Power et al. (2011) complemented this parcellation approach with a set of functionally defined ROIs to create the Power atlas. Wig, Laumann, and Petersen (2014) and Gordon et al. (2016) have suggested similar approaches.

The gradient approaches do not directly address the similarity of voxel connectivity profiles, although in practice they often produce ROIs with relatively high connectional homogeneity (Gordon et al., 2016; Schaefer et al., 2017). Parcellation approaches based on global connectivity similarity, on the other hand, cluster together voxels with maximally similar connectivity profiles, independently of their spatial location (Schaefer et al., 2017). For example, Craddock et al. (2012) obtained ROIs using normalized cut (NCUT) spectral clustering that maximizes similarity inside clusters and dissimilarity between clusters; the optimization target may be either the temporal similarity of voxel time series or the spatial similarity of their connectivity maps. Group-level ROI atlases for a priori use, obtained from 41 subjects either by averaging connectivity matrices before NCUT or by a second clustering round on cluster membership matrices, are available at several resolutions (Craddock et al., 2012). The approach of Shen et al. (Shen, Papademetris, and Constable, 2010; Shen, Tokoglu, Papademetris, and Constable, 2013) uses the same clustering method but addresses the problem of group-level parcellation by a multigraph extension that finds the optimal set of ROIs for multiple subjects at once. The Shen ROIs, obtained from 79 subjects, are also available as an a priori atlas at multiple resolutions (Shen et al., 2013).
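A minimal sketch of a global-similarity parcellation in this spirit is given below; standard normalized spectral clustering is used here as a stand-in for the original NCUT implementation, and the toy data and the choice of 100 clusters are illustrative assumptions.

```python
# A minimal sketch of a global-similarity parcellation: voxels are
# clustered by the similarity of their time series using normalized
# spectral clustering (a stand-in for NCUT). Toy data only.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
voxel_ts = rng.standard_normal((2000, 300))     # (n_voxels, n_timepoints), toy data

# Similarity of voxel time series; negative correlations are zeroed so the
# matrix can serve as a non-negative affinity
affinity = np.corrcoef(voxel_ts)
affinity[affinity < 0] = 0.0

labels = SpectralClustering(
    n_clusters=100, affinity='precomputed', assign_labels='discretize',
    random_state=0,
).fit_predict(affinity)
# 'labels' assigns each voxel to one of 100 connectivity-based ROIs
```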

Since the parcellation approaches based on global similarity explicitly optimize connectional homogeneity, they may produce ROIs better suited for network nodes than those produced by the local gradient approaches (Schaefer et al., 2017). However, the maximization of global similarity does not necessarily lead to spatially continuous ROIs (Schaefer et al., 2017). Craddock et al. (2012) solved this by adding a continuity term to the clustering target function, while in the Shen parcellation, the continuity requirement is implicitly included in the multigraph approach (Shen et al., 2013).

The latest generation of connectivity-based parcellations combines the local and global approaches and different features of the data, possibly even multiple imaging modalities. The Brainnetome atlas (Fan et al., 2016) of 210 cortical and 36 subcortical ROIs is constructed from the data of 40 subjects based on anatomical information from MRI, structural connectivity from diffusion tensor imaging, and functional activation and connectivity from fMRI during rest and task. Glasser et al. (2016) applied a local gradient approach to detect 360 cortical and subcortical ROIs from the data of 210 subjects; the multimodal data used in this approach included MRI measures of myelination and cortical thickness, task fMRI to address activation, resting-state fMRI to address functional connectivity, and the topography of some areas, in particular the visual cortex. Schaefer et al. (2017) combined global similarity of function (in terms of voxel time series), local gradients of functional connectivity, and the spatial continuity requirement into a gradient-weighted MRF model; the ROIs obtained with this approach from the data of 1,489 subjects are available as a priori atlases at several resolutions.

As with functional parcellations, connectivity-based parcellation approaches are not particularly popular in MEG and EEG studies. Due to differences in the temporal scale of imaging modalities, connectivity-based a priori atlases constructed from fMRI data may not be optimal for analysing MEG and EEG data. However, many of the data-driven approaches are essentially network clustering methods and can be applied to the source-point-level connectivity matrix of MEG and EEG data to obtain connectivity-based ROIs.

Random parcellations

In addition to parcellations based on different features of neuroimaging data, functional brain network nodes can be defined at random. Typically, random parcellations are used as a reference against which other parcellation approaches are compared (see, e.g., Craddock et al., 2012; Gordon et al., 2016). However, Fornito, Zalesky, and Bullmore (2010) used ROIs grown around random seeds, guided only by spatial proximity, to show that the size and number of nodes affect the properties of functional brain networks. Although random ROIs lack neurophysiological interpretation, they have shown surprisingly high functional homogeneity (Craddock et al., 2012; Gordon et al., 2016) and have also yielded network properties comparable to those observed with optimized parcellation approaches (Craddock et al., 2012).
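The following minimal sketch illustrates one possible random parcellation guided only by spatial proximity: seeds are drawn at random from a toy voxel grid and each voxel is assigned to its nearest seed, a Voronoi-style simplification of seed growing rather than the procedure of any particular study.

```python
# A minimal sketch of a random, proximity-guided parcellation: random seeds
# are drawn from a toy grey-matter mask and every voxel is labelled with its
# nearest seed.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
# Toy grey-matter mask: coordinates of voxels inside a 30x30x30 grid
coords = np.argwhere(rng.random((30, 30, 30)) < 0.5)

n_rois = 100
seed_idx = rng.choice(len(coords), size=n_rois, replace=False)
labels = cKDTree(coords[seed_idx]).query(coords)[1]   # nearest-seed label per voxel

sizes = np.bincount(labels, minlength=n_rois)
print('ROI size range:', sizes.min(), '-', sizes.max())
```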

Renormalization of ROIs' internal properties

In classic graph‐theoretical analysis, nodes are considered as point‐like entities. However, unlike the nodes of theoretical graphs and many real‐life networks, ROIs are spatially extended and comprise several lower‐level units with individual dynamics. A network‐based treatment of the anatomical space partitioned into ROIs could take a community structure or network‐of‐networks approach (Gao, Li, & Havlin, 2014; Schaub, Delvenne, Rosvall, & Lambiotte, 2017), wherein each ROI is treated as a sub‐network or as a network in its own right. The results about non‐trivial connectivity structure inside ROIs of anatomical and connectivity‐based parcellations (Ryyppö et al., 2018; Stanley et al., 2013) would support such an approach.

However, addressing the dimension reduction goal often requires collapsing ROIs into equivalent point-wise nodes. The connectivity between ROIs depends on the way the voxel or source point dynamics inside ROIs are summarized or, in other words, on how well ROIs' internal structure is accounted for. Yet, while the importance of properly defined ROI boundaries is already widely acknowledged, less attention is paid to how the time series of individual voxels or source points are combined to obtain a single ROI time series.

By far the most common approach is to obtain the ROI time series as an unweighted average of the time series of voxels or source points. In the ideal case of functionally perfectly homogeneous ROIs, these time series would differ from each other only in terms of independent noise, and averaging would increase the SNR by eliminating this noise (Stanley et al., 2013). In reality, however, ROIs of a priori atlases in particular often show low to moderate functional homogeneity (Göttlich et al., 2013; Korhonen et al., 2017; Stanley et al., 2013). Therefore, unweighted averaging often leads to losing information (Stanley et al., 2013) and, in the worst case, to spurious features in the observed connectivity (Korhonen et al., 2017).

An obvious way to decrease the loss of information in unweighted averaging is to assign weights to the voxel or source point time series before averaging. For example, the approach of S. Palva et al. (2011) addresses the possible phase inhomogeneity inside ROIs in source-reconstructed MEG/EEG data: source points on different walls of a sulcus tend to have a phase difference of π, leading to signal cancellation if their time series are averaged without further consideration (Ahlfors et al., 2010; Cottereau et al., 2015). To avoid this, S. Palva et al. (2011) first calculated the phase distribution inside ROIs, identified the two groups of source points with different phases, and before averaging shifted the phase of one of these groups by π while keeping the amplitude intact. Another, also MEG/EEG-oriented approach (Korhonen, Palva, & Palva, 2014) weighted source point time series by their ability to retain their original dynamics in a simulated MEG/EEG measurement (forward modelling) and source reconstruction. This approach increases the functional homogeneity of ROIs, measured in terms of phase synchrony, and decreases spurious connectivity between ROIs. However, because of the weight thresholding of this approach, many source point time series receive weight 0 and are excluded from further analysis (Korhonen et al., 2014).

In studies of fMRI data, the weighted average approach has been applied by defining the ROI time series in terms of the strongest intra-ROI principal component analysis (PCA) components (Sato et al., 2010; Zhou et al., 2009). In this approach, the number of components to use remains a free parameter; in the work of Zhou et al. and Sato et al., the number was relatively low, typically less than 10.
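The following minimal sketch contrasts the unweighted ROI average with a PCA-based summary in which the first principal component of the intra-ROI voxel time series serves as the ROI time series; the toy data and dimensions are illustrative assumptions.

```python
# A minimal sketch contrasting the unweighted ROI average with a first-
# principal-component summary of the intra-ROI voxel time series. Toy data.
import numpy as np

rng = np.random.default_rng(4)
roi_voxel_ts = rng.standard_normal((150, 240))   # (n_voxels_in_roi, n_timepoints)

# Unweighted average across voxels
roi_mean_ts = roi_voxel_ts.mean(axis=0)

# First principal component: weights voxels by their loading on the
# dominant mode of shared variance (the sign of the component is arbitrary)
centered = roi_voxel_ts - roi_voxel_ts.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
roi_pca_ts = vt[0] * s[0]                        # time course of the first component

print(np.corrcoef(roi_mean_ts, roi_pca_ts)[0, 1])
```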

As an extreme case of weighted averaging, the time series of a single source point can be selected to represent the whole ROI. In this approach, all but the chosen source point time series are assigned weight 0. For example, Hillebrand, Barnes, Bosboom, Berendse, and Stam (2012) used as ROI time series the source point time series with the highest signal power within the ROI, while O'Neill et al. (2017) used the time series of ROIs' centres of mass. While this approach avoids the possibly corrupting effects of averaging, it deliberately discards the information from the vast majority of source points. Furthermore, it does not account for the risk of the highest-power source point being an outlier or having particularly low SNR.

In parcellation approaches based on spatially overlapping functional modes (see section "Functional parcellations" above), averaging voxel or source point time series is less straightforward. These parcellations often require more sophisticated ways of renormalizing ROI time series. For example, Dadi et al. (2020) obtained the time series of their functional modes by linear regression.

A separate time series renormalization step is not included in all parcellation approaches. The sICA approaches as well as some functional mode approaches define ROIs as the spatial origins of certain signal components, and these components are naturally used as ROI time series. In MEG/EEG analysis, time series renormalization can also be bypassed with the beamformer source reconstruction approach (van Veen, van Drongelen, Yuchtman, & Suzuki, 1997). This approach filters the signals of measurement sensors to detect the independent activity originating from a set of source points on the brain surface; if the number of these source points is small enough and their location is motivated by, for example, anatomy or function observed in previous studies, the source-reconstructed signals can be directly used as ROI time series.

3.3. Edge identification

In addition to space parcellation, another key step in brain network reconstruction requires identifying edges. Edges play a double role in the network structure: on the one hand, they incorporate the structure's relational information; on the other hand, in the statistical mechanics approach, the system's degrees of freedom are represented by the interactions, so that edges, rather than nodes, constitute the system's genuine particles, their maximum number playing the role of the system's volume (Gabrielli, Mastrandrea, Caldarelli, & Cimini, 2019).

Edges are usually designed to reflect essential aspects of brain dynamics and function. Thus, in principle, edges should incorporate as much neurophysiological detail as required by the study's purpose. However, the neurophysiological plausibility of edge metrics is subject to a number of other constraints. Any metric operationalizing the functional elements that edges are supposed to handle necessarily reflects a specific angle under which brain activity is envisioned (e.g., dynamical or information-theoretic) and is predicated upon basic assumptions on brain dynamics and function. For instance, how information is transported, that is, whether via modulations of mean firing rates (Litvak, Sompolinsky, Segev, & Abeles, 2003; Shadlen & Newsome, 1998), through temporally precise spike-timing patterns (Abeles, 1991; Buzsaki, Llinas, Singer, Berthoz, & Christen, 1994), or otherwise, and how it is processed at various spatial and temporal scales of neural activity, are still poorly understood though seemingly context-specific phenomena. To provide a mathematical characterization of known properties, edge reconstruction has mainly drawn its conceptual framework from nonlinear dynamics and synchronization theory (Arenas, Díaz-Guilera, Kurths, Moreno, & Zhou, 2008; Boccaletti, Kurths, Osipov, Valladares, & Zhou, 2002), and information theory (Rieke, Warland, van Steveninck, & Bialek, 1999), to produce a variety of edge metrics with various characteristics (Pereda et al., 2005; Rubinov & Sporns, 2010). This conceptual background allows addressing dynamics, but addresses function only indirectly. For instance, information-based metrics would at first sight seem to directly quantify a crucial ingredient of brain function. However, only part of the information effectively transferred can be thought to have a genuine functional meaning, due to both thermodynamic and informational inefficiency of neural circuitry (Sterling & Laughlin, 2015; Still, Sivak, Bell, & Crooks, 2012). Moreover, understanding the functional meaning of information transfer in heavily coarse-grained signals is not straightforward.

Finding an adequate mathematical representation constitutes a further set of constraints on the physiological plausibility of connectivity metrics. Each metric comes with its own set of characteristics and shortcomings. For instance, metrics may quantify statistical dependency (often referred to as functional connectivity 6 ) or causal interactions (effective connectivity) (Friston, 1994; Horwitz, 2003), and may be linear or nonlinear (Paluš, Albrecht, & Dvořák, 1993). Some connectivity metrics are symmetric, while others, for example, Granger causality (Ding, Chen, & Bressler, 2006; Granger, 1969; Hlaváčková-Schindler, Paluš, Vejmelka, & Bhattacharya, 2007), connectivity estimates from graph learning algorithms (e.g., Sun et al., 2012), or transfer entropy (Schreiber, 2000; Vicente, Wibral, Lindner, & Pipa, 2011), are directed and asymmetric. Measures may or may not distinguish between direct and indirect connectivity (Smith et al., 2011; Vejmelka & Paluš, 2008). Carefully selected matrix regularizers may embed assumptions about network structure, for example, sparsity or modularity, into the estimation of coherence-based connectivity metrics (Qiao et al., 2016). Finally, given the role of oscillations in brain activity (Başar, 2012; Fries, 2005; Schnitzler & Gross, 2005; Varela et al., 2001), connectivity is sometimes evaluated in a frequency-specific way in both EEG and fMRI data analysis (Brookes et al., 2011; Hipp & Siegel, 2015), and corresponding frequency-specific networks at rest (Boersma et al., 2011; Hipp, Hawellek, Corbetta, Siegel, & Engel, 2012; Qian et al., 2015) and associated with the execution of cognitive tasks (Wu, Zhang, Ding, Li, & Zhou, 2013) can be reconstructed from electrophysiological data, which are inherently broadband, but also from relatively narrow-band BOLD fMRI recordings (Thompson & Fransson, 2015).

Statistical limitations reducing a given metric's ability to track dynamics and function may also be specific to each connectivity metric. For instance, the minimum time interval required to estimate a simple linear correlation is much shorter than the corresponding one for Granger causality. More generally, complex mutual relationships are grossly simplified, and most metrics do not allow accounting for functionally meaningful neurophysiological properties such as feedback loops or inhibition.

Other limitations may stem from the edge reconstruction process itself, which may not take into account key available aspects of brain dynamics. For instance, synaptic pathways' finite conduction velocity and paths' varying length give rise to dynamical phenomena such as partial synchrony (Hoppensteadt & Izhikevich, 1997) and polysynchrony (Stewart, Golubitsky, & Pivato, 2003), wherein some neurons oscillate synchronously while others do not, but also asynchronous behaviour, including polychrony, that is, time-locked but asynchronous firing patterns (Izhikevich, 2006), and traveling waves (Muller, Chavane, Reynolds, & Sejnowski, 2018). To account for these phenomena, time lags should be incorporated into the dynamics at any scale of the analysis. In practice, though, reconstruction often turns out to be independent of the system's characteristic time scales and carried out at zero lag, producing dynamically spurious edges and ultimately distorting the temporal complexity of function. Moreover, if connectivity metrics reflect interregional communication and information transport modes, they should be stationary neither in time nor in the anatomical space. The indication would then be to reconstruct edges with region-specific, possibly time-varying connectivity metrics (Malagarriga, Villa, García-Ojalvo, & Pons, 2017; Zanin, Pereda, et al., 2021) and time-lags (Novelli, Wollstadt, Mediano, Wibral, & Lizier, 2019).

Brain recording devices' characteristics, for example, the sources of noise and artefacts of the device (cf. Section 3.5.2) or the physiological signal's characteristic used by each technique to characterize brain activity, may constitute further constraints on the physiological meaningfulness of reconstructed edges. For instance, fMRI's poor temporal resolution severely constrains the range of possible metrics, so that functional connectivity is typically estimated through the Pearson correlation between brain regions. It also forces metrics to be evaluated over a time window which may span the whole available epoch (Bastos & Schoffelen, 2016), spuriously compressing dynamics (cf. Section 3.5).
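For concreteness, the following minimal sketch shows this typical fMRI choice: a single symmetric connectivity matrix obtained as the Pearson correlation between ROI time series over one window spanning the whole epoch (toy data, illustrative dimensions).

```python
# A minimal sketch of the typical fMRI step: functional connectivity as the
# Pearson correlation between ROI time series over the whole epoch. Toy data.
import numpy as np

rng = np.random.default_rng(5)
roi_ts = rng.standard_normal((90, 240))        # (n_rois, n_timepoints)

fc = np.corrcoef(roi_ts)                       # symmetric (n_rois, n_rois) matrix
np.fill_diagonal(fc, 0.0)                      # self-connections are usually ignored
```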

On the other hand, sophisticated edge identification can help to compensate for noise in network structure. In MEG and EEG analyses, a common source of such noise is signal mixing, that is, false edges caused by the fact that the dynamics of each source are captured by multiple measurement sensors (for details, cf. Section  3.5.2). A significant part of the false edges created by signal mixing are zero phase lag connections (Palva et al., 2018) and can therefore be eliminated by using connectivity metrics insensitive to zero phase lag connectivity, such as the imaginary part of coherence (Nolte et al., 2004), phase lag index (Stam et al., 2007), imaginary phase locking value (Palva & Palva, 2012), weighted phase lag index (Vinck et al., 2011), or orthogonalized correlation coefficient (Hipp et al., 2012; Brookes et al., 2012). As a downside, using these metrics means losing information about the true zero phase lag connections as well.
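As an illustration, the following minimal sketch computes two such zero-lag-insensitive phase metrics, the phase lag index (PLI) and the imaginary part of the phase locking value (iPLV), from instantaneous phases obtained with the Hilbert transform; the two signals are toy data and the implementation is a simplified sketch rather than any published pipeline.

```python
# A minimal sketch of two zero-lag-insensitive phase metrics (PLI and iPLV)
# computed from instantaneous phases via the Hilbert transform. Toy signals
# with a constant phase lag of roughly 0.4 rad plus noise.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(6)
t = np.arange(0, 10, 0.001)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.4) + 0.5 * rng.standard_normal(t.size)

phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))

pli = np.abs(np.mean(np.sign(np.sin(phase_diff))))        # insensitive to zero-lag coupling
iplv = np.abs(np.imag(np.mean(np.exp(1j * phase_diff))))  # imaginary part of the PLV

print(f'PLI = {pli:.2f}, iPLV = {iplv:.2f}')
```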

Overall, the particular metric that is being used results from a combination of functional assumptions, available mathematical tools, constraints associated with the recording technique, and the specific goals of a given study. An important illustration of this statement is represented by the prominence of bivariate connectivity measures both in functional brain imaging (Bastos & Schoffelen, 2016) and electrophysiology (Lehnertz, 2011; Pereda et al., 2005), though multivariate ones have also been proposed (Lizier, Heinzle, Horstmann, Haynes, & Prokopenko, 2011). Whether pairwise connectivity reflects a true brain operating mode is anything but a well-established fact. The rationale for this choice can at least partially be traced to extra-physiological factors, ranging from a long-established conceptual framework dating back to the very onset of neuropsychological modelling, to early studies suggesting the preponderance of pairwise connectivity in neural activity (Schneidman, Berry, Segev, & Bialek, 2006), but also the availability of analytical tools. However, even at short time scales, peaks in pairwise correlations may well be non-local network effects involving other neuronal populations and reflecting the influence of external fields (Roudi, Dunn, & Hertz, 2015).

3.3.1. Effects of edge metrics on network properties

The choice of a particular metric or class of metrics may potentially induce a corresponding change in properties of the associated network structure. For instance, the relation used to identify edges may affect the reconstructed network's topological and metric properties (Zanin, Pereda, et al., 2021) (cf. Section 3.6.1). Furthermore, while the choice of the embedding space is essentially a discretionary choice unrelated to network analysis (cf. Section 3.3), the edge metric determines the nature of the attractor space 7 and the set of operations allowed on it 8 by inducing a specific matrix, hence a particular space with its geometrical properties (Amari & Nagaoka, 2007). For example, normalized covariance matrices are associated with positive definite matrices, while Pearson's correlation is associated with positive semi-definite matrices. The space induced by these matrices may not be a vector space, so that some operations (e.g., subtractions) may not be well-defined, and equipping it with a distance may be arduous (Bonnabel & Sepulchre, 2010; Lenglet, Rousson, Deriche, & Faugeras, 2006; Pennec, Fillard, & Ayache, 2006). For instance, when directly applied to covariance matrices, classical matrix computations may yield inaccurate results and numerical artefacts (Arsigny, Fillard, Pennec, & Ayache, 2007). While Whitney's embedding theorem (Whitney, 1936) ensures that an n-dimensional manifold can at least be embedded into a 2n-dimensional Euclidean space, affording an extrinsic view of the true space, an intrinsic view independent of the embedding space requires a different geometry (Pennec, 2006). One possibility consists in representing the underlying space as a Riemannian manifold, that is, a manifold M equipped with metrics on the tangent spaces which vary smoothly from point to point (Amari & Nagaoka, 2007). The structure of a Riemannian manifold can be specified by a Riemannian metric, that is, a continuous collection of scalar products on the tangent bundle T*M at each point of M, which is invariant under affine transformations (cf. Figure 3). This allows extending Euclidean properties to the manifold. Importantly, this also allows using classification or prediction algorithms which cannot operate directly on a Riemannian manifold (Krajsek, Menzel, & Scharr, 2016; Pervaiz et al., 2020).

FIGURE 3 Manifold M and the corresponding tangent space T_p*M at point p. At each point p of M, there is one and only one tangent vector, and a scalar product can be defined in the associated tangent space T*M. If M is the space of positive definite matrices, T*M is identified with the Euclidean space of symmetric matrices. The M ↦ T*M homomorphism allows replacing the Riemannian metric in M with the Euclidean metric in T*M, and treating the projected connectivity matrices in the tangent space as Euclidean objects

A similar structure arises naturally when considering diffusion tensor images, in which each voxel is associated with the symmetric positive definite matrix induced by the covariance tensor image (Lenglet et al., 2006; Pennec et al., 2006), or when aggregating data from different subjects of a given experiment, which induces a manifold where each point is the dynamic covariance matrix (Ng, Varoquaux, Poline, Greicius, & Thirion, 2015; Varoquaux, Baronnet, Kleinschmidt, Fillard, & Thirion, 2010), as well as in genuine functional methods such as Hidden Markov models (HMMs; cf. Sections 2.1.1 and 3.4.1).

Equipped with a Riemannian geometry, the space has in principle additional structure with respect to a topological manifold. On the one hand, Riemannian tangent space parameterization allows preserving the geometry of functional connectivity. This has a clear meaning when analysing the brain's anatomical structure, as the tangent space parameterization preserves the geometry of anatomy-embedded functional connectivity, but its meaning is less straightforward in the functional space, which is not guaranteed to be a smooth manifold even when the dynamical space is (cf. Section 2.1.1). On the other hand, for a given Riemannian space, a further discretionary element is given by the choice of a distance among the many that can be defined on the space of covariance matrices. Each of these distances has specific properties (Arsigny et al., 2007; Yger, Berar, & Lotte, 2016) and a relative, task-specific performance level, some performing better in terms of classification accuracy, others in terms of computational efficiency (Chevallier, Kalunga, Barthélemy, & Monacelli, 2021).
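As a concrete example of such a Riemannian treatment, the following minimal sketch computes the affine-invariant Riemannian distance between two covariance (connectivity) matrices via eigendecompositions; the toy matrices and the choice of this particular distance are illustrative assumptions.

```python
# A minimal sketch of the affine-invariant Riemannian distance between two
# covariance (connectivity) matrices, d(A, B) = ||log(A^-1/2 B A^-1/2)||_F,
# computed via eigendecompositions of symmetric positive definite matrices.
import numpy as np

def _sym_pow(m, p):
    """Matrix power of a symmetric positive definite matrix."""
    w, v = np.linalg.eigh(m)
    return (v * w ** p) @ v.T

def airm_distance(a, b):
    """Affine-invariant Riemannian distance between SPD matrices a and b."""
    a_inv_sqrt = _sym_pow(a, -0.5)
    m = a_inv_sqrt @ b @ a_inv_sqrt
    w, _ = np.linalg.eigh(m)
    return np.sqrt(np.sum(np.log(w) ** 2))

rng = np.random.default_rng(11)
cov1 = np.cov(rng.standard_normal((30, 300)))   # toy 30-node covariance matrices
cov2 = np.cov(rng.standard_normal((30, 300)))
print(airm_distance(cov1, cov2))
```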

Finally, different edge metrics may also induce different physics associated with the system's network structure. For instance, symmetric connectivity readily accounts for equilibrium systems, whereas asymmetric coupling matrices can be thought of as describing out-of-equilibrium systems with breakdown of detailed balance. In this perspective, choices associated with edges and the way these are constructed, for example, hybrid reconstruction with space- and time-varying properties, represent not only a technical but also a theoretical challenge, in that they induce spaces with non-trivial geometries and corresponding physics.

3.3.2. Classical versus Bayesian reconstruction

The standard way of identifying edges in functional networks is a frequentist one, in that a single value (e.g., the correlation coefficient or any other synchronization metric) is extracted from each pair of nodes and encoded as the weight of the corresponding edge. Besides creating problems related to the pruning of the network, as we will see below, this approach has another disadvantage: the uncertainty about the edge is lost. In other words, any metric can only yield an estimate of the real connectivity strength, due to factors like the finiteness of the time series or the presence of observational noise. A solution can be found in Bayesian statistics (Bolstad & Curran, 2016). Specifically, Bayesian inference considers data to be fixed and the model parameters to be random, as opposed to what frequentist inference does. Furthermore, unlike frequentist inference, Bayesian inference estimates a full probability model, including hypothesis testing. Disregarding such uncertainty may lead to the detection of wrong topological structures, and specifically to an overestimation of the presence of regularities and non-trivial (i.e., non-random) structures (Zanin, Belkoura, Gomez, Alfaro, & Cano, 2018).
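One simple way of retaining edge uncertainty, shown in the minimal sketch below, is to form an approximate posterior over a correlation-based edge weight using the Fisher z transform with a flat prior; this is only one of many possible Bayesian treatments and is not the approach of any specific study cited here.

```python
# A minimal sketch of retaining edge uncertainty instead of a point estimate:
# an approximate posterior over a correlation-based edge weight, using the
# Fisher z transform with a flat prior. Toy, weakly coupled signals.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.standard_normal(n)
y = 0.3 * x + rng.standard_normal(n)

r_obs = np.corrcoef(x, y)[0, 1]
z_obs = np.arctanh(r_obs)                      # Fisher z transform
z_samples = rng.normal(z_obs, 1.0 / np.sqrt(n - 3), size=10_000)
r_samples = np.tanh(z_samples)                 # posterior samples of the edge weight

print(f'edge weight: {r_obs:.2f}, 95% interval: '
      f'[{np.percentile(r_samples, 2.5):.2f}, {np.percentile(r_samples, 97.5):.2f}]')
```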

3.3.3. Pruning and binarization

The natural result of the previous edge identification step is, in many cases, a weighted clique: when the metric used to estimate functional connectivity yields a value and not a statistical test, a weight is assigned to each edge, corresponding to the detected functional strength. Afterwards, such cliques are usually binarized, that is, the fully connected graphs are pruned according to some rule applied to edge weights, and unitary weights are assigned to surviving edges. The direct analysis of the weighted cliques would in principle represent the best solution, as they codify all the available information about the dynamics of the brain; on the other hand, any pruning procedure inevitably deletes some information. Still, network binarization entails some important advantages.

Brain networks are expected to be naturally sparse, as increasing the connectivity implies a higher physiological cost, although the true anatomical and, as a consequence, dynamic connectivity density might be grossly underestimated (Wang & Kennedy, 2016). Secondly, most of the topological metrics available in network theory have originally been developed for unweighted graphs and only subsequently been adapted to weighted ones; a larger set of (better validated) tools is therefore available to the researcher. Furthermore, edges with small connectivity values may just be the result of statistical fluctuations or of noise, such that deleting them can improve the understanding of the system and in some cases even avoid biases (van Wijk, Stam, & Daffertshofer, 2010). Pruning can also help delete indirect, second-order correlations, which do not represent direct dynamical relationships (Vakorin, Krakovska, & McIntosh, 2009). Lastly, network analysis benefits from the graphical representation of networks, which is meaningful only in the case of sparse structures.

Two main alternatives for network binarization are available, respectively called absolute and proportional thresholding. In the former case, all edges whose strength exceeds an absolute threshold τ are retained, while all the others are deleted (Váša, Bullmore, & Patel, 2018). This usually yields networks with a different number of edges across subjects and, most importantly, across groups (e.g., between control subjects and patients). This may lead to statistically significant differences in network metrics, even when these are not due to underlying disease-related topological differences. As such, this approach has been suggested to be less appropriate for case–control studies (Nichols et al., 2017). The second approach partly overcomes this issue by including in each network a fixed number of the strongest edges, hence the name proportional thresholding (van den Heuvel et al., 2017); note that this approach is often referred to in the literature as an analysis in which the density (Jalili, 2016) or network cost (Achard & Bullmore, 2007) is kept constant.
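The following minimal sketch contrasts the two binarization schemes on a toy weighted clique; the threshold value and the target density are illustrative choices.

```python
# A minimal sketch of absolute versus proportional thresholding of a toy
# weighted clique built from the absolute correlations of random time series.
import numpy as np

rng = np.random.default_rng(8)
n = 90
w = np.abs(np.corrcoef(rng.standard_normal((n, 240))))   # toy weighted clique
np.fill_diagonal(w, 0.0)

# Absolute thresholding: keep edges stronger than tau
tau = 0.15
a_abs = (w > tau).astype(int)

# Proportional thresholding: keep the strongest 10% of possible edges
density = 0.10
triu = w[np.triu_indices(n, k=1)]
tau_prop = np.quantile(triu, 1.0 - density)
a_prop = (w > tau_prop).astype(int)

print('edges kept (absolute):   ', a_abs.sum() // 2)
print('edges kept (proportional):', a_prop.sum() // 2)
```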

Thresholds are important tools for revealing a system's structure. For instance, if the underlying space can be thought of as a differential manifold, its topology can be defined by analysing the level sets of a function defined on this manifold (Milnor, 1963). Changes in the manifold topology can be related to the critical points of a sufficiently general function; for instance, under certain hypotheses, phase transitions can be related to changes in the topology of the level sets of the system's Hamiltonian (Caiani, Casetti, Clementi, & Pettini, 1997; Casetti, Pettini, & Cohen, 2000).

The dynamical and functional structure of the attractor underlying a given network representation should ideally be robust with respect to the way the network is built. However, each threshold type can be thought of as a particular cut into the relevant space, highlighting a specific set of properties of the underlying system, and is associated with its own renormalization flow, percolation threshold and, as a consequence, phenomenological properties. Suppose a given condition, for example, a pathology, only involves a change in dynamical coupling strength, but no change in overall network topology. Then, an absolute threshold may spuriously indicate that a given condition modifies topology when it actually does not. Conversely, a proportional threshold highlights the structure of networks of strongest edges, independent of the magnitude of the coupling strength. However, if genuine function is a non-linear function of coupling strength, proportional thresholding may incorrectly consider functionally inequivalent networks as indistinguishable. Another aspect worth considering when using thresholds is that the optimum fraction of actual edges needed to characterize the system may be condition-specific, and this may correspond to a significantly higher percentage of edges than the one typically retained in the analysis (Zanin et al., 2012). Finally, in the presence of nonlinearities, the importance of edges is not necessarily proportional to their strength. Moreover, weak edges have been shown to have a strong impact on network topology; their inclusion can induce transitions from fractal to small-world universality classes (Gallos, Makse, & Sigman, 2012; Rozenfeld, Song, & Makse, 2010), and affect network dynamics and the processes taking place on it (Csermely, 2004; Karsai, Perra, & Vespignani, 2014).

3.4. Network dynamics

In practice, network reconstruction of brain function often turns out to be independent of the underlying dynamics, in particular of the system's characteristic time scales. This means that more often than not, functional brain networks are constructed as static networks: connectivity estimates are calculated over the whole measurement time series, so that the obtained network structure represents the average connectivity over the whole measurement. This approach may help to increase the sensitivity of connectivity estimates and therefore produce less noisy networks (Smith et al., 2011; Van Dijk et al., 2010). However, since the brain needs to respond to varying stimuli in a continuously changing environment, it is natural to assume that functional connectivity also changes in time (Hutchison et al., 2013). Therefore, the static approach probably does not reveal the full picture of functional connectivity. An important step towards a deeper understanding of the dynamics of functional networks is the concept of the chronnectome (Calhoun, Miller, Pearlson, & Adalı, 2014; Iraji, DeRamus, et al., 2019). While the connectome (Sporns, Tononi, & Kötter, 2005) represents static connectivity between brain areas, the chronnectome also involves a temporal dimension, describing brain function as a set of recurring temporal connectivity patterns (Calhoun et al., 2014).

Dynamics of functional brain networks can be addressed from multiple different viewpoints. In what follows, we adopt the threefold division of Iraji, Miller, Adali, and Calhoun (2020): changes in network edges, changes in boundaries of ROIs used as network nodes, and changes in both edges and nodes, that is, time‐varying networks with time‐dependent nodes. 9

3.4.1. Edge dynamics

Changes in edge weight and related network structure have been widely reported in functional brain networks extracted from both fMRI and MEG/EEG data. These changes take place both at longer time scales, that is, across the human lifespan (fMRI: Meunier, Achard, Morcom, & Bullmore, 2009; Hwang, Hallquist, & Luna, 2013; Cao et al., 2014; Geerligs, Renken, Saliasi, Maurits, & Lorist, 2015; Gu et al., 2015; Marek, Hwang, Foran, Hallquist, & Luna, 2015; MEG/EEG: Micheloyannis et al., 2009; Smit et al., 2012; Boersma et al., 2013; Vecchio, Miraglia, Bramanti, & Rossini, 2014; Hou et al., 2018; Moezzi et al., 2019) or between health and disease (fMRI: Calhoun, Eichele, & Pearlson, 2009; MEG/EEG: Stam et al., 2009; Buldú et al., 2011; de Haan et al., 2012), and at shorter scales, between different cognitive tasks (fMRI: Sakoğlu et al., 2010; Richiardi, Eryilmaz, Schwartz, Vuilleumier, & van de Ville, 2011; Shirer et al., 2012; Leonardi, Shirer, Greicius, & van de Ville, 2014; MEG/EEG: Bassett, Meyer‐Lindenberg, Achard, Duke, & Bullmore, 2006; Palva, Monto, Kulashekhar, & Palva, 2010; Hipp, Engel, & Siegel, 2011; O'Neill et al., 2015, 2017) and also spontaneously over time in rest (fMRI: Honey, Kötter, Breakspear, & Sporns, 2007; Shehzad et al., 2009; Allen et al., 2014; Liao et al., 2015; MEG/EEG: Chu et al., 2012; de Pasquale et al., 2012; de Pasquale, Della Penna, Sporns, Romani, & Corbetta, 2016). Even spontaneous changes in functional connectivity are not random: functional brain networks fluctuate between states of different metastable connectivity profiles (Calhoun et al., 2014; Ma, Calhoun, Phlypo, & Adalı, 2014; Núñez et al., 2021) and increased and decreased global efficiency (Cocchi et al., 2017; Zalesky, Fornito, Cocchi, Gollo, & Breakspear, 2014). In particular, the community structure of functional brain networks tends to reorganize over time and between different tasks (Bassett et al., 2011; Jones et al., 2012). The time‐dependent connectivity of single ROIs is assumed to reflect their function: flexible or transient nodes with frequently changing connectivity play different roles during different tasks, while more stable nodes form the long‐term backbone of functional connectivity (Allen et al., 2014; Ryyppö et al., 2018; Salehi, Karbasi, Barron, Scheinost, & Constable, 2020; Zalesky et al., 2014).

Methods used to estimate functional connectivity (cf. Section 3.3) measure similarity of node signals over time. Therefore, they do not allow estimation of temporally point-like edges that would be characteristic of temporal networks (Holme & Saramäki, 2012). Instead, most studies on the short-scale dynamics of functional brain networks use time windows. In this approach, the neuroimaging time series are divided into a set of consecutive or overlapping time windows and a network is constructed inside each window (Hutchison et al., 2013; Yu et al., 2018). The windows should be short enough to catch the changes in functional connectivity, while too short a window length easily results in noisy connectivity estimates (Hutchison et al., 2013; O'Neill et al., 2018; Sakoğlu et al., 2010). The optimal window overlap depends on the research question at hand, and varies from zero in the case of consecutive time windows (e.g., Bassett et al., 2011) to the maximal overlap of sliding windows shifted by a single sample, used to obtain time-resolved connectivity (Cocchi et al., 2017; Zalesky et al., 2014). Typically, networks constructed in different time windows share a common set of nodes, which allows investigating the evolution of both single edges and more global network properties over time. Separate time windows can be combined into a single network with multilayer approaches using time windows as layers (Bassett et al., 2011) or with hypergraph approaches where the nodes of the hypergraph represent the edges of networks calculated in time windows (Bassett, Wymbs, Porter, Mucha, & Grafton, 2014; Davison et al., 2015; Gu et al., 2017).
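A minimal sketch of the sliding-window approach is given below: correlation networks are estimated inside overlapping windows sharing a common node set, with toy data and illustrative window length and step.

```python
# A minimal sketch of sliding-window dynamic connectivity: one correlation
# network per overlapping window, all windows sharing the same nodes. Toy data.
import numpy as np

rng = np.random.default_rng(9)
roi_ts = rng.standard_normal((90, 600))        # (n_rois, n_timepoints)

win_len, step = 60, 10                         # window length and step, in samples
windows = range(0, roi_ts.shape[1] - win_len + 1, step)

fc_t = np.stack([np.corrcoef(roi_ts[:, s:s + win_len]) for s in windows])
# fc_t has shape (n_windows, n_rois, n_rois): one network per window
print(fc_t.shape)
```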

Despite the popularity of the time window approach, the interpretation of its outcomes is not fully straightforward. In particular, the statistical significance of the observed changes in functional connectivity needs to be evaluated carefully, and the probability of detecting significant connectivity fluctuations can be surprisingly low (Hindriks et al., 2016; O'Neill et al., 2018). This limitation can be partially overcome by HMMs that use Bayesian inference to divide the data into states with characteristic activity and connectivity patterns and previously unknown lifetimes (Baker et al., 2014; Vidaurre et al., 2016; Vidaurre et al., 2018; Vidaurre, Smith, & Woolrich, 2017; Woolrich et al., 2013) (cf. Section 3.5) or by time-frequency analysis where wavelet transform coherence is used to quantify the similarity of two signals as a function of time and frequency (Chang & Glover, 2010). However, although these approaches detect areas showing similar dynamics at a certain time point, they do not construct networks and therefore do not allow further network-oriented analysis.

3.4.2. Changes in ROI boundaries

So far, most studies of functional brain network dynamics have concentrated on connectivity between static ROIs. However, both internal connectivity structure and functional homogeneity of ROIs change over time (Ryyppö et al., 2018). Therefore, if the aim is to minimize information losses in node renormalization, ROIs should change in a time‐varying fashion. For example, Salehi et al. (2020) used exemplar‐based clustering (Salehi, Karbasi, Shen, Scheinost, & Constable, 2018) to define time‐dependent functional ROIs from fMRI data. Boundaries of these ROIs varied between cognitive tasks, and the ROI configuration allowed predicting which task the subject was facing and how well they performed in it.

The spatial chronnectome approach (Iraji, DeRamus, et al., 2019) divides the brain into sources or temporally synchronized neural assemblies. These sources can be, for example, ROIs or larger functional systems (sometimes referred to as brain networks); nodes of functional brain networks represent the sources (Iraji et al., 2020). Sources do not need to be static objects; instead, they can manifest themselves as a set of spatial states of voxels strongly synchronized with the source time series (Iraji, DeRamus, et al., 2019). Since the spatial location of sources may change in time, their connectivity cannot be modelled by static nodes (Calhoun et al., 2014).

As an example of the changing spatial states of sources, Iraji, DeRamus, et al. (2019) reported four different states present in fMRI data, each consisting of a partly different set of voxels, for the well‐known default mode network (Fox et al., 2005). In another fMRI study, Iraji, Fu, et al. (2019) detected the spatial states for nine brain systems and further clustered them into functional modules of those spatial states of different systems that co‐occurred more often than others. Similar time‐varying spatial states of the so‐called resting‐state networks have been reported also in MEG data (de Pasquale et al., 2010; O'Neill et al., 2015), although, to the best of our knowledge, these states have not been considered as candidates for network nodes.

3.4.3. Time‐varying networks with time‐dependent nodes

Very few studies have so far applied network tools to investigate functional brain networks with time‐dependent nodes. The approach introduced by Nurmi, Korhonen, and Kivelä (2019) is based on multilayer networks. Layers of this network represent time windows and nodes are defined independently on each layer by clustering fMRI voxels into ROIs with maximal functional homogeneity. Edges inside layers quantify functional connectivity in terms of Pearson correlation or some other similarity measure (cf. Section 3.3), while inter‐layer edges represent spatial overlap of ROIs. After constructing the network, the approach allows more sophisticated analysis in terms of, for example, multilayer motifs (Battiston, Nicosia, Chavez, & Latora, 2017) or multilayer clustering analysis (Mucha, Richardson, Macon, Porter, & Onnela, 2010).
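The following minimal sketch illustrates the general idea of such a multilayer representation with time-dependent nodes: within-layer edges are correlations of ROI time series in each window, and inter-layer edges are the spatial (Jaccard) overlap of ROI voxel sets between consecutive windows. All data, sizes, and the random stand-ins for window-specific clustering are toy assumptions rather than the published implementation.

```python
# A minimal sketch of a two-layer (two time-window) network with
# time-dependent nodes: within-layer blocks hold ROI-ROI correlations,
# off-diagonal blocks hold the Jaccard overlap of ROI voxel sets.
import numpy as np

rng = np.random.default_rng(10)
n_voxels, n_rois = 1000, 20
layers = []
for win in range(2):                                       # two time windows
    labels = rng.integers(0, n_rois, n_voxels)             # stand-in for window-specific clustering
    roi_ts = rng.standard_normal((n_rois, 100))            # stand-in for window ROI time series
    layers.append((labels, np.corrcoef(roi_ts)))

# Inter-layer coupling: Jaccard overlap of ROI voxel sets between windows
overlap = np.zeros((n_rois, n_rois))
for i in range(n_rois):
    a = set(np.where(layers[0][0] == i)[0])
    for j in range(n_rois):
        b = set(np.where(layers[1][0] == j)[0])
        overlap[i, j] = len(a & b) / len(a | b) if (a | b) else 0.0

# Supra-adjacency matrix: within-layer blocks on the diagonal,
# spatial-overlap blocks off the diagonal
supra = np.block([[layers[0][1], overlap],
                  [overlap.T,    layers[1][1]]])
print(supra.shape)                                          # (2 * n_rois, 2 * n_rois)
```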

Using time-dependent nodes requires particular methodological rigor: renormalizing ROI boundaries over time may lead to a fluctuating number of nodes, and care should be taken when comparing the associated topologies. However, given the growing evidence for the dynamic nature of brain networks, the benefits of brain network analysis with time-dependent nodes are obvious (Iraji et al., 2020; Nurmi et al., 2019; Ryyppö et al., 2018). Indeed, the study of the spatiotemporal dynamics of brain networks is one of the most important future directions for network neuroscience, and the appearance of new network approaches with time-dependent nodes is merely a question of time.

3.5. Missing data, sub‐sampling and errors

The general assumption in experimental science is that available data accurately sample the underlying dynamical system. However, while the last decades have gifted neuroscience with considerably expanded access to finer spatio‐temporal scales of brain anatomy and dynamics, empirical neuroimaging data should always be treated as incomplete observations of a system with vastly heterogeneous pieces of hardware, with various aspects of the reconstruction process potentially inducing missing edges or nodes and, more generally, biased sampling of the relevant space.

On the one hand, instrumental techniques such as EEG or MEG do not necessarily ensure a correct sampling of the underlying dynamical system. Confounding variables may arise at various preprocessing stages, both in EEG/MEG and fMRI analysis (Pereda et al., 2005). Furthermore, the aspect of neural activity captured by other techniques may afford an insufficient representation of relevant variables responsible for the dynamics. For instance, poor definition of connectivity dynamics may cause fMRI‐based networks to miss information that may be crucial to the discrimination between conditions, for example, pathology‐specific brain dynamics, in spite of its good spatial definition (Zanin, Ivanoska, et al., 2021). On the other hand, discretionary choices throughout the reconstruction process may create or annihilate edges even for a given resolution level (cf. Section 3.3.2). At a more conceptual level, the effective scales at which the reconstruction is carried out have important consequences on the resulting network structure. This is not just because topological network properties of an inherently multiscale system such as the brain are scale‐dependent (Gallos et al., 2012), but is also to do with the effective sparseness of neural networks. On the one hand, even at the scales typical of standard system‐level neuroimaging techniques, neural circuitry is probably far more connected than often acknowledged (Wang & Kennedy, 2016). On the other hand, discretizing connections between a limited number of nodes entails that these are average connectivities, and the active units they approximate actually travel indirectly through a multitude of polysynaptic paths unaccounted for by the connection matrices between macroscopic ROIs (cf. “A priori atlases or data‐driven parcellations?” section above) or of typical EEG‐based analyses (Gramfort et al., 2013), whether sensor‐ or source‐based (cf. Section 3.5.1) (Galán, 2008; Robinson, 2013b; Robinson et al., 2016). All these factors concur in inducing a recording technique‐ and variable‐specific (Angulo, Moreno, Lippner, Barabási, & Liu, 2017) observability of brain dynamics, and as a consequence, of the underlying functional space, the extent of which is still incompletely understood.

Partial information on the structure could in principle be dealt with by resorting to maximum entropy models (Squartini, Caldarelli, Cimini, Gabrielli, & Garlaschelli, 2018) (cf. Section 2.2.2). These models yield ensembles of graphs whose topology is maximally random, given a chosen set of structural properties used as constraints, naturally providing a null distribution for quantities that are not directly constrained (Jaynes, 1957), which can be used to approximate an unknown, under-sampled probability distribution.

Thinking of observed data as the output of a process obeying detailed balance and with pairwise couplings, the model parameters can be inferred using equilibrium statistics, and likelihood maximization. In general, though, the generating process is unknown, and what is sought is a statistical description of the data in terms of a simpler model matching some property of the observed data, from the mean activity of individual populations and the correlations between them (Meshulam et al., 2017; Roudi, Tyrcha, & Hertz, 2009; Schneidman et al., 2006; Tang et al., 2008; Tkačik, Schneidman, Berry II, & Bialek, 2009; Tkačik et al., 2013), to specific combinations of activity and silence (Ganmor, Segev, & Schneidman, 2011) or higher‐order correlations (see Yeh et al., 2010, Savin & Tkačik, 2017, and Nguyen et al., 2017 for critical reviews of maximum entropy methods in neuroscience). Time‐dependent variants could provide tractable null models for the time‐varying dynamics of neural activity as an alternative to latent linear dynamical systems models. These methods prove unwieldy for very high‐dimensional spaces (Roudi et al., 2015), but can in principle be used at sufficiently coarse‐grained scales.

The partial information issue can also be dealt with through functional and effective network inference methods. These methods, which are designed to infer minimal models of the parent sets for each target node in the network (Runge, 2018; Sun, Taylor, & Bollt, 2015), or at least to identify features of its structure and dynamics and to reflect the properties of groups of nodes in the structure (Novelli & Lizier, 2020), can be used to minimize spurious edges (Novelli et al., 2019). Model-based edge-prediction methods (Clauset, Moore, & Newman, 2008; Guimerà & Sales-Pardo, 2009) yield relative probabilities of edge existence, given a generative model fitted to the observed data, which can then be used to reconstruct the network given the number of missing or spurious edges. However, information on missing edges is in general not available in neuroimaging studies. Recently, a Bayesian network-reconstruction method has been proposed which allows optimal estimates of network structure from complex data in arbitrary formats (Newman, 2018). However, this method assumes uniform error rates, and the existence of each edge is estimated via repeated measurements. Peixoto (2018) developed a similar approach, applying nonparametric Bayesian inference to a model coupling generative models of network structure, incorporating given topological properties, and models of the noisy measurement process. This method yields a reconstructed network together with its associated uncertainty estimate, based on the posterior distribution over all possible reconstructions, and allows handling single edge measurements without direct error estimates.

3.5.1. Subsampling in electrophysiological studies

Electrophysiological techniques generally suffer from severe spatial under-sampling. This problem is particularly acute when nodes are identified with recording sensors, which drastically under-samples electrical activity at scales not observable by system-level electrophysiological techniques, leading to a coarse graining of the dynamics. This introduces a spatial scale irrespective of the actual system organization, resulting in spatial correlations in the topology of reconstructed networks, and ultimately affects topological network properties (Lee, Kim, & Jeong, 2006; Stumpf, Wiuf, & May, 2005).

Limitations in the amount of data and in the reliability of edge estimation (due to the presence of noise, of common sources, or the inability of most estimators to distinguish between direct and indirect interactions) likely lead to spurious additions, deletions or changes in the nature of edges. Spurious edges between nodes of similar degree may for instance decrease the average shortest path length and increase the clustering coefficient (Lee et al., 2006), leading to networks being erroneously classified as assortative even when their true structure is disassortative (Bialonski, 2012). On the other hand, randomly sub-sampled scale-free networks generally turn out not to be scale-free (Stumpf et al., 2005). Finally, in multiple electrode recordings, each sensor picks up many sources at small scales, but their number constrains the sampling at large ones, and this can lead to distortions in the global topological properties of the reconstructed network (Gerhard, Pipa, Lima, Neuenschwander, & Gerstner, 2011).

3.5.2. Noise and noise reduction

Functional neuroimaging measurements, like every physiological recording, are a mixture of real signal and noise. For example, subject motion (Power et al., 2014; Power, Barness, Snyder, Schlaggar, & Petersen, 2012; van Dijk, Sabuncu, & Buckner, 2012), physiological noise from breathing and heartbeat (Birn, Smith, Jones, & Bandettini, 2008; Chang, Cunningham, & Glover, 2009; Chen et al., 2020; Dagli, Ingeholm, & Haxby, 1999; Shmueli et al., 2007), fMRI scanner drift (Hutchison et al., 2013), and noise from electric devices in MEG and EEG (Hämäläinen et al., 1993; Michel & Brunet, 2019) increase the noise level of measurement signals, leading to undesired increase in functional connectivity variance both inside and between subjects (Pervaiz et al., 2020). There are several ways to reduce the noisy component of the neuroimaging time series, including finite impulse response filters (Vorobyov & Cichocki, 2002), Kalman filters (Bartoli & Cerutti, 1983), spectral interpolation (Leske & Dalal, 2019), and wavelet transformation (Olkkonen, Pesola, Olkkonen, Valjakka, & Tuomisto, 2002; Yu, 2009). Unfortunately, many preprocessing steps applied to increase the SNR, for instance global signal regression (Fox, Zhang, Snyder, & Raichle, 2009; Gotts et al., 2013) or spatial smoothing (Alakörkkö, Saarimäki, Glerean, Saramäki, & Korhonen, 2017; Fornito, Zalesky, & Breakspear, 2013; Stanley et al., 2013; Triana, Glerean, Saramäki, & Korhonen, 2020; Wu et al., 2011), may have unexpected and undesired effects on the observed network structure.

The importance of well‐reasoned measurement and preprocessing pipelines has been widely covered in the literature (e.g., Andellini, Cannatà, Gazzellini, Bernardi, & Napolitano, 2015; Aurich, Alves Filho, Marques da Silva, & Franco, 2015; Michel & Brunet, 2019; Shirer, Jiang, Price, Ng, & Greicius, 2015). Therefore, in what follows, we will concentrate on methods proposed for handling the noise that remains in the network structure after network construction. These methods are particularly important when working on large, public datasets that, while offering insights into brain function at the population level (Van Essen et al., 2013; Vidaurre et al., 2018), deny the end user control over data collection and sometimes even over data preprocessing.

Noise in network structure is particularly problematic in studies involving the comparison of networks, for example, between different tasks or subject groups. Low SNR, together with the rigorous significance threshold required by the high number of comparisons, makes it hard to detect the often subtle differences between groups (Zalesky, Fornito, & Bullmore, 2010). Statistical power can already be increased at the network construction stage by reducing the number of nodes, that is, by using larger ROIs instead of measurement voxels or source points. However, an inaccurate definition of ROIs can easily lead to spurious network structure (cf. Section 3.2.1).
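
A minimal sketch of this node‐reduction step, under the assumption that a parcellation into ROIs is already available: voxel time series are averaged within ROI labels before connectivity estimation, reducing the number of nodes and hence the number of statistical comparisons. The data and labels below are simulated placeholders.

```python
# Illustrative node reduction: average voxel time series within ROIs before
# estimating connectivity. The voxel data and ROI labels are simulated
# placeholders; in practice they come from a parcellation (cf. Section 3.2.1).
import numpy as np

n_voxels, n_timepoints, n_rois = 1000, 240, 20
voxel_ts = np.random.randn(n_voxels, n_timepoints)        # placeholder data
labels = np.random.randint(0, n_rois, size=n_voxels)      # placeholder ROI labels

roi_ts = np.vstack([voxel_ts[labels == r].mean(axis=0) for r in range(n_rois)])
connectivity = np.corrcoef(roi_ts)     # 20 x 20 instead of 1000 x 1000
```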

Coarse‐graining approaches avoid this problem by reducing network dimensionality after connectivity estimation. For example, the network‐based statistic (Zalesky, Fornito, & Bullmore, 2010) improves statistical power by searching for network components, that is, clusters of edges, that differ significantly between subject groups, instead of testing single edges. In the approach of Kujala et al. (2016), the nodes of the coarse‐grained network are modules of the voxel‐level network, while edge weights between the modules represent the number of significant voxel‐level edges. The small number of nodes in the coarse‐grained network allows intuitive comparisons between rest and task states or between healthy and diseased populations (Kujala et al., 2016). In the analysis of network dynamics, applying PCA to the connectivity patterns before clustering them into connectivity states may help to reduce the dimensionality and therefore to control the noise (Kafashan, Palanca, & Ching, 2018; Laumann et al., 2010).
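
The last step can be sketched as follows with scikit‐learn: vectorized windowed connectivity matrices are projected onto a few principal components before being clustered into connectivity states. The numbers of windows, components, and states are illustrative assumptions, and the connectivity vectors are simulated placeholders rather than the pipeline of the cited studies.

```python
# Sketch of PCA-based dimensionality reduction of windowed connectivity
# patterns prior to clustering into "connectivity states". Numbers of
# windows, components, and states are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

n_windows, n_nodes = 200, 90
iu = np.triu_indices(n_nodes, k=1)

# Placeholder: one vectorized connectivity matrix (upper triangle) per window
conn_vectors = np.random.randn(n_windows, iu[0].size)

reduced = PCA(n_components=10).fit_transform(conn_vectors)      # denoise / reduce
states = KMeans(n_clusters=5, n_init=10).fit_predict(reduced)   # connectivity states
```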

In addition to the issues raised by high dimensionality, networks constructed from MEG and EEG data suffer from noise characteristic of these imaging modalities: artificial and spurious connectivity due to signal mixing (Palva & Palva, 2012; Schoffelen & Gross, 2009). In electrophysiological measurements, each sensor collects signals from several neighbouring brain sources; therefore, artificial connectivity is observed between each pair of neighbouring sources. Furthermore, when the signals of two sources are connected by a true functional edge, the neighbours of these two sources appear to be connected to each other by spurious edges. Signal mixing is most prominent in sensor‐space analysis, while source reconstruction separates the signals of different sources to some extent (Kujala et al., 2006; Palva & Palva, 2012; Schoffelen & Gross, 2009), particularly if the source reconstruction approach is optimized for detecting independent signal components. Examples of such approaches include the combination of ICA and the sLORETA algorithm used by Chen et al. (2013), the minimum overlap component analysis approach of Marzetti, del Gratta, and Nolte (2008), and the multivariate autoregressive (MVAR)‐EfICA approach (Gómez‐Herrero, Atienza, Egiazarian, & Cantero, 2008), which combines PCA and ICA to remove zero‐time‐lag similarities, MVAR modelling to quantify time‐delayed similarities, and swLORETA to localize the cortical sources.

Clever definition of network edges can significantly reduce artificial connectivity (cf. Section 3.3), albeit at the cost of also losing true connections with zero phase lag. The hyperedge bundling approach (Wang et al., 2018) handles spurious connectivity by turning the network into a hypergraph (Battiston et al., 2020) whose hyperedges connect groups of nodes. The hyperedges are defined so that each of them should contain a true functional connection and all of its spurious reflections (Wang et al., 2018). Although edge bundling slightly reduces the spatial resolution of connection mapping, the approach detects true connections with high accuracy and notably reduces the number of false‐positive connections (Wang et al., 2018).
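
As one concrete example of an edge definition designed to discard zero‐phase‐lag (and hence mixing‐driven) coupling, the sketch below computes the phase lag index between two simulated signals from the sign of their instantaneous phase differences. This illustrates the general strategy mentioned at the start of the paragraph; it is not the hyperedge bundling method of Wang et al. (2018), and the signal parameters are arbitrary.

```python
# One example of an edge definition insensitive to zero-lag coupling: the
# phase lag index (PLI). This illustrates the general strategy mentioned
# above; it is not the hyperedge bundling method of Wang et al. (2018).
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """PLI between two signals: asymmetry of the distribution of
    instantaneous phase differences; zero-lag coupling contributes nothing."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phase_diff))))

t = np.arange(0, 10, 0.001)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 4) + 0.1 * np.random.randn(t.size)  # lagged copy
print(phase_lag_index(x, y))   # close to 1 for a consistent non-zero lag
```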

When machine‐learning‐based classifiers are used to detect connectivity differences between tasks or subject groups, noise together with small sample sizes may increase the variance of classifier weights and thereby reduce classification accuracy (Ng et al., 2015). Moreover, the connectivity estimates used as classification features are often interrelated, which violates the assumptions of most classification algorithms (Ng et al., 2015). These issues can be partly alleviated with tools from Riemannian geometry (Ng et al., 2015; Pervaiz et al., 2020) (cf. Section 3.1). The effects of noise can be further reduced by applying conventional matrix regularization techniques in the ambient space where the data have been collected (Pervaiz et al., 2020), or covariance shrinkage estimators (Chen, Wiesel, Eldar, & Hero, 2010; Ledoit & Wolf, 2004; Rahim, Thirion, & Varoquaux, 2019) either in the ambient space or in the tangent space of the Riemannian manifold.
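
As a minimal example of the covariance shrinkage estimators mentioned above, the sketch below applies scikit‐learn's Ledoit–Wolf estimator to a short, simulated multivariate time series; the data dimensions are illustrative assumptions and the data are placeholders, not real recordings.

```python
# Shrinkage estimation of a connectivity (covariance) matrix with the
# Ledoit-Wolf estimator. Data are simulated placeholders with few time
# points relative to the number of regions, which is where shrinkage helps.
import numpy as np
from numpy.linalg import cond
from sklearn.covariance import LedoitWolf

n_timepoints, n_rois = 120, 90
X = np.random.randn(n_timepoints, n_rois)      # placeholder ROI time series

sample_cov = np.cov(X, rowvar=False)
lw = LedoitWolf().fit(X)

print("shrinkage intensity:", round(lw.shrinkage_, 3))
print("condition number, sample vs shrunk:",
      round(cond(sample_cov), 1), round(cond(lw.covariance_), 1))
```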

3.6. Assessing and improving network reconstruction

In the absence of a set of principled criteria for the choice of network reconstruction parameters, a fundamental question in network neuroscience is understanding the extent to which the topological properties of the reconstructed networks are intrinsic to the system under description or, at least, are robust with respect to the way networks are reconstructed from experimental recordings (Papo, Zanin, & Buldú, 2014; Stanley et al., 2013; Zalesky, Fornito, Harding, et al., 2010; Zanin et al., 2012). Reconstruction methods in general, and renormalization procedures in particular, have been shown to potentially affect topological properties in a qualitative manner (Gallos et al., 2012; Stumpf et al., 2005). Specifically, a study on node renormalization found that global topological properties such as small‐worldness may be robust to the parcellation technique and the overall number of nodes, although the quantitative values of these properties may be grossly affected (Zalesky, Fornito, Harding, et al., 2010).

3.6.1. Looking backward, looking forward: From statistics to data mining

When the functional brain networks of two or more groups, for example, of patients suffering from given pathologies, are extracted and characterized, it is only natural to hope to see a difference between them, or, in other words, to hope that the pathology under study translates into a different network structure. The next logical step is thus to assess the significance of such a difference, and specifically of the difference between some topological metrics calculated on the networks.

Traditionally, this has been done by resorting to statistics. Given the two probability distributions yielded by the considered metric for the two sets of subjects, the equality of their averages can be assessed through a Welch's t test; alternatively, the hypothesis that both distributions are equivalent (i.e., not just their averages) can be tested through a Kolmogorov–Smirnov test. In both cases, the result is a p‐value, which has to be compared against a desired significance level and, where necessary, corrected for multiple comparisons. Leaving aside the important discussion about the fallacies associated with relying solely on the p‐value in science (Dixon, 2003; Goodman, 1999; Goodman, 2008), it is important to highlight that such tests only provide an assessment of the equality of the two distributions (or of their mean values), but yield little information about the usefulness of the result.
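
For concreteness, the sketch below compares a topological metric between two groups with both tests mentioned above, using SciPy. The metric values are simulated placeholders, and the group sizes and effect size are illustrative assumptions.

```python
# Comparing a network metric between two groups of subjects with a Welch's
# t test and a Kolmogorov-Smirnov test. Values are simulated placeholders.
import numpy as np
from scipy.stats import ttest_ind, ks_2samp

rng = np.random.default_rng(0)
metric_controls = rng.normal(loc=0.30, scale=0.05, size=40)   # e.g., clustering coefficient
metric_patients = rng.normal(loc=0.27, scale=0.05, size=40)

t_stat, p_welch = ttest_ind(metric_controls, metric_patients, equal_var=False)  # Welch's t test
ks_stat, p_ks = ks_2samp(metric_controls, metric_patients)

print(f"Welch's t test: p = {p_welch:.3f}; KS test: p = {p_ks:.3f}")
```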

To clarify this point, suppose that the result consists of two probability distributions of the same shape that are only slightly shifted (i.e., almost perfectly overlapping). Provided enough samples (i.e., subjects and networks) are available, the resulting test will detect a statistically significant difference, independently of the magnitude of the shift. Nevertheless, it would be virtually impossible to build a diagnostic test from this result: the minimal difference between the two distributions implies that any applied criterion would have an almost random output (Lin, Lucas Jr, & Shmueli, 2013).

The previous example illustrates an important point: statistical significance of results does not equate to usefulness. How can then this latter aspect be assessed? The solution can be found in data mining (Han, Pei, & Kamber, 2011; Vapnik, 2013), and specifically in the score yielded by a classification algorithm trained to discriminate between the two subject classes using the metrics extracted from the complex functional networks. High classification scores (thus, low errors) imply that the differences between the two groups are not just significant, but also useful for the creation of discrimination rules. In a more abstract way, the difference between statistics and data mining can be interpreted as a temporal one: the former looks backward, at what the differences were in the past data, while the latter looks forward, that is, at what these data tell us about future patients.
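
A minimal sketch of this complementary, data‐mining assessment with scikit‐learn: the cross‐validated score of a classifier trained on network metrics quantifies how useful the group difference is for discrimination. The features and labels below are simulated placeholders, and the classifier choice is an illustrative assumption rather than a recommendation.

```python
# Usefulness, rather than significance: cross-validated classification score
# of a classifier trained on (placeholder) network metrics of two groups.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_metrics = 40, 6
X = np.vstack([rng.normal(0.0, 1.0, size=(n_per_group, n_metrics)),
               rng.normal(0.3, 1.0, size=(n_per_group, n_metrics))])  # slight group shift
y = np.array([0] * n_per_group + [1] * n_per_group)

scores = cross_val_score(RandomForestClassifier(n_estimators=200), X, y, cv=5)
print("mean classification accuracy:", scores.mean())
```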

The interplay between the p‐value and the classification score is complex, being a function not just of the difference between the two groups, but also of the number of available instances—see figure 6 in Zanin et al. (2016). As already seen, a large number of instances may yield significant p‐values even in the absence of a useful difference; on the other hand, having only a few instances may cause unreliable classification results (due to overfitted learning), which need to be confirmed through a p‐value examination. As such, it is convenient to take both into account when evaluating the relevance of a functional network analysis.

3.6.2. Effects on brain network characteristics

One fundamental issue is to appraise the extent to which the properties of the reconstructed system are invariant with respect to the discretionary steps in the network reconstruction process.

In more general terms, this issue is to be framed within the concept of generalizability, that is, how universal the results obtained with a specific analysis are. Generalizability has traditionally been tackled from two different points of view. On the one hand, one can perform test–retest analyses, that is, record the same subjects two or more times and compare the resulting networks, checking whether they keep the same properties over time. Several studies have addressed this issue (Deuker et al., 2009; Hardmeier et al., 2014; Höller et al., 2017) and observed that generalizability depends on factors such as the frequency band or the length of the time series. On the other hand, one may check how the resulting topological metrics depend on methodological decisions such as the software package used (Mahjoory et al., 2017), the chosen frequency band (Pashkov & Dakhtin, 2019), scalp versus source data (Antiqueira, Rodrigues, van Wijk, da Costa, & Daffertshofer, 2010; Lai, Demuru, Hillebrand, & Fraschini, 2018; Palva, Monto, & Palva, 2010), anatomical versus dynamical networks (Ponten, Daffertshofer, Hillebrand, & Stam, 2010), or, in general, the combination of different steps (Pervaiz et al., 2020). Possibly the most complete example of such an analysis is that of Botvinik‐Nezer et al. (2020), in which 70 independent teams were asked to process the same data with the aim of testing the same set of hypotheses.

While it has theoretically been shown that some connectivity patterns are more stable, and hence appear more frequently in any analysis (Malagarriga et al., 2017), the previous studies also highlight that the topological metrics of the reconstructed networks strongly depend on the methodological choices made by the researcher. This is clearly a problem, as it undermines the generalizability of results; but it also represents an advantage. Specifically, this variability of results suggests that different choices help to focus on different aspects of brain dynamics, and hence that some may be more suitable than others for a given task, for example, for the identification of a specific pathology. This condition‐specific character has been exploited in various studies (Bosch, Herrera, López, & Maldonado, 2018; Yu, Lei, Song, Liu, & Wang, 2019; Zanin et al., 2012), in which the score of a classification task is used to guide the generation of the network (see Figure 4). In spite of promising results, it is not clear how this strategy affects generalizability, that is, whether the best reconstruction process is universal for a given condition, or whether it depends on the characteristics of the studied data.

FIGURE 4

A general recipe for post hoc iterative network reconstruction parameter update. Instead of fixing parameters (e.g., edge density, which topological metric to extract) a priori, the proposed methodology involves reconstructing networks using a large set of different parameters; the corresponding discrimination power is then evaluated using a classification problem, and the combination yielding the clearest difference between two groups of patients is chosen as the most informative one. See Zanin et al. (2012) for details.
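
A schematic, self‐contained Python sketch of the recipe of Figure 4, under strong simplifying assumptions: only one reconstruction parameter (the edge density used to threshold correlation matrices) and one topological metric (average clustering) are scanned, and all subject data are simulated placeholders rather than the pipeline of Zanin et al. (2012).

```python
# Sketch of the recipe of Figure 4: scan a reconstruction parameter (here,
# only the edge density used to threshold correlation matrices) and keep the
# value that best discriminates two groups. All data are simulated
# placeholders; in a real study each subject's time series would be used.
import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects_per_group, n_nodes, n_time = 20, 30, 150

def clustering_at_density(common_signal_strength, density):
    """Simulate one subject, threshold its correlation matrix at the given
    edge density, and return the average clustering coefficient."""
    ts = (rng.normal(size=(n_nodes, n_time))
          + common_signal_strength * rng.normal(size=(1, n_time)))
    corr = np.abs(np.corrcoef(ts))
    np.fill_diagonal(corr, 0)
    iu = np.triu_indices(n_nodes, k=1)
    n_edges = int(density * iu[0].size)
    threshold = np.sort(corr[iu])[-n_edges]          # density-based threshold
    G = nx.from_numpy_array((corr >= threshold).astype(int))
    return nx.average_clustering(G)

labels = np.array([0] * n_subjects_per_group + [1] * n_subjects_per_group)
results = []
for density in (0.05, 0.1, 0.2, 0.3):                # candidate reconstruction parameters
    features = np.array([[clustering_at_density(0.0 if g == 0 else 0.5, density)]
                         for g in labels])
    score = cross_val_score(LogisticRegression(), features, labels, cv=5).mean()
    results.append((score, density))
print("best (score, edge density):", max(results))
```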

4. CONCLUDING REMARKS

Defining what is functional in brain activity is an arduous task. Functional network reconstruction should ultimately lead to the characterization of functional brain activity and could therefore assist in addressing this question. However, as we illustrated, the image of function that it provides is at least partially dependent on prior assumptions on function. Thus, the answers to fundamental questions such as “How much of brain phenomenology does a network representation help in revealing?,” “How do network properties emerge?,” “Do they have a functional meaning?,” “What network‐related aspect of brain activity can we expect to be able to reconstruct from standard neuroimaging technique recordings?” depend to some extent on the way these networks are reconstructed and, more specifically, on the (often covert) assumptions underlying the reconstruction process. The extent to which methodology matters is poorly understood, and various questions are still unanswered. For instance, are all network properties equally sensitive to methodological choices? The existing literature suggests that some properties may be more robust than others. For example, very similar scale‐free and small‐world features of brain networks are observed using different node definition strategies (Arslan et al., 2018), even with randomly selected nodes that most probably do not match the true functional organization of the brain (Fornito et al., 2010). However, although the selected brain parcellation may matter relatively little in simple network analysis in healthy subjects, the role of node definition is probably more crucial when aiming to detect the subtle changes in network structure due to, for example, diseases or aging (Arslan et al., 2018).

Evaluating the extent to which a network representation genuinely documents the way the brain carries out the functions it is supposed to fulfil is ultimately tantamount to determining whether the topological properties are intrinsic or extrinsic, leading to a better understanding of the emergence of functional dynamics (Atmanspacher, 2012). Doing so implies steps at various levels.

At the network representation level, this involves constructing a structure which is able to document, and ideally to generate, these properties. On the one hand, although several aspects of edge (and node) definition are largely independent of the space in which the network is constructed, and much of what has been said here equally applies to structures such as multi‐layer graphs, hypergraphs, or simplices (Battiston et al., 2020), constructs alternative to the standard network structure may modify microscopic network‐scale properties and introduce new ones. For instance, in a simplex, not only nodes but also edges have non‐trivial degree. Thus, these structures may affect the topology, geometry, and underlying physics of the reconstructed network. On the other hand, revisiting the basics of functional network reconstruction and its main goal leads to reconsidering the very identity and role of the structure to be used in pursuit of that goal. Can a structure of time‐dependent, possibly spatially overlapping nodes still be called a network and analysed as such? In the same vein, network neuroscience will likely undergo a trajectory wherein nodes and connections are thought of in a different way, possibly integrating known properties of neural activity such as inhibition or complex feedback loops; their specification will consequently change from its current form. While network features have historically been predicated upon dynamical systems and information theory, to render representations more context‐independent, network neuroscience will possibly increasingly summon constructs from disciplines such as computational topology (Carlsson, 2009; Edelsbrunner & Harer, 2010; Petri et al., 2014; Reimann et al., 2017; Stolz, 2014; Zomorodian, 2005) or statistical physics (Bianconi & Rahmede, 2016), and explanations at various levels (Marr, 1982) and of different types (Illari & Williamson, 2012; Tozzi & Papo, 2020). New constructs may include neurophysiological properties, such as inherent disorder and lack of translational invariance, and may result in representations with non‐trivial emergent geometry (Bianconi & Rahmede, 2017) accounting for phenomenology as yet poorly documented, at least in network terms, for example, topological phase transitions (Santos et al., 2019) or frustration (Gollo & Breakspear, 2014). Ultimately, this may help change representations of brain function, not just by producing constructs more robust to the specification of the space in which they are embedded, as well as to inter‐subject variability and noise, but also by changing the underlying vision of brain function in the first place.

At the assessment level, this will involve checking the statistical but, more importantly, also the functional significance (Demirel, 2014; Ma et al., 2009) of the network properties emerging from brain imaging data analysis. Overall, the series of relevant questions should then be: Is the mapping from data to networks reliable? Is it statistically significant? Is it functionally significant? Ultimately, though, genuine breakthroughs will also require parallel conceptual advances in the way bona fide brain function is understood at various levels, which would allow incorporating “stylized facts” within the network construction process. Such descriptions may help determine when the functional space can be endowed with a network representation, how reproducible this representation is, and when it ceases to be appropriate, at least as a modelling tool (Papo, 2019).

Finally, an aspect that silently underpins most functional network reconstruction efforts, but that is seldom made explicit, is that a complex network representation is a very simplified representation of brain activity, usually focused on some specific aspects, for example, how information is propagated across brain regions, and seeking answers to specific questions, for example, how a pathology modifies this propagation. A representation may be useful without being a model of brain functioning, and its worth may be context‐specific rather than general. As in simplified underground maps, where the way stations, lines, and their spatial locations are drawn is tailored towards a specific aim without representing the system as a whole with all its characteristics, network neuroscientists should choose those elements that yield the representation that best serves their specific goals.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

ACKNOWLEDGMENTS

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 851255). M. Z. acknowledges the Spanish State Research Agency, through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (MDM‐2017‐0711). D. P. acknowledges the financial support from the program Accueil de Talents of the Métropole Européenne de Lille and from the Labex (laboratory of excellence) DISTALZ (Development of Innovative Strategies for a Transdisciplinary approach to Alzheimer's disease). O. K. acknowledges funding from the Osk. Huttunen Foundation and Emil Aaltonen Foundation (grant No. 190095). Open Access funding enabled and organized by ProjektDEAL.

Korhonen, O. , Zanin, M. , & Papo, D. (2021). Principles and open questions in functional brain network reconstruction. Human Brain Mapping, 42(11), 3680–3711. 10.1002/hbm.25462

Funding information H2020 European Research Council, Grant/Award Number: 851255; Emil Aaltonen Foundation, Grant/Award Number: 190095; Osk. Huttunen Foundation; Labex (laboratory of excellence) DISTALZ (Development of Innovative Strategies for a Transdisciplinary approach to Alzheimer's disease); Accueil de Talents of the Métropole Européenne de Lille; Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D; Spanish State Research Agency

Endnotes

1

A field is a physical quantity that assigns a value to each point of the space on which it is defined. For instance, a vector field assigns a vector to each point in a subset of space. Thus, for EEG data, each point may be identified with an electrode, and at each time point, one would have the output of this electrode (or some function of it). For fMRI data, this may be represented by the BOLD signal at a given voxel.

2

A point a of a subset S of a topological space X is isolated if the intersection of some neighbourhood of a with S consists of the point a alone; in other words, a is isolated if it is an element of S but has a neighbourhood which does not contain any other points of S.

3

A Hausdorff space is a topological space in which any two points have non‐intersecting neighbourhoods.

4

In classical and statistical mechanics, the configuration space of a physical system is the set of all possible positions that this system can reach.

5

Redundancy is a property of systems in which components are duplicated, allowing the implementation of alternative functional channels when subparts of the system break down. Degeneracy is a property of systems in which structurally different elements carry out the same function (Edelman & Gally, 2001; Tononi et al., 1999).

6

In this review, we reserve the term functional for genuine function, that is, a system's ability to perform a task, which we distinguish from bare dynamics (cf. Section 2.1). In this sense, connectivity, which constitutes the microscopic scale of the analysis, is prima facie considered as a dynamical phenomenon.

7

In the study of dynamical systems, an attractor (or limit set) is a set or a space towards which a system evolves irreversibly in the absence of disturbances.

8

At system‐level scales, the set of operations allowed on the reconstructed network structure generally differs from the operations actually carried out by the system.

9

Iraji et al. (2020) refer to these classes as temporal, spatial, and spatiotemporal dynamics.

Contributor Information

Onerva Korhonen, Email: onerva.korhonen@gmail.com.

David Papo, Email: david.papo@iit.it.

DATA AVAILABILITY STATEMENT

This is a Review article—to which no data are associated.

REFERENCES

  1. Abeles, M. (1991). Corticonics: Neural circuits of the cerebral cortex. Cambridge: Cambridge University Press. [Google Scholar]
  2. Abraham, A. , Dohmatob, E. , Thirion, B. , Samaras, D. , & Varoquaux, G. (2013). Extracting brain regions from rest fMRI with total‐variation constrained dictionary learning. MICCAI – 16th International Conference on Medical Image Computing and COmputer Assisted Intervention, Sep 2013, Nagoya, Japan. [DOI] [PubMed]
  3. Achard, S. , & Bullmore, E. (2007). Efficiency and cost of economical brain functional networks. PLoS Computational Biology, 3, e17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Aguirre, L. A. , Portes, L. L. , & Letellier, C. (2018). Structural, dynamical and symbolic observability: From dynamical systems to networks. PLoS One, 13, 10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Ahlfors, S. P. , Han, J. , Lin, F.‐H. , Witzel, T. , Belliveau, J. W. , Hämäläinen, M. S. , & Halgren, E. (2010). Cancellation of EEG and MEG signals generated by extended and distributed sources. Human Brain Mapping, 31, 140–149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Alakörkkö, T. , Saarimäki, H. , Glerean, E. , Saramäki, J. , & Korhonen, O. (2017). Effects of spatial smoothing on functional brain networks. The European Journal of Neuroscience, 46, 2471–2480. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Alderson‐Day, B. , McCarthy‐Jones, S. , & Fernyhough, C. (2015). Hearing voices in the resting brain: A review of intrinsic functional connectivity research on auditory verbal hallucinations. Neuroscience and Biobehavioral Reviews, 55, 78–87. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Allefeld, C. , Atmanspacher, H. , & Wackermann, J. (2009). Mental states as macrostates emerging from brain electrical dynamics. Chaos, 19, 015102. [DOI] [PubMed] [Google Scholar]
  9. Allen, E. A. , Damaraju, E. , Plis, S. M. , Erhardt, E. B. , Eichele, T. , & Calhoun, V. D. (2014). Tracking whole‐brain connectivity dynamics in the resting state. Cerebral Cortex, 24, 663–676. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Amari, S. I. , & Nagaoka, H. (2000). Methods of information geometry (Vol. 191). Providence, RI: American Mathematical Society. [Google Scholar]
  11. Amunts, K. , Mohlberg, H. , Bluday, S. , & Zilles, K. (2020). Julich‐Brain: A 3D probabilistic atlas of the human brain's cytoarchitecture. Science, 369, 988–992. [DOI] [PubMed] [Google Scholar]
  12. Amunts, K. , Schleicher, A. , & Zilles, K. (2007). Cytoarchitecture of the cerebral cortex—More than localization. NeuroImage, 37, 1061–1065. [DOI] [PubMed] [Google Scholar]
  13. Amunts, K. , & Zilles, K. (2015). Architectonic mapping of the human brain beyond Brodmann. Neuron, 88, 1086–1107. [DOI] [PubMed] [Google Scholar]
  14. Andellini, M. , Cannatà, V. , Gazzellini, S. , Bernardi, B. , & Napolitano, A. (2015). Test‐retest reliability of graph metrics of resting state MRI functional brain networks: A review. Journal of Neuroscience Methods, 253, 183–192. [DOI] [PubMed] [Google Scholar]
  15. Angulo, M. T. , Moreno, J. A. , Lippner, G. , Barabási, A. L. , & Liu, Y. Y. (2017). Fundamental limitations of network reconstruction from temporal data. Journal of the Royal Society Interface, 14, 20160966. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Antiqueira, L. , Rodrigues, F. A. , van Wijk, B. C. M. , da Costa, L. F. , & Daffertshofer, A. (2010). Estimating complex cortical networks via surface recordings—A critical note. NeuroImage, 53, 439–449. [DOI] [PubMed] [Google Scholar]
  17. Arenas, A. , Díaz‐Guilera, A. , Kurths, J. , Moreno, Y. , & Zhou, C. (2008). Synchronization in complex networks. Physics Reports, 469, 93–153. [Google Scholar]
  18. Arsigny, V. , Fillard, P. , Pennec, X. , & Ayache, N. (2007). Geometric means in a novel vector space structure on symmetric positive‐definite matrices. SIAM Journal on Matrix Analysis and Applications, 29, 328–347. [Google Scholar]
  19. Arslan, S. , Ktena, S. I. , Makropoulos, A. , Robinson, E. C. , Rueckert, D. , & Parisot, S. (2018). Human brain mapping: A systematic comparison of parcellation methods for the human cerebral cortex. NeuroImage, 170, 5–30. [DOI] [PubMed] [Google Scholar]
  20. Ashwin, P. , Coombes, S. , & Nicks, R. (2016). Mathematical frameworks for oscillatory network dynamics in neuroscience. The Journal of Mathematical Neuroscience, 6, 2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Atmanspacher, H. (2012). Identifying mental states from neural states under mental constraints. Interface Focus, 2, 74–81. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Aurich, N. K. , Alves Filho, J. O. , Marques da Silva, A. M. , & Franco, A. R. (2015). Evaluating the reliability of different preprocessing steps to estimate graph theoretical measures in resting state fMRI data. Frontiers in Neuroscience, 9, 48. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Baiesi, M. , Bongini, L. , Casetti, L. , & Tattini, L. (2009). Graph theoretical analysis of the energy landscape of model polymers. Physical Review E, 80, 011905. [DOI] [PubMed] [Google Scholar]
  24. Baker, A. P. , Brookes, M. J. , Rezek, I. A. , Smith, S. M. , Behrens, T. E. J. , Smith, P. J. P. , & Woolrich, M. W. (2014). Fast transient networks in spontaneous human brain activity. eLife, 3, e01867. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Bartoli, F. , & Cerutti, S. (1983). An optimal linear filter for the reduction of noise superimposed to the EEG signal. Journal of Biomedical Engineering, 5, 274–280. [DOI] [PubMed] [Google Scholar]
  26. Başar, E. (2012). Brain function and oscillations: Volume I: Brain oscillations. In Principles and approaches. Berlin, Germany: Springer Science & Business Media. [Google Scholar]
  27. Bassett, D. S. , Meyer‐Lindenberg, A. , Achard, S. , Duke, T. , & Bullmore, E. (2006). Adaptive reconfiguration of fractal small‐world human brain functional networks. Proceedings of the National Academy of Sciences of the United States of America, 103, 19518–19523. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Bassett, D. S. , Wymbs, N. F. , Porter, M. A. , Mucha, P. J. , Carlson, J. M. , & Grafton, S. T. (2011). Dynamic reconfiguration of human brain networks during learning. Proceedings of the National Academy of Sciences of the United States of America, 108, 7641–7646. [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Bassett, D. S. , Wymbs, N. F. , Porter, M. A. , Mucha, P. J. , & Grafton, S. T. (2014). Cross‐linked structure of network evolution. Chaos, 24, 013112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Bastos, A. M. , & Schoffelen, J. M. (2016). A tutorial review of functional connectivity analysis methods and their interpretational pitfalls. Frontiers in Systems Neuroscience, 9, 175. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Battiston, F. , Cencetti, G. , Iacopini, I. , Latora, V. , Lucas, M. , Patania, A. , … Petri, G. (2020). Networks beyond pairwise interactions: Structure and dynamics. Physics Reports, 874, 1–92. [Google Scholar]
  32. Battiston, F. , Nicosia, V. , Chavez, M. , & Latora, V. (2017). Multilayer motif analysis of brain networks. Chaos, 27, 047404. [DOI] [PubMed] [Google Scholar]
  33. Bialonski, S. (2012). Inferring complex networks from time series of dynamical systems: Pitfalls, misinterpretations, and possible solutions. arXiv 1208.0800. [Google Scholar]
  34. Bianco, S. , Ignaccolo, M. , Rider, M. S. , Ross, M. J. , Winsor, P. , & Grigolini, P. (2007). Brain, music, and non‐Poisson renewal processes. Physical Review E, 75, 061911. [DOI] [PubMed] [Google Scholar]
  35. Bianconi, G. , & Rahmede, C. (2016). Network geometry with flavor: From complexity to quantum geometry. Physical Review E, 93, 032315. [DOI] [PubMed] [Google Scholar]
  36. Bianconi, G. , & Rahmede, C. (2017). Emergent hyperbolic network geometry. Scientific Reports, 7, 41974. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Birn, R. M. , Smith, M. A. , Jones, T. B. , & Bandettini, P. A. (2008). The respiration response function: The temporal dynamics of fMRI signal fluctuations related to changes in respiration. NeuroImage, 40, 644–654. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Biswal, B. B. , & Ulmer, J. L. (1999). Blind source separation of multiple signal sources of fMRI data sets using independent component analysis. Journal of Computer Assisted Tomography, 23, 265–271. [DOI] [PubMed] [Google Scholar]
  39. Blumensath, T. , Jbabdi, S. , Glasser, M. F. , van Essen, D. C. , Behrens, T. E. J. , & Smith, S. M. (2013). Spatially constrained hierarchical parcellation of the brain with resting‐state fMRI. NeuroImage, 76, 313–324. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Boccaletti, S. , Bianconi, G. , Criado, R. , del Genio, C. I. , Gómez‐Gardeñes, J. , Romance, M. , … Zanin, M. (2014). The structure and dynamics of multilayer networks. Physics Reports, 554, 1–122. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Boccaletti, S. , Kurths, J. , Osipov, G. , Valladares, D. L. , & Zhou, C. S. (2002). The synchronization of chaotic systems. Physics Reports, 366, 1–101. [Google Scholar]
  42. Boersma, M. , Smit, D. J. , Boomsma, D. I. , de Geus, E. J. , Delemarre‐van de Waal, H. A. , & Stam, C. J. (2013). Growing trees in child brains: Graph theoretical analysis of electroencephalography‐derived minimum spanning tree in 5‐ and 7‐year‐old children reflects brain maturation. Brain Connectivity, 3, 50–60. [DOI] [PubMed] [Google Scholar]
  43. Boersma, M. , Smit, D. J. , de Bie, H. M. , van Baal, G. C. M. , Boomsma, D. I. , de Geus, E. J. , … Stam, C. J. (2011). Network analysis of resting state EEG in the developing young brain: Structure comes with maturation. Human Brain Mapping, 32, 413–425. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Bolstad, W. M. , & Curran, J. M. (2016). Introduction to Bayesian statistics. Hoboken, NJ: John Wiley & Sons. [Google Scholar]
  45. Bonnabel, S. , & Sepulchre, R. (2010). Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank. SIAM Journal on Matrix Analysis and Applications, 31, 1055–1070. [Google Scholar]
  46. Borsboom, D. , & Cramer, A. O. (2013). Network analysis: An integrative approach to the structure of psychopathology. Annual Review of Clinical Psychology, 9, 91–121. [DOI] [PubMed] [Google Scholar]
  47. Bosch, P. , Herrera, M. , López, J. , & Maldonado, S. (2018). Mining EEG with SVM for understanding cognitive underpinnings of math problem solving strategies. Behavioural Neurology, 2018, 4638903. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Botvinik‐Nezer, R. , Holzmeister, F. , Camerer, C. F. , Dreber, A. , Huber, J. , Johannesson, M. , … Avesani, P. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582, 84–88. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Brett, M. , Johnsrude, I. , & Owen, A. (2002). The problem of functional localization in the human brain. Nature Reviews. Neuroscience, 3, 243–249. [DOI] [PubMed] [Google Scholar]
  50. Brezina, V. (2010). Beyond the wiring diagram: Signalling through complex neuromodulator networks. Philosophical Transactions of the Royal Society B, 365, 2363–2374. [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Brezina, V. , & Weiss, K. R. (1997). Analyzing the functional consequences of transmitter complexity. Trends in Neurosciences, 20, 538–543. [DOI] [PubMed] [Google Scholar]
  52. Brodmann, K. (1909). Vergleichende Lokalisationslehre der Grosshirnrinde in ihren Prinzipien dargestellt auf Grund des Zellenbaues. Leipzig: Verlag von Johann Ambrosius Barth. [Google Scholar]
  53. Brookes, M. J. , Hale, J. R. , Zumer, J. M. , Stevenson, C. M. , Francis, S. T. , Barnes, G. R. , … Nagarajan, S. S. (2011). Measuring functional connectivity using MEG: Methodology and comparison with fcMRI. NeuroImage, 56, 1082–1104. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Brookes, M. J. , Tewarie, P. K. , Hunt, B. A. E. , Robson, S. E. , Gascoyne, L. E. , Liddle, E. B. , … Morris, P. G. (2016). A multi‐layer network approach to MEG connectivity analysis. NeuroImage, 132, 425–438. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Brookes, M. J. , Woolrich, M. W. , & Barnes, G. R. (2012). Measuring functional connectivity in MEG: A multivariate approach insensitive to linear source leakage. NeuroImage, 63(2), 910–920. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Brückner, D. B. , Ronceray, P. , & Broedersz, C. P. (2020). Inferring the dynamics of underdamped stochastic systems. Physical Review Letters, 125, 058103. [DOI] [PubMed] [Google Scholar]
  57. Buldú, J. M. , Bajo, R. , Maestú, F. , Castellanos, N. , Leyva, I. , Gil, P. , … Boccaletti, S. (2011). Reorganization of functional networks in mild cognitive impairment. PLoS One, 6, e19584. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Buldú, J. M. , & Papo, D. (2018). Can multilayer brain networks be a real step forward?. Comment on "network science of biological systems at different scales: A review" by M. Gosak et al. Physics of Life Reviews, 24, 153–155. [DOI] [PubMed] [Google Scholar]
  59. Buldú, J. M. , & Porter, M. A. (2018). Frequency‐based brain networks: From a multiplex framework to a full multilayer description. Network Neuroscience, 2, 418–441. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Bullmore, E. , & Sporns, O. (2009). Complex brain networks: Graph theoretical analysis of structural functional systems. Nature Reviews. Neuroscience, 10, 186–198. [DOI] [PubMed] [Google Scholar]
  61. Buzsaki, G. , Llinas, R. , Singer, W. , Berthoz, A. , & Christen, Y. (Eds.). (1994). Temporal coding in the brain. New York, NY: Springer‐Verlag. [Google Scholar]
  62. Cabral, J. , Hugues, E. , Sporns, O. , & Deco, G. (2011). Role of local network oscillations in resting‐state functional connectivity. NeuroImage, 57, 130–139. [DOI] [PubMed] [Google Scholar]
  63. Caiani, L. , Casetti, L. , Clementi, C. , & Pettini, M. (1997). Geometry of dynamics, Lyapunov exponents, and phase transitions. Physical Review Letters, 79, 4361–4364. [Google Scholar]
  64. Calhoun, V. D. , Eichele, T. , & Pearlson, G. (2009). Functional brain networks in schizophrenia: A review. Frontiers in Human Neuroscience, 3, 17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Calhoun, V. D. , Liu, J. , & Adali, T. (2009). A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data. NeuroImage, 45, S163–S172. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Calhoun, V. D. , Miller, R. , Pearlson, G. , & Adalı, T. (2014). The chronnectome: Time‐varying connectivity networks as the next frontier in fMRI data discovery. Neuron, 84, 262–274. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Cao, M. , Wang, J.‐H. , Dai, Z.‐D. , Cao, X.‐Y. , Jiang, L.‐L. , Fan, F.‐M. , … He, Y. (2014). Topological organization of the human brain functional connectome across the lifespan. Developmental Cognitive Neuroscience, 7, 76–93. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Carlsson, G. (2009). Topology and data. Bulletin of the American Mathematical Society, 46, 255–308. [Google Scholar]
  69. Casetti, L. , Pettini, M. , & Cohen, E. G. D. (2000). Geometric approach to Hamiltonian dynamics and statistical mechanics. Physics Reports, 337, 237–341. [Google Scholar]
  70. Caspers, S. , Eickhoff, S. B. , Zilles, K. , & Amunts, K. (2013). Microstructural grey matter parcellation and its relevance for connectome analyses. NeuroImage, 80, 18–26. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Chang, C. , Cunningham, J. P. , & Glover, G. H. (2009). Influence of heart rate on the BOLD signal: The cardiac response function. NeuroImage, 44, 857–869. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Chang, C. , & Glover, G. H. (2010). Time‐frequency dynamics of resting‐state brain connectivity measured with fMRI. NeuroImage, 50, 81–98. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Chen, J. E. , Lewis, L. D. , Chang, C. , Tian, Q. , Fultz, N. E. , Ohringer, N. A. , … Polimeni, J. R. (2020). Resting‐state “physiological networks”. NeuroImage, 213, 116707. [DOI] [PMC free article] [PubMed] [Google Scholar]
  74. Chen, J.‐L. , Ros, T. , & Gruzelier, J. H. (2013). Dynamic changes of ICA‐derived EEG functional connectivity in the resting state. Human Brain Mapping, 34, 852–868. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Chen, Y. , Wiesel, A. , Eldar, Y. C. , & Hero, A. O. (2010). Shrinkage algorithms for MMSE covariance estimation. IEEE Transactions on Signal Processing, 58, 5016–5029. [Google Scholar]
  76. Chevallier, S. , Kalunga, E. K. , Barthélemy, Q. , & Monacelli, E. (2021). Review of Riemannian distances and divergences, applied to SSVEP‐based BCI. Neuroinformatics, 19(1), 93–106. [DOI] [PubMed] [Google Scholar]
  77. Chu, C. J. , Kramer, M. A. , Pathmanathan, J. , Bianchi, M. T. , Westover, M. B. , Wizon, L. , & Cash, S. S. (2012). Emergence of stable functional networks in long‐term human electroencephalography. The Journal of Neuroscience, 32, 2703–2713. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Clauset, A. , Moore, C. , & Newman, M. E. (2008). Hierarchical structure and the prediction of missing links in networks. Nature, 453, 98–101. [DOI] [PubMed] [Google Scholar]
  79. Cocchi, L. , Yang, Z. , Zalesky, A. , Stezer, J. , Hearne, L. J. , Gollo, L. L. , & Mattingley, J. B. (2017). Neural decoding of visual stimuli varies with fluctuations in global network efficiency. Human Brain Mapping, 38, 3069–3080. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Cohen, A. L. , Fair, D. A. , Dosenbach, N. U. F. , Miezin, F. M. , Dierker, D. , van Essen, D. C. , … Petersen, S. E. (2008). Defining functional areas in individual human brain using resting functional connectivity MRI. NeuroImage, 41, 45–57. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Collins, D. L. (1994). 3D model‐based segmentation of individual brain structures from magnetic resonance imaging data. [Thesis]. McGill University, Canada
  82. Collins, D. L. , Holmes, C. J. , Peters, T. M. , & Evans, A. C. (1995). Automatic 3‐D model‐based neuroanatomical segmentation. Human Brain Mapping, 3, 190–208. [Google Scholar]
  83. Collins, D. L. , Neelin, P. , Peters, T. M. , & Evans, A. C. (1994). Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography, 18, 192–205. [PubMed] [Google Scholar]
  84. Coombes, S. , beim Graben, P. , Potthast, R. , & Wright, J. (Eds.). (2014). Neural fields: Theory and applications. Berlin, Germany: Springer. [Google Scholar]
  85. Coombes, S. , & Byrne, Á. (2019). Next generation neural mass models. In Nonlinear dynamics in computational neuroscience (pp. 1–16). Cham: Springer. [Google Scholar]
  86. Cottereau, B. R. , Ales, J. M. , & Norcia, A. M. (2015). How to use fMRI functional localizers to improve EEG/MEG source estimation. Journal of Neuroscience Methods, 250, 64–73. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Craddock, R. C. , James, G. A. , Holzheimer, P. E., III , Hu, X. P. , & Mayberg, H. S. (2012). A whole brain fMRI atlas generated via spatially constrained spectral clustering. Human Brain Mapping, 33, 1914–1928. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Cross, D. J. , & Gilmore, R. (2010). Differential embedding of the Lorenz attractor. Physical Review E, 81, 066220. [DOI] [PubMed] [Google Scholar]
  89. Crutchfield, J. P. , & McNamara, B. S. (1987). Equations of motion from a data series. Complex Systems, 1, 121. [Google Scholar]
  90. Csermely, P. (2004). Strong links are important, but weak links stabilize them. Trends in Biochemical Sciences, 29, 331–334. [DOI] [PubMed] [Google Scholar]
  91. Dadi, K. , Rahim, M. , Abraham, A. , Chyzhyk, D. , Milham, M. , Thirion, B. , … for the Alzheimer's Disease Neuroimaging Initiative . (2019). Benchmarking functional connectome‐based predictive models for resting‐state fMRI. NeuroImage, 192, 115–134. [DOI] [PubMed] [Google Scholar]
  92. Dadi, K. , Varoquaux, G. , Machlouzarides‐Shalit, A. , Gorgolewski, K. J. , Wassermann, D. , Thirion, B. , & Mensch, A. (2020). Fine‐grain atlases of functional modes for fMRI analysis. NeuroImage, 221, 117126. [DOI] [PubMed] [Google Scholar]
  93. Dagli, M. S. , Ingeholm, J. E. , & Haxby, J. V. (1999). Localization of cardiac‐induced signal change in fMRI. NeuroImage, 9, 407–415. [DOI] [PubMed] [Google Scholar]
  94. Davison, E. N. , Schlesinger, K. J. , Bassett, D. S. , Lynall, M.‐E. , Miller, M. B. , Grafton, S. T. , & Carlson, J. M. (2015). Brain network adaptability across task states. PLoS Computational Biology, 11, e1004029. [DOI] [PMC free article] [PubMed] [Google Scholar]
  95. de Haan, W. , van der Flier, W. M. , Koene, T. , Smits, L. L. , Scheltens, P. , & Stam, C. J. (2012). Disrupted modular brain dynamics reflect cognitive dysfunction in Alzheimer's disease. NeuroImage, 59, 3085–3093. [DOI] [PubMed] [Google Scholar]
  96. de Pasquale, F. , Della Penna, S. , Snyder, A. Z. , Lewis, C. , Mantini, D. , Marzetti, L. , … Corbetta, M. (2010). Temporal dynamics of spontaneous MEG activity in brain networks. Proceedings of the National Academy of Sciences of the United States of America, 107, 6040–6045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. de Pasquale, F. , Della Penna, S. , Snyder, A. Z. , Marzetti, L. , Pizzella, V. , Romani, G. L. , & Corbetta, M. (2012). A cortical core for dynamic integration of functional networks in the resting human brain. Neuron, 74, 753–764. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. de Pasquale, F. , Della Penna, S. , Sporns, O. , Romani, G. L. , & Corbetta, M. (2016). A dynamic core network and global efficiency in the resting human brain. Cerebral Cortex, 26, 4015–4033. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. de Reus, M. A. , & van den Heuvel, M. P. (2013). The parcellation‐based connectome: Limitations and extensions. NeuroImage, 80, 397–404. [DOI] [PubMed] [Google Scholar]
  100. Deco, G. , Jirsa, V. K. , & McIntosh, A. R. (2011). Emerging concepts for the dynamical organization of resting‐state activity in the brain. Nature Reviews. Neuroscience, 12, 43–56. [DOI] [PubMed] [Google Scholar]
  101. Demirel, Y. (2014). Information in biological systems and the fluctuation theorem. Entropy, 16, 1931–1948. [Google Scholar]
  102. Desikan, R. S. , Segonne, F. , Fischl, B. , Quinn, B. T. , Dickerson, B. C. , Blacker, D. , … Albert, M. S. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage, 31, 968–980. [DOI] [PubMed] [Google Scholar]
  103. Destrieux, C. , Fischl, B. , Dale, A. , & Halgren, E. (2010). Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature. NeuroImage, 53, 1–15. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Deuker, L. , Bullmore, E. T. , Smith, M. , Christensen, S. , Nathan, P. J. , Rockstroh, B. , & Bassett, D. S. (2009). Reproducibility of graph metrics of human brain functional networks. NeuroImage, 47, 1460–1468. [DOI] [PubMed] [Google Scholar]
  105. Ding, M. , Chen, Y. , & Bressler, S. L. (2006). Granger causality: Basic theory and application to neuroscience. In Schelter S., Winterhalder N., & Timmer J. (Eds.), Handbook of time series analysis. Weinheim: Wiley. [Google Scholar]
  106. Ding, S.‐L. , Royall, J. J. , Sunkin, S. M. , Ng, L. , Facer, B. A. , Lesnar, P. , … Lein, E. S. (2016). Comprehensive cellular‐resolution atlas of the adult human brain. The Journal of Comparative Neurology, 524, 3127–3481. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Dixon, P. (2003). The p‐value fallacy and how to avoid it. Canadian Journal of Experimental Psychology, 57, 189–202. [DOI] [PubMed] [Google Scholar]
  108. Dorogovtsev, S. N. , Goltsev, A. V. , & Mendes, J. F. (2008). Critical phenomena in complex networks. Reviews of Modern Physics, 80, 1275–1335. [Google Scholar]
  109. Edelman, G. M. , & Gally, J. A. (2001). Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences of the United States of America, 98(24), 13763–13768. [DOI] [PMC free article] [PubMed] [Google Scholar]
  110. Edelsbrunner, H. , & Harer, J. (2010). Computational topology: An introduction. Providence, RI: American Mathematical Society. [Google Scholar]
  111. Eickhoff, S. B. , Heim, S. , Zilles, K. , & Amunts, K. (2006). Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. NeuroImage, 32, 570–582. [DOI] [PubMed] [Google Scholar]
  112. Eickhoff, S. B. , Paus, T. , Caspers, S. , Grosbras, M.‐H. , Evans, A. C. , Zilles, K. , & Amunts, K. (2007). Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. NeuroImage, 36, 511–521. [DOI] [PubMed] [Google Scholar]
  113. Eickhoff, S. B. , Stephan, K. E. , Mohlberg, H. , Grefkes, C. , Fink, G. R. , Amunts, K. , & Zilles, K. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage, 25, 1325–1335. [DOI] [PubMed] [Google Scholar]
  114. Eickhoff, S. B. , Thirion, B. , Varoquaux, G. , & Bzdok, D. (2015). Connectivity based parcellation: Critique and implications. Human Brain Mapping, 36, 4771–4792. [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Eickhoff, S. B. , Yeo, B. T. , & Genon, S. (2018). Imaging‐based parcellations of the human brain. Nature Reviews Neuroscience, 19, 672–686. [DOI] [PubMed] [Google Scholar]
  116. Estrada, E. (2011). The structure of complex networks: Theory and applications. Oxford, England: Oxford University Press. [Google Scholar]
  117. Evans, A. C. , Collins, D. L. , & Milner, B. (1992). An MRI‐based stereotactic atlas from 250 young normal subjects. Society for Neuroscience – Abstracts, 18, 408. [Google Scholar]
  118. Evans, A. C. , Marrett, S. , Neelin, P. , Collins, L. , Worsley, K. , Dai, W. , … Bub, D. (1992). Anatomical mapping of functional activation in stereotactic coordinate space. NeuroImage, 1, 43–53. [DOI] [PubMed] [Google Scholar]
  119. Fan, L. , Li, H. , Zhuo, J. , Zhang, Y. , Wang, J. , Chen, L. , … Jiang, T. (2016). The human Brainnetome atlas: A new brain atlas based on connectional architecture. Cerebral Cortex, 26, 3508–3526. [DOI] [PMC free article] [PubMed] [Google Scholar]
  120. Fischl, B. (2012). FreeSurfer. NeuroImage, 62, 774–781. [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Fischl, B. , van der Kouwe, A. , Destrieux, C. , Halgren, E. , Segonne, F. , Salat, D. H. , … Caviness, V. (2004). Automatically parcellating the human cerebral cortex. Cerebral Cortex, 14, 11–22. [DOI] [PubMed] [Google Scholar]
  122. Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press. [Google Scholar]
  123. Fornito, A. , Zalesky, A. , & Breakspear, M. (2013). Graph analysis of the human connectome: Promise, progress, and pitfalls. NeuroImage, 80, 426–444. [DOI] [PubMed] [Google Scholar]
  124. Fornito, A. , Zalesky, A. , & Bullmore, E. T. (2010). Network scaling effects in graph analytic studies of human resting‐state fMRI data. Frontiers in Systems Neuroscience, 4, 22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  125. Fox, M. D. , Snyder, A. Z. , Vincent, J. L. , Corbetta, M. , van Essen, D. C. , & Raichle, M. E. (2005). The human brain is intrinsically organized into dynamic, anticorrelated Functional networks. Proceedings of the National Academy of Sciences of the United States of America, 102, 9673–9678. [DOI] [PMC free article] [PubMed] [Google Scholar]
  126. Fox, M. D. , Zhang, D. , Snyder, A. Z. , & Raichle, M. E. (2009). The global signal and observed anticorrelated resting state brain networks. Journal of Neurophysiology, 101, 3270–3283. [DOI] [PMC free article] [PubMed] [Google Scholar]
  127. Fraiman, D. , & Chialvo, D. R. (2012). What kind of noise is brain noise: Anomalous scaling behavior of the resting brain activity fluctuations. Frontiers in Physiology, 3, 307. [DOI] [PMC free article] [PubMed] [Google Scholar]
  128. Frazier, J. A. , Chiu, S. , Breeze, J. L. , Makris, N. , Lange, N. , Kennedy, D. N. , … Biederman, J. (2005). Structural brain magnetic resonance imaging of limbic and thalamic volumes in pediatric bipolar disorder. The American Journal of Psychiatry, 162, 1256–1265. [DOI] [PubMed] [Google Scholar]
  129. Friedrich, R. , Peinke, J. , Sahimi, M. , & Tabar, M. R. R. (2011). Approaching complexity by stochastic methods: From biological systems to turbulence. Physics Reports, 506, 87–162. [Google Scholar]
  130. Fries, P. (2005). A mechanism for cognitive dynamics: Neuronal communication through neuronal coherence. Trends in Cognitive Sciences, 9, 474–480. [DOI] [PubMed] [Google Scholar]
  131. Friston, K. J. (1994). Functional and effective connectivity in neuroimaging: A synthesis. Human Brain Mapping, 2, 56–78. [Google Scholar]
  132. Friston, K. J. (1998). The disconnection hypothesis. Schizophrenia Research, 30, 115–125. [DOI] [PubMed] [Google Scholar]
  133. Gabrielli, A. , Mastrandrea, R. , Caldarelli, G. , & Cimini, G. (2019). Grand canonical ensemble of weighted networks. Physical Review E, 99, 030301. [DOI] [PubMed] [Google Scholar]
  134. Galán, R. F. (2008). On how network architecture determines the dominant patterns of spontaneous neural activity. PLoS One, 3, e2148. [DOI] [PMC free article] [PubMed] [Google Scholar]
  135. Gallos, L. K. , Makse, H. A. , & Sigman, M. (2012). A small world of weak ties provides optimal global integration of self‐similar modules in functional brain networks. Proceedings of the National Academy of Sciences of the United States of America, 109, 2825–2830. [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Ganmor, E. , Segev, R. , & Schneidman, E. (2011). Sparse low‐order interaction network underlies a highly correlated and learnable neural population code. Proceedings of the National Academy of Sciences of the United States of America, 108, 9679–9684. [DOI] [PMC free article] [PubMed] [Google Scholar]
  137. Gao, J. , Li, D. , & Havlin, S. (2014). From a single network to a network of networks. National Science Review, 1, 346–356. [Google Scholar]
  138. Geerligs, L. , Renken, R. J. , Saliasi, E. , Maurits, N. M. , & Lorist, M. M. (2015). A brain‐wide study of age‐related changes in functional connectivity. Cerebral Cortex, 25, 1987–1999. [DOI] [PubMed] [Google Scholar]
  139. Gerhard, F. , Pipa, G. , Lima, B. , Neuenschwander, S. , & Gerstner, W. (2011). Extraction of network topology from multi‐electrode recordings: Is there a small‐world effect? Frontiers in Computational Neuroscience, 5, 4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  140. Ghrist, R. W. (2014). Elementary applied topology (Vol. 1). Seattle, WA: CreateSpace. [Google Scholar]
  141. Glasser, M. F. , Coalson, T. S. , Robinson, E. C. , Hacker, C. D. , Harwell, J. , Yacoub, E. , … van Essen, D. C. (2016). A multi‐modal parcellation of human cerebral cortex. Nature, 536, 171–178. [DOI] [PMC free article] [PubMed] [Google Scholar]
  142. Goldstein, J. M. , Seidman, L. J. , Makris, N. , Ahern, T. , O'Brien, L. M. , Caviness, V. S., Jr. , … Tsuang, M. T. (2007). Hypothalamic abnormalities in schizophrenia: Sex effects and genetic vulnerability. Biological Psychiatry, 61, 935–945. [DOI] [PubMed] [Google Scholar]
  143. Gollo, L. L. , & Breakspear, M. (2014). The frustrated brain: From dynamics on motifs to communities and networks. Philosophical Transactions of the Royal Society B, 369, 20130532. [DOI] [PMC free article] [PubMed] [Google Scholar]
  144. Gómez‐Herrero, G. , Atienza, M. , Egiazarian, K. , & Cantero, J. L. (2008). Measuring directional coupling between EEG sources. NeuroImage, 43, 497–508. [DOI] [PubMed] [Google Scholar]
  145. Goodman, S. (2008). A dirty dozen: Twelve p‐value misconceptions. Seminars in hematology, 45, 135–140. [DOI] [PubMed] [Google Scholar]
  146. Goodman, S. N. (1999). Toward evidence‐based medical statistics. 1: The P value fallacy. Annals of Internal Medicine, 30, 995–1004. [DOI] [PubMed] [Google Scholar]
  147. Gordon, E. M. , Laumann, T. O. , Adeyemo, B. , Huckins, J. F. , Kelley, W. M. , & Petersen, S. E. (2016). Generation and evaluation of a cortical area parcellation from resting‐state correlations. Cerebral Cortex, 26, 288–303. [DOI] [PMC free article] [PubMed] [Google Scholar]
  148. Göttlich, M. , Münte, T. F. , Heldmann, M. , Kasten, M. , Hagenah, J. , & Krämer, U. M. (2013). Altered resting state brain networks in Parkinson's disease. PLoS One, 10, e77336. [DOI] [PMC free article] [PubMed] [Google Scholar]
  149. Gotts, S. J. , Saad, Z. S. , Jo, H. J. , Wallace, G. L. , Cox, R. W. , & Martin, A. (2013). The perils of global signal regression for group comparisons: A case study of autism spectrum disorders. Frontiers in Human Neuroscience, 7, 356. [DOI] [PMC free article] [PubMed] [Google Scholar]
  150. Gramfort, A. , Luessi, M. , Larson, E. , Engemann, D. A. , Strohmeier, D. , Brodbeck, C. , … Hämäläinen, M. (2013). MEG and EEG data analysis with MNE‐Python. Frontiers in Neuroscience, 7, 267. [DOI] [PMC free article] [PubMed] [Google Scholar]
  151. Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross‐spectral methods. Econometrica, 37, 424–438. [Google Scholar]
  152. Grech, R. , Cassar, T. , Muscat, J. , Camilleri, K. P. , Fabri, S. G. , Zervakis, M. , … Vanrumste, B. (2008). Review on solving the inverse problem in EEG source analysis. Journal of Neuroengineering and Rehabilitation, 5, 25. [DOI] [PMC free article] [PubMed] [Google Scholar]
  153. Gu, S. , Satterthwaite, T. D. , Medaglia, J. D. , Yang, M. , Gur, R. E. , Gur, R. C. , & Bassett, D. S. (2015). Emergence of system roles in normative neurodevelopment. Proceedings of the National Academy of Sciences of the United States of America, 112, 13681–13686. [DOI] [PMC free article] [PubMed] [Google Scholar]
  154. Gu, S. , Yang, M. , Medaglia, J. D. , Gur, R. C. , Gur, R. E. , Satterthwaite, T. D. , & Bassett, D. S. (2017). Functional hypergraph uncovers novel covariant structures over neurodevelopment. Human Brain Mapping, 38, 3823–3835. [DOI] [PMC free article] [PubMed] [Google Scholar]
  155. Guillon, J. , Attal, Y. , Colliot, O. , la Corte, V. , Dubois, B. , Schwartz, D. , … de Vico Fallani, F. (2017). Loss of brain inter‐frequency hubs in Alzheimer's disease. Scientific Reports, 7, 1–13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  156. Guimerà, R. , & Sales‐Pardo, M. (2009). Missing and spurious interactions and the reconstruction of complex networks. Proceedings of the National Academy of Sciences of the United States of America, 106, 22073–22078. [DOI] [PMC free article] [PubMed] [Google Scholar]
  157. Hahamy, A. , Behrmann, M. , & Malach, R. (2015). The idiosyncratic brain: Distortion of spontaneous connectivity patterns in autism spectrum disorder. Nature Neuroscience, 18, 302–309. [DOI] [PubMed] [Google Scholar]
  158. Hämäläinen, M. , Hari, R. , Ilmoniemi, R. J. , Knuutila, J. , & Lounasmaa, O. V. (1993). Magnetoencephalography—Theory, instrumentation, and applications to noninvasive studies of the working human brain. Reviews of Modern Physics, 65, 413–497. [Google Scholar]
  159. Han, J. , Pei, J. , & Kamber, M. (2011). Data mining: Concepts and techniques. Waltham, MA: Elsevier. [Google Scholar]
  160. Hardmeier, M. , Hatz, F. , Bousleiman, H. , Schindler, C. , Stam, C. J. , & Fuhr, P. (2014). Reproducibility of functional connectivity and graph measures based on the phase lag index (PLI) and weighted phase lag index (wPLI) derived from high resolution EEG. PLoS One, 9, e108648. [DOI] [PMC free article] [PubMed] [Google Scholar]
  161. Harrison, S. J. , Woolrich, M. W. , Robinson, E. C. , Glasser, M. F. , Beckmann, C. F. , Jenkinson, M. , & Smith, S. M. (2015). Large‐scale probabilistic functional modes from resting state fMRI. NeuroImage, 109, 217–231. [DOI] [PMC free article] [PubMed] [Google Scholar]
  162. He, B. , Sohrabpour, A. , Brown, E. , & Liu, Z. (2018). Electrophysiological source imaging: A noninvasive window to brain dynamics. Annual Review of Biomedical Engineering, 20, 171–196. [DOI] [PMC free article] [PubMed] [Google Scholar]
  163. Hillary, F. G. , & Grafman, J. H. (2017). Injured brains and adaptive networks: The benefits and costs of hyperconnectivity. Trends in Cognitive Sciences, 21, 385–401. [DOI] [PMC free article] [PubMed] [Google Scholar]
  164. Hillebrand, A. , Barnes, G. R. , Bosboom, J. L. , Berendse, H. W. , & Stam, C. J. (2012). Frequency‐dependent functional connectivity within resting‐state networks: An atlas‐based MEG beamformer solution. NeuroImage, 59, 3909–3921. [DOI] [PMC free article] [PubMed] [Google Scholar]
  165. Hindriks, R. , Adhikari, M. H. , Murayama, Y. , Ganzetti, M. , Mantini, D. , Logothetis, N. K. , & Deco, G. (2016). Can sliding‐window correlations reveal dynamic functional connectivity in resting‐state fMRI? NeuroImage, 127, 242–256. [DOI] [PMC free article] [PubMed] [Google Scholar]
  166. Hipp, J. F. , Engel, A. K. , & Siegel, M. (2011). Oscillatory synchronization in large‐scale cortical networks predicts perception. Neuron, 69, 387–396. [DOI] [PubMed] [Google Scholar]
  167. Hipp, J. F. , Hawellek, D. J. , Corbetta, M. , Siegel, M. , & Engel, A. K. (2012). Large‐scale cortical correlation structure of spontaneous oscillatory activity. Nature Neuroscience, 15, 884–890. [DOI] [PMC free article] [PubMed] [Google Scholar]
  168. Hipp, J. F. , & Siegel, M. (2015). BOLD fMRI correlation reflects frequency‐specific neuronal correlation. Current Biology, 25, 1368–1374. [DOI] [PubMed] [Google Scholar]
  169. Hlaváčková‐Schindler, K. , Paluš, M. , Vejmelka, M. , & Bhattacharya, J. (2007). Causality detection based on information‐theoretic approaches in time series analysis. Physics Reports, 441, 1–46. [Google Scholar]
  170. Hohenfeld, C. , Werner, C. J. , & Reetz, K. (2018). Resting‐state connectivity in neurodegenerative disorders: Is there potential for an imaging biomarker? NeuroImage: Clinical, 18, 849–870. [DOI] [PMC free article] [PubMed] [Google Scholar]
  171. Höller, Y. , Uhl, A. , Bathke, A. , Thomschewski, A. , Butz, K. , Nardone, R. , … Trinka, E. (2017). Reliability of EEG measures of interaction: A paradigm shift is needed to fight the reproducibility crisis. Frontiers in Human Neuroscience, 11, 441. [DOI] [PMC free article] [PubMed] [Google Scholar]
  172. Holme, P. , & Saramäki, J. (2012). Temporal networks. Physics Reports, 519, 97–125. [Google Scholar]
  173. Honey, C. J. , Kötter, R. , Breakspear, M. , & Sporns, O. (2007). Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences of the United States of America, 104, 10240–10245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  174. Honnorat, N. , Eavani, H. , Satterthwaite, T. D. , Gur, R. E. , Gur, R. C. , & Davatzikos, C. (2015). GraSP: Geodesic graph‐based segmentation with shape priors for the functional parcellation of the cortex. NeuroImage, 106, 207–221. [DOI] [PMC free article] [PubMed] [Google Scholar]
  175. Hoppensteadt, F. C. , & Izhikevich, E. M. (1997). Weakly connected neural networks. New York, NY: Springer. [Google Scholar]
  176. Horwitz, B. (2003). The elusive concept of brain connectivity. NeuroImage, 19, 466–470. [DOI] [PubMed] [Google Scholar]
  177. Hou, F. , Liu, C. , Yu, Z. , Xu, X. , Zhang, J. , Peng, C. K. , … Yang, A. (2018). Age‐related alterations in electroencephalography connectivity and network topology during n‐back working memory task. Frontiers in Human Neuroscience, 12, 484. [DOI] [PMC free article] [PubMed] [Google Scholar]
  178. Hutchison, R. M. , Womelsdorf, T. , Allen, E. A. , Bandettini, P. A. , Calhoun, V. D. , Corbetta, M. , … Chang, C. (2013). Dynamic functional connectivity: Promise, issues, and interpretations. NeuroImage, 80, 360–378. [DOI] [PMC free article] [PubMed] [Google Scholar]
  179. Hwang, K. , Hallquist, M. N. , & Luna, B. (2013). The development of hub architecture in the human functional brain network. Cerebral Cortex, 23, 2380–2393. [DOI] [PMC free article] [PubMed] [Google Scholar]
  180. Illari, P. M. , & Williamson, J. (2012). What is a mechanism? Thinking about mechanisms across the sciences. European Journal for Philosophy of Science, 2, 119–135. [Google Scholar]
  181. Iraji, A. , DeRamus, T. P. , Lewis, N. , Yasoubi, M. , Stephen, J. M. , Erhardt, E. , … Calhoun, V. D. (2019). The spatial chronnectome reveals a dynamic interplay between functional segregation and integration. Human Brain Mapping, 40, 3058–3077. [DOI] [PMC free article] [PubMed] [Google Scholar]
  182. Iraji, A. , Fu, Z. , Damaraju, E. , DeRamus, T. P. , Lewis, N. , Bustillo, J. R. , … Calhoun, V. D. (2019). Spatial dynamics within and between brain functional domains: A hierarchical approach to study time‐varying brain function. Human Brain Mapping, 40, 1969–1986. [DOI] [PMC free article] [PubMed] [Google Scholar]
  183. Iraji, A. , Miller, R. , Adali, T. , & Calhoun, V. D. (2020). Space: A missing piece of the dynamic puzzle. Trends in Cognitive Sciences, 24, 135–149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  184. Izhikevich, E. M. (2006). Polychronization: Computation with spikes. Neural Computation, 18, 245–282. [DOI] [PubMed] [Google Scholar]
  185. Jain, A. K. , Murty, M. N. , & Flynn, P. J. (1999). Data clustering: A review. ACM Computing Surveys, 31, 264–323. [Google Scholar]
  186. Jalili, M. (2016). Functional brain networks: Does the choice of dependency estimator and binarization method matter? Scientific Reports, 6, 29780. [DOI] [PMC free article] [PubMed] [Google Scholar]
  187. Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review, 106, 620–630. [Google Scholar]
  188. Jones, D. T. , Vemuri, P. , Murphy, M. C. , Gunter, J. L. , Senjem, M. L. , Machulda, M. M. , … Jack, C. R., Jr. (2012). Non‐stationarity in the “resting brain's” modular architecture. PLoS One, 7, e39731. [DOI] [PMC free article] [PubMed] [Google Scholar]
  189. Kafashan, M. , Palanca, B. J. A. , & Ching, S. (2018). Dimensionality reduction impedes the extraction of dynamic functional connectivity states from fMRI recordings of resting wakefulness. Journal of Neuroscience Methods, 293, 151–161. [DOI] [PMC free article] [PubMed] [Google Scholar]
  190. Kalman, R. E. (1961). On the general theory of control systems. Proceedings of the 1st International Congress of IFAC, Moscow, 1960, 1, 481. London, England: Butterworth.
  191. Kantz, H. , & Schreiber, T. (2004). Nonlinear time series analysis (Vol. 7). Cambridge, England: Cambridge University Press. [Google Scholar]
  192. Karsai, M. , Perra, N. , & Vespignani, A. (2014). Time varying networks and the weakness of strong ties. Scientific Reports, 4, 4001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  193. Kivelä, M. , Arenas, A. , Barthelemy, M. , Gleeson, J. P. , Moreno, Y. , & Porter, M. A. (2014). Multilayer Networks. Journal of Complex Networks, 2, 203–271. [Google Scholar]
  194. Korhonen, O. , Palva, S. , & Palva, J. M. (2014). Sparse weightings for collapsing inverse solutions to cortical parcellations optimize M/EEG source reconstruction accuracy. Journal of Neuroscience Methods, 226, 147–160. [DOI] [PubMed] [Google Scholar]
  195. Korhonen, O. , Saarimäki, H. , Glerean, E. , Sams, M. , & Saramäki, J. (2017). Consistency of regions of interest as nodes of fMRI functional brain networks. Network Neuroscience, 1, 254–274. [DOI] [PMC free article] [PubMed] [Google Scholar]
  196. Kozma, R. , & Freeman, W. J. (2016). Cognitive phase transitions in the cerebral cortex‐enhancing the neuron doctrine by modeling neural fields (Vol. 39). Heidelberg, Germany: Springer. [Google Scholar]
  197. Krajsek, K. , Menzel, M. I. , & Scharr, H. (2016). A Riemannian Bayesian framework for estimating diffusion tensor images. International Journal of Computer Vision, 120, 272–299. [Google Scholar]
  198. Kujala, J. , Pammer, K. , Cornelissen, P. , Roebroeck, A. , Formisano, E. , & Salmelin, R. (2006). Phase coupling in a cerebro‐cerebellar network at 8–13 Hz during reading. Cerebral Cortex, 17, 1476–1485. [DOI] [PubMed] [Google Scholar]
  199. Kujala, R. , Glerean, E. , Pan, R. K. , Jääskeläinen, I. P. , Sams, M. , & Saramäki, J. (2016). Graph coarse‐graining reveals differences in the module‐level structure of functional brain networks. The European Journal of Neuroscience, 44, 2673–2684. [DOI] [PubMed] [Google Scholar]
  200. Lai, M. , Demuru, M. , Hillebrand, A. , & Fraschini, M. (2018). A comparison between scalp‐and source‐reconstructed EEG networks. Scientific Reports, 8, 1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  201. Laumann, T. O. , Snyder, A. Z. , Mitra, A. , Gordon, E. M. , Gratton, C. , Adeyemo, B. , … Petersen, S. E. (2017). On the stability of BOLD fMRI correlations. Cerebral Cortex, 27, 4719–4732. [DOI] [PMC free article] [PubMed] [Google Scholar]
  202. Ledoit, O. , & Wolf, M. (2004). A well‐conditioned estimator for large‐dimensional covariance matrices. Journal of Multivariate Analysis, 88, 365–411. [Google Scholar]
  203. Lee, J. (2010). Introduction to topological manifolds (Vol. 202). Heidelberg, Germany: Springer Science & Business Media. [Google Scholar]
  204. Lee, S. H. , Kim, P. J. , & Jeong, H. (2006). Statistical properties of sampled networks. Physical Review E, 73, 16102. [DOI] [PubMed] [Google Scholar]
  205. Lehnertz, K. (2011). Assessing directed interactions from neurophysiological signals—An overview. Physiological Measurement, 32(11), 1715. [DOI] [PubMed] [Google Scholar]
  206. Lenglet, C. , Rousson, M. , Deriche, R. , & Faugeras, O. (2006). Statistics on the manifold of multivariate normal distributions: Theory and application to diffusion tensor MRI processing. Journal of Mathematical Imaging and Vision, 25, 423–444. [Google Scholar]
  207. Leonardi, N. , Shirer, W. R. , Greicius, M. D. , & van de Ville, D. (2014). Disentangling dynamic networks: Separated and joint expressions of functional connectivity patterns in time. Human Brain Mapping, 35, 5984–5995. [DOI] [PMC free article] [PubMed] [Google Scholar]
  208. Leske, S. , & Dalal, S. S. (2019). Reducing power line noise in EEG and MEG data via spectrum interpolation. NeuroImage, 189, 763–776. [DOI] [PMC free article] [PubMed] [Google Scholar]
  209. Letellier, C. , & Aguirre, L. A. (2002). Investigating nonlinear dynamics from time series: The influence of symmetries and the choice of observables. Chaos, 12, 549–558. [DOI] [PubMed] [Google Scholar]
  210. Liao, X. , Yuan, L. , Zhao, T. , Dai, Z. , Shu, N. , Xia, M. , … He, Y. (2015). Spontaneous functional network dynamics and associated structural substrates in the human brain. Frontiers in Human Neuroscience, 9, 478. [DOI] [PMC free article] [PubMed] [Google Scholar]
  211. Lin, M. , Lucas, H. C., Jr. , & Shmueli, G. (2013). Research commentary—Too big to fail: Large samples and the p‐value problem. Information Systems Research, 24, 906–917. [Google Scholar]
  212. Litvak, V. , Sompolinsky, H. , Segev, I. , & Abeles, M. (2003). On the transmission of rate code in long feed‐forward networks with excitatory‐inhibitory balance. The Journal of Neuroscience, 23, 3006–3015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  213. Lizier, J. , & Rubinov, M . (2012). Multivariate construction of effective computational networks from observational data. Technical Report Preprint 25/2012, Max Planck Institute for Mathematics in the Sciences.
  214. Lizier, J. T. , Heinzle, J. , Horstmann, A. , Haynes, J. D. , & Prokopenko, M. (2011). Multivariate information‐theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity. Journal of Computational Neuroscience, 30, 85–107. [DOI] [PubMed] [Google Scholar]
  215. Ma, S. , Calhoun, V. D. , Phlypo, R. , & Adalı, T. (2014). Dynamic changes of spatial functional network connectivity in healthy individuals and schizophrenia patients using independent vector analysis. NeuroImage, 90, 196–206. [DOI] [PMC free article] [PubMed] [Google Scholar]
  216. Ma, W. , Trusina, A. , El‐Samad, H. , Lim, W. A. , & Tang, C. (2009). Defining network topologies that can achieve biochemical adaptation. Cell, 138, 760–773. [DOI] [PMC free article] [PubMed] [Google Scholar]
  217. Magalhães, R. , Marques, P. , Soares, J. , Alves, V. , & Sousa, N. (2015). The impact of normalization and segmentation on resting‐state brain networks. Brain Connectivity, 5, 166–176. [DOI] [PubMed] [Google Scholar]
  218. Mahjoory, K. , Nikulin, V. V. , Botrel, L. , Linkenkaer‐Hansen, K. , Fato, M. M. , & Haufe, S. (2017). Consistency of EEG source localization and connectivity estimates. NeuroImage, 152, 590–601. [DOI] [PubMed] [Google Scholar]
  219. Makris, N. , Goldstein, J. M. , Kennedy, D. , Hodge, S. M. , Caviness, V. S. , Faraone, S. V. , … Seidman, L. J. (2006). Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophrenia Research, 83, 155–171. [DOI] [PubMed] [Google Scholar]
  220. Malagarriga, D. , Villa, A. E. , García‐Ojalvo, J. , & Pons, A. J. (2017). Consistency of heterogeneous synchronization patterns in complex weighted networks. Chaos, 27, 031102. [DOI] [PubMed] [Google Scholar]
  221. Marek, S. , Hwang, K. , Foran, W. , Hallquist, M. N. , & Luna, B. (2015). The contribution of network organization and integration to the development of cognitive control. PLoS Biology, 13, e1002328. [DOI] [PMC free article] [PubMed] [Google Scholar]
  222. Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco, CA: W. H. Freeman and Company. [Google Scholar]
  223. Marzetti, L. , del Gratta, C. , & Nolte, G. (2008). Understanding brain connectivity from EEG data by identifying systems composed of interacting sources. NeuroImage, 42, 87–98. [DOI] [PubMed] [Google Scholar]
  224. McKeown, M. J. , Makeig, S. , Brown, G. G. , Jung, T.‐P. , Kindermann, S. S. , Bell, A. J. , & Sejnowski, T. J. (1998). Analysis of fMRI data by blind separation into independent spatial components. Human Brain Mapping, 6, 160–188. [DOI] [PMC free article] [PubMed] [Google Scholar]
  225. Meshulam, L. , Gauthier, J. L. , Brody, C. D. , Tank, D. W. , & Bialek, W. (2017). Collective behavior of place and non‐place neurons in the hippocampal network. Neuron, 96(5), 1178–1191. [DOI] [PMC free article] [PubMed] [Google Scholar]
  226. Meunier, D. , Achard, S. , Morcom, A. , & Bullmore, E. (2009). Age‐related changes in modular organization of human brain functional networks. NeuroImage, 44, 715–723. [DOI] [PubMed] [Google Scholar]
  227. Michel, C. M. , & Brunet, D. (2019). EEG source imaging: A practical review of the analysis steps. Frontiers in Neurology, 10, 325. [DOI] [PMC free article] [PubMed] [Google Scholar]
  228. Micheloyannis, S. , Vourkas, M. , Tsirka, V. , Karakonstantaki, E. , Kanatsouli, K. , & Stam, C. J. (2009). The influence of ageing on complex brain networks: A graph theoretical analysis. Human Brain Mapping, 30, 200–208. [DOI] [PMC free article] [PubMed] [Google Scholar]
  229. Milnor, J. (1963). Morse theory. Princeton, NJ: Princeton University Press. [Google Scholar]
  230. Moezzi, B. , Pratti, L. M. , Hordacre, B. , Graetz, L. , Berryman, C. , Lavrencic, L. M. , … Goldsworthy, M. R. (2019). Characterization of young and old adult brains: An EEG functional connectivity analysis. Neuroscience, 422, 230–239. [DOI] [PubMed] [Google Scholar]
  231. Mucha, P. J. , Richardson, T. , Macon, K. , Porter, M. A. , & Onnela, J.‐P. (2010). Community structure in time‐dependent, multiscale, and multiplex networks. Science, 328, 876–878. [DOI] [PubMed] [Google Scholar]
  232. Muller, L. , Chavane, F. , Reynolds, J. , & Sejnowski, T. J. (2018). Cortical travelling waves: Mechanisms and computational principles. Nature Reviews. Neuroscience, 19, 255–268. [DOI] [PMC free article] [PubMed] [Google Scholar]
  233. Nelson, S. M. , Cohen, A. L. , Power, J. D. , Wig, G. S. , Miezin, F. B. , Wheeler, M. E. , … Petersen, S. E. (2010). A parcellation scheme for human left lateral parietal cortex. Neuron, 67, 156–170. [DOI] [PMC free article] [PubMed] [Google Scholar]
  234. Newman, M. E. J. (2003). The structure and function of complex networks. SIAM Review, 45, 167–256. [Google Scholar]
  235. Newman, M. E. J. (2018). Network structure from rich but noisy data. Nature Physics, 14, 542–545. [Google Scholar]
  236. Ng, B. , Varoquaux, G. , Poline, J. B. , Greicius, M. , & Thirion, B. (2015). Transport on Riemannian manifold for connectivity‐based brain decoding. IEEE Transactions on Medical Imaging, 35, 208–216. [DOI] [PubMed] [Google Scholar]
  237. Nguyen, H. C. , Zecchina, R. , & Berg, J. (2017). Inverse statistical problems: From the inverse Ising problem to data science. Advances in Physics, 66, 197–261. [Google Scholar]
  238. Nichols, T. E. , Das, S. , Eickhoff, S. B. , Evans, A. C. , Glatard, T. , Hanke, M. , … Proal, E. (2017). Best practices in data analysis and sharing in neuroimaging using MRI. Nature Neuroscience, 20, 299–303. [DOI] [PMC free article] [PubMed] [Google Scholar]
  239. Nolte, G. , Bai, O. , Wheaton, L. , Mari, Z. , Vorbach, S. , & Hallett, M. (2004). Identifying true brain interaction from EEG data using the imaginary part of coherency. Clinical Neurophysiology, 115(10), 2292–2307. [DOI] [PubMed] [Google Scholar]
  240. Novelli, L. , & Lizier, J. T. (2020). Inferring network properties from time series via transfer entropy and mutual information: Validation of bivariate versus multivariate approaches. arXiv 2007.07500. [DOI] [PMC free article] [PubMed] [Google Scholar]
  241. Novelli, L. , Wollstadt, P. , Mediano, P. , Wibral, M. , & Lizier, J. T. (2019). Large‐scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Network Neuroscience, 3, 827–847. [DOI] [PMC free article] [PubMed] [Google Scholar]
  242. Novikov, E. , Novikov, A. , Shannahoff‐Khalsa, D. , Schwartz, B. , & Wright, J. (1997). Scale‐similar activity in the brain. Physical Review E, 56, R2387–R2389. [Google Scholar]
  243. Núñez, P. , Poza, J. , Gómez, C. , Rodríguez‐González, V. , Hillebrand, A. , Tewarie, P. , … Hornero, R. (2021). Abnormal meta‐state activation of dynamic brain networks across the Alzheimer spectrum. NeuroImage, 232, 117898. 10.1016/j.neuroimage.2021.117898 [DOI] [PubMed] [Google Scholar]
  244. Nurmi, T. , Korhonen, O. , & Kivelä, M. (2019). Multilayer brain networks with time‐evolving nodes and analyzing network motifs in them. Extended abstract in The Book of Abstracts of the 8th International Conference on Complex Networks & Their Applications, December 10–12, Lisbon, Portugal.
  245. Olkkonen, H. , Pesola, P. , Olkkonen, J. , Valjakka, A. , & Tuomisto, L. (2002). EEG noise cancellation by a subspace method based on wavelet decomposition. Medical Science Monitor, 8, MT199–MT204. [PubMed] [Google Scholar]
  246. O'Neill, G. C. , Bauer, M. , Woolrich, M. W. , Morris, P. G. , Barnes, G. R. , & Brookes, M. J. (2015). Dynamic recruitment of resting state sub‐networks. NeuroImage, 115, 85–95. [DOI] [PMC free article] [PubMed] [Google Scholar]
  247. O'Neill, G. C. , Tewarie, P. , Vidaurre, D. , Liuzzi, L. , Woolrich, M. W. , & Brookes, M. J. (2018). Dynamics of large‐scale electrophysiological networks: A technical review. NeuroImage, 180(part B), 559–576. [DOI] [PubMed] [Google Scholar]
  248. O'Neill, G. C. , Tewarie, P. K. , Colclough, G. L. , Gascoyne, L. E. , Hunt, B. A. E. , Morris, P. G. , … Brookes, M. J. (2017). Measurement of dynamic task related functional networks using MEG. NeuroImage, 146, 667–678. [DOI] [PMC free article] [PubMed] [Google Scholar]
  249. Paldino, M. J. , Golriz, F. , Zhang, W. , & Chu, Z. D. (2019). Normalization enhances brain network features that predict individual intelligence in children with epilepsy. PLoS One, 14, e0212901. [DOI] [PMC free article] [PubMed] [Google Scholar]
  250. Palla, G. , Derényi, I. , Farkas, I. , & Vicsek, T. (2005). Uncovering the overlapping community structure of complex networks in nature and society. Nature, 435, 814–818. [DOI] [PubMed] [Google Scholar]
  251. Paluš, M. , Albrecht, V. , & Dvořák, I. (1993). Information theoretic test for nonlinearity in time series. Physics Letters A, 175, 203–209. [Google Scholar]
  252. Palva, J. M. , Monto, S. , Kulashekhar, S. , & Palva, S. (2010). Neuronal synchrony reveals working memory networks and predicts individual memory capacity. Proceedings of the National Academy of Sciences of the United States of America, 107, 7580–7585. [DOI] [PMC free article] [PubMed] [Google Scholar]
  253. Palva, J. M. , Wang, S. H. , Palva, S. , Zhigalov, A. , Monto, S. , Brookes, M. J. , … Jerbi, K. (2018). Ghost interactions in MEG/EEG source space: A note of caution on inter‐areal coupling measures. NeuroImage, 173, 632–643. [DOI] [PubMed] [Google Scholar]
  254. Palva, S. , Kulashekhar, S. , Hämäläinen, M. , & Palva, J. M. (2011). Localization of cortical phase and amplitude dynamics during visual working memory encoding and retention. The Journal of Neuroscience, 31, 5013–5025. [DOI] [PMC free article] [PubMed] [Google Scholar]
  255. Palva, S. , Monto, S. , & Palva, J. M. (2010). Graph properties of synchronized cortical networks during visual working memory maintenance. NeuroImage, 49, 3257–3268. [DOI] [PubMed] [Google Scholar]
  256. Palva, S. , & Palva, J. M. (2012). Discovering oscillatory interaction networks with M/EEG: Challenges and breakthroughs. Trends in Cognitive Sciences, 16, 219–230. [DOI] [PubMed] [Google Scholar]
  257. Papo, D. (2013). Time scales in cognitive neuroscience. Frontiers in Physiology, 4, 86. [DOI] [PMC free article] [PubMed] [Google Scholar]
  258. Papo, D. (2014). Functional significance of complex fluctuations in brain activity: From resting state to cognitive neuroscience. Frontiers in Systems Neuroscience, 8, 112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  259. Papo, D. (2015). How can we study reasoning in the brain? Frontiers in Human Neuroscience, 9, 222. [DOI] [PMC free article] [PubMed] [Google Scholar]
  260. Papo, D. (2019). Gauging functional brain activity: From distinguishability to accessibility. Frontiers in Physiology, 10, 509. [DOI] [PMC free article] [PubMed] [Google Scholar]
  261. Papo, D. , Buldú, J. M. , Boccaletti, S. , & Bullmore, E. T. (2014). Complex network theory and the brain. Philosophical Transactions of the Royal Society B, 369, 20130520. [DOI] [PMC free article] [PubMed] [Google Scholar]
  262. Papo, D. , Zanin, M. , & Buldú, J. M. (2014). Reconstructing brain networks: Have we got the basics right? Frontiers in Human Neuroscience, 8, 107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  263. Papo, D. , Zanin, M. , Pineda, J. A. , Boccaletti, S. , & Buldú, J. M. (2014). Brain networks: Great expectations, hard times, and the big leap forward. Philosophical Transactions of the Royal Society B, 369, 20130525. [DOI] [PMC free article] [PubMed] [Google Scholar]
  264. Parisot, S. , Arslan, S. , Passerat‐Palmbach, J. , Wells, W. M., III , & Rueckert, D. (2016). Group‐wise parcellation of the cortex through multi‐scale spectral clustering. NeuroImage, 136, 68–83. [DOI] [PMC free article] [PubMed] [Google Scholar]
  265. Pashkov, A. A. , & Dakhtin, I. S. (2019). Consistency across functional connectivity methods and graph topological properties in EEG sensor space. In International Conference on Neuroinformatics (pp. 116–123). Springer, Cham.
  266. Peixoto, T. P. (2018). Reconstructing networks with unknown and heterogeneous errors. Physical Review X, 8, 041011. [Google Scholar]
  267. Pennec, X. (2006). Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. Journal of Mathematical Imaging and Vision, 25, 127–154. [Google Scholar]
  268. Pennec, X. , Fillard, P. , & Ayache, N. (2006). A Riemannian framework for tensor computing. International Journal of Computer Vision, 66, 41–66. [Google Scholar]
  269. Pereda, E. , Quiroga, R. Q. , & Bhattacharya, J. (2005). Nonlinear multivariate analysis of neurophysiological signals. Progress in Neurobiology, 77, 1–37. [DOI] [PubMed] [Google Scholar]
  270. Pervaiz, U. , Vidaurre, D. , Woolrich, M. W. , & Smith, S. M. (2020). Optimising network modelling methods for fMRI. NeuroImage, 211, 116604. [DOI] [PMC free article] [PubMed] [Google Scholar]
  271. Petri, G. , Expert, P. , Turkheimer, F. , Carhart‐Harris, R. , Nutt, D. , Hellier, P. J. , & Vaccarino, F. (2014). Homological scaffolds of brain functional networks. Journal of the Royal Society Interface, 11, 20140873. [DOI] [PMC free article] [PubMed] [Google Scholar]
  272. Ponten, S. C. , Daffertshofer, A. , Hillebrand, A. , & Stam, C. J. (2010). The relationship between structural and functional connectivity: Graph theoretical analysis of an EEG neural mass model. NeuroImage, 52, 985–994. [DOI] [PubMed] [Google Scholar]
  273. Pillai, A. S. , & Jirsa, V. K. (2017). Symmetry breaking in space‐time hierarchies shapes brain dynamics and behavior. Neuron, 94(5), 1010–1026. [DOI] [PubMed] [Google Scholar]
  274. Power, J. D. , Barnes, K. A. , Snyder, A. Z. , Schlaggar, B. L. , & Petersen, S. E. (2012). Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage, 59, 2142–2154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  275. Power, J. D. , Cohen, A. L. , Nelson, S. M. , Wig, G. S. , Barnes, K. A. , Church, J. A. , … Petersen, S. E. (2011). Functional network organization of the human brain. Neuron, 72, 665–678. [DOI] [PMC free article] [PubMed] [Google Scholar]
  276. Power, J. D. , Mitra, A. , Laumann, T. O. , Snyder, A. Z. , Schlaggar, B. L. , & Petersen, S. E. (2014). Methods to detect, characterize, and remove motion artifact in resting state fMRI. NeuroImage, 84, 320–341. [DOI] [PMC free article] [PubMed] [Google Scholar]
  277. Price, C. J. , & Friston, K. J. (2002). Degeneracy and cognitive anatomy. Trends in Cognitive Sciences, 6, 416–421. [DOI] [PubMed] [Google Scholar]
  278. Qian, L. , Zhang, Y. , Zheng, L. , Shang, Y. , Gao, J. H. , & Liu, Y. (2015). Frequency dependent topological patterns of resting‐state brain networks. PLoS One, 10, e0124681. [DOI] [PMC free article] [PubMed] [Google Scholar]
  279. Qiao, L. , Zhang, H. , Kim, M. , Teng, S. , Zhang, L. , & Shen, D. (2016). Estimating functional brain networks by incorporating a modularity prior. NeuroImage, 141, 399–407. [DOI] [PMC free article] [PubMed] [Google Scholar]
  280. Rahim, M. , Thirion, B. , & Varoquaux, G. (2019). Population shrinkage of covariance (PoSCE) for better individual brain functional‐connectivity estimation. Medical Image Analysis, 54, 138–148. [DOI] [PubMed] [Google Scholar]
  281. Reimann, M. W. , Nolte, M. , Scolamiero, M. , Turner, K. , Perin, R. , Chindemi, G. , … Markram, H. (2017). Cliques of neurons bound into cavities provide a missing link between structure and function. Frontiers in Computational Neuroscience, 11, 48. [DOI] [PMC free article] [PubMed] [Google Scholar]
  282. Richiardi, J. , Eryilmaz, H. , Schwartz, S. , Vuilleumier, P. , & van de Ville, D. (2011). Decoding brain states from fMRI connectivity graphs. NeuroImage, 56, 616–626. [DOI] [PubMed] [Google Scholar]
  283. Rieke, F. , Warland, D. , van Steveninck, R. D. R. , & Bialek, W. S. (1999). Spikes: Exploring the neural code (Vol. 7). Cambridge, MA: MIT Press. [Google Scholar]
  284. Robinson, M. (2013a). Topological signal processing. New York, NY: Springer. [Google Scholar]
  285. Robinson, P. A. (2013b). Discrete‐network versus modal representations of brain activity: Why a sparse regions‐of‐interest approach can work for analysis of continuous dynamics. Physical Review E, 88, 054702. [DOI] [PubMed] [Google Scholar]
  286. Robinson, P. A. , Zhao, X. , Aquino, K. M. , Griffiths, J. D. , Sarkar, S. , & Mehta‐Pandejee, G. (2016). Eigenmodes of brain activity: Neural field theory predictions and comparison with experiment. NeuroImage, 142, 79–98. [DOI] [PubMed] [Google Scholar]
  287. Rolls, E. T. , Huang, C. C. , Lin, C. P. , Feng, J. , & Joliot, M. (2020). Automated anatomical labelling atlas 3. NeuroImage, 206, 116189. [DOI] [PubMed] [Google Scholar]
  288. Rolls, E. T. , Joliot, M. , & Tzourio‐Mazoyer, N. (2015). Implementation of a new parcellation of the orbitofrontal cortex in the automated anatomical labeling atlas. NeuroImage, 122, 1–5. [DOI] [PubMed] [Google Scholar]
  289. Roudi, Y. , Dunn, B. , & Hertz, J. (2015). Multi‐neuronal activity and functional connectivity in cell assemblies. Current Opinion in Neurobiology, 32, 38–44. [DOI] [PubMed] [Google Scholar]
  290. Roudi, Y. , Tyrcha, J. , & Hertz, J. (2009). Ising model for neural data: Model quality and approximate methods for extracting functional connectivity. Physical Review E, 79, 051915. [DOI] [PubMed] [Google Scholar]
  291. Rozenfeld, H. D. , Song, C. , & Makse, H. A. (2010). Small‐world to fractal transition in complex networks: A renormalization group approach. Physical Review Letters, 104, 025701. [DOI] [PubMed] [Google Scholar]
  292. Rubinov, M. , & Sporns, O. (2010). Complex network measures of brain connectivity: Uses and interpretations. NeuroImage, 52, 1059–1069. [DOI] [PubMed] [Google Scholar]
  293. Runge, J. (2018). Causal network reconstruction from time series: From theoretical assumptions to practical estimation. Chaos, 28, 075310. [DOI] [PubMed] [Google Scholar]
  294. Ryyppö, E. , Glerean, E. , Brattico, E. , Saramäki, J. , & Korhonen, O. (2018). Regions of interest as nodes of dynamic functional brain networks. Network Neuroscience, 2, 513–535. [DOI] [PMC free article] [PubMed] [Google Scholar]
  295. Sakoğlu, U. , Pearlson, G. D. , Kiehl, K. A. , Wang, Y. M. , Michael, A. M. , & Calhoun, V. D. (2010). A method for evaluating dynamic functional network connectivity and task‐modulation: Application to schizophrenia. MAGMA, 23, 351–366. [DOI] [PMC free article] [PubMed] [Google Scholar]
  296. Sala‐Llonch, R. , Smith, S. M. , Woolrich, M. , & Duff, E. P. (2019). Spatial parcellations, spectral filtering, and connectivity measures in fMRI: Optimizing for discrimination. Human Brain Mapping, 40, 407–419. [DOI] [PMC free article] [PubMed] [Google Scholar]
  297. Salehi, M. , Karbasi, A. , Barron, D. S. , Scheinost, D. , & Constable, R. T. (2020). Individualized functional networks reconfigure with cognitive state. NeuroImage, 206, 116233. [DOI] [PMC free article] [PubMed] [Google Scholar]
  298. Salehi, M. , Karbasi, A. , Shen, X. , Scheinost, D. , & Constable, R. T. (2018). An exemplar‐based approach to individualized parcellation reveals the need for sex specific functional networks. NeuroImage, 170, 54–67. [DOI] [PMC free article] [PubMed] [Google Scholar]
  299. Santos, F. A. , Raposo, E. P. , Coutinho‐Filho, M. D. , Copelli, M. , Stam, C. J. , & Douw, L. (2019). Topological phase transitions in functional brain networks. Physical Review E, 100, 032414. [DOI] [PubMed] [Google Scholar]
  300. Sato, J. R. , Fujita, A. , Cardoso, E. G. , Thomaz, C. E. , Brammer, M. J. , & Amaro, E., Jr. (2010). Analyzing the connectivity between regions of interest: An approach based on cluster Granger causality for fMRI data analysis. NeuroImage, 52, 1444–1455. [DOI] [PubMed] [Google Scholar]
  301. Savin, C. , & Tkačik, G. (2017). Maximum entropy models as a tool for building precise neural controls. Current Opinion in Neurobiology, 46, 120–126. [DOI] [PubMed] [Google Scholar]
  302. Schaefer, A. , Kong, R. , Gordon, E. M. , Laumann, T. O. , Zuo, X. N. , Holmes, A. J. , … Yeo, B. T. (2017). Local‐global parcellation of the human cerebral cortex from intrinsic functional connectivity MRI. Cerebral Cortex, 28, 3095–3114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  303. Schaub, M. T. , Delvenne, J. C. , Rosvall, M. , & Lambiotte, R. (2017). The many facets of community detection in complex networks. Applied Network Science, 2, 4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  304. Schmidt, A. , Smieskova, R. , Aston, J. , Simon, A. , Allen, P. , Fusar‐Poli, P. , … Borgwardt, S. (2013). Brain connectivity abnormalities predating the onset of psychosis: Correlation with the effect of medication. JAMA Psychiatry, 70, 903–912. [DOI] [PubMed] [Google Scholar]
  305. Schneidman, E. , Berry, M. J. , Segev, R. , & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440, 1007–1012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  306. Schnitzler, A. , & Gross, J. (2005). Normal and pathological oscillatory communication in the brain. Nature Reviews. Neuroscience, 6, 285–296. [DOI] [PubMed] [Google Scholar]
  307. Schoffelen, J. M. , & Gross, J. (2009). Source connectivity analysis with MEG and EEG. Human Brain Mapping, 30, 1857–1865. [DOI] [PMC free article] [PubMed] [Google Scholar]
  308. Schreiber, T. (2000). Measuring information transfer. Physical Review Letters, 85, 461–464. [DOI] [PubMed] [Google Scholar]
  309. Schulman, L. S. , & Gaveau, B. (2001). Coarse grains: The emergence of space and order. Foundations of Physics, 31, 713–731. [Google Scholar]
  310. Shadlen, M. N. , & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation and information coding. The Journal of Neuroscience, 18, 3870–3896. [DOI] [PMC free article] [PubMed] [Google Scholar]
  311. Shehzad, Z. , Kelly, A. M. C. , Reiss, P. T. , Gee, D. G. , Gotimer, K. , Uddin, L. Q. , … Milham, M. P. (2009). The resting brain: Unconstrained yet reliable. Cerebral Cortex, 19, 2209–2229. [DOI] [PMC free article] [PubMed] [Google Scholar]
  312. Shen, X. , Papademetris, X. , & Constable, R. T. (2010). Graph‐theory based parcellation of functional subunits in the brain from resting‐state fMRI data. NeuroImage, 50, 1027–1035. [DOI] [PMC free article] [PubMed] [Google Scholar]
  313. Shen, X. , Tokoglu, F. , Papademetris, X. , & Constable, R. T. (2013). Groupwise wholebrain parcellation from resting‐state fMRI data for network node identification. NeuroImage, 82, 403–415. [DOI] [PMC free article] [PubMed] [Google Scholar]
  314. Sherrington, D. (2010). Physics and complexity. Philosophical Transactions of the Royal Society A, 368, 1175–1189. [DOI] [PMC free article] [PubMed] [Google Scholar]
  315. Shirer, W. R. , Jiang, H. , Price, C. M. , Ng, B. , & Greicius, M. D. (2015). Optimization of rs‐fMRI pre‐processing for enhanced signal‐noise separation, test‐retest reliability, and group discrimination. NeuroImage, 117, 67–79. [DOI] [PubMed] [Google Scholar]
  316. Shirer, W. R. , Ryali, S. , Rykhlevskaia, E. , Menon, V. , & Greicius, M. D. (2012). Decoding subject‐driven cognitive states with whole‐brain connectivity patterns. Cerebral Cortex, 22, 158–165. [DOI] [PMC free article] [PubMed] [Google Scholar]
  317. Shmueli, K. , van Gelderen, P. , de Zwart, J. A. , Horovitz, S. G. , Fukunaga, M. , Jansma, J. M. , & Duyn, J. H. (2007). Low‐frequency fluctuations in the cardiac rate as a source of variance in the resting‐state fMRI BOLD signal. NeuroImage, 38, 306–320. [DOI] [PMC free article] [PubMed] [Google Scholar]
  318. Siegel, M. , Donner, T. H. , & Engel, A. K. (2012). Spectral fingerprints of large‐scale neuronal interactions. Nature Reviews. Neuroscience, 13, 121–134. [DOI] [PubMed] [Google Scholar]
  319. Simas, T. , & Rocha, L. M. (2015). Distance closures on complex networks. Network Science, 3(2), 227–268. [Google Scholar]
  320. Smit, D. J. , Boersma, M. , Schnack, H. G. , Micheloyannis, S. , Boomsma, D. I. , Hulshoff Pol, H. E. , … de Geus, E. J. (2012). The brain matures with stronger functional connectivity and decreased randomness of its network. PLoS One, 7, e36896. [DOI] [PMC free article] [PubMed] [Google Scholar]
  321. Smith, S. M. , Miller, K. L. , Moeller, S. , Xu, J. , Auerbach, E. J. , Woolrich, M. W. , … Ugurbil, K. (2012). Temporally‐independent functional modes of spontaneous brain activity. Proceedings of the National Academy of Sciences of the United States of America, 109, 3131–3136. [DOI] [PMC free article] [PubMed] [Google Scholar]
  322. Smith, S. M. , Miller, K. L. , Salimi‐Khorshidi, G. , Webster, M. , Beckmann, C. F. , Nichols, T. E. , … Woolrich, M. W. (2011). Network modelling methods for FMRI. NeuroImage, 54, 875–891. [DOI] [PubMed] [Google Scholar]
  323. Sporns, O. , Tononi, G. , & Kötter, R. (2005). The human connectome: A structural description of the human brain. PLoS Computational Biology, 1, e42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  324. Squartini, T. , Caldarelli, G. , Cimini, G. , Gabrielli, A. , & Garlaschelli, D. (2018). Reconstruction methods for networks: The case of economic and financial systems. Physics Reports, 757, 1–47. [Google Scholar]
  325. Sreenivasan, V. , Menon, S. N. , & Sinha, S. (2017). Emergence of coupling‐induced oscillations and broken symmetries in heterogeneously driven nonlinear reaction networks. Scientific Reports, 7, 1594. [DOI] [PMC free article] [PubMed] [Google Scholar]
  326. Stadler, B. M. R. , Stadler, P. F. , Wagner, G. P. , & Fontana, W. (2001). The topology of the possible: Formal spaces underlying patterns of evolutionary change. Journal of Theoretical Biology, 213, 241–274. [DOI] [PubMed] [Google Scholar]
  327. Stam, C. J. , de Haan, W. , Daffertshofer, A. , Jones, B. F. , Manshanden, I. , van Cappellen van Walsum, A. M. , … Scheltens, P. (2009). Graph theoretical analysis of magnetoencephalographic functional connectivity in Alzheimer's disease. Brain, 132, 213–224. [DOI] [PubMed] [Google Scholar]
  328. Stam, C. J. , Nolte, G. , & Daffertshofer, A. (2007). Phase lag index: Assessment of functional connectivity from multichannel EEG and MEG with diminished bias from common sources. Human Brain Mapping, 28(11), 1178–1193. [DOI] [PMC free article] [PubMed] [Google Scholar]
  329. Stanley, M. L. , Moussa, M. N. , Paolini, B. M. , Lyday, R. G. , Burdette, J. H. , & Laurienti, P. J. (2013). Defining nodes in complex brain networks. Frontiers in Computational Neuroscience, 7, 169. [DOI] [PMC free article] [PubMed] [Google Scholar]
  330. Stephan, K. E. , Friston, K. J. , & Frith, C. D. (2009). Dysconnection in schizophrenia: From abnormal synaptic plasticity to failures of self‐monitoring. Schizophrenia Bulletin, 35, 509–527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  331. Sterling, P. , & Laughlin, S. (2015). Principles of neural design. Cambridge, MA: MIT Press. [Google Scholar]
  332. Stewart, I. , Golubitsky, M. , & Pivato, M. (2003). Symmetry groupoids and patterns of synchrony in coupled cell networks. SIAM Journal on Applied Dynamical Systems, 2, 606–646. [Google Scholar]
  333. Still, S. , Sivak, D. A. , Bell, A. J. , & Crooks, G. E. (2012). Thermodynamics of prediction. Physical Review Letters, 109, 120604. [DOI] [PubMed] [Google Scholar]
  334. Stolz, B. (2014). Computational topology in neuroscience. [Master's thesis]. University of Oxford.
  335. Stumpf, M. P. H. , Wiuf, C. , & May, R. M. (2005). Subnets of scale‐free networks are not scale‐free: Sampling properties of networks. Proceedings of the National Academy of Sciences of the United States of America, 102, 4221–4224. [DOI] [PMC free article] [PubMed] [Google Scholar]
  336. Sun, J. , Taylor, D. , & Bollt, E. M. (2015). Causal network inference by optimal causation entropy. SIAM Journal on Applied Dynamical Systems, 14, 73–106. [Google Scholar]
  337. Sun, S. , Huang, R. , & Gao, Y. (2012). Network‐scale traffic modeling and forecasting with graphical lasso and neural networks. Journal of Transportation Engineering, 138(11), 1358–1367. [Google Scholar]
  338. Talairach, J. , & Tournoux, P. (1988). Co‐planar stereotaxic atlas of the human brain. Stuttgart: Thieme. [Google Scholar]
  339. Tang, A. , Jackson, D. , Hobbs, J. , Chen, W. , Smith, J. L. , Patel, H. , … Hottowy, P. (2008). A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. The Journal of Neuroscience, 28, 505–518. [DOI] [PMC free article] [PubMed] [Google Scholar]
  340. Thirion, B. , Flandin, G. , Pinel, P. , Roche, A. , Ciuciu, P. , & Poline, J.‐B. (2006). Dealing with the shortcomings of spatial normalization: Multi‐subject parcellation of fMRI datasets. Human Brain Mapping, 27, 678–693. [DOI] [PMC free article] [PubMed] [Google Scholar]
  341. Thirion, B. , Varoquaux, G. , Dohmatob, E. , & Poline, J.‐B. (2014). Which fMRI clustering gives good brain parcellations? Frontiers in Neuroscience, 8, 167. [DOI] [PMC free article] [PubMed] [Google Scholar]
  342. Thompson, W. H. , & Fransson, P. (2015). The frequency dimension of fMRI dynamic connectivity: Network connectivity, functional hubs and integration in the resting brain. NeuroImage, 121, 227–242. [DOI] [PubMed] [Google Scholar]
  343. Thurner, S. (2005). Nonextensive statistical mechanics and complex scale‐free networks. Europhysics News, 36, 218–220. [Google Scholar]
  344. Tkačik, G. , Marre, O. , Mora, T. , Amodei, D. , Berry, M. J., II , & Bialek, W. (2013). The simplest maximum entropy model for collective behavior in a neural network. Journal of Statistical Mechanics, 2013, P03011. [Google Scholar]
  345. Tkačik, G. , Schneidman, E. , Berry, M. J., II , & Bialek, W. (2009). Spin glass models for a network of real neurons. arXiv 0912.5409. [Google Scholar]
  346. Tononi, G. , Sporns, O. , & Edelman, G. M. (1999). Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences of the United States of America, 96(6), 3257–3262. [DOI] [PMC free article] [PubMed] [Google Scholar]
  347. Tozzi, A. , & Papo, D. (2020). Projective mechanisms subtending real world phenomena wipe away cause effect relationships. Progress in Biophysics and Molecular Biology, 151, 1–13. [DOI] [PubMed] [Google Scholar]
  348. Triana, A. M. , Glerean, E. , Saramäki, J. , & Korhonen, O. (2020). Effects of spatial smoothing on group‐level differences in functional brain networks. Network Neuroscience, 4, 556–574. [DOI] [PMC free article] [PubMed] [Google Scholar]
  349. Tzourio‐Mazoyer, N. , Landeau, B. , Papathanassiou, D. , Crivello, F. , Etard, O. , Delcroix, N. , … Joliot, M. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single‐subject brain. NeuroImage, 15, 273–289. [DOI] [PubMed] [Google Scholar]
  350. Vakorin, V. A. , Krakovska, O. A. , & McIntosh, A. R. (2009). Confounding effects of indirect connections on causality estimation. Journal of Neuroscience Methods, 184, 152–160. [DOI] [PubMed] [Google Scholar]
  351. van den Heuvel, M. P. , de Lange, S. C. , Zalesky, A. , Seguin, C. , Yeo, B. T. , & Schmidt, R. (2017). Proportional thresholding in resting‐state fMRI functional connectivity networks and consequences for patient‐control connectome studies: Issues and recommendations. NeuroImage, 152, 437–449. [DOI] [PubMed] [Google Scholar]
  352. van den Heuvel, M. P. , Stam, C. J. , Boersma, M. , & Hulshoff Pol, H. E. (2008). Small‐world and scale‐free organization of voxel‐based resting‐state functional connectivity in the human brain. NeuroImage, 43, 528–539. [DOI] [PubMed] [Google Scholar]
  353. van Dijk, K. R. A. , Hedden, T. , Venkataraman, A. , Evans, K. C. , Lazar, S. W. , & Buckner, R. L. (2010). Intrinsic functional connectivity as a tool for human connectomics: Theory, properties, and optimization. Journal of Neurophysiology, 103, 297–321. [DOI] [PMC free article] [PubMed] [Google Scholar]
  354. van Dijk, K. R. A. , Sabuncu, M. R. , & Buckner, R. L. (2012). The influence of head motion on intrinsic functional connectivity MRI. NeuroImage, 59, 431–438. [DOI] [PMC free article] [PubMed] [Google Scholar]
  355. van Essen, D. C. , Smith, S. M. , Barch, D. M. , Behrens, T. E. J. , Yacoub, E. , Ugurbil, K. , & for the WU‐Minn HCP Consortium . (2013). The WU‐Minn Human Connectome Project: An overview. NeuroImage, 80, 62–79. [DOI] [PMC free article] [PubMed] [Google Scholar]
  356. van Veen, B. D. , van Drongelen, W. , Yuchtman, M. , & Suzuki, A. (1997). Localization of brain electrical activity via linearly constrained minimum variance spatial filtering. IEEE Transactions on Biomedical Engineering, 44, 867–880. [DOI] [PubMed] [Google Scholar]
  357. van Wijk, B. C. M. , Stam, C. J. , & Daffertshofer, A. (2010). Comparing brain networks of different size and connectivity density using graph theory. PLoS One, 5, e13701. [DOI] [PMC free article] [PubMed] [Google Scholar]
  358. Vapnik, V. (2013). The nature of statistical learning theory. New York, NY: Springer Science & Business media. [Google Scholar]
  359. Varela, F. , Lachaux, J. P. , Rodriguez, E. , & Martinerie, J. (2001). The brainweb: Phase synchronization and large‐scale integration. Nature Reviews. Neuroscience, 2, 229–239. [DOI] [PubMed] [Google Scholar]
  360. Varoquaux, G. , Baronnet, F. , Kleinschmidt, A. , Fillard, P. , & Thirion, B. (2010). Detection of brain functional‐connectivity difference in post‐stroke patients using group‐level covariance modeling. In International Conference on Medical Image Computing and Computer‐Assisted Intervention (pp. 200–208). Springer, Berlin, Heidelberg. [DOI] [PubMed]
  361. Váša, F. , Bullmore, E. T. , & Patel, A. X. (2018). Probabilistic thresholding of functional connectomes: Application to schizophrenia. NeuroImage, 172, 326–340. [DOI] [PubMed] [Google Scholar]
  362. Vecchio, F. , Miraglia, F. , Bramanti, P. , & Rossini, P. M. (2014). Human brain networks in physiological aging: A graph theoretical analysis of cortical connectivity from EEG data. Journal of Alzheimer's Disease, 41, 1239–1249. [DOI] [PubMed] [Google Scholar]
  363. Vejmelka, M. , & Paluš, M. (2008). Inferring the directionality of coupling with conditional mutual information. Physical Review E, 77, 026214. [DOI] [PubMed] [Google Scholar]
  364. Vicente, R. , Wibral, M. , Lindner, M. , & Pipa, G. (2011). Transfer entropy—A model‐free measure of effective connectivity for the neurosciences. Journal of Computational Neuroscience, 30, 45–67. [DOI] [PMC free article] [PubMed] [Google Scholar]
  365. Vidaurre, D. , Abeysuriya, R. , Becker, R. , Quinn, A. , Alfaro‐Almagro, E. , Smith, S. M. , & Woolrich, M. W. (2018). Discovering dynamic brain networks from big data in rest and task. NeuroImage, 180, 646–656. [DOI] [PMC free article] [PubMed] [Google Scholar]
  366. Vidaurre, D. , Quinn, A. J. , Baker, A. P. , Dupret, D. , Tejero‐Canero, A. , & Woolrich, M. W. (2016). Spectrally resolved fast transient brain states in electrophysiological data. NeuroImage, 126, 81–95. [DOI] [PMC free article] [PubMed] [Google Scholar]
  367. Vidaurre, D. , Smith, S. M. , & Woolrich, M. W. (2017). Brain network dynamics are hierarchically organized in time. Proceedings of the National Academy of Sciences of the United States of America, 114, 12827–12832. [DOI] [PMC free article] [PubMed] [Google Scholar]
  368. Vinck, M. , Oostenveld, R. , van Wingerden, M. , Battaglia, F. , & Pennartz, C. M. (2011). An improved index of phase‐synchronization for electrophysiological data in the presence of volume‐conduction, noise and sample‐size bias. NeuroImage, 55(4), 1548–1565. [DOI] [PubMed] [Google Scholar]
  369. Vogt, C. , & Vogt, O. (1919). Allgemeinere Ergebnisse unserer Hirnforschung (English translation: Results of our brain research in a broader context). Journal für Psychologie Und Neurologie, 25, 292–398. [Google Scholar]
  370. Vorobyov, S. , & Cichocki, A. (2002). Blind noise reduction for multisensory signals using ICA and subspace filtering, with application to EEG analysis. Biological Cybernetics, 86, 293–303. [DOI] [PubMed] [Google Scholar]
  371. Wang, J. H. , Zuo, X. N. , Gohel, S. , Milham, M. P. , Biswal, B. B. , & He, Y. (2011). Graph theoretical analysis of functional brain networks: Test‐retest evaluation on short‐ and long‐term resting‐state functional MRI data. PLoS One, 6, e21976. [DOI] [PMC free article] [PubMed] [Google Scholar]
  372. Wang, S. H. , Lobier, M. , Siebenhühner, F. , Puoliväli, T. , Palva, S. , & Palva, J. M. (2018). Hyperedge bundling: A practical solution to spurious interactions in MEG/EEG source connectivity analyses. NeuroImage, 173, 610–622. [DOI] [PubMed] [Google Scholar]
  373. Wang, X. J. , & Kennedy, H. (2016). Brain structure and dynamics across scales: In search of rules. Current Opinion in Neurobiology, 37, 92–98. [DOI] [PMC free article] [PubMed] [Google Scholar]
  374. Whitney, H. (1936). Differentiable manifolds. Annals of Mathematics, 37(3), 645–680. [Google Scholar]
  375. Wig, G. S. , Schlaggar, B. L. , & Petersen, S. E. (2011). Concepts and principles in the analysis of brain networks. Annals of the New York Academy of Sciences, 1224(1), 126–146. [DOI] [PubMed] [Google Scholar]
  376. Wig, G. S. , Laumann, T. O. , & Petersen, S. E. (2014). An approach for parcellating human cortical areas using resting‐state correlations. NeuroImage, 93, 276–291. [DOI] [PMC free article] [PubMed] [Google Scholar]
  377. Woolrich, M. W. , Baker, A. , Luckhoo, H. , Mohseni, H. , Barnes, G. , Brookes, M. , & Rezek, I. (2013). Dynamic state allocation for MEG source reconstruction. NeuroImage, 77, 77–92. [DOI] [PMC free article] [PubMed] [Google Scholar]
  378. Wu, C. W. , Chen, C.‐L. , Liu, P.‐Y. , Chao, Y.‐P. , Biswal, B. B. , & Lin, C.‐P. (2011). Empirical evaluations of slice‐timing, smoothing, and normalization effects in seed‐based, resting‐state functional magnetic resonance imaging analyses. Brain Connectivity, 1, 401–410. [DOI] [PubMed] [Google Scholar]
  379. Wu, J. , Zhang, J. , Ding, X. , Li, R. , & Zhou, C. (2013). The effects of music on brain functional networks: A network analysis. Neuroscience, 250, 49–59. [DOI] [PubMed] [Google Scholar]
  380. Yeh, F. C. , Tang, A. , Hobbs, J. P. , Hottowy, P. , Dabrowski, W. , Sher, A. , … Beggs, J. M. (2010). Maximum entropy approaches to living neural networks. Entropy, 12, 89–106. [Google Scholar]
  381. Yger, F. , Berar, M. , & Lotte, F. (2016). Riemannian approaches in brain‐computer interfaces: A review. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25, 1753–1762. [DOI] [PubMed] [Google Scholar]
  382. Yu, H. , Lei, X. , Song, Z. , Liu, C. , & Wang, J. (2019). Supervised network‐based fuzzy learning of EEG signals for Alzheimer's disease identification. IEEE Transactions on Fuzzy Systems, 28, 60–71. [Google Scholar]
  383. Yu, L. (2009). EEG de‐noising based on wavelet transformation. In 2009 3rd International Conference on Bioinformatics and Biomedical Engineering (pp. 1–4). IEEE.
  384. Yu, Q. , Du, Y. , Shen, J. , Sui, J. , Adali, T. , Pearlson, G. D. , & Calhoun, V. D. (2018). Application of graph theory to assess static and dynamic brain connectivity: Approaches for building brain graphs. Proceedings of the IEEE, 106, 886–906. [DOI] [PMC free article] [PubMed] [Google Scholar]
  385. Zalesky, A. , Fornito, A. , & Bullmore, E. T. (2010). Network‐based statistic: Identifying differences in brain networks. NeuroImage, 53, 1197–1207. [DOI] [PubMed] [Google Scholar]
  386. Zalesky, A. , Fornito, A. , Cocchi, L. , Gollo, L. L. , & Breakspear, M. (2014). Time‐resolved resting‐state brain networks. Proceedings of the National Academy of Sciences of the United States of America, 111, 10341–10346. [DOI] [PMC free article] [PubMed] [Google Scholar]
  387. Zalesky, A. , Fornito, A. , Harding, I. H. , Cocchi, L. , Yücel, M. , Pantelis, C. , & Bullmore, E. T. (2010). Whole‐brain anatomical networks: Does the choice of nodes matter? NeuroImage, 50, 970–983. [DOI] [PubMed] [Google Scholar]
  388. Zanin, M. , Alcazar, J. M. , Carbajosa, J. V. , Paez, M. G. , Papo, D. , Sousa, P. , … Boccaletti, S. (2014). Parenclitic networks: Uncovering new functions in biological data. Scientific Reports, 4, 5112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  389. Zanin, M. , Belkoura, S. , Gomez, J. , Alfaro, C. , & Cano, J. (2018). Topological structures are consistently overestimated in functional complex networks. Scientific Reports, 8, 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  390. Zanin, M. , Ivanoska, I. , Güntekin, B. , Yener, G. , Loncar‐Turukalo, T. , Jakovljevic, N. , … Papo, D. (2021). A fast transform for brain connectivity difference evaluation. Neuroinformatics. Advance online publication. 10.1007/s12021-021-09518-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  391. Zanin, M. , Papo, D. , Sousa, P. A. , Menasalvas, E. , Nicchi, A. , Kubik, E. , & Boccaletti, S. (2016). Combining complex networks and data mining: Why and how. Physics Reports, 635, 1–44. [Google Scholar]
  392. Zanin, M. , Pereda, E. , Bajo, R. , Menasalvas, E. , Sousa, P. , & Papo, D. (2020). How does the forest's look depend on what trees you plant? Connectivity metrics reveal different aspects of functional brain organization. (In preparation).
  393. Zanin, M. , Sousa, P. , Papo, D. , Bajo, R. , García‐Prieto, J. , del Pozo, F. , & Boccaletti, S. (2012). Optimizing functional network representation of multivariate time series. Scientific Reports, 2, 630. [DOI] [PMC free article] [PubMed] [Google Scholar]
  394. Zaslavsky, G. M. (2002). Chaos, fractional kinetics, and anomalous transport. Physics Reports, 371, 461–580. [Google Scholar]
  395. Zhou, Z. , Ding, M. , Chen, Y. , Wright, P. , Lu, Z. , & Liu, Y. (2009). Detecting directional influence in fMRI connectivity analysis using PCA based Granger causality. Brain Research, 1289, 22–29. [DOI] [PMC free article] [PubMed] [Google Scholar]
  396. Zilles, K. , & Amunts, K. (2010). Centenary of Brodmann's map—Conception and fate. Nature Reviews. Neuroscience, 11, 139–145. [DOI] [PubMed] [Google Scholar]
  397. Zomorodian, A. J. (2005). Topology for computing (Vol. 16). Cambridge, England: Cambridge University Press. [Google Scholar]


Data Availability Statement

This is a Review article; no data are associated with it.

