PLOS Computational Biology. 2021 Jan 28;17(1):e1008310. doi: 10.1371/journal.pcbi.1008310

Graph neural fields: A framework for spatiotemporal dynamical models on the human connectome

Marco Aqil 1,¤,*, Selen Atasoy 2,3, Morten L Kringelbach 2,3, Rikkert Hindriks 1
Editor: Daniele Marinazzo
PMCID: PMC7872285  PMID: 33507899

Abstract

Tools from the field of graph signal processing, in particular the graph Laplacian operator, have recently been successfully applied to the investigation of structure-function relationships in the human brain. The eigenvectors of the human connectome graph Laplacian, dubbed “connectome harmonics”, have been shown to relate to the functionally relevant resting-state networks. Whole-brain modelling of brain activity combines structural connectivity with local dynamical models to provide insight into the large-scale functional organization of the human brain. In this study, we employ the graph Laplacian and its properties to define and implement a large class of neural activity models directly on the human connectome. These models, consisting of systems of stochastic integrodifferential equations on graphs, are dubbed graph neural fields, in analogy with the well-established continuous neural fields. We obtain analytic predictions for harmonic and temporal power spectra, as well as functional connectivity and coherence matrices, of graph neural fields, with a technique dubbed CHAOSS (shorthand for Connectome-Harmonic Analysis Of Spatiotemporal Spectra). Combining graph neural fields with appropriate observation models allows for estimating model parameters from experimental data as obtained from electroencephalography (EEG), magnetoencephalography (MEG), or functional magnetic resonance imaging (fMRI). As an example application, we study a stochastic Wilson-Cowan graph neural field model on a high-resolution connectome graph constructed from diffusion tensor imaging (DTI) and structural MRI data. We show that the model equilibrium fluctuations can reproduce the empirically observed harmonic power spectrum of resting-state fMRI data, and predict its functional connectivity, with a high level of detail. 
Graph neural fields natively allow the inclusion of important features of cortical anatomy and fast computations of observable quantities for comparison with multimodal empirical data. They thus appear particularly suitable for modelling whole-brain activity at mesoscopic scales, and opening new potential avenues for connectome-graph-based investigations of structure-function relationships.

Author summary

The human brain can be seen as an interconnected network of many thousands of neuronal “populations”; in turn, each population contains thousands of neurons, and each is connected both to its neighbors on the cortex and, crucially, to distant populations through long-range white matter fibers. This extremely complex network, unique to each of us, is known as the “human connectome graph”. In this work, we develop a novel approach to investigate how the neural activity that is necessary for our life and experience of the world arises from an individual human connectome graph. For the first time, we implement a mathematical model of neuronal activity directly on a high-resolution connectome graph, and show that it can reproduce the spatial patterns of activity observed in the real brain with magnetic resonance imaging. This new kind of model, made of equations implemented directly on connectome graphs, could help us better understand how brain function is shaped by computational principles and anatomy, but also how it is affected by pathology and lesions.

Introduction

The spatiotemporal dynamics of human resting-state brain activity is organized in functionally relevant ways, with perhaps the best-known example being the “resting-state networks” [1]. How the repertoire of resting-state brain activity arises from the underlying anatomical structure, i.e. the connectome, is a highly non-trivial question: it has been shown that structural connections imply functional ones, but that the converse is not necessarily true [2]; furthermore, specific discordant attributes of structural and functional connectivity have been found by network analyses [3, 4]. Research on structure-function questions can be broadly divided into data-driven (analysis), theory-driven (modelling), and combinations thereof. In this work, we combine techniques from graph signal processing (analysis) and neural field equations (modelling) to outline a promising new approach for the investigation of whole-brain structure-function relationships.

A recent trend of particular interest in neuroimaging data analysis is the application of methods from the field of graph signal processing [5–10]. In these applications, anatomical information obtained from DTI and structural MRI is used to construct the connectome graph [11], and combined with functional imaging data such as BOLD-fMRI or EEG/MEG to investigate structure-function relationships in the human brain (see [12, 13] for reviews). The workhorse of graph signal processing analysis is the graph Laplacian operator, or simply graph Laplacian. Originally formulated as the graph-equivalent of the Laplace-Beltrami operator for Riemannian manifolds [14, 15], the graph Laplacian is now established as a valuable tool in its own right [12]. The eigenvectors of the graph Laplacian provide a generalization of the Fourier transform to graphs, and therefore also a complete orthogonal basis for functions on the graph. In the context of the human connectome graph, the eigenvectors of the graph Laplacian are referred to as connectome harmonics, by analogy with the harmonic eigenfunctions of the Laplace-Beltrami operator. Of relevance to the current work, several connectome harmonics have been shown to be related to specific resting-state networks [11]. More recent studies have provided additional evidence for this claim [16, 17], and others used a similar approach to explain how distinct electrophysiological resting-state networks emerge from the structural connectome graph [18]. Furthermore, in [11], for the first time to the best of our knowledge, a model of neural activity making use of the graph Laplacian was implemented, and used to suggest Excitatory-Inhibitory dynamics as a possible underlying mechanism for the self-organization of resting-state activity patterns. In other very recent work [19, 20], techniques based on the graph Laplacian were employed to model EEG and MEG oscillations.
Considering these developments, the combination of neural activity modelling and graph signal processing techniques appears as a promising direction for further inquiry.

Whole-brain models are models of neural activity that are defined on the entire cortex and possibly on subcortical structures. This is generally achieved either by parcellating the cortex into a network of a few dozen macroscopic, coupled regions of interest (ROIs), or by approximating the cortex as a two-dimensional manifold, and studying continuous integrodifferential equations in a flat or spherical geometry (see [21] for a review). In this study, relying on graph signal processing methods such as the graph Laplacian and graph filtering [7, 9], we show how to define and implement a large class of whole-brain models of neural activity on arbitrary metric graphs (that is, graphs equipped with a distance metric), and in particular on an unparcellated, mesoscopic-resolution human connectome. These models consist of systems of stochastic integrodifferential equations on graphs, and we refer to them as graph neural fields by analogy with their continuous counterparts. We obtain analytic expressions for harmonic and temporal power spectra, as well as functional connectivity and coherence matrices of graph neural fields, with a technique dubbed CHAOSS (shorthand for Connectome-Harmonic Analysis Of Spatiotemporal Spectra). When combined with appropriate observation models, graph neural fields can be fitted to and compared with functional data obtained from different imaging modalities such as EEG/MEG, fMRI, and positron emission tomography (PET). Graph neural fields can take into account many physical properties of the cortex, and provide a computationally efficient and versatile modelling framework that is tailored for connectome-graph-based structure-function investigations, particularly suitable for modelling whole-brain activity on mesoscopic scales.
Graph neural fields have immediate applications in investigating how individual anatomy, pathology, lesions, and neuropharmacological alterations relate to functional brain activity; furthermore, they provide a model-based approach to test novel graph signal processing neuroimaging hypotheses. While here we focus on the human connectome as a prime application for graph neural fields, the mathematical framework can also be used to implement and analyze single-neuron models directly on the connectome graphs of simple organisms, such as C. elegans, whose full neuronal pathways have been experimentally mapped [22].

In Results, we implement, analyze, and numerically simulate a stochastic Wilson-Cowan graph neural field model, first on a one-dimensional graph with 1000 vertices, and then on a single-subject multimodal connectome consisting of approximately 18000 cortical vertices and 15000 white matter tracts. The simplified context of a one-dimensional graph is useful to illustrate the effect of graph properties, such as distance weighting and non-local edges, on model dynamics; moving on to a real-world application, we show that the model implemented on the full connectome can reproduce the experimentally observed harmonic power spectrum of resting-state fMRI data, and predict the fMRI functional connectivity matrix with a high level of detail. In Methods, we describe the general framework of graph neural fields, and show how to derive analytic expressions for harmonic and temporal power spectra, as well as coherence and functional connectivity matrices (CHAOSS). Methodological generalizations, full linear stability analysis of the Wilson-Cowan graph neural field model, and an implementation of the damped-wave equation on the human connectome graph, are provided in S1–S4 Appendices.

Results

Stochastic Wilson-Cowan equations on graphs

The Wilson-Cowan model [23] is a widely used and successful model of cortical dynamics. In this section we show how to use the framework of graph neural fields to implement the stochastic Wilson-Cowan equations on an arbitrary graph equipped with a distance metric, and how to compute spatiotemporal observables (CHAOSS). We then illustrate the effects of distance-weighting and non-local graph edges on model dynamics in the simplified context of a one-dimensional graph, before moving on to a real-world application with fMRI data.

In continuous space, the stochastic Wilson-Cowan model is described by the following system of integrodifferential equations:

τE ∂E/∂t = −dE E + S[αEE (KEE ⊗ E) − αIE (KIE ⊗ I) + P] + σ ξE,  (1)
τI ∂I/∂t = −dI I + S[αEI (KEI ⊗ E) − αII (KII ⊗ I) + Q] + σ ξI,  (2)

where ⊗ denotes a convolution integral, and we have omitted for brevity the spatiotemporal dependency of E(x, t), I(x, t), ξE(x, t) and ξI(x, t). This model posits the existence of two neuronal populations (Excitatory and Inhibitory) at each location in space. The fractions of active neurons in each population (E, I) evolve according to a spontaneous decay with rate dE and dI, a sigmoid-mediated activation term containing the four combinations of population interactions (E-E, I-E, E-I, I-I) as well as the subcortical input terms P and Q, stochastic noise realizations ξE and ξI of intensity σ, and with the timescale parameters τE and τI. The propagation of activity and interaction among neuronal populations is modeled by spatial convolution integrals with four, potentially different, kernels (KEE, KIE, KEI, KII). For arbitrary spatially symmetric kernels, convolution integrals can be formulated on graphs as linear matrix-vector products (Eq (28)). Table 1 summarizes the meaning of symbols in the Wilson-Cowan equations.

Table 1. Meaning of symbols in the Wilson-Cowan equations.

Symbol Meaning
E Fraction of active Excitatory neurons in local populations.
I Fraction of active Inhibitory neurons in local populations.
τE, τI Excitatory/Inhibitory timescale parameters.
dE, dI Excitatory/Inhibitory spontaneous activity decay rates.
S[x] Sigmoid function 1/(1 + e−x).
αEE, αIE, αEI, αII Strength of connectivity between pairs of neuronal populations.
KEE, KIE, KEI, KII Convolution kernels in continuous space, corresponding filters on graphs.
σEE, σIE, σEI, σII Standard deviation of Gaussian kernels/filters.
P Subcortical input to Excitatory populations.
Q Subcortical input to Inhibitory populations.
σ Noise amplitude.
ξE, ξI Noise realizations.

The Wilson-Cowan equations model the spatiotemporal dynamics of interactions among Excitatory and Inhibitory neuronal populations. Note that each of the four possible pairs of population interactions is described by a distinct kernel/filter. Here, we use four Gaussian kernels of different sizes.

Thus, the stochastic Wilson-Cowan graph neural field model can be formulated as:

τE dE/dt = −dE E + S[αEE KEE E − αIE KIE I + P] + σ ξE,  (3)
τI dI/dt = −dI I + S[αEI KEI E − αII KII I + Q] + σ ξI,  (4)

where E, I, ξE and ξI are functions on the graph, i.e. vectors of size n, where n is the number of vertices in the graph. The convolution integrals are implemented via the graph-filters K**, which are matrices of size (n, n). In particular, for the case of Gaussian kernels, the filters are given by (Table 2):

KEE = U e^(σEE² Λ/2) Uᵀ,   KIE = U e^(σIE² Λ/2) Uᵀ,  (5)
KEI = U e^(σEI² Λ/2) Uᵀ,   KII = U e^(σII² Λ/2) Uᵀ,  (6)

where Δ = UΛUᵀ denotes the distance-weighted graph Laplacian and its diagonalization (Eqs (22 and 23)). Note that each kernel has a different size parameter σ**, effectively allowing different spatial ranges for Excitatory and Inhibitory interactions, without requiring a Mexican-hat kernel. Importantly, the inclusion of a stochastic noise term in the model formulation allows for characterization of resting-state activity as noise-induced fluctuations about a stable steady-state (E*, I*) [24].
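The graph filters of Eqs (5) and (6) can be assembled directly from the Laplacian eigendecomposition. The sketch below is illustrative only: it assumes the common positive-semidefinite Laplacian convention (eigenvalues ≥ 0), so the Gaussian exponent carries an explicit minus sign, and it uses a small path-graph Laplacian as a stand-in for a distance-weighted connectome Laplacian.

```python
import numpy as np

def gaussian_graph_filter(L, sigma):
    """Build K = U exp(-sigma^2 * Lambda / 2) U^T from a graph Laplacian.

    Assumes the positive-semidefinite Laplacian convention (lambda >= 0),
    so the minus sign here plays the role of the negative eigenvalues
    implicit in Eqs (5) and (6).
    """
    lam, U = np.linalg.eigh(L)  # Lambda (eigenvalues), U (orthonormal eigenvectors)
    return U @ np.diag(np.exp(-sigma**2 * lam / 2)) @ U.T

# Example: Laplacian of a small path graph (stand-in for a connectome Laplacian)
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
K_EE = gaussian_graph_filter(L, sigma=1.0)

# The filter is symmetric and leaves constant signals unchanged,
# since the constant vector is the lambda = 0 eigenvector.
```

Applying `K_EE` to a graph signal is then a single matrix-vector product, which is how the convolution terms of Eqs (3) and (4) are evaluated in practice.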

Table 2. Spatial convolution kernels in Euclidean, Fourier, and graph domains.

Kernel  Euclidean domain  Fourier domain  Graph domain K̂g
Gaussian  e^(−x²/2σ²)  e^(−σ²k²/2)  e^(σ²λk/2)
Exponential  e^(−α|x|)  1/(α² + k²)  1/(α² − λk)
Mexican hat  (1 − (x/σ)²) e^(−x²/2σ²)  k² e^(−σ²k²/2)  −λk e^(σ²λk/2)
Rectangular  rect(ax)  sinc(k/(2πa))  sinc(√(−λk)/(2πa))
Triangular  tri(ax)  sinc²(k/(2πa))  sinc²(√(−λk)/(2πa))

This table provides examples of commonly used continuous convolution kernels and their graph-domain equivalents. In short, substituting k² with −λk in the Fourier transform of a continuous kernel provides its graph-domain translation. The choice of kernel and the values of kernel parameters (for example the size σ of a Gaussian kernel) have a significant influence on model dynamics. Normalization factors are omitted. The function sinc is defined as sinc(x) = sin(πx)/(πx).

Wilson-Cowan model CHAOSS

Having defined the Wilson-Cowan graph neural field equations, we wish to apply the Connectome-Harmonic Analysis Of Spatiotemporal Spectra to characterize the dynamics of resting-state fluctuations in neural activity. CHAOSS predictions, combined with a suitable observation model, can then be compared with empirical neuroimaging data, for example EEG, MEG, or fMRI. To do this, we obtain the linearized Wilson-Cowan equations for the evolution of a perturbation about a steady state:

τE dE/dt = −dE E + a αEE KEE E − a αIE KIE I + σ ξE,  (7)
τI dI/dt = −dI I + b αEI KEI E − b αII KII I + σ ξI,  (8)

where the scalar, steady-state-dependent parameters a and b are:

a = dE E* (1 − dE E*),   b = dI I* (1 − dI I*).  (9)

Derivation of the linearized equations and their full linear stability analysis can be found in S4 Appendix. In the graph Fourier domain, Eqs (7 and 8) are diagonalized and can be recast in the standard form:

dûk/dt = Jk ûk + B ξ̂k,  (10)

where the vector u is the concatenation of the population activities E and I on the graph, and ξ is the concatenation of the noise realizations ξE and ξI. The hat notation û indicates the graph Fourier transform, and k = 1, …, n indexes the graph Laplacian eigenmodes. For the Wilson-Cowan model with Gaussian kernels, the Jacobian of the kth eigenmode is:

Jk = [ (−dE + a αEE e^(σEE² λk/2))/τE    −(a αIE e^(σIE² λk/2))/τE
       (b αEI e^(σEI² λk/2))/τI          (−dI − b αII e^(σII² λk/2))/τI ],  (11)

where λk is the kth graph Laplacian eigenvalue, and:

B = [ σ²/τE²    0
      0         σ²/τI² ].  (12)

In terms of the elements of the matrices Jk and B, the two-dimensional harmonic-temporal power spectrum of the Excitatory neuronal population activity is (Eq (41)):

[Sk(ω)]E = ( [B]00 ([Jk]11² + ω²) + [Jk]01² [B]11 ) / ( ([Jk]00[Jk]11 − [Jk]01[Jk]10 − ω²)² + ω² ([Jk]00 + [Jk]11)² ).  (13)

The two-digit numerical subscripts refer to the row and column of the respective matrix element. Eq (13) describes the power of Excitatory activity, in the kth eigenmode, at temporal frequency ω. It can be used to compute the separate harmonic and temporal power spectra, as well as the functional connectivity and coherence matrices of the model. Equivalent formulas for the Inhibitory population can also be derived.

By integrating [Sk(ω)]E over all temporal frequencies, an explicit expression for the harmonic power spectrum of Excitatory activity can be obtained:

HE(k) = ( [B]00 ([Jk]00[Jk]11 − [Jk]01[Jk]10) + [Jk]11² [B]00 + [Jk]01² [B]11 ) / ( 2 ([Jk]01[Jk]10 − [Jk]00[Jk]11) ([Jk]00 + [Jk]11) ).  (14)

Similarly, the temporal power spectrum can be obtained by summing [Sk(ω)]E over all harmonic eigenmodes (Eq (42)). Eqs (13) and (14) represent a general result that applies beyond the Wilson-Cowan model: they describe the power spectra of stochastic equilibrium fluctuations for the first population of any graph neural field model with two interacting populations and a first-order temporal differential operator. The specific shape of the power spectra will depend on the model formulation and on its parameters.
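Eqs (13) and (14) are straightforward to evaluate mode by mode. The sketch below uses an illustrative stable Jacobian and noise matrix (not fitted values from the paper) and, as a consistency check, compares Eq (14) against a numerical ω-integral of Eq (13), assuming a 1/2π normalization convention for the spectrum.

```python
import numpy as np

def excitatory_spectrum(J, B, omega):
    """[S_k(omega)]_E of Eq (13), for one eigenmode's 2x2 Jacobian J
    and noise covariance matrix B (omega may be an array)."""
    num = B[0, 0] * (J[1, 1]**2 + omega**2) + J[0, 1]**2 * B[1, 1]
    den = (J[0, 0]*J[1, 1] - J[0, 1]*J[1, 0] - omega**2)**2 \
        + omega**2 * (J[0, 0] + J[1, 1])**2
    return num / den

def harmonic_power(J, B):
    """H_E(k) of Eq (14): the omega-integral of Eq (13) in closed form."""
    num = B[0, 0] * (J[0, 0]*J[1, 1] - J[0, 1]*J[1, 0]) \
        + J[1, 1]**2 * B[0, 0] + J[0, 1]**2 * B[1, 1]
    den = 2 * (J[0, 1]*J[1, 0] - J[0, 0]*J[1, 1]) * (J[0, 0] + J[1, 1])
    return num / den

# Illustrative (not fitted) stable Jacobian and noise matrix for one mode
J = np.array([[-2.0, 1.0], [0.5, -3.0]])
B = np.diag([1.0, 0.5])

# Numerical check: integrating Eq (13) over omega (with a 1/2pi factor)
# should approximate the closed form of Eq (14).
omega = np.linspace(-2000.0, 2000.0, 400001)
domega = omega[1] - omega[0]
numeric = excitatory_spectrum(J, B, omega).sum() * domega / (2 * np.pi)
```

For a stable Jacobian (negative real parts of both eigenvalues), `harmonic_power` is the stationary variance of the mode's Excitatory component, which is why the ω-integral recovers it.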

Effects of distance-weighting and non-local connectivity

Distance-weighting of graph edges, presence of non-local connectivity, and changes in model parameter values can have significant effects on the dynamics of graph neural fields. To demonstrate this, we implement the stochastic Wilson-Cowan model in the simplified context of a one-dimensional graph with 1000 vertices. Numerical simulations were carried out with a time-step δt = 5 ⋅ 10−5 seconds, for a total time of 20 seconds of simulated activity (4 ⋅ 10⁵ time-steps). The parameter set for the results shown in Figs 1–3 is reported in S1 Table. In Fig 4, the value of σIE is increased by a factor of 20, with everything else unchanged, as an illustrative example of the influence of kernel parameters on model dynamics.
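A minimal simulation of Eqs (3) and (4) can be sketched with Euler-Maruyama time-stepping. Note that the ring graph, the shared Gaussian filter, and all parameter values below are illustrative placeholders chosen only for a stable demo; they are not the S1 Table set used for Figs 1–3.

```python
import numpy as np

def simulate_wilson_cowan(K_EE, K_IE, K_EI, K_II, n_steps, dt=5e-5, seed=0):
    """Euler-Maruyama integration of the stochastic Wilson-Cowan graph
    neural field, Eqs (3)-(4). All parameter values are illustrative
    placeholders, not the S1 Table set."""
    rng = np.random.default_rng(seed)
    n = K_EE.shape[0]
    tau_E, tau_I, d_E, d_I = 0.01, 0.01, 1.0, 1.0        # hypothetical values
    a_EE, a_IE, a_EI, a_II = 1.5, 1.0, 1.0, 0.5
    P, Q, sigma = 0.5, 0.2, 0.01
    S = lambda x: 1.0 / (1.0 + np.exp(-x))               # sigmoid
    E = np.zeros(n)
    I = np.zeros(n)
    for _ in range(n_steps):
        drift_E = (-d_E * E + S(a_EE * (K_EE @ E) - a_IE * (K_IE @ I) + P)) / tau_E
        drift_I = (-d_I * I + S(a_EI * (K_EI @ E) - a_II * (K_II @ I) + Q)) / tau_I
        # deterministic drift plus white-noise increment scaled by sqrt(dt)
        E = E + dt * drift_E + (sigma / tau_E) * np.sqrt(dt) * rng.standard_normal(n)
        I = I + dt * drift_I + (sigma / tau_I) * np.sqrt(dt) * rng.standard_normal(n)
    return E, I

# Ring-graph Laplacian and one shared Gaussian filter, for a minimal demo
n = 50
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
K = U @ np.diag(np.exp(-0.5 * lam)) @ U.T   # sigma = 1 Gaussian graph filter
E, I = simulate_wilson_cowan(K, K, K, K, n_steps=2000)
```

Longer runs of such simulated activity are what the red (numerical) curves in Figs 1–4 are computed from, there on the 1000-vertex line graph with the paper's actual parameters.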

Fig 1. Effects of distance-weighting on graph neural field dynamics.

Fig 1

(A) The harmonic and (B) temporal power spectra of Excitatory activity equilibrium fluctuations in the one-dimensional graph for vertex spacing h = 10−4 m. A larger vertex spacing, for example h = 2 ⋅ 10−4 m, renders the steady state unstable. The dashed black lines correspond to the theoretical prediction and the red lines are obtained through numerical simulations. (C) The Excitatory activity functional connectivity obtained by analytic predictions and numerical simulations.

Fig 3. Abortion of pathological oscillations by non-local connectivity.

Fig 3

(A) The harmonic and (B) temporal power spectra of Excitatory activity equilibrium fluctuations in the one-dimensional graph for vertex spacing h = 2 ⋅ 10−4 m, after the addition of a non-local edge between vertices 250 and 750. Without the addition, the model dynamics is placed in an unstable limit-cycle regime. The dashed black lines correspond to the theoretical prediction and the red lines are obtained through numerical simulations. (C) The Excitatory activity functional connectivity obtained by analytic predictions and numerical simulations.

Fig 4. Emergence of multiple temporal power peaks by long-range inhibition.

Fig 4

(A) The harmonic and (B) temporal power spectra of Excitatory activity equilibrium fluctuations in the one-dimensional graph, with the size of the Gaussian kernel controlling Inhibitory to Excitatory interactions σIE increased by a factor of 20, and everything else unchanged with respect to Fig 3. Allowing Inhibitory activity to exert its influence over larger distances here leads to the emergence of multiple peaks in the temporal power spectrum of Excitatory activity. The dashed black lines correspond to the theoretical prediction and the red lines are obtained through numerical simulations. (C) The Excitatory activity functional connectivity obtained by analytic predictions and numerical simulations.

To show the effects of distance-weighting in graph neural fields, we note how, for the parameter set of S1 Table, increasing the distance between vertices leads to the emergence of an oscillatory resonance that eventually destabilizes the steady state and gives way to limit-cycle activity. Keeping the number of vertices constant, increasing the vertex spacing h alters the stability of the steady state from broadband activity (h = 10−5 m), to oscillatory resonance (h = 10−4 m), to oscillatory instability (h = 2 ⋅ 10−4 m). This result demonstrates that the dynamics of graph neural fields depend on the metric properties of the graph, and hence indicates the necessity of employing the distance-weighted graph Laplacian for graph neural field modelling. The combinatorial (binary) graph Laplacian captures the topology, but not the geometry, of the graph, and in this sense does not take into account the physical properties of the cortex. The harmonic and temporal power spectra, as well as the functional connectivity matrix, are shown in Fig 1 for the case with h = 10−4 m.

The presence of fast, long-range connectivity can impact the power spectrum and functional interactions of equilibrium fluctuations, as well as the stability of steady states. To illustrate this, we add a single non-local edge between vertices 250 and 750 to the one-dimensional graph with h = 10−4 m. The Euclidean distance between these two vertices is 500 ⋅ h = 5 ⋅ 10−2 m = 5 cm. In the healthy brain, myelination allows activity propagation along white-matter fibers to take place at speeds ∼200 times greater than local surface propagation [25]. To model myelination, we set the length of the non-local edge to be the Euclidean distance between the vertices, divided by a factor of 200 (similarly to the construction of the human connectome graph Laplacian, where the length of white-matter edges is set to their 3D path-length distance along DTI fibers, divided by a factor of 200). The effective length of the non-local edge is therefore 2.5 ⋅ 10−4 m. Fig 2A and 2B show the effects of the non-local edge on the harmonic and temporal power spectra of the equilibrium fluctuations. The most pronounced effect is damping of the oscillatory resonance in the temporal power spectrum, thus rendering the fluctuations more stable. Furthermore, the edge leads to a discernible alteration in the functional connectivity (Fig 2C).
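The construction above can be sketched as follows. This is one plausible convention, assuming inverse-squared-length edge weights (so that, on a line graph, the Laplacian approximates a continuous second derivative); the paper's exact weighting scheme (Eqs 22 and 23) is not reproduced here.

```python
import numpy as np

def weighted_laplacian(n, h, nonlocal_edges=(), speed_factor=200.0):
    """Distance-weighted Laplacian of a line graph with vertex spacing h.

    Edge weights 1/length^2 are one plausible convention (making the 1D
    Laplacian approximate a second derivative), not necessarily the
    paper's Eqs (22)-(23). Non-local edges get an effective length of
    (Euclidean distance) / speed_factor, mimicking fast myelinated fibers.
    """
    W = np.zeros((n, n))
    for i in range(n - 1):                      # local edges, length h
        W[i, i + 1] = W[i + 1, i] = 1.0 / h**2
    for i, j in nonlocal_edges:                 # fast long-range edges
        d_eff = abs(i - j) * h / speed_factor
        W[i, j] = W[j, i] = 1.0 / d_eff**2
    return np.diag(W.sum(axis=1)) - W

# Line graph with h = 1e-4 m and one shortcut between vertices 250 and 750:
# effective length = 5e-2 m / 200 = 2.5e-4 m, as in the text.
L = weighted_laplacian(1000, 1e-4, nonlocal_edges=[(250, 750)])
```

The single shortcut changes only two off-diagonal entries (and the two corresponding diagonal entries) of the Laplacian, yet, as Figs 2 and 3 show, it visibly reshapes the spectra and functional connectivity.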

Fig 2. Suppression of oscillatory resonance by non-local connectivity.

Fig 2

(A) The harmonic and (B) temporal power spectra of Excitatory activity equilibrium fluctuations in the one-dimensional graph for vertex spacing h = 10−4 m, after the addition of a non-local edge between vertices 250 and 750. The dashed black lines correspond to the theoretical prediction and the red lines are obtained through numerical simulations. (C) The Excitatory activity functional connectivity obtained by analytic predictions and numerical simulations. Compare with Fig 1 to note the visible suppression of oscillatory resonance in the temporal power spectrum, and the change in functional connectivity engendered by a single non-local edge.

Interestingly, when the model operates in the pathological, i.e. unstable, regime (h = 2 ⋅ 10−4 m), the addition of a single non-local edge stabilizes the steady state, thus leading to healthy equilibrium fluctuations (Fig 3B). The non-local edge also creates a large increase in the functional connectivity between the vertices involved, and a change in the pattern in neighboring vertices (Fig 3C). As noted above, these effects of long-range connectivity are observed only if the effective length of the non-local edge is small enough for non-local activity propagation to interact with local activity propagation. For these one-dimensional simulations, this happens if the speed factor is larger than ∼50.

In Fig 4, we show an illustrative example of how kernel parameters can lead to significant alterations in observable model dynamics. Increasing the size of the Gaussian kernel controlling the Inhibitory to Excitatory interaction (σIE) by a factor of 20 leads to the emergence of multiple peaks in the temporal power spectrum of the model. Increasing the value further, for example by a factor of 30, renders the steady state unstable. All other parameters, presence of non-local edge, and distance-weighting were left unchanged with respect to Fig 3.
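Whether a given parameter set yields stable equilibrium fluctuations or an instability of the kind described above can be checked mode by mode from the Jacobian of Eq (11): the steady state is linearly stable when every eigenvalue of every Jk has negative real part. The sketch below uses hypothetical parameter values and an illustrative Laplacian spectrum, not the S1 Table configuration.

```python
import numpy as np

def jacobian(lam_k, p):
    """2x2 Jacobian of Eq (11) for one eigenmode with Laplacian eigenvalue
    lam_k (negative-semidefinite convention). The parameter dictionary `p`
    holds illustrative placeholder values, not the S1 Table set."""
    g = lambda s: np.exp(s**2 * lam_k / 2)      # Gaussian filter gain per mode
    return np.array([
        [(-p['dE'] + p['a'] * p['aEE'] * g(p['sEE'])) / p['tauE'],
         -p['a'] * p['aIE'] * g(p['sIE']) / p['tauE']],
        [p['b'] * p['aEI'] * g(p['sEI']) / p['tauI'],
         (-p['dI'] - p['b'] * p['aII'] * g(p['sII'])) / p['tauI']],
    ])

def max_growth_rate(lams, p):
    """Largest real part of any Jacobian eigenvalue across all modes;
    a positive value signals an unstable steady state."""
    return max(np.linalg.eigvals(jacobian(l, p)).real.max() for l in lams)

# Hypothetical parameters and an illustrative Laplacian spectrum (lambda <= 0)
p = dict(tauE=0.01, tauI=0.01, dE=1.0, dI=1.0, a=0.2, b=0.2,
         aEE=2.0, aIE=2.0, aEI=2.0, aII=1.0,
         sEE=0.01, sIE=0.01, sEI=0.01, sII=0.01)
lams = -np.linspace(0.0, 1000.0, 200)
rate = max_growth_rate(lams, p)                 # negative here: stable regime
```

Scanning such a growth rate while varying a single parameter (for example σIE, as in Fig 4, or the vertex spacing h) locates the transition from stable fluctuations to limit-cycle activity; the full treatment is given in S4 Appendix.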

Application to resting-state fMRI

To illustrate the applicability of graph neural fields, we study the stochastic Wilson-Cowan graph neural field on a single-subject multimodal connectome, and investigate whether the model can capture empirical observables of resting-state fMRI. The connectome is of mesoscopic resolution, comprising approximately 18000 cortical surface vertices (MRI) and 15000 white matter tracts (DTI). See Methods (connectome construction) for details on the construction of the weighted connectome graph Laplacian.

Graph neural fields on the human connectome reproduce the harmonic power spectrum of resting-state fMRI

First, we obtain the harmonic power spectrum of resting-state fMRI, according to its definition, as the temporal mean of the squared graph Fourier transform of the fMRI timecourses. Note that the estimation of the fMRI harmonic power spectrum does not use a single timepoint, but the entire available timecourse. To regularize the empirical spectrum, we compute its log-log binned median with 300 bins, following [26]; eigenmodes above k = 15000 contain artifacts due to reaching the limits of fMRI spatial resolution, and are thus removed. We optimize the model parameters with a basinhopping procedure [27], aiming to minimize the residual difference between empirical and theoretical harmonic power spectra. The parameter set producing the best-fit harmonic power spectrum is reported in S2 Table. In the fitting, we allow for a linear rescaling as a simple observation model connecting the theoretical and empirical spectra:

HfMRI(k) = β HE(k),  (15)

where HfMRI(k) is the harmonic power spectrum of the fMRI data, HE(k) is the analytically predicted harmonic power spectrum of Excitatory neural activity (Eq (14)), and β is a linear rescaling parameter. To verify the accuracy of the analytic prediction, we carry out numerical simulations of the model Eqs (7 and 8) on the connectome, with a time-step value δt = 10−4 seconds, and an observation time of 10⁶ time-steps, corresponding to 100 seconds of simulated activity. S5 Fig shows snapshots of the simulated model and of resting-state fMRI, at different times. Fig 5 shows the harmonic power spectra of fMRI data and of the stochastic Wilson-Cowan graph neural field model, with the parameter set of S2 Table. The model is clearly able to reproduce the fMRI harmonic power spectrum, showing excellent agreement between analytically predicted, numerically simulated, and empirically observed harmonic power spectra. Previous studies have shown that the harmonic power spectrum of resting-state fMRI can be used to differentiate between a placebo condition and the altered state of consciousness induced by a serotonergic hallucinogen, lysergic acid diethylamide (LSD) [26]. LSD is known to have profound effects on perception and cognition; furthermore, together with other psychedelic compounds, it is currently under investigation in the treatment of several psychiatric conditions [28–30]. Thus, the ability to reproduce the harmonic power spectrum of resting-state fMRI shows that graph neural fields are capable of capturing measures of neural dynamics relevant for brain function and clinical applications.
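The empirical harmonic power spectrum defined above (temporal mean of the squared graph Fourier transform) and its log-log binned median can be sketched as follows. The helper names `harmonic_power_spectrum` and `loglog_binned_median` are hypothetical, and the random orthonormal basis and white-noise timecourses below merely stand in for the connectome harmonics and the fMRI data.

```python
import numpy as np

def harmonic_power_spectrum(X, U):
    """Empirical harmonic power spectrum: temporal mean of the squared
    graph Fourier transform. X is (n_vertices, n_timepoints); U holds
    the Laplacian eigenvectors (connectome harmonics) in its columns."""
    X_hat = U.T @ X                        # graph Fourier transform per timepoint
    return (X_hat**2).mean(axis=1)         # average power in each eigenmode

def loglog_binned_median(H, n_bins=300):
    """Median of H within log-spaced eigenmode-index bins (cf. [26])."""
    k = np.arange(1, len(H) + 1)
    edges = np.logspace(0, np.log10(len(H)), n_bins + 1)
    idx = np.clip(np.digitize(k, edges) - 1, 0, n_bins - 1)
    return np.array([np.median(H[idx == b])
                     for b in range(n_bins) if np.any(idx == b)])

# Synthetic example: stand-in orthonormal basis and white-noise "timecourses"
rng = np.random.default_rng(1)
n, t = 200, 500
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = rng.standard_normal((n, t))
H = harmonic_power_spectrum(X, U)          # flat (~1) for white noise
```

With real data, X would be the vertex-wise fMRI timecourses and U the eigenvectors of the weighted connectome graph Laplacian, with artifact-dominated modes above k = 15000 discarded as described in the text.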

Fig 5. Stochastic Wilson-Cowan graph neural field model captures the resting-state fMRI harmonic power spectrum.

Fig 5

The theoretical (dashed black line) and numerical (red line) predictions from the stochastic Wilson-Cowan graph neural field model, with the parameters of S2 Table, are in excellent agreement with the empirically observed fMRI harmonic spectrum (cyan line). The numerical spectrum was obtained by taking the median of three independent simulations.

Graph neural fields on the human connectome predict the vertex-wise functional connectivity of resting-state fMRI

The CHAOSS method also provides an analytic prediction of the model functional connectivity (correlation) matrix (Eq (46)). In Figs 6 and 7 we compare the resting-state fMRI functional connectivity with the theoretical prediction from the Wilson-Cowan graph neural field model with the parameters of S2 Table. The matrices are shown for the full connectome, at vertex-wise resolution, with no parcellation or smoothing. Vertex-wise fMRI functional connectivity on a connectome with 18000 vertices is naturally somewhat more noisy than the model analytic prediction, hence the choice of a slightly wider color-scale for the fMRI matrices, which emphasizes patterns in the data and deemphasizes background noise. Functional connectivity patterns in the empirical and theoretically predicted matrices are in clear agreement; two main blocks of connectivity can be distinguished, corresponding to the hemispheres, in the top-left and bottom-right of the matrices, as well as many corresponding intra-hemispheric features. In Figs 8 and 9, we show insets, at different scales, of the empirical and theoretical matrices. Because of the high number of vertices in the connectome, we recommend looking at the connectivity matrices on-screen, at the highest possible resolution; high-fidelity PDF versions of these figures are provided in S6S9 Figs. We remark that we did not fit the functional connectivity matrix of the model to the data, but only the harmonic power spectrum. Besides the success and applicability of the graph neural field approach, this result also demonstrates that the harmonic power spectrum is a robust measure of brain activity, capable of efficiently capturing features of neural dynamics with a high level of detail.
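Because the linearized eigenmodes of Eq (10) fluctuate independently, the model covariance in the vertex domain can be assembled as U diag(HE) Uᵀ and normalized to a correlation matrix. The sketch below illustrates this construction on a toy orthonormal basis and a 1/k-like power spectrum; it is a simplified stand-in for the role of Eq (46), whose exact form is not reproduced here.

```python
import numpy as np

def functional_connectivity(H, U):
    """Model FC (correlation) matrix from a harmonic power spectrum.

    Under the linearization, eigenmodes fluctuate independently, so the
    vertex-domain covariance is U diag(H) U^T; normalizing by the
    standard deviations gives the correlation matrix (a sketch of the
    construction underlying Eq (46))."""
    cov = U @ np.diag(H) @ U.T             # vertex-domain covariance
    sd = np.sqrt(np.diag(cov))             # per-vertex standard deviations
    return cov / np.outer(sd, sd)          # correlation matrix

# Toy example: random orthonormal basis and a 1/k-like power spectrum
rng = np.random.default_rng(2)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = 1.0 / np.arange(1, n + 1)
FC = functional_connectivity(H, U)
```

On the real connectome, U would hold the connectome harmonics and H the CHAOSS prediction of Eq (14), yielding the vertex-wise matrices compared with fMRI in Figs 6–9.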

Fig 6. Resting-state fMRI functional connectivity matrix.

Fig 6

Connectome-wide, vertex-wise, single-subject, resting-state fMRI functional connectivity matrix. Zoom in to appreciate the patterns present in the data, in particular the two large blocks (top-left and bottom-right) corresponding to the two hemispheres, and the many intra-hemispheric patterns. Compare with the functional connectivity predicted by the stochastic Wilson-Cowan graph neural field (Fig 7). The light-blue and light-green rectangles indicate the insets visualized in Figs 8 and 9.

Fig 7. Stochastic Wilson-Cowan graph neural field model predicts the experimental functional connectivity matrix.

Fig 7

The CHAOSS prediction for the connectome-wide, vertex-wise, single-subject functional connectivity matrix of the stochastic Wilson-Cowan graph neural field model with the parameters of S2 Table. Compare with Fig 6 to appreciate how the model predicts the patterns of functional connectivity observed in the fMRI data. The light-blue and light-green rectangles indicate the insets visualized in Figs 8 and 9. Note that we did not fit the fMRI functional connectivity of the model to the data, but only the harmonic power spectrum.

Fig 8. Stochastic Wilson-Cowan graph neural field model predicts the experimental functional connectivity matrix (inset 1).


(A) An inset of the vertex-wise, resting-state fMRI functional connectivity matrix for a single subject. (B) The same inset for the Wilson-Cowan graph neural field model with the parameters of S2 Table.

Fig 9. Stochastic Wilson-Cowan graph neural field model predicts the experimental functional connectivity matrix (inset 2).


(A) An inset of the vertex-wise, resting-state fMRI functional connectivity matrix for a single subject. (B) The same inset for the Wilson-Cowan graph neural field model with the parameters of S2 Table.

Discussion

In this work, we have presented a general approach to whole-brain neural activity modelling on unparcellated, multimodal connectomes (graph neural fields), obtained by combining tools from graph signal processing with neural field equations. We developed a technique to analytically compute observable quantities (CHAOSS). We showed that a stochastic Wilson-Cowan graph neural field model can reproduce the empirically observed harmonic spectrum of resting-state fMRI and predict its functional connectivity matrix at vertex-wise resolution. Graph neural fields address some limitations of existing modelling frameworks and therefore represent a complementary approach, particularly suitable for mesoscopic-scale modelling and connectome-graph-based investigations. To discuss the advantages and limitations of our approach, it is useful to contextualize it within the landscape of whole-brain models.

Existing whole-brain models can be broadly divided into two classes, according to whether or not they incorporate short-range local connectivity. Region-based models only take into account long-range connectivity between dozens or a few hundred macroscopic ROIs, whereas surface-based models directly incorporate short-range local connectivity as well [31, 32]. It is furthermore possible to distinguish between discrete and continuous surface-based models. Discrete surface-based models are defined on a (densely sampled) mesh representation of the cortex and are therefore finite-dimensional. In several studies, region-based and discrete surface-based models are collectively referred to as networks of neural masses [21, 33, 34]. Continuous surface-based models, better known as neural field models, are defined on the entire cortex and are infinite-dimensional [32, 35, 36]. Mathematically, discrete surface-based models are finite-dimensional systems of ordinary differential equations, whereas neural field models are partial integro-differential equations.

Region-based models are constructed by parcellating the cortex into a number of regions-of-interest (ROIs), placing a local model in each ROI, and connecting them according to a given connectome (see [2, 21, 34] for reviews). The ROIs are usually obtained from structural or functional cortical atlases, and the number of ROIs is on the order of a hundred or less. Region-based mass models are characterized by the type of local models and by how they are connected, i.e. whether the connections are weighted or not, excitatory or inhibitory, and whether transmission delays are incorporated. A wide variety of local models has been used in the literature, including neural mass models, self-sustained oscillators, chaotic deterministic systems, circuits of spiking neurons, normal-form bifurcation models, rate models, and density models [2, 35]. Region-based models have proven valuable in understanding various aspects of large-scale cortical dynamics and their roles in cognitive and perceptual processing, but they are limited in one important respect: they do not allow studying the spatiotemporal organization of cortical activity on scales smaller than several square centimeters, nor its effects on large-scale pattern formation. This is due to the fact that the dynamics within ROIs are described by a single model without spatial extent. This prevents studying the mesoscopic mechanisms underlying a large class of cortical activity patterns that have been observed in experiments, including traveling and spiral waves and sink-source dynamics, as well as their role in shaping macroscopic dynamics [21]. This is a significant limitation, particularly because the role of mesoscopic spatiotemporal dynamics in cognitive and perceptual processing is increasingly being recognized and experimentally studied [37, 38].
Graph neural fields present the advantage of allowing explicit modelling of activity propagation dynamics with spatiotemporal convolutions and graph differential equations on mesoscopic-resolution connectomes, thereby overcoming this limitation.

Whole-brain models that incorporate short-range connectivity are referred to as surface-based because they are generally defined either on high-resolution surface-based representations of the cortex [31, 39, 40] or on the entire cortex viewed as a continuous medium. We will refer to these types of models as discrete and continuous surface-based models, the latter of which are known as neural field models [24, 32, 36, 41]. Numerically simulating discrete surface-based models is much more computationally demanding than simulating region-based models, as the former typically have dimensions that are one to two orders of magnitude higher than those of the latter. Numerically simulating neural field models is even more demanding and requires heavy numerical integration in combination with specific analytical techniques [42]. Moreover, simulating neural field models requires special preparation of cortical meshes to ensure accuracy and numerical stability [39, 40, 43–45]. Graph neural fields have the advantage of being implementable directly on multimodal structural connectomes obtained from MRI and DTI, thereby minimizing anatomical approximations, and being limited in this sense only by the quality and resolution of the available structural data. The cortex in graph neural fields need not be a flat or spherical manifold, but can reflect the specific anatomy of each subject, allowing in-depth analyses of the effects of individual anatomical differences on functional activity; such analyses can then be compared across subjects thanks to the common language provided by the connectome harmonics. Graph neural fields can take into account important physical properties of the cortex, such as folding, non-uniform thickness, hemispheric asymmetries, non-homogeneous structural connectivity, and white-matter projections, since all these anatomical features can be absorbed in the distance-weighted graph Laplacian.
In particular, we note that the extension to connectomes including cortical thickness, hence allowing activity to propagate not only tangentially but also perpendicularly to the cortical surface, is of particular interest. Cortical layers can already be distinguished with ultra-high field fMRI, and are thought to subserve different functions [46]. The ability of graph neural fields to account for cortical thickness and layers in dynamical models of neural activity is therefore a promising property for future development [47].

For ease of exposition, here we have focused mainly on neural field models with purely spatial kernels. Although this might be sufficient for modelling wide-band activity such as BOLD fMRI, the large-scale organization of oscillatory activity as recorded with EEG and MEG sensitively depends on the propagation delays of action potentials through white-matter fiber tracts [48–50]. To model such delays, spatiotemporal kernels have been used in continuous neural field models [32, 36, 51, 52]. It is possible to extend this approach to graph neural fields by using spatiotemporal graph convolutions, rather than purely spatial convolutions. This yields graph filters that are more general than those in Eq (26), in that they depend not only on the eigenvalues of the graph Laplacian, but also on temporal frequency (S1 Appendix). The proposed method to fit graph neural fields to experimentally observed harmonic power spectra or functional connectivity matrices (CHAOSS) is straightforward to generalize as well, since the only difference is the appearance of complex exponentials in the linearized model equations in the temporal Fourier domain. With this extension, graph neural fields allow for the formulation of any spatiotemporal neural field model on arbitrary metric graphs.

Graph neural fields come equipped with computationally efficient analytic and numerical tools. The CHAOSS method allows fast computation of quantities such as the harmonic-temporal spectra or connectivity matrices without resorting to numerical simulations, which are enormously more computationally expensive than the direct evaluation of analytic expressions. This implies that optimization of model parameters (for example to fit an observable quantity such as the harmonic spectrum, as we do here) can be carried out without the computational burden of numerical simulations. Furthermore, linear or linearized graph neural field equations are diagonalized by the graph Fourier transform, allowing very efficient numerical simulations in the graph Fourier domain. For a graph neural field on a connectome with n vertices, carrying out numerical simulations in the graph Fourier domain reduces the dimensionality by a factor of n, which is a vast improvement for high-resolution connectomes. Hence, graph neural field analysis (CHAOSS) and numerical simulations (linear or linearized models in the graph Fourier domain) can be carried out with a minimal amount of computational power.
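As a concrete illustration of this dimensionality reduction, the following minimal sketch integrates a linearized graph neural field directly in the graph Fourier domain, where each eigenmode obeys an independent scalar stochastic differential equation. The eigenvalues, per-mode filter gain, and integration parameters below are illustrative assumptions, not the model fitted in this study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Laplacian eigenvalues and a hypothetical per-mode filter gain; for a
# linear(ized) model, the graph Fourier transform decouples the eigenmodes.
lam = -np.linspace(0.0, 5.0, 100)       # non-positive eigenvalues (assumed)
a = 1.0 - 0.8 * np.exp(lam / 2)         # per-mode decay rates, all positive

dt, sigma, steps = 1e-3, 0.1, 5000
u_hat = np.zeros_like(lam)
for _ in range(steps):                  # Euler-Maruyama on all modes at once
    u_hat += -a * u_hat * dt + sigma * np.sqrt(dt) * rng.standard_normal(lam.size)
# Transforming back with the eigenvector matrix U would give the
# vertex-domain signal: u(t) = U @ u_hat.
```

Each time step costs O(n) instead of the O(n^2) matrix-vector product required in the vertex domain, which is the source of the speedup mentioned above.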

Our approach presents several limitations. First, the CHAOSS method as presented here, and the dimensionality reduction of linear or linearized equations in the graph Fourier domain, require the model parameters to be space-independent. That is, model parameters are assumed to have the same value for all vertices in the connectome. This assumption was also used in previous studies of continuous neural fields [53], and in our case has the advantage of allowing mathematical analyses and simulations that, as mentioned above, are scalable to higher-resolution connectomes with little computational cost. However, there are more biophysically realistic models that require space-dependent parameters. For example, some recent neural mass network models incorporate neuronal receptors and their densities, which are known to vary across the cortex [54–56]. The CHAOSS method can in principle be extended to account for space-dependent model parameters, and numerical simulations of graph neural fields can also be carried out with space-dependent parameters, but both would become significantly more computationally demanding than their counterparts with space-independent parameters. A possible approach to preserve computational efficiency, while characterizing regional differences, could be to absorb all the relevant space-dependent information into the graph Laplacian, maintaining space-independent model parameters. Similarly to the idea of differentially weighting white-matter edges to account for myelination, one might differentially weight graph edges within specific ROIs or specific subsets of vertices. Second, it is important to point out that our approach is subject to the limitations of tractography with regard to false positives and false negatives, and that the connectome used here does not include subcortical structures, nor projections between the cortex and subcortical structures.
Future studies could attempt to employ connectomes including subcortical structures and connections. Third, the formulation of convolutions on graphs presented here is restricted to spatially symmetric kernels (but see the caption of S1 Fig for some considerations on indirect ways to obtain asymmetric kernels). Finally, another important limitation is the use of an undirected and time-independent connectome graph. For maximal generality and biophysical realism, one might want to study a directed, or even time-dependent (plastic) structural connectome. Such extensions would be very challenging, if at all feasible.

Immediate applications of graph neural fields can be found in the comparison of harmonic spectra, functional connectivity, and coherence matrices with single-subject empirical data obtained from different neuroimaging modalities such as fMRI and MEG, as well as different conditions, for example health, pathology, and neuropharmacologically-induced altered states of consciousness [26]. Investigating the effects of a reduced myelination speed factor, or of pruned white-matter fibers, could be an interesting approach to modelling the effects of pathological or age-related structural alterations of white matter on the dynamics of functional activity. Other possible developments include the implementation of more biophysically realistic models, potentially including space-dependent parameters, and the use of a cortical connectome that includes cortical thickness, accounting for activity propagation across layers perpendicularly to the surface. Aside from whole-brain resting-state modelling, graph neural fields may also be used for modelling specific ROIs and stimulus-evoked brain activity. In particular, because of the known retinotopic mapping between visual stimuli and neural activity, the visual cortex presents itself as a very interesting ROI for such developments [57]. Moving beyond neuronal populations, and even the human brain, the mathematical framework of graph neural fields may also be used to implement single-neuron models directly on the full connectome graphs of simple organisms, such as C. elegans, whose neuronal pathways have been experimentally mapped at the single-neuron level [22].

Conclusion

In summary, in this study we described a class of whole-brain neural activity models which we refer to as graph neural fields, and showed that they can be used to capture dynamics of brain activity obtained from neuroimaging methods efficiently and with a high level of detail. The formulation of graph neural fields relies on existing concepts from the field of graph signal processing, namely the distance-weighted graph Laplacian operator and graph filtering, in combination with modelling concepts such as neural field equations. This framework allows inclusion of realistic anatomical features, analytic predictions of harmonic-temporal power spectra, correlation, and coherence matrices (Connectome-Harmonic Analysis Of Spatiotemporal Spectra, CHAOSS), and efficient numerical simulations. We illustrated the practical use of the framework by reproducing the harmonic spectrum and predicting the functional connectivity of resting-state fMRI with a stochastic Wilson-Cowan graph neural field model. Future work could build on the methods and results presented here, both from theoretical and applied standpoints.

Methods

Laplacian operators on graphs

In this section we provide a derivation of the distance-weighted graph Laplacian, or simply graph Laplacian, in terms of graph differential operators. The distance-weighted graph Laplacian is distinguished from the combinatorial graph Laplacian often used in analysis studies [11], as it allows geometrical properties of the cortex to be taken into account, which is necessary to implement physically realistic graph neural field models.

The combinatorial Laplacian

Consider an undirected graph with n vertices. The binary adjacency matrix $\tilde{A}$ is defined as:

$\tilde{A}_{ij} = \begin{cases} 1 & \text{if } i \sim j, \\ 0 & \text{otherwise}. \end{cases}$ (16)

where $i \sim j$ means that vertices i and j are connected by an edge. The graph's degree matrix $\tilde{D}$ is a diagonal matrix whose diagonal entries are given by:

$\tilde{D}_{ii} = \sum_{j=1}^{n} \tilde{A}_{ij}.$ (17)

It hence counts the number of edges at each vertex i. The binary or combinatorial graph Laplacian, denoted by $\tilde{\Delta}$, is defined as:

$\tilde{\Delta} = \tilde{A} - \tilde{D}.$ (18)

The combinatorial graph Laplacian and its normalized version do not carry information about the distances between cortical vertices, and are therefore invariant under topological but non-isometric deformations of the graph. Neural activity modeled in terms of the combinatorial graph Laplacian is therefore a topological graph invariant, whereas real neural activity does depend on the metric properties of the graph. The combinatorial graph Laplacian, however, can be adjusted so as to take the metric properties of the graph into account, yielding the distance-weighted graph Laplacian. Below, we provide a derivation of the weighted graph Laplacian in terms of the graph directional derivatives of a graph function.

The distance-weighted graph Laplacian

Let f be a function defined on the vertices of a graph, and let M be the graph's distance matrix. Thus, the (i, j) entry Mij of M equals the distance between vertices i and j in a particular metric. We note that for this derivation, it is irrelevant how M is obtained. In the context of connectomes, the elements of M can be defined in terms of suitably scaled Euclidean distances, geodesic distances over the cortical manifold, or the lengths of white matter fibers connecting vertices. Different distance metrics can also be combined for the construction of connectome graphs containing multiple types of edges, as we do here (see Data preprocessing and connectome graph construction), and as has been done in some previous studies [58]. The first-order graph directional derivative $\partial_j f_i$ of f at vertex i in the direction of vertex j is:

$\partial_j f_i = \frac{\tilde{A}_{ij}}{M_{ij}} (f_j - f_i).$ (19)

Note that according to this definition, $\partial_j f_i = 0$ if vertex j is not connected to vertex i, and that $\partial_i f_i = 0$. Also note that $\partial_j$ is a linear operator on the vector space of graph signals. Furthermore, since $\tilde{A}_{ij}^2 = \tilde{A}_{ij}$, the second-order graph directional derivative $\partial_j^2 f_i$ of f at vertex i in the direction of vertex j is defined as:

$\partial_j(\partial_j f_i) = \partial_j^2 f_i = \frac{\tilde{A}_{ij}}{M_{ij}^2} (f_i - f_j).$ (20)

Following the definition of the Laplacian operator in Euclidean space as the sum of second-order partial derivatives, the distance-weighted graph Laplacian, or simply graph Laplacian Δ is defined as:

$\Delta f_i = -\sum_{j=1}^{n} \partial_j^2 f_i.$ (21)

To see the relation with the combinatorial graph Laplacian, we note that Δ can be written in matrix form as:

$\Delta = A - D,$ (22)

where A and D are the distance-weighted adjacency matrix and the distance-weighted degree matrix, defined as $A_{ij} = \tilde{A}_{ij} / M_{ij}^2$ and $D_{ii} = \sum_{j=1}^{n} A_{ij}$, respectively. Thus, the weighted graph Laplacian can be obtained by using the weighted versions of the adjacency and degree matrices in the definition of the combinatorial graph Laplacian.
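The construction in Eqs (16)-(22) can be sketched in a few lines of numpy; the toy path graph and unit distances below are placeholders for a real connectome mesh and distance matrix.

```python
import numpy as np

def distance_weighted_laplacian(A_bin, M):
    """Distance-weighted graph Laplacian (Eq 22): Delta = A - D,
    with A_ij = A~_ij / M_ij^2 and D_ii = sum_j A_ij."""
    with np.errstate(divide="ignore", invalid="ignore"):
        A = np.where(A_bin > 0, A_bin / M**2, 0.0)
    D = np.diag(A.sum(axis=1))
    return A - D

# Toy graph: a path on 3 vertices with unit edge lengths (placeholder data).
A_bin = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
M = np.ones((3, 3))           # all pairwise distances 1; only edges matter
L = distance_weighted_laplacian(A_bin, M)
```

The resulting matrix is symmetric with vanishing row sums (the constant signal lies in its null space), and its eigenvalues are non-positive, mirroring the sign of $-k^2$ in the continuum.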

The graph Fourier transform

Diagonalization of the graph Laplacian gives:

$\Delta = U \Lambda U^{T},$ (23)

where U is an orthogonal matrix containing the eigenvectors of Δ, and Λ is a diagonal matrix containing the corresponding eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, which are non-positive since $\Delta = A - D$ is negative semidefinite. The graph Fourier transform of a function u(t) on the graph is defined by:

$\hat{u}(t) = U^{T} u(t),$ (24)

where the transformation $U^T$ expresses u(t) in the eigenbasis of Δ. The vertex-domain signal u(t) can be recovered by applying the inverse graph Fourier transform $(U^T)^{-1} = U$ to $\hat{u}(t)$. For clarity, note that the graph Fourier transform is not related to the temporal Fourier transform, and that u(t) does not have to depend on time for it to apply. For grid graphs (i.e. graphs whose drawing, embedded in some Euclidean space, forms a regular tiling), the graph Fourier transform is equivalent, in the continuum limit, to the spatial Fourier transform in Euclidean space. However, the graph Fourier transform can also be applied to more complex graphs, possibly with non-local edges, such as the human connectome.
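A minimal numerical sketch of Eqs (23) and (24), using a toy path-graph Laplacian in place of a connectome:

```python
import numpy as np

# Toy distance-weighted Laplacian of a 3-vertex path with unit distances.
Delta = np.array([[-1.,  1.,  0.],
                  [ 1., -2.,  1.],
                  [ 0.,  1., -1.]])

lam, U = np.linalg.eigh(Delta)     # diagonalization Delta = U Lambda U^T (Eq 23)
u = np.array([1.0, 2.0, 3.0])      # a signal on the graph vertices

u_hat = U.T @ u                    # graph Fourier transform (Eq 24)
u_rec = U @ u_hat                  # inverse transform: (U^T)^{-1} = U
```

Because U is orthogonal, the round trip recovers the original vertex-domain signal exactly (up to floating-point error).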

Convolution kernels on graphs

In order to define neural field equations on graphs, we need a graph-theoretical analog of the continuous spatial convolution:

$(K * u)(x, t) = \int_{-\infty}^{\infty} K(x - x')\, u(x', t)\, dx'.$ (25)

To obtain this, we use the convolution theorem to represent the convolution in the spatial Fourier domain as $\hat{K}(k)\hat{u}(k, t)$, where k is the spatial wavenumber. When the kernel is real-valued and spatially symmetric, its Fourier transform is real-valued and even in k, so that $\hat{K}(k)$ can be viewed as a function of $-k^2$. In continuous space, $-k^2$ is the eigenvalue of the spatial Fourier basis function $e^{ikx}$ under the Laplace operator. On graphs, the distance-weighted graph Laplacian Δ implements a generalized version of the Laplace operator, and the graph Fourier basis is defined by its eigenvectors U. Hence, the graph filter $\hat{K}_g$ corresponding to the convolution $(K * u)(x, t)$ can be defined by substituting the eigenvalues $\lambda_k$ for the values $-k^2$ in $\hat{K}(-k^2)$:

$\hat{K}_g = \mathrm{Diag}\big(\hat{K}(\lambda_1), \ldots, \hat{K}(\lambda_n)\big).$ (26)

In the graph Fourier domain, the filtered (convolved) signal is hence per definition given by:

$\hat{u}_{\mathrm{filt}}(t) = \hat{K}_g \hat{u}(t).$ (27)

Applying the inverse graph Fourier transform U, we obtain the filtered signal in the graph domain:

$u_{\mathrm{filt}}(t) = U \hat{K}_g \hat{u}(t) = U \hat{K}_g U^{T} u(t) = K_g u(t),$ (28)

where we have defined $K_g = U \hat{K}_g U^T$, the graph-domain representation of the filter. Eqs (27 and 28) can be interpreted as an analog of the convolution theorem on graphs: the matrix multiplication implementing a convolution in the graph domain becomes a point-wise product in the graph Fourier domain, since $\hat{K}_g$ is a diagonal matrix. This analogy can be employed to define spatiotemporal convolutions (S1 Appendix), and reaction-diffusion models (S2 Appendix), on arbitrary metric graphs. For example, the damped-wave and telegrapher's equations (S3 Appendix), of interest in the context of modelling the propagation of neural signals, can be implemented on the human connectome (S2–S4 Figs).
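To make the filtering recipe of Eqs (26)-(28) concrete, here is a minimal sketch that applies a Gaussian graph filter, $\hat{K}(-k^2) = e^{-k^2 s^2/2}$ evaluated at the Laplacian eigenvalues, to an impulse on a toy path graph; the graph and the kernel width are illustrative assumptions.

```python
import numpy as np

# Toy distance-weighted Laplacian (3-vertex path, unit distances).
Delta = np.array([[-1.,  1.,  0.],
                  [ 1., -2.,  1.],
                  [ 0.,  1., -1.]])
lam, U = np.linalg.eigh(Delta)

# Gaussian kernel: K_hat(-k^2) = exp(-k^2 s^2 / 2); substituting the
# (non-positive) eigenvalues lam for -k^2 gives the graph filter (Eq 26).
s = 1.0
K_hat_g = np.diag(np.exp(lam * s**2 / 2))

u = np.array([1.0, 0.0, 0.0])          # impulse at vertex 0
u_filt = U @ K_hat_g @ U.T @ u         # Eq (28): Kg = U K_hat_g U^T
```

Since $\hat{K}(0) = 1$, the constant mode is untouched, so the impulse's total mass is preserved while activity spreads to the neighbouring vertices.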

Examples of graph kernels

Table 2 lists several commonly used continuous spatial kernels and their equivalent filters on graphs. On grid graphs, the filters simply act as discretized versions of their continuous counterparts. However, this approach generalizes to arbitrary metric graphs, potentially with non-local edges, such as the human connectome, and is therefore broader in scope than grid-based discretizations of continuous convolution kernels.

Graph neural fields

Continuous neural field models describe the dynamics of cortical activity u(x, t) at time t and cortical location x ∈ Ω. Here, $\Omega \subset \mathbb{R}^3$ denotes the cortical manifold embedded in three-dimensional Euclidean space. Depending on the physical interpretation of the state variable u(x, t), neural fields come in two types, which we will refer to in the rest of the text as Type 1 and Type 2. This short description is by no means meant to be exhaustive, and only contains the required background to define graph neural fields; comprehensive treatments of continuous neural fields are provided in [36, 53].

In Type 1 neural fields [53], the state variable u(x, t) describes the average membrane potential at location x and time t. The general form of a neural field model of Type 1 is:

$D_t u(x, t) = \int_{\Omega} K(d(x, x'))\, S[u(x', t)]\, dx' + \sigma \xi(x, t),$ (29)

where σξ(x, t) is the noise term, d(x, x′) is the geodesic distance between cortical locations x and x′, K is the spatial kernel of the neural field that describes how the firing-rate S[u(x′, t)] at location x′ affects the voltage at location x, and S is the firing-rate function that converts voltages to firing-rates. Dt is a placeholder for the linear temporal differential operator that models synaptic dynamics, and can take different forms depending on the model under investigation. In modelling resting-state cortical activity, ξ(x, t) is usually taken to be a stationary stochastic process. For simplicity, we will assume the stochastic term ξ(x, t) to be spatiotemporally white noise (but in principle, colored noise could be used as well). The distance function d(x, x′) between cortical locations x and x′, as well as the integration over the cortical manifold Ω, assume that Ω is equipped with a Riemannian metric. A natural choice is the Euclidean metric induced by the embedding of the cortical manifold in three-dimensional Euclidean space.

In Type 2 neural field models [59, 60], the state variable u(x, t) denotes the fraction of active cells in a local cell population at location x and time t, and hence takes values in the interval [0, 1]. Type 2 neural field models have the form:

$D_t u(x, t) = S\left[\int_{\Omega} K(d(x, x'))\, u(x', t)\, dx'\right] + \sigma \xi(x, t),$ (30)

where S denotes the activation function that maps fractions to fractions and hence takes values in the interval [0, 1] and thus has a different interpretation from the firing-rate function in Type 1 neural field models. Mathematically, the only difference between Type 1 and Type 2 neural field models is the placement of the non-linear function S. In practice, most neural field models are defined by two or more neural field equations, where each equation describes the dynamics of a different neuronal population, and its interaction with the other cell types. For example, the state variable of the Wilson-Cowan neural field model (Eqs (1 and 2)) is two-dimensional and its components correspond to Excitatory and Inhibitory neuronal populations.

In theoretical studies on neural field models, the cortex is usually assumed to be flat, i.e. $\Omega = \mathbb{R}^2$ (cortical sheet) or $\Omega = \mathbb{R}$ (cortical line), or a closed subset thereof (but see [61] for a detailed theoretical study of a neural field model on the sphere). The major simplification that occurs in this case is that the cortical metric reduces to the Euclidean metric:

$d(x, x') = \|x - x'\|,$ (31)

and, as a consequence, the integrals in Eqs (29) and (30) reduce to spatial convolutions, so that Fourier methods can be used in the analysis. For spatially symmetric kernels, i.e. $K(-x) = K(x)$ for all $x \in \mathbb{R}$, the convolution integrals can be translated to graphs using the methods of the previous section Convolution kernels on graphs.

Thus, a graph neural field of Type 1 is a model of the form:

$D_t u(t) = K_g S[u(t)] + \sigma \xi(t),$ (32)

and a graph neural field of Type 2 is a model of the form:

$D_t u(t) = S[K_g u(t)] + \sigma \xi(t).$ (33)

When more than one type of neuronal population is included, as in the Wilson-Cowan model, or when the temporal differential operator $D_t$ is of order higher than one, the graph neural fields become systems of ordinary differential equations on the graph.

The continuous neural fields in Eqs (29) and (30) are described by partial integro-differential equations in which the integration is done over space. Continuous neural fields can also be described by spatiotemporal integral equations by viewing the temporal differential operator $D_t$ as a temporal integral, which leads to a more general class of models. By defining spatiotemporal convolutions on graphs (S1 Appendix), this larger class of neural fields can be formulated on graphs as systems of temporal integral equations. To make this explicit, we use the definition of the spatiotemporal graph filtering operator $K_g \otimes$ to write out the ith component of u, for a neural field of Type 1:

$u_i(t) = \int_{-\infty}^{\infty} \left( \sum_{j=1}^{n} K_{g,ij}(s)\, S[u_j(t - s)] \right) ds + \sigma \int_{-\infty}^{t} \xi_i(t')\, dt'.$ (34)

Thus, the spatiotemporal integrals in continuous neural fields are replaced by temporal integrals in graph neural fields, and the spatial structure of the continuous kernel is incorporated into the graph filter $K_{g,ij}$. The same applies to neural fields of Type 2. Furthermore, for separable kernels, and for special choices of the temporal component of the kernel, the spatiotemporal integral equation can be reduced to a partial integro-differential equation [32, 36]. For graph neural fields, there exists an equivalent subset of models that can be represented by systems of ordinary integro-differential equations.

Eqs (32 and 33) define graph neural fields for the case of purely spatial kernels $K(x, t) = K(x)$. In the case of a purely temporal kernel $K(x, t) = g_{\Theta}(t)$, we obtain the following systems of ordinary differential equations, for a graph neural field of Type 1:

$D_t u(t) = (g_{\Theta} * S[u])(t) + \sigma \xi(t),$ (35)

and for a graph neural field of Type 2:

$D_t u(t) = S[(g_{\Theta} * u)(t)] + \sigma \xi(t).$ (36)

In the case of a separable kernel $K(x, t) = w(x) g_{\Theta}(t)$, we obtain the following systems of ordinary differential equations, for a graph neural field of Type 1:

$D_t u(t) = (g_{\Theta} * K_g S[u])(t) + \sigma \xi(t),$ (37)

and for a graph neural field of Type 2:

$D_t u(t) = S[(g_{\Theta} * K_g u)(t)] + \sigma \xi(t).$ (38)
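As a minimal runnable sketch of a Type 1 graph neural field (Eq 32) with $D_t = \tau\, d/dt + 1$, the following integrates the equation with the Euler-Maruyama scheme on a toy ring graph. The sigmoid, the filter gain, and all parameter values are illustrative assumptions, not those of the fitted Wilson-Cowan model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ring-graph Laplacian (unit distances) and a Gaussian graph filter.
n = 20
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
Delta = A - np.diag(A.sum(axis=1))
lam, U = np.linalg.eigh(Delta)
Kg = U @ np.diag(0.8 * np.exp(lam / 2)) @ U.T   # sub-unit gain keeps it stable

S = np.tanh                                     # illustrative sigmoid
dt, sigma, tau, steps = 1e-3, 0.01, 0.01, 2000  # assumed parameters

# Euler-Maruyama for tau du/dt = -u + Kg S[u] + sigma * noise  (Type 1, Eq 32)
u = np.zeros(n)
for _ in range(steps):
    drift = (-u + Kg @ S(u)) / tau
    u = u + dt * drift + (sigma / tau) * np.sqrt(dt) * rng.standard_normal(n)
```

With the sub-unit filter gain, the linearization around zero is stable and the state fluctuates around the equilibrium, which is the regime analyzed by CHAOSS in the next section.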

Relating graph neural fields to experimental observables

Connectome-Harmonic Analysis Of Spatiotemporal Spectra (CHAOSS)

To characterize the spatiotemporal dynamics of resting-state brain activity, we derive analytic predictions for harmonic and temporal power spectra, functional connectivity, and coherence matrices of graph neural fields. For simplicity, we carry out the derivation for the case of space-independent model parameters. It is possible to extend the method to the case with space-dependent parameters, but all computations would be significantly more burdensome. For graph neural fields with space-independent parameters, the linear or linearized model equations for each graph Laplacian eigenmode can be described as the following p-dimensional system, where p is the number of neuronal population types:

$D_t \hat{u}_k(t) = J_k \hat{u}_k(t) + B \hat{\xi}_k(t).$ (39)

Taking the temporal Fourier transform we obtain:

$\hat{u}_k(\omega) = [D(\omega) - J_k]^{-1} B \hat{\xi}_k(\omega),$ (40)

where D(ω) denotes the temporal Fourier transform of $D_t$. Abbreviating the graph filter $\hat{K}_g = [D(\omega) - J_k]^{-1}$, the cross-spectral matrix $S_k(\omega)$ of the kth eigenmode is given by:

$S_k(\omega) = E[\hat{u}_k(\omega) \hat{u}_k(\omega)^{\dagger}] = \hat{K}_g B\, E[\hat{\xi}_k(\omega) \hat{\xi}_k(\omega)^{\dagger}]\, B^{\dagger} \hat{K}_g^{\dagger} = \hat{K}_g B B^{\dagger} \hat{K}_g^{\dagger},$ (41)

where † denotes the conjugate transpose and E denotes the expected value. Colored noise can be modeled by letting B depend on ω, although this is usually not done in neural field modelling studies. Another possible generalization is to let B depend on the harmonic eigenmode.

Eq (41) gives a closed-form expression for the cross-spectral matrix of the kth eigenmode. Hence, its sth diagonal entry $[S_k(\omega)]_s$, with s = 1, …, p, represents the power of the sth neuronal population, in the kth eigenmode, at temporal frequency ω. The temporal power spectrum $T_s(\omega)$ of the sth neuronal population is obtained by summing over harmonic eigenmodes:

$T_s(\omega) = 2 \sum_{k=1}^{n} [S_k(\omega)]_s,$ (42)

where the factor of 2 arises because on graphs, k ranges only over positive integers between 1 and n. Similarly, the harmonic power spectrum $H_s(k)$ of the sth neuronal population is obtained by integrating over the temporal frequency ω:

$H_s(k) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} [S_k(\omega)]_s\, d\omega.$ (43)

When combined with a suitable observation model, these predictions can be compared with or fitted to experimental data from different neuroimaging modalities.
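As a worked example of Eqs (39)-(43) for a single population (p = 1), suppose $D_t = d/dt + 1$, so that $D(\omega) = i\omega + 1$, and take hypothetical feedback gains $J_k = 0.8\, e^{\lambda_k/2}$ at a few toy eigenvalues. Each mode is then a Lorentzian with half-width $1 - J_k$, so Eq (43) can be checked analytically: $H_s(k) = B^2 / (2(1 - J_k))$.

```python
import numpy as np

# Single-population sketch of Eqs (39)-(43): D(omega) = 1j*omega + 1 and
# hypothetical per-mode gains J_k; eigenvalues, gains, and B are toy values.
lam = np.array([0.0, -1.0, -3.0, -4.0])
J = 0.8 * np.exp(lam / 2)
B = 1.0

omega = np.linspace(-200, 200, 80001)
d_omega = omega[1] - omega[0]

# Cross-spectrum of mode k (Eq 41); a scalar per (k, omega) since p = 1.
S = B**2 / np.abs(1j * omega[None, :] + 1 - J[:, None]) ** 2

T = 2 * S.sum(axis=0)                         # temporal power spectrum (Eq 42)
H = S.sum(axis=1) * d_omega / (2 * np.pi)     # harmonic power spectrum (Eq 43)
```

For these Lorentzian modes the numerical H matches $B^2/(2(1-J_k))$ to within about a percent, and the temporal spectrum peaks at ω = 0, illustrating how the model spectra follow from direct evaluation rather than simulation.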

Functional connectivity

Furthermore, it is possible to compute the correlation matrix of brain activity for each neuronal population. To construct the covariance matrix $\Sigma_s$ of the sth neuronal population's activity across all graph vertices, we first construct the corresponding covariance matrix $\hat{\Sigma}_s$ in the graph Fourier domain, which is given by:

$\hat{\Sigma}_s = \mathrm{Diag}(H_s(k)).$ (44)

The covariance matrix across all vertices is obtained by transforming back to the graph domain:

$\Sigma_s = U \hat{\Sigma}_s U^{T}.$ (45)

The functional connectivity (correlation) matrix Fs, which is often used in fMRI resting-state studies, is obtained by normalizing the covariance matrix, so that its entries are in the range [−1, 1]:

$F_s = (\Sigma_s^{+})^{-1/2}\, \Sigma_s\, (\Sigma_s^{+})^{-1/2},$ (46)

where $\Sigma_s^{+}$ denotes $\Sigma_s$ with all off-diagonal entries set to zero. Seed-based connectivity of the jth vertex is measured by the jth row (or column) of $F_s$. Eq (46) provides an analytic prediction for the vertex-wise functional connectivity of graph neural fields.
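A minimal numerical sketch of Eqs (44)-(46): given any orthonormal eigenvector matrix and a harmonic power spectrum (both invented placeholders here, not fitted values), the covariance and correlation matrices follow from two matrix products and a diagonal normalization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the connectome eigenvectors U and a model-predicted
# harmonic power spectrum H(k).
n = 6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal columns
H = 1.0 / np.arange(1, n + 1)                      # toy harmonic spectrum

Sigma = U @ np.diag(H) @ U.T            # covariance in vertex domain (Eqs 44-45)
d = 1.0 / np.sqrt(np.diag(Sigma))       # (Sigma^+)^{-1/2} as a vector
F = d[:, None] * Sigma * d[None, :]     # functional connectivity (Eq 46)
```

Because Sigma is positive definite when all H(k) > 0, F is a valid correlation matrix: symmetric, unit diagonal, entries in [−1, 1].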

Coherence matrix

From the linearized model equations one can also derive the coherence matrix, which measures the strength and latency of interactions between pairs of vertices as a function of frequency ω, and is often used in EEG and MEG studies [62]. If the noise is assumed to be white, non-linear connectivity measures such as the phase-locking value and amplitude correlations can be analytically computed from the coherence matrix [63]. For simplicity, we derive the coherence matrix for the case of a single neuronal population and space-independent parameters.

The derivation of the coherence matrix is similar to that of the functional connectivity, and starts by expressing the linearized model equations in the vertex domain:

D_t u(t) = J u(t) + B ξ̂(t). (47)

Transforming Eq (47) to the temporal Fourier domain and taking expectations yields the cross-spectral matrix S_v(ω) in the vertex domain:

S_v(ω) = E[u(ω) u^†(ω)] = K_g B B^† K_g^†, (48)

where K_g = [D(ω) − J]^{−1} and † denotes the conjugate transpose. The coherence matrix C(ω) is obtained by normalizing the cross-spectral matrix in the vertex domain:

C(ω) = (S_v^+(ω))^{−1/2} S_v(ω) (S_v^+(ω))^{−1/2}, (49)

where S_v^+(ω) denotes S_v(ω) with its off-diagonal entries set to zero. The (i, j) entry of C(ω) is the coherence between the cortical activity at vertices i and j.
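The computation in Eqs (48) and (49) can be sketched as below. The first-order temporal operator D(ω) = (1 + iωτ)I, the time constant τ, the coupling matrix J, and the noise-mixing matrix B are all illustrative assumptions for a single-population model, not the paper's fitted quantities:

```python
import numpy as np

def cross_spectrum(omega, tau, J, B):
    """Vertex-domain cross-spectrum S_v(omega) of a linearized model (Eq 48),
    assuming a first-order temporal operator D(omega) = (1 + i*omega*tau) I."""
    n = J.shape[0]
    Kg = np.linalg.inv((1 + 1j * omega * tau) * np.eye(n) - J)
    # S_v = K_g B B^dagger K_g^dagger for unit-variance white noise.
    return Kg @ B @ B.conj().T @ Kg.conj().T

def coherence(Sv):
    """Normalize a cross-spectral matrix into a coherence matrix (Eq 49)."""
    d = np.sqrt(np.real(np.diag(Sv)))   # (S_v^+)^(1/2): per-vertex spectral power
    return Sv / np.outer(d, d)

# Toy example: two vertices coupled symmetrically, independent unit noise.
J = np.array([[0.0, 0.3], [0.3, 0.0]])
Sv = cross_spectrum(omega=2 * np.pi * 10, tau=0.01, J=J, B=np.eye(2))
C = coherence(Sv)
```

Since S_v(ω) is a Gram matrix, the Cauchy-Schwarz inequality guarantees that the coherence magnitudes are bounded by 1.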

Data preprocessing and connectome graph construction

We use structural MRI and DTI data obtained from the Human Connectome Project (https://db.humanconnectome.org/) to construct the individual-subject anatomical connectome graph. In short, MRI data are used to obtain local graph edges based on the surface mesh, and DTI data are used to add long-range white-matter connections to the graph. The main difference from previous studies analyzing brain activity in terms of the anatomical connectome graph Laplacian [11] is that instead of constructing the combinatorial (binary) graph Laplacian, here we construct a distance-weighted graph Laplacian (Eqs (19)–(22)). This allows us to take into account physical distance properties of the cortex that are relevant for graph neural fields and that are otherwise lost. Specifically, for a local surface edge between vertices i and j, the element M_ij of the distance matrix M is defined as their 3D Euclidean distance; for a non-local white-matter edge, M_ij is defined as the distance along the respective DTI fiber path, divided by a factor of 200. This value is chosen to reflect the myelination of white-matter fibers, which is known to allow neural activity to propagate at speeds ∼200 times greater than those of local surface propagation [25]. Resting-state BOLD fMRI timecourses from the Human Connectome Project were minimally preprocessed (coregistration, motion correction), resampled on the respective subject's connectome graph, and demeaned.
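The construction of the distance matrix M described above can be sketched as follows. The function signature, edge-list format, and the tiny triangle-mesh example are our own illustrative assumptions; only the two weighting rules (Euclidean distance for local edges, fiber length divided by 200 for white-matter edges) come from the text:

```python
import numpy as np

def distance_matrix(coords, mesh_edges, fiber_edges, fiber_lengths, c=200.0):
    """Distance matrix M for a hybrid local/long-range connectome graph.

    coords        : (n, 3) vertex coordinates from the cortical surface mesh.
    mesh_edges    : list of (i, j) local surface edges.
    fiber_edges   : list of (i, j) long-range white-matter edges.
    fiber_lengths : path length along the DTI fiber for each white-matter edge.
    c             : conduction-speed ratio of white matter vs. local propagation.
    """
    n = coords.shape[0]
    M = np.zeros((n, n))
    for i, j in mesh_edges:
        # Local edge: 3D Euclidean distance between the two vertices.
        M[i, j] = M[j, i] = np.linalg.norm(coords[i] - coords[j])
    for (i, j), ell in zip(fiber_edges, fiber_lengths):
        # White-matter edge: fiber path length divided by the speed factor.
        M[i, j] = M[j, i] = ell / c
    return M

# Tiny example: a triangle mesh with one long-range fiber of length 100.
coords = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [50.0, 0, 0]])
mesh_edges = [(0, 1), (0, 2), (1, 2)]
M = distance_matrix(coords, mesh_edges, [(0, 3)], [100.0])
```

The distance-weighted Laplacian of Eqs (19)–(22) would then be built from this M rather than from a binary adjacency matrix.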

Supporting information

S1 Table. Parameter set for 1D analysis and simulations.

This parameter set was obtained by a qualitative comparison of the Wilson-Cowan model’s harmonic and temporal spectra with empirical data, and used to illustrate how graph properties affect neural field dynamics in one dimension.

(PDF)

S2 Table. Parameter set for connectome-wide analysis and simulations.

This parameter set was obtained by quantitatively fitting the Wilson-Cowan model’s harmonic power spectrum to that of resting-state fMRI data, and used for all connectome-wide analysis and numerical simulations.

(PDF)

S1 Fig. Spatial convolution examples on 1-dimensional graphs.

To illustrate spatial convolution on graphs, we apply different spatial convolution filters from Table 2 to an impulse function centered on the middle vertex of a one-dimensional grid-graph with spacing h = 1. The resulting functions, normalized to have unit amplitude, show the shapes of the graph kernels. Note that the rectangular kernel convolution operator in Panel (E) exhibits the Gibbs phenomenon [64], which is a known feature of finite Fourier representations of functions with jump discontinuities. Solutions to this problem have been offered [65], but they are beyond the scope of the current work. Thus, we suggest avoiding spatial kernels with jump discontinuities in the context of graph neural fields. Open boundary conditions can be implemented by extending the graph beyond the image size, and periodic boundaries by adding edges connecting vertices on opposite sides of the graph. We also note that, if desired, spectral kernels can be obtained using polynomial approximation schemes, which obviates the need to diagonalize the graph Laplacian matrix [66]. For large datasets (for example, natural image databases), it might be computationally advantageous to apply convolutions with symmetric kernels through graph filters, rather than with standard discrete convolution methods. Blurring/smoothing a 2-dimensional image with a spatial Gaussian kernel is equivalent to applying the graph Gaussian kernel to the image-function defined on a 2-dimensional square-grid graph. Spatial convolutions on graphs become linear matrix-vector products, which are highly optimized and easily parallelizable operations; the bulk of the computational cost for graph convolutions lies in the initial computation of the filter itself, which has to be performed only once per kernel. The approach described here is limited to symmetric kernels. In some special cases, asymmetric kernels may be practically obtained by introducing suitable asymmetries in the graph edges. 
For example, consider a grid graph in two dimensions, with additional edges connecting bottom-left and top-right vertices of each square in the grid. Because of the broken lattice symmetry, a Gaussian kernel on this non-grid graph will behave like a spatially elliptic Gaussian, angled at 45 degrees, analogously to modelling a spatially asymmetric diffusion process on the graph.

(TIF)
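The graph Gaussian kernel described in S1 Fig can be sketched as a spectral filter built from the Laplacian eigendecomposition; convolving an impulse then reproduces the kernel shape, as in the figure. The path-graph construction and the kernel width t = 4 are illustrative choices:

```python
import numpy as np

def gaussian_graph_filter(L, t):
    """Gaussian (heat) kernel filter exp(-t*L) from the Laplacian spectrum.

    The filter is computed once per kernel; applying it to any graph signal
    is then a single matrix-vector product.
    """
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(np.exp(-t * lam)) @ U.T

# One-dimensional grid graph (path graph) with spacing h = 1, as in S1 Fig.
n = 101
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
G = gaussian_graph_filter(L, t=4.0)

# Convolving an impulse at the middle vertex yields the kernel shape itself.
impulse = np.zeros(n)
impulse[n // 2] = 1.0
smoothed = G @ impulse
```

Because the constant vector is a null eigenvector of L, the filter conserves the total signal, mirroring mass conservation in continuous Gaussian smoothing.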

S2 Fig. The damped-wave equation on the human connectome gives rise to propagation with characteristic speed and wavelength.

Shown are snapshots of simulated cortical activity that is governed by the damped wave equation with time-step δt = 1 and parameters a = 3 ⋅ 105, b = 5 ⋅ 103.

(TIF)

S3 Fig. Varying the parameters of the damped-wave equation alters the dynamics of propagation on the human connectome.

Shown are snapshots of simulated cortical activity that is governed by the damped wave equation with time-step δt = 1 and parameters a = 1.5 ⋅ 105, b = 2.5 ⋅ 103.

(TIF)

S4 Fig. Dynamics of the damped-wave equation on the human connectome include non-local propagation along white-matter fibers.

Shown are snapshots of simulated cortical activity that is governed by the damped wave equation with time-step δt = 1 and parameters a = 1.5 ⋅ 105, b = 2.5 ⋅ 103.

(TIF)
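S2–S4 Figs all show solutions of the damped-wave equation on the connectome. As a toy illustration of how such dynamics can be integrated on a graph, the sketch below advances u'' + b u' = −a L u with a forward-Euler scheme on a small ring graph; the graph, parameter values, time step, and first-order splitting are our own illustrative choices, not the paper's simulation setup:

```python
import numpy as np

def simulate_damped_wave(L, u0, a, b, dt, steps):
    """Forward-Euler sketch of a graph damped-wave equation,
    u'' + b*u' = -a*L*u, written as a first-order system in (u, v)."""
    u, v = u0.copy(), np.zeros_like(u0)
    for _ in range(steps):
        u, v = u + dt * v, v + dt * (-b * v - a * (L @ u))
    return u

# Toy run: an impulse on a 50-vertex ring graph, with illustrative parameters.
n = 50
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
u0 = np.zeros(n)
u0[0] = 1.0
u = simulate_damped_wave(L, u0, a=1.0, b=0.5, dt=0.01, steps=2000)
```

The damping term dissipates the travelling fronts while the mean activity, carried by the Laplacian's null mode, is conserved.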

S5 Fig. Resting-state fMRI and numerical simulation of the Wilson-Cowan graph neural field model on the human connectome.

Panel A shows resting-state brain activity, as fluctuations of the BOLD fMRI signal about the mean at each vertex. Panel B shows snapshots of activity from the stochastic Wilson-Cowan graph neural field model, simulated using the parameters of S2 Table. The model activity was temporally downsampled to match the TR of fMRI data, and rescaled by β to match the scale of the BOLD signal. No spatial or temporal smoothing was applied. Note that the two hemispheric surfaces are physically separate, and inter-hemispheric propagation is allowed through white matter fibers.

(TIF)

S6 Fig. FMRI functional connectivity.

High-resolution PDF version of Fig 6.

(PDF)

S7 Fig. Model functional connectivity.

High-resolution PDF version of Fig 7.

(PDF)

S8 Fig. Functional connectivity comparison (inset 1).

High-resolution PDF version of Fig 8.

(PDF)

S9 Fig. Functional connectivity comparison (inset 2).

High-resolution PDF version of Fig 9.

(PDF)

S1 Appendix. Spatiotemporal convolutions on graphs.

Here, we generalize the formulation of spatial convolutions on graphs to spatiotemporal convolutions on graphs, allowing the definition of a broader class of graph neural fields.

(PDF)

S2 Appendix. Reaction-diffusion neural activity models on graphs.

In this section we show how graph filters can also be used to implement the graph equivalents of neural activity models that can be directly written as partial differential equations [36, 53] and, among others, comprise damped wave and reaction-diffusion equations.

(PDF)

S3 Appendix. Damped wave and telegrapher’s equation on graphs.

The damped-wave equation describes the dynamics of simultaneous diffusion and wave propagation, and is thus of interest in the context of modelling activity propagation in neural tissue [53]. Nonlinear variants of the wave equation on graphs have also been the subject of previous analytical studies [67]. Here, we solve the graph equivalent of the damped-wave equation and of the telegrapher’s equation, which is of interest in the context of modelling action potentials [68].

(PDF)

S4 Appendix. Wilson-Cowan model linear stability analysis.

In order to compute meaningful spatiotemporal observables with CHAOSS for a given set of parameters, it is first necessary to find a steady state and compute its stability to perturbations. Here, we provide solutions to the steady-state equations and a general linear stability analysis for the Wilson-Cowan model on graphs.

(PDF)
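The two-step procedure described in S4 Appendix (find a steady state, then assess its linear stability) can be sketched for a single-node Wilson-Cowan system as follows. The parameter values, the logistic firing-rate function, and the finite-difference Jacobian are illustrative assumptions, not the appendix's derivation:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative single-node Wilson-Cowan parameters (not the paper's values).
wEE, wEI, wIE, wII = 3.0, 4.0, 3.0, 1.0
P, Q = 0.5, -1.0
S = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoidal firing-rate function

def rhs(y):
    """Right-hand side of the noise-free Wilson-Cowan equations."""
    E, I = y
    return np.array([-E + S(wEE * E - wEI * I + P),
                     -I + S(wIE * E - wII * I + Q)])

# Step 1: find a steady state (E*, I*) by solving rhs = 0.
ss = fsolve(rhs, x0=[0.2, 0.2])

# Step 2: linear stability from the eigenvalues of a central-difference Jacobian.
eps = 1e-6
Jac = np.column_stack([
    (rhs(ss + [eps, 0]) - rhs(ss - [eps, 0])) / (2 * eps),   # d(rhs)/dE
    (rhs(ss + [0, eps]) - rhs(ss - [0, eps])) / (2 * eps),   # d(rhs)/dI
])
stable = bool(np.all(np.linalg.eigvals(Jac).real < 0))
```

On a graph, the same logic applies mode by mode: the Jacobian is evaluated per Laplacian eigenvalue, and CHAOSS predictions are meaningful only when all modes are stable.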

Acknowledgments

We would like to thank Thomas Yeo and Ruby Kong for providing the mapping between HCP 32k and 10k vertices and Daniele Avitabile for valuable discussions.

Data Availability

Data used in this work were made available by the Human Connectome Project (HCP), WU-Minn Consortium. We use fMRI, MRI and DTI data of an individual subject (#100307), available from https://db.humanconnectome.org/data/projects/HCP_1200. All code used for analysis and simulations is available for use and review at https://github.com/marcoaqil/Graph-Stochastic-Wilson-Cowan-Model. Together, these two links contain all the data and all the code used for this work.

Funding Statement

MLK and SA are supported by the ERC Consolidator Grant: CARE-GIVING (n. 615539), Center for Music in the Brain, funded by the Danish National Research Foundation (DNRF117), and Centre for Eudaimonia and Human Flourishing funded by the Pettit and Carlsberg Foundations. RH is supported by the NWO-Wiskundeclusters grant nr. 613.009.105. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Damoiseaux JS, Rombouts S, Barkhof F, Scheltens P, Stam CJ, Smith SM, et al. Consistent resting-state networks across healthy subjects. Proceedings of the national academy of sciences. 2006;103(37):13848–13853. 10.1073/pnas.0601417103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Deco G, Jirsa VK, McIntosh AR. Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience. 2011;12(1):43 10.1038/nrn2961 [DOI] [PubMed] [Google Scholar]
  • 3. Lim S, Radicchi F, van den Heuvel MP, Sporns O. Discordant attributes of structural and functional connectivity in a two-layer multiplex network. bioRxiv. 2018; p. 273136. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Preti MG, Van De Ville D. Decoupling of brain function from structure reveals regional behavioral specialization in humans. Nature communications. 2019;10(1):1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Mohar B, Alavi Y, Chartrand G, Oellermann O. The Laplacian spectrum of graphs. Graph theory, combinatorics, and applications. 1991;2(871-898):12. [Google Scholar]
  • 6. Sandryhaila A, Moura JM. Discrete signal processing on graphs. IEEE transactions on signal processing. 2013;61(7):1644–1656. 10.1109/TSP.2013.2238935 [DOI] [Google Scholar]
  • 7. Shuman DI, Narang SK, Frossard P, Ortega A, Vandergheynst P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine. 2013;30(3):83–98. 10.1109/MSP.2012.2235192 [DOI] [Google Scholar]
  • 8. Perraudin N, Vandergheynst P. Stationary signal processing on graphs. IEEE Transactions on Signal Processing. 2017;65(13):3462–3477. 10.1109/TSP.2017.2690388 [DOI] [Google Scholar]
  • 9. Ortega A, Frossard P, Kovačević J, Moura JM, Vandergheynst P. Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE. 2018;106(5):808–828. 10.1109/JPROC.2018.2820126 [DOI] [Google Scholar]
  • 10. Stanković L, Mandic D, Daković M, Scalzo B, Brajović M, Sejdić E, et al. Vertex-frequency graph signal processing: A comprehensive review. Digital Signal Processing. 2020; p. 102802. [Google Scholar]
  • 11. Atasoy S, Donnelly I, Pearson J. Human brain networks function in connectome-specific harmonic waves. Nature communications. 2016;7:10340 10.1038/ncomms10340 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Atasoy S, Deco G, Kringelbach ML, Pearson J. Harmonic brain modes: a unifying framework for linking space and time in brain dynamics. The Neuroscientist. 2018;24(3):277–293. 10.1177/1073858417728032 [DOI] [PubMed] [Google Scholar]
  • 13.Huang W, Bolton TA, Medaglia JD, Bassett DS, Ribeiro A, Van De Ville D. A Graph Signal Processing Perspective on Functional Brain Imaging. Proceedings of the IEEE. 2018;.
  • 14. Xu G. Discrete Laplace–Beltrami operators and their convergence. Computer aided geometric design. 2004;21(8):767–784. 10.1016/j.cagd.2004.07.007 [DOI] [Google Scholar]
  • 15.Belkin M, Sun J, Wang Y. Discrete laplace operator on meshed surfaces. In: Proceedings of the twenty-fourth annual symposium on Computational geometry; 2008. p. 278–287.
  • 16. Tewarie P, Prasse B, Meier JM, Santos FA, Douw L, Schoonheim M, et al. Mapping functional brain networks from the structural connectome: Relating the series expansion and eigenmode approaches. NeuroImage. 2020; p. 116805 10.1016/j.neuroimage.2020.116805 [DOI] [PubMed] [Google Scholar]
  • 17. Wang MB, Owen JP, Mukherjee P, Raj A. Brain network eigenmodes provide a robust and compact representation of the structural connectome in health and disease. PLoS computational biology. 2017;13(6):e1005550 10.1371/journal.pcbi.1005550 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Tewarie P, Abeysuriya R, Byrne Á, O’Neill GC, Sotiropoulos SN, Brookes MJ, et al. How do spatially distinct frequency specific MEG networks emerge from one underlying structural connectome? The role of the structural eigenmodes. NeuroImage. 2019;186:211–220. 10.1016/j.neuroimage.2018.10.079 [DOI] [PubMed] [Google Scholar]
  • 19. Raj A, Cai C, Xie X, Palacios E, Owen J, Mukherjee P, et al. Spectral graph theory of brain oscillations. Human Brain Mapping. 2020;. 10.1002/hbm.24991 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Glomb K, Queralt JR, Pascucci D, Defferrard M, Tourbier S, Carboni M, et al. Connectome spectral analysis to track EEG task dynamics on a subsecond scale. NeuroImage. 2020;221:117137 10.1016/j.neuroimage.2020.117137 [DOI] [PubMed] [Google Scholar]
  • 21. Breakspear M. Dynamic models of large-scale brain activity. Nature neuroscience. 2017;20(3):340 10.1038/nn.4497 [DOI] [PubMed] [Google Scholar]
  • 22. Petrovic M, Bolton TA, Preti MG, Liégeois R, Van De Ville D. Guided graph spectral embedding: Application to the C. elegans connectome. Network Neuroscience. 2019;3(3):807–826. 10.1162/netn_a_00084 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23. Cowan JD, Neuman J, van Drongelen W. Wilson–Cowan equations for neocortical dynamics. The Journal of Mathematical Neuroscience. 2016;6(1):1 10.1186/s13408-015-0034-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Robinson P, Rennie C, Wright J, Bourke P. Steady states and global dynamics of electrical activity in the cerebral cortex. Physical Review E. 1998;58(3):3557 10.1103/PhysRevE.58.3557 [DOI] [Google Scholar]
  • 25. Purves D, Augustine G, Fitzpatrick D, Katz L, LaMantia A, McNamara J, et al. Increased conduction velocity as a result of myelination. Neuroscience. 2001;. [Google Scholar]
  • 26. Atasoy S, Roseman L, Kaelen M, Kringelbach ML, Deco G, Carhart-Harris RL. Connectome-harmonic decomposition of human brain activity reveals dynamical repertoire re-organization under LSD. Scientific reports. 2017;7(1):17661 10.1038/s41598-017-17546-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Wales DJ, Doye JP. Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms. The Journal of Physical Chemistry A. 1997;101(28):5111–5116. 10.1021/jp970984n [DOI] [Google Scholar]
  • 28. Liechti ME. Modern clinical research on LSD. Neuropsychopharmacology. 2017;42(11):2114–2127. 10.1038/npp.2017.86 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Vollenweider FX, Preller KH. Psychedelic drugs: neurobiology and potential for treatment of psychiatric disorders. Nature Reviews Neuroscience. 2020;21(11):611–624. 10.1038/s41583-020-0367-2 [DOI] [PubMed] [Google Scholar]
  • 30. Nichols DE. Dark classics in chemical neuroscience: lysergic acid diethylamide (LSD). ACS chemical neuroscience. 2018;9(10):2331–2343. 10.1021/acschemneuro.8b00043 [DOI] [PubMed] [Google Scholar]
  • 31. Proix T, Spiegler A, Schirner M, Rothmeier S, Ritter P, Jirsa VK. How do parcellation size and short-range connectivity affect dynamics in large-scale brain network models? NeuroImage. 2016;142:135–149. [DOI] [PubMed] [Google Scholar]
  • 32. Coombes S. Large-scale neural dynamics: simple and complex. NeuroImage. 2010;52(3):731–739. 10.1016/j.neuroimage.2010.01.045 [DOI] [PubMed] [Google Scholar]
  • 33. Sanz-Leon P, Knock SA, Spiegler A, Jirsa VK. Mathematical framework for large-scale brain network modeling in The Virtual Brain. Neuroimage. 2015;111:385–430. 10.1016/j.neuroimage.2015.01.002 [DOI] [PubMed] [Google Scholar]
  • 34. Byrne Á, O’Dea RD, Forrester M, Ross J, Coombes S. Next-generation neural mass and field modeling. Journal of Neurophysiology. 2020;123(2):726–742. 10.1152/jn.00406.2019 [DOI] [PubMed] [Google Scholar]
  • 35. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Comput Biol. 2008;4(8):e1000092 10.1371/journal.pcbi.1000092 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Bressloff PC. Spatiotemporal dynamics of continuum neural fields. Journal of Physics A: Mathematical and Theoretical. 2011;45(3):033001. [Google Scholar]
  • 37. Roberts JA, Gollo LL, Abeysuriya RG, Roberts G, Mitchell PB, Woolrich MW, et al. Metastable brain waves. Nature communications. 2019;10(1):1056 10.1038/s41467-019-08999-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38. Muller L, Chavane F, Reynolds J, Sejnowski TJ. Cortical travelling waves: mechanisms and computational principles. Nature Reviews Neuroscience. 2018;19(5):255 10.1038/nrn.2018.20 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. Bojak I, Oostendorp TF, Reid AT, Kötter R. Connecting mean field models of neural activity to EEG and fMRI data. Brain topography. 2010;23(2):139–149. 10.1007/s10548-010-0140-3 [DOI] [PubMed] [Google Scholar]
  • 40. Bojak I, Oostendorp TF, Reid AT, Kötter R. Towards a model-based integration of co-registered electroencephalography/functional magnetic resonance imaging data with realistic neural population meshes. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2011;369(1952):3785–3801. 10.1098/rsta.2011.0080 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Liley DT, Cadusch PJ, Dafilis MP. A spatially continuous mean field theory of electrocortical activity. Network: Computation in Neural Systems. 2002;13(1):67–113. 10.1080/net.13.1.67.113 [DOI] [PubMed] [Google Scholar]
  • 42. Martin R. Collocation techniques for solving neural field models on complex cortical geometries. Nottingham Trent University; 2018. [Google Scholar]
  • 43. Freestone DR, Aram P, Dewar M, Scerri K, Grayden DB, Kadirkamanathan V. A data-driven framework for neural field modeling. NeuroImage. 2011;56(3):1043–1058. 10.1016/j.neuroimage.2011.02.027 [DOI] [PubMed] [Google Scholar]
  • 44. Spiegler A, Jirsa V. Systematic approximations of neural fields through networks of neural masses in the virtual brain. Neuroimage. 2013;83:704–725. 10.1016/j.neuroimage.2013.06.018 [DOI] [PubMed] [Google Scholar]
  • 45. Sanz Leon P, Knock SA, Woodman MM, Domide L, Mersmann J, McIntosh AR, et al. The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in neuroinformatics. 2013;7:10 10.3389/fninf.2013.00010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Lawrence SJ, Norris DG, de Lange FP. Dissociable laminar profiles of concurrent bottom-up and top-down modulation in the human visual cortex. Elife. 2019;8:e44422 10.7554/eLife.44422 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47. Kuehn E, Sereno MI. Modelling the human cortex in three dimensions. Trends in cognitive sciences. 2018;22(12):1073–1075. 10.1016/j.tics.2018.08.010 [DOI] [PubMed] [Google Scholar]
  • 48. Alswaihli J, Potthast R, Bojak I, Saddy D, Hutt A. Kernel reconstruction for delayed neural field equations. The Journal of Mathematical Neuroscience. 2018;8(1):3 10.1186/s13408-018-0058-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49. Deco G, Jirsa V, McIntosh AR, Sporns O, Kötter R. Key role of coupling, delay, and noise in resting brain fluctuations. Proceedings of the National Academy of Sciences. 2009;106(25):10302–10307. 10.1073/pnas.0901831106 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Atay FM, Hutt A. Stability and bifurcations in neural fields with finite propagation speed and general connectivity. SIAM Journal on Applied Mathematics. 2004;65(2):644–666. 10.1137/S0036139903430884 [DOI] [Google Scholar]
  • 51. Hutt A, Atay FM. Analysis of nonlocal neural fields for both general and gamma-distributed connectivities. Physica D: Nonlinear Phenomena. 2005;203(1-2):30–54. 10.1016/j.physd.2005.03.002 [DOI] [Google Scholar]
  • 52.Shamsara E, Yamakou ME, Atay FM, Jost J. Dynamics of neural fields with exponential temporal kernel. arXiv preprint arXiv:190806324. 2019;. [DOI] [PMC free article] [PubMed]
  • 53. Coombes S. Waves, bumps, and patterns in neural field theories. Biological cybernetics. 2005;93(2):91–108. 10.1007/s00422-005-0574-y [DOI] [PubMed] [Google Scholar]
  • 54. Deco G, Cruzat J, Cabral J, Knudsen GM, Carhart-Harris RL, Whybrow PC, et al. Whole-brain multimodal neuroimaging model using serotonin receptor maps explains non-linear functional effects of LSD. Current Biology. 2018;28(19):3065–3074. 10.1016/j.cub.2018.07.083 [DOI] [PubMed] [Google Scholar]
  • 55. Kringelbach ML, Cruzat J, Cabral J, Knudsen GM, Carhart-Harris R, Whybrow PC, et al. Dynamic coupling of whole-brain neuronal and neurotransmitter systems. Proceedings of the National Academy of Sciences. 2020;117(17):9566–9576. 10.1073/pnas.1921475117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56. Herzog R, Mediano PA, Rosas FE, Carhart-Harris R, Sanz Y, Tagliazucchi E, et al. A mechanistic model of the neural entropy increase elicited by psychedelic drugs. bioRxiv. 2020;. 10.1038/s41598-020-74060-6 [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • 57. Dumoulin SO, Wandell BA. Population receptive field estimates in human visual cortex. Neuroimage. 2008;39(2):647–660. 10.1016/j.neuroimage.2007.09.034 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58. Hammond DK, Scherrer B, Warfield SK. Cortical graph smoothing: a novel method for exploiting DWI-derived anatomical brain connectivity to improve EEG source estimation. IEEE transactions on medical imaging. 2013;32(10):1952–1963. 10.1109/TMI.2013.2271486 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59. Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik. 1973;13(2):55–80. 10.1007/BF00288786 [DOI] [PubMed] [Google Scholar]
  • 60. Negahbani E, Steyn-Ross DA, Steyn-Ross ML, Wilson MT, Sleigh JW. Noise-induced precursors of state transitions in the stochastic Wilson–Cowan model. The Journal of Mathematical Neuroscience (JMN). 2015;5(1):9 10.1186/s13408-015-0021-x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61. Visser S, Nicks R, Faugeras O, Coombes S. Standing and travelling waves in a spherical brain model: the Nunez model revisited. Physica D: Nonlinear Phenomena. 2017;349:27–45. 10.1016/j.physd.2017.02.017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Pereda E, Quiroga RQ, Bhattacharya J. Nonlinear multivariate analysis of neurophysiological signals. Progress in neurobiology. 2005;77(1-2):1–37. 10.1016/j.pneurobio.2005.10.003 [DOI] [PubMed] [Google Scholar]
  • 63. Nolte G, Galindo-Leon E, Li Z, Liu X, Engel AK. Mathematical relations between measures of brain connectivity estimated from electrophysiological recordings for Gaussian distributed data. bioRxiv. 2019; p. 680678. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64. Hewitt E, Hewitt RE. The Gibbs-Wilbraham phenomenon: an episode in Fourier analysis. Archive for history of Exact Sciences. 1979;21(2):129–160. 10.1007/BF00330404 [DOI] [Google Scholar]
  • 65. Gottlieb D, Shu CW. On the Gibbs phenomenon and its resolution. SIAM review. 1997;39(4):644–668. 10.1137/S0036144596301390 [DOI] [Google Scholar]
  • 66. Shuman DI. Localized spectral graph filter frames: A unifying framework, survey of design considerations, and numerical comparison. IEEE Signal Processing Magazine. 2020;37(6):43–63. 10.1109/MSP.2020.3015024 [DOI] [Google Scholar]
  • 67. Caputo JG, Khames I, Knippel A, Panayotaros P. Periodic orbits in nonlinear wave equations on networks. Journal of Physics A: Mathematical and Theoretical. 2017;50(37):375101 10.1088/1751-8121/aa7fd8 [DOI] [Google Scholar]
  • 68. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of physiology. 1952;117(4):500–544. 10.1113/jphysiol.1952.sp004764 [DOI] [PMC free article] [PubMed] [Google Scholar]
PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008310.r001

Decision Letter 0

Daniele Marinazzo

7 Oct 2020

Dear Mr. Aqil,

Thank you very much for submitting your manuscript "Graph neural fields: a framework for spatiotemporal dynamical models on the human connectome" for consideration at PLOS Computational Biology.

As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments.

We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.

Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology

***********************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: The paper presents an integration of concepts from graph signal processing and neural field models. In particular, the latter is extended to the former setting. To this reader, the work is original, innovative and has the potential to be leveraged to shed light in to the dynamics of resting state fMRI data through using local and distant backbone connections of the brain. The methodology, although description-wise not so nicely laid out, is rigorous and provides notable insights for the community. Despite its strengths, the paper has left this reader with two main concerns:

1) The paper is titled heavily in relation to the human connectome, whereas the contents are essentially theoretical derivation of general nature, which are applicable to any type of graph. Although the incorporation of the neural field models into the graph setting does put the work in context in regard to the neural setting, the majority of presented results are on a simplistic 1D graph of a non-neuronal nature. Results of the human connectome are limited to that of a single subject, the most substantial of which is to show that the power spectral density of a single time frame of the subject's resting-state fMRI data can be fitted with the proposed model. To this reader, this extent of results is extensively limited to prove the applicability and robustness of the proposed methodology on fMRI data, and as such, the work lacks substantial evidence for the conclusions made. A number of suggestions are presented in the comments that follow, which the authors may want to consider.

2) The presentation of the paper, in particular in relation to the mathematical formulations are not consistent and the associated descriptions are generally not concise. A series of errors and issues are listed in the comments that follow. To increase the readership of the work, the authors are highly recommended to consider thoroughly revising the manuscript for brevity.

Good luck.

a) In the paragraph before equation (33), you state: ‘the elements of M can be defined in terms of suitably scaled Euclidean distances, geodesic distances over the cortical manifold, or as the lengths of the white matter fibers connecting the vertices.’ It may benefit the reader if you could refer to related works for the listed scenarios, for instance, for the third one: https://doi.org/10.1109/TMI.2013.2271486, which defines intra-cortical graph edges based on the surface mesh in a similar way as done in your work. The work is also probably the first to present the idea designing hybrid graph with local and distant edges, and also uses non-binary edges, see equation (6).

b) As mentioned earlier above, results on a single subject is just too little. Can you extend results shown in Figure 7 across multiple subjects, potentially on the HCP100 data? If you normalize the fMRI graph signals to unit norm, and use the normalized graph Laplacian, which has its eigenvalues bounded within [0,2], you may then overlay the resulting power spectra, or the cumulative sum for instance as in Fig. 3(a) in: https://doi.org/10.1109/ISBI45749.2020.9098667, across subjects. Moreover, as your model is not specific to resting-state data, it can be intuitive to see the same results also on the HCP task data, at least for a subset of the subjects if computationally burdensome.

c) The FC matrices shown in Figure 5 are shown at a resolution that cannot be readily compared with each other. Could you please provide a zoomed inset of a region around the diagonal on both matrices, to enable pixel-by-pixel comparison of the two at the zoomed inset? Moreover, to better convey that the two matrices are very similar, you can show the difference between the two, and again, show a zoomed-in region around the diagonal. Lastly, the colormap used does not nicely reflect positive-negative values with a nice separation of 0; in particular, you state: line 202: ‘The non-local edge also creates a visible increase in the functional connectivity between the nodes involved, and a change in the pattern in neighboring nodes’ which is not at all as visible as you state. If you modify your colormap, for instance to show 0 as white, things should become much more visible. In doing so, you may want to use the following in matlab: RdBu = flipud(cbrewer('div','RdBu',101)); colormap(RdBu); for which you need to download and addpath this package: https://www.mathworks.com/matlabcentral/fileexchange/34087-cbrewer-colorbrewer-schemes-for-matlab

d) On line 526 you state: ‘intra-cortical edges are weighed by they 3D Euclidean distance’. Can you more specifically state how you define these weights? In particular, do you divide or multiply by the Euclidean distance? (Also note the typo: they > their.) But aside from this, I found your approach in scaling of the non-local edge weights very intuitive and an interesting way to account for intra-cortical propagation.

e) On line 607 you state: ‘Graph neural fields can naturally take into account important physical properties such as cortical folding, hemispheric asymmetries, non-homogeneous structural connectivity, and white matter projections, with a minimal amount of computational power.’ This is indeed interesting, but can you please elaborate a bit on how this is achievable?

f) The connectome model, when used as the backbone on which the fMRI data are represented, should be interpreted with care, in light of, firstly, the limitations of tractography with regard to false positives and false negatives, and secondly, the lack of projection fibers in the model connecting the cortex to subcortical structures such as the thalamus, basal ganglia, spinal cord, and cerebellum. The concern is not only the lack of connections, but in fact the lack of vertices representing these structures. I understand that the presented work is not expected to address these concerns, but I believe they deserve to be mentioned, at least briefly, in the Discussion, to inform the reader.

g) The notation P(k) used in the y-axis label of Figure 7 does not seem to appear in the text, but you do have H_{p}(k); though I do not fully follow what the 'neuronal population' would be in this example figure, if relevant.

h) On line 594 you state that: ‘Whole-brain models that incorporate short-range connectivity are referred to as surface- based because they can are defined either on high-resolution surface-based representations of the cortex’. However, models that incorporate short-range connectivity can also be volumetric, e.g. see: [whole brain graph] https://doi.org/10.1016/j.neuroimage.2020.116718, [white matter graph] https://doi.org/10.1109/ISBI45749.2020.9098582, [gray matter graph] https://doi.org/10.1016/j.neuroimage.2015.06.010. There is no need to refer to these works, as their aims differ in nature from that of your work, aside from the first one to some extent. The concern is that the sentence is not quite accurate.

i) At the C. elegans statement, line 668, you may find the following work relevant to refer to: https://doi.org/10.1162/netn_a_00084

j) To give the reader a more recent comprehensive review of GSP and its applications consider citing the following: https://doi.org/10.1016/j.dsp.2020.102802

k) You may want to state that the filtering operation on graphs using any desired spectral kernel can be efficiently implemented using a polynomial approximation scheme, which obviates the need to diagonalize the Laplacian matrix; in your setting of 18K nodes, computing all the eigenmodes, although feasible, is still quite computationally cumbersome. If you decide to mention such an option as outlook, you may find this recent preprint of interest, as it provides a nice overview of available techniques: https://arxiv.org/abs/2006.11220
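The polynomial-approximation scheme mentioned here can be sketched with a truncated Chebyshev expansion of the spectral kernel, which needs only matrix-vector products with the Laplacian (a minimal illustration under the assumption of a symmetric Laplacian with spectrum contained in [0, lmax], e.g. the normalized Laplacian with lmax = 2; names are ours):

```python
import numpy as np

def chebyshev_filter(L, x, g, K=30, lmax=2.0):
    """Approximate the spectral graph filter g(L) x with a K-term Chebyshev
    expansion, avoiding diagonalization of L. Cost is O(K) products L @ v."""
    M = K + 1
    # Chebyshev interpolation coefficients of g on [0, lmax]
    theta = np.pi * (np.arange(M) + 0.5) / M
    nodes = (lmax / 2.0) * (np.cos(theta) + 1.0)   # map [-1, 1] -> [0, lmax]
    c = np.array([(2.0 / M) * np.sum(g(nodes) * np.cos(k * theta))
                  for k in range(M)])
    # Three-term recurrence T_{k+1} = 2 Ls T_k - T_{k-1}, Ls = (2/lmax) L - I
    def Ls(v):
        return (2.0 / lmax) * (L @ v) - v
    t_prev, t_curr = x, Ls(x)
    y = 0.5 * c[0] * t_prev + c[1] * t_curr
    for k in range(2, M):
        t_next = 2.0 * Ls(t_curr) - t_prev
        y = y + c[k] * t_next
        t_prev, t_curr = t_curr, t_next
    return y
```

For smooth kernels such as a heat kernel g(lam) = exp(-lam), a few tens of terms already match the exact eigendecomposition result to near machine precision.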

l) In the paragraph above equation (75), it states: ‘Because of the independence of eigenmodes,…’. I suppose you mean 'orthogonality' of the eigenmodes? (Also note: i) the typo at the end of the paragraph: spectrum. > spectrum: --- ii) sometimes you use ':' when presenting equations and sometimes not; please consider making the presentation consistent.)

m) For constants, you sometimes use lower case letters, like n that specifies the number of eigenmodes, and sometimes use uppercase, like N that specifies the number of neuronal populations.

n) You redefine Dt multiple times and in an inconsistent way. Firstly, this is redundant and distracting for the reader. Secondly, you sometimes refer to it as 'where Dt is A temporal differential operator', and sometimes as 'where Dt is THE temporal differential operator'. Moreover, you apparently use three different notations for temporal differentiation, which is quite sloppy: i) \\dot{.}, ii) \\frac{\\partial .}{\\partial t}, iii) Dt .

o) In describing graphs, sometimes you use the term ‘nodes’ and sometimes ‘vertices’. Please stick with one.

p) Table 1 should either be placed earlier, or be referred to early on in the section to prevent losing the reader.

q) It would be much better to present the diagonalization of the Laplacian and related notations (lines 166-117) before/after eq. (2); then you can skip the related descriptions of the degree matrix, after eq (4), and U matrix in eqs. (11)-(12). You are also repeating the diagonalization relation before eq. (38). Please do thoroughly re-read your paper, throughout, as there are many such instances of unnecessary and redundant repetitions.

r) You use multiple notations for vectors: bold lower case, upper case, bold upper case, for instance ‘d’, ‘K’ and ‘X’ in eq. (13). This also happens in the other equations. This is sloppy and super confusing for the reader.

s) The sentence after eq. (17) is either incomplete or badly written if it is linked to (17).

t) In the description after eq. (18), you denote S as S(.) whereas in the definition you use S[.].

u) Sometimes you say 'eigenmode k' and sometimes 'kth eigenmode'. Please consider making the description consistent and stick to one scheme.

v) Some other typos:

- After equation (73): Where > where

- After equation (74): where 1/2\\pi is. > this is obvious, so you may drop it.

- There are many repetitive sentences. For instance: ‘where n/N is…’. It should be sufficient to define these once.

- Line 106: ‘…5s of…’ > ‘…5 seconds of…’

- Line 515: (i; j)^{th} > is the 'th' needed?

- Line 579: varies > various

- Comma missing at the end of equations (10), (24), (67), ?

- Full stop missing at the end of equation (23), (17), (22), (20), (18), (19), (40), ?

- After eqs. (4), (10), etc. start sentence with ‘where’; e.g. ‘where’ \\Delta is..

- Before eq (11): are matrices of size (n; n) > are $n \\times n$ matrices.

- Page 8, second line: solutions the steady… > solutions TO the steady...

- Page 9, first sentence: abut > about

Reviewer #2: This paper introduces a graph neural field approach to describe whole-brain neural activity, which allows one to derive spectra and also functional connectivity. The authors use concepts from neural fields and graph signal processing to achieve this. Overall, I am very enthusiastic about this work and it definitely contributes to the growing interest in, and role of, the eigenmodes in brain activity and connectivity. The math is rigorous and can be clearly followed. I do have a few concerns.

Major concerns

My suggestion would be to reorder and shorten the paper. The paper in its current form is quite long. The current ordering of the sections is also confusing and not according to the journal’s guidelines. I would suggest the following order: introduction, results, discussion, methods. I think there are parts of the paper that were necessary as initial sanity checks, such as “damped wave equation on the human connectome” in the result section and “Reaction-diffusion neural activity models”. Though relevant for following the reasoning and steps in the paper, these sections also somewhat distract from the main storyline of the paper. I understand the role of the linear stability analysis applied to the Wilson-Cowan equations. However, this type of analysis on Wilson-Cowan equations on a network is not new (see (Tewarie et al., 2019)). I would suggest moving all or part of these sections to the supplemental material.

The authors use a Gaussian kernel for their Wilson-Cowan model based analysis. The authors do illustrate the possibilities of the kernels (see method section). Maybe I haven’t noticed it, but I haven’t seen any analysis of the influence of these different kernels. The choice of the kernel can actually have a dramatic influence on the stability of the steady states and the model’s bifurcation structure (Atay and Hutt, 2004; Hutt and Atay, 2005; Shamsara et al., 2019). The authors have merely used the Gaussian kernel with one value for the standard deviation. It is not clear to me whether this standard deviation was the result of some optimization, nor whether this kernel also optimized the resemblance of functional connectivity with empirical data. Why did the authors not use a range of sigma? It would be interesting if the authors could discuss or speculate on the usage of different kernels for E and I cells; for some I cells there is evidence of lateral inhibition, which would advocate, for example, a Mexican-hat kernel for the I population (in the Wilson-Cowan example).
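The kernel alternatives raised here can be illustrated in spectral form: a Mexican-hat profile can be built as a difference of two graph Gaussian (heat) kernels. This is our own sketch, not the manuscript's code; the widths (2 and 6) and weight (0.6) are arbitrary choices for illustration, and a 1-D path graph stands in for the cortex:

```python
import numpy as np

def spectral_kernel_filter(L, g):
    """Build the graph filter g(L) from the eigendecomposition of L."""
    lam, U = np.linalg.eigh(L)
    return (U * g(lam)) @ U.T

# Path-graph Laplacian as a 1-D toy cortex
n = 201
A = np.zeros((n, n))
i = np.arange(n - 1)
A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A    # combinatorial Laplacian

# Excitatory population: purely positive Gaussian-like kernel
gauss = spectral_kernel_filter(L, lambda lam: np.exp(-0.5 * 4.0**2 * lam))
# Inhibitory population: narrow Gaussian minus a broader, weaker one,
# giving a Mexican-hat profile (positive center, negative surround)
mexhat = spectral_kernel_filter(
    L, lambda lam: np.exp(-0.5 * 2.0**2 * lam) - 0.6 * np.exp(-0.5 * 6.0**2 * lam))

impulse = np.zeros(n); impulse[n // 2] = 1.0
profile = mexhat @ impulse   # center-positive with negative side lobes
```

Plotting `profile` against vertex index shows the familiar center-surround shape, so lateral inhibition could in principle be explored within the same spectral-kernel machinery.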

It is not clear to me how well the model predicts empirical functional connectivity patterns from fMRI. The authors do show how well the modeled spectra match the empirical ones in Figure 7. However, it is equally important whether modeled activity patterns or functional connectivity patterns match the empirical data. I doubt whether functional connectivity patterns such as those in figure 6 match empirical fMRI-based connectivity matrices.

I cannot find any mention of distance-dependent delays. Can the authors discuss or mention whether distance-dependent delays for the propagation along white matter tracts can be implemented in their approach ((Alswaihli et al., 2018)), and otherwise comment on how their distance weighting would make this less important? Could the authors please clarify how the distance was set for the Wilson-Cowan based simulations on page 11? Did the authors use the distance along the cortical surface for local edges, and the distance along the white matter tract for long-range connections? If not, I would suggest the authors implement this. I also do not understand why, on page 11, the non-local connections are based merely on DTI data: cortico-cortical connections do not necessarily go through white matter tracts, but can propagate through connections between infra/supragranular layers in the cortex.

Minor concerns

I would put “eigenmodes” in the keywords, or even in the title.

Are the sigma in equations (9) and (10) the same as the sigma in (11) and (12), i.e. do they correspond to the standard deviation of Gaussian kernels?

Maybe I overlooked this, but please explicitly mention what the definition of d_j is in (33).

The authors could consider renaming some state variables. The matrix U contains the eigenvectors of the graph Laplacian. At the same time there is a state variable u(t). When people scan your paper, they can confuse u with the columns of U. I had this confusion too when I scanned the paper at first. I know that u(x,t) is often used in the field of neural fields to describe the state variables, so maybe you could consider renaming the eigenvectors of the graph Laplacian?

I assume equations (62), (65), (67), (69) refer to type 2 neural field equations. Please mention this explicitly to make it easy for the reader.

I think this statement in the discussion needs references: “This prevents studying the mechanisms underlying a large class of cortical activity patterns that have been observed in experiments, including traveling and spiral waves, sink-source-dynamics as well as their role in shaping macroscopic dynamics.”

There is a spelling error in the first sentence of the 4th paragraph in the discussion:” based because they can are defined either on high-resolution”. This is either “are” or “can”.

I would suggest to use consistent terminology, it is either nodes and links (networks) or vertices and edges (graphs). The authors use edges and links both at the same time with vertices.

Some recent literature deserves to be mentioned in the context of eigenmode expression in functional MRI or EEG data, see recent papers: (Glomb et al., 2020; Preti and Van De Ville, 2019; Tewarie et al., 2020). How different is the damped wave equation on the connectome compared to (Caputo et al., 2017)?

Alswaihli, J., Potthast, R., Bojak, I., Saddy, D., and Hutt, A. (2018). Kernel reconstruction for delayed neural field equations. J. Math. Neurosci. 8, 3.

Atay, F. M., and Hutt, A. (2004). Stability and bifurcations in neural fields with finite propagation speed and general connectivity. SIAM J. Appl. Math. 65, 644–666.

Caputo, J.-G., Khames, I., Knippel, A., and Panayotaros, P. (2017). Periodic orbits in nonlinear wave equations on networks. J. Phys. A Math. Theor. 50, 375101.

Glomb, K., Queralt, J. R., Pascucci, D., Defferrard, M., Tourbier, S., Carboni, M., et al. (2020). Connectome spectral analysis to track EEG task dynamics on a subsecond scale. Neuroimage 221, 117137.

Hutt, A., and Atay, F. M. (2005). Analysis of nonlocal neural fields for both general and gamma-distributed connectivities. Phys. D Nonlinear Phenom. 203, 30–54.

Preti, M. G., and Van De Ville, D. (2019). Decoupling of brain function from structure reveals regional behavioral specialization in humans. Nat. Commun. 10, 1–7.

Shamsara, E., Yamakou, M. E., Atay, F. M., and Jost, J. (2019). Dynamics of neural fields with exponential temporal kernel. arXiv Prepr. arXiv1908.06324.

Tewarie, P., Abeysuriya, R., Byrne, Á., O’Neill, G. C., Sotiropoulos, S. N., Brookes, M. J., et al. (2019). How do spatially distinct frequency specific MEG networks emerge from one underlying structural connectome? The role of the structural eigenmodes. Neuroimage 186, 211–220.

Tewarie, P., Prasse, B., Meier, J. M., Santos, F. A. N., Douw, L., Schoonheim, M., et al. (2020). Mapping functional brain networks from the structural connectome: relating the series expansion and eigenmode approaches. Neuroimage 216, 116805. doi:https://doi.org/10.1016/j.neuroimage.2020.116805.

Reviewer #3: The review has been uploaded as an attachment

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Hamid Behjat

Reviewer #2: No

Reviewer #3: Yes: Enrico Cataldo

Figure Files:

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms etc.. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, PLOS recommends that you deposit laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions, please see http://journals.plos.org/compbiol/s/submission-guidelines#loc-materials-and-methods

Attachment

Submitted filename: Cataldo_rev_PlosCB.pdf

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008310.r003

Decision Letter 1

Daniele Marinazzo

11 Dec 2020

Dear Mr. Aqil,

We are pleased to inform you that your manuscript 'Graph neural fields: a framework for spatiotemporal dynamical models on the human connectome' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. A member of our team will be in touch with a set of requests.

Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology. 

Best regards,

Daniele Marinazzo

Deputy Editor

PLOS Computational Biology


***********************************************************

Reviewer's Responses to Questions

Comments to the Authors:

Please note here if the review is uploaded as an attachment.

Reviewer #1: This reviewer's previous comments have been sufficiently addressed/responded to. Thank you for investing considerable effort in improving the manuscript, through enhancing the organization and adding complementary results.

Reviewer #2: I very much appreciate the effort of the authors. They have sufficiently addressed all my comments and I am happy to recommend this paper for publication.

Reviewer #3: Dear Editor,

the authors have updated the paper, adding the suggested changes.

The paper can be accepted for publication.

**********

Have all data underlying the figures and results presented in the manuscript been provided?

Large-scale datasets should be made available via a public repository as described in the PLOS Computational Biology data availability policy, and numerical data that underlies graphs or summary statistics should be provided in spreadsheet form as supporting information.

Reviewer #1: Yes

Reviewer #2: None

Reviewer #3: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Hamid Behjat

Reviewer #2: No

Reviewer #3: No

PLoS Comput Biol. doi: 10.1371/journal.pcbi.1008310.r004

Acceptance letter

Daniele Marinazzo

23 Jan 2021

PCOMPBIOL-D-20-01590R1

Graph neural fields: a framework for spatiotemporal dynamical models on the human connectome

Dear Dr Aqil,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Melanie Wincott

PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Parameter set for 1D analysis and simulations.

    This parameter set was obtained by a qualitative comparison of the Wilson-Cowan model’s harmonic and temporal spectra with empirical data, and used to illustrate how graph properties affect neural field dynamics in one dimension.

    (PDF)

    S2 Table. Parameter set for connectome-wide analysis and simulations.

    This parameter set was obtained by quantitatively fitting the Wilson-Cowan model’s harmonic power spectrum to that of resting-state fMRI data, and used for all connectome-wide analysis and numerical simulations.

    (PDF)

    S1 Fig. Spatial convolution examples on 1-dimensional graphs.

    To illustrate spatial convolution on graphs, we apply different spatial convolution filters from Table 2 to an impulse function centered on the middle vertex of a one-dimensional grid-graph with spacing h = 1 units. The resulting functions, normalized to have unit amplitude, show the shapes of the graph kernels. Note that the rectangular kernel convolution operator in Panel (E) exhibits the Gibbs phenomenon [64], which is a known feature of finite Fourier representations of functions with jump discontinuities. Solutions to this problem have been offered [65], but they are beyond the scope of the current work. Thus, we suggest avoiding spatial kernels with jump discontinuities in the context of graph neural fields.

    Open boundary conditions can be implemented by extending the graph beyond the image size, and periodic boundaries by adding edges connecting vertices on opposite sides of the graph. We also note that, if desired, spectral kernels can be obtained using polynomial approximation schemes, which obviates the need to diagonalize the graph Laplacian matrix [66]. For large datasets (for example natural image databases), it might be computationally advantageous to apply convolutions with symmetric kernels through graph filters, rather than with standard discrete convolution methods. Blurring/smoothing a 2-dimensional image with a spatial Gaussian kernel is equivalent to applying the graph Gaussian kernel to the image-function defined on a 2-dimensional square-grid graph. Spatial convolutions on graphs become linear matrix-vector products, which are highly optimized and easily parallelizable operations; the bulk of the computational cost for graph convolutions consists in the initial computation of the filter itself, which has to be performed only once per kernel.

    The approach described here is limited to symmetric kernels. In some special cases, asymmetric kernels may be practically obtained by introducing suitable asymmetries in the graph edges. For example, consider a grid graph in two dimensions, with additional edges connecting the bottom-left and top-right vertices of each square in the grid. Because of the broken lattice symmetry, a Gaussian kernel on this non-grid graph will behave like a spatially elliptic Gaussian, angled at 45 degrees, analogously to modelling a spatially asymmetric diffusion process on the graph.

    (TIF)
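The impulse-response construction described in the S1 Fig caption can be sketched as follows (a minimal, self-contained illustration with a combinatorial Laplacian on a 1-D grid graph; not the authors' code, and `gaussian_graph_kernel` is a name of our choosing):

```python
import numpy as np

def gaussian_graph_kernel(L, sigma):
    """Graph Gaussian convolution operator exp(-sigma^2/2 * L), built from the
    eigendecomposition L = U diag(lam) U^T. Applying it to a signal is then a
    single matrix-vector product."""
    lam, U = np.linalg.eigh(L)
    return (U * np.exp(-0.5 * sigma**2 * lam)) @ U.T

# Impulse on the middle vertex of a 1-D grid graph (spacing h = 1)
n = 101
A = np.zeros((n, n))
idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0
L = np.diag(A.sum(1)) - A            # combinatorial graph Laplacian
H = gaussian_graph_kernel(L, sigma=3.0)
impulse = np.zeros(n); impulse[n // 2] = 1.0
smoothed = H @ impulse               # approximates a spatial Gaussian bump
```

Since the Laplacian annihilates the constant vector, this kernel preserves total mass, and once H is computed, smoothing any further signal on the same graph is just another matrix-vector product.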

    S2 Fig. The damped-wave equation on the human connectome gives rise to propagation with characteristic speed and wavelength.

    Shown are snapshots of simulated cortical activity that is governed by the damped wave equation with time-step δt = 1 and parameters a = 3 ⋅ 105, b = 5 ⋅ 103.

    (TIF)

    S3 Fig. Varying the parameters of the damped-wave equation alters the dynamics of propagation on the human connectome.

    Shown are snapshots of simulated cortical activity that is governed by the damped wave equation with time-step δt = 1 and parameters a = 1.5 ⋅ 105, b = 2.5 ⋅ 103.

    (TIF)

    S4 Fig. Dynamics of the damped-wave equation on the human connectome include non-local propagation along white-matter fibers.

    Shown are snapshots of simulated cortical activity that is governed by the damped wave equation with time-step δt = 1 and parameters a = 1.5 ⋅ 105, b = 2.5 ⋅ 103.

    (TIF)

    S5 Fig. Resting-state fMRI and numerical simulation of the Wilson-Cowan graph neural field model on the human connectome.

    Panel A shows resting-state brain activity, as fluctuations of the BOLD fMRI signal about the mean at each vertex. Panel B shows snapshots of activity from the stochastic Wilson-Cowan graph neural field model, simulated using the parameters of S2 Table. The model activity was temporally downsampled to match the TR of fMRI data, and rescaled by β to match the scale of the BOLD signal. No spatial or temporal smoothing was applied. Note that the two hemispheric surfaces are physically separate, and inter-hemispheric propagation is allowed through white matter fibers.

    (TIF)

    S6 Fig. FMRI functional connectivity.

    High-resolution PDF version of Fig 6.

    (PDF)

    S7 Fig. Model functional connectivity.

    High-resolution PDF version of Fig 7.

    (PDF)

    S8 Fig. Functional connectivity comparison (inset 1).

    High-resolution PDF version of Fig 8.

    (PDF)

    S9 Fig. Functional connectivity comparison (inset 2).

    High-resolution PDF version of Fig 9.

    (PDF)

    S1 Appendix. Spatiotemporal convolutions on graphs.

    Here, we generalize the formulation of spatial convolutions on graphs to spatiotemporal convolutions on graphs, allowing the definition of a broader class of graph neural fields.

    (PDF)

    S2 Appendix. Reaction-diffusion neural activity models on graphs.

    In this section we show how graph filters can also be used to implement the graph equivalents of neural activity models that can be directly written as partial differential equations [36, 53] and, among others, comprise damped wave and reaction-diffusion equations.

    (PDF)

    S3 Appendix. Damped wave and telegrapher’s equation on graphs.

    The damped-wave describes the dynamics of simultaneous diffusion and wave propagation, and is thus of interest in the context of modelling activity propagation in neural tissue [53]. Nonlinear variants of the wave equation on graphs have also been the subject of previous analytical studies [67]. Here, we solve the graph equivalent of the damped-wave equation and of the telegrapher’s equation, which is of interest in the context of modelling action potentials [68].

    (PDF)

    S4 Appendix. Wilson-Cowan model linear stability analysis.

    In order to compute meaningful spatiotemporal observables with CHAOSS for a given set of parameters, it is first necessary to find a steady state and compute its stability to perturbations. Here, we provide solutions to the steady-state equations and a general linear stability analysis for the Wilson-Cowan model on graphs.

    (PDF)

    Attachment

    Submitted filename: Cataldo_rev_PlosCB.pdf

    Attachment

    Submitted filename: reponse_GNFs_edited.pdf

    Data Availability Statement

    Data used in this work were made available by the Human Connectome Project (HCP), WU-Minn Consortium. We use fMRI, MRI and DTI data of an individual subject (#100307), available from https://db.humanconnectome.org/data/projects/HCP_1200. All code used for analysis and simulations is available for use and review at https://github.com/marcoaqil/Graph-Stochastic-Wilson-Cowan-Model. Together, these two links contain all the data and all the code used for this work.


    Articles from PLoS Computational Biology are provided here courtesy of PLOS

    RESOURCES